# FAQ

These are the core questions I get most often about leadership, execution, and building engineering organizations. The answers reflect my background across startups and enterprise and are grounded in first principles: customer value, risk, time, and cost.

## Performance, leadership, and growth

### Does anyone get out of a PIP?
Yes, when the plan is a real improvement plan, not a paper trail. A PIP should be clear, fair, and supported. People can and do recover when expectations are explicit, coaching is real, and progress is measured weekly.
- Principles: no surprises, role-based expectations, timeboxed checkpoints, and documented support.
- What I do: define specific gaps and examples, agree on a 30/60/90 plan, and provide coaching or training. If progress is not sustained, I move to a respectful exit quickly.

### What does "impact" mean at Staff/Principal vs Manager/Director?
Staff/Principal impact is technical leverage: solving system-level problems, unblocking multiple teams, and creating durable improvements. Manager/Director impact is organizational: building teams, aligning priorities, and delivering outcomes through others.
- Staff/Principal: architecture decisions, reliability and performance gains, platform enablement, and cross-team standards.
- Manager/Director: hiring and retention, execution cadence, cross-functional alignment, and business results.

### How do I get noticed without playing politics?
Deliver outcomes that matter, and make the impact legible. Visibility is about clarity, not politics.
- Principles: ship value, write it down, and share it with the right people.
- What I do: tie work to business metrics, publish concise updates, and build trust through reliability. I also invest in relationships before I need them.

### Is management always right?
No. But decisions must be made, and teams must align on them. I expect healthy dissent, then "disagree and commit" once a decision is made.
- Principles: challenge ideas, respect the role, and align on the goal.
- What I do: push for data, present options, and document decisions so we learn and adjust.

### How do I disagree with my manager without tanking my career?
Disagree on the problem, not the person. Bring options, evidence, and a clear recommendation.
- Tactics: ask for the goal, show tradeoffs, and propose a safer or faster path.
- If misalignment is repeated: request a reset on expectations, or explore a different team where your strengths fit.

### How do you decide who gets the "good" projects?
I use transparent criteria: business priority, learning value, and risk profile. "Good" projects are distributed with intent, not favoritism.
- Principles: align with growth plans, rotate opportunities, and balance risk across the team.
- What I do: map projects to skill gaps, pair emerging leaders with mentors, and track who gets stretch work.

### How do you handle a low performer who is popular?
Popularity is not a performance standard. I address expectations early and privately, with clear evidence.
- Principles: protect team morale, be fair, and offer support.
- What I do: set a clear bar, provide coaching, and timebox improvement. If results do not change, I exit respectfully to protect the team.

### What do you do when someone is toxic but productive?
Toxic behavior is a long-term tax. I set explicit behavioral expectations and act quickly.
- Principles: culture is a product, and productivity without trust is fragile.
- What I do: give direct feedback, define non-negotiables, and if behavior persists, remove them regardless of output.

### How do you decide when to hire vs reorganize vs cut scope?
Start with the constraint. If the constraint is capacity, hire. If it is coordination, reorganize. If it is ROI, cut scope.
- Signals to hire: sustained demand, stable roadmap, and clear ownership gaps.
- Signals to reorganize: duplicated work, unclear ownership, or conflicting priorities.
- Signals to cut scope: low ROI, weak customer pull, or high risk for marginal value.

### What do you look for in a Tech Lead?
A Tech Lead is a force multiplier. I look for technical depth, strong communication, and calm execution under ambiguity.
- Core traits: systems thinking, mentorship, product sense, and quality bar ownership.
- Behaviors: keeps teams aligned, makes tradeoffs explicit, and pushes for simple, durable solutions.

### How do you coach someone who is stuck at mid-level?
First diagnose the gap: scope, depth, or influence. Then create a concrete growth plan.
- Tactics: assign a cross-team project, define clear success metrics, and provide regular feedback.
- I also teach business context so they can make better tradeoffs and move from tasks to outcomes.

### How do I recover from a mistake that hit production?
Own it fast, fix it cleanly, and learn out loud. Credibility comes from transparency and improved systems.
- What I do: communicate impact, stabilize quickly, run a blameless postmortem, and implement prevention.
- Then I deliver consistently to rebuild trust.

## 1:1s, communication, and team dynamics

### What are 1:1s?
1:1s are for the person, not for status. They are a protected space to build trust, remove blockers, and coach growth.
- Purpose: alignment, feedback, and development.
- Outcome: fewer surprises and faster execution.

### What makes a good 1:1 agenda?
The agenda should be mostly theirs, with a consistent structure.
- Baseline format: wins, blockers, priorities, feedback, and growth.
- What I add: context from leadership, upcoming changes, and a clear ask if needed.

### How do you handle conflict between two strong engineers?
I anchor on the shared goal and make the decision process explicit.
- Tactics: define the problem, compare options with data, timebox debate, and decide.
- If it is about ego, I reset expectations and focus on outcomes, not personal ownership.

### How do you keep morale up during a messy quarter?
People can handle bad news; they cannot handle confusion. I reduce noise and create short, visible wins.
- Actions: cut non-essential work, celebrate progress weekly, and provide clear priorities.
- I also protect team energy by minimizing thrash and meeting overload.

### How do you communicate bad news upward?
Early, clearly, and with options. I never surprise executives.
- What I include: impact, root cause, options with tradeoffs, and a recommendation.
- Tone: factual, accountable, and focused on recovery.

### How do you manage up when priorities change weekly?
I push for a decision cadence and a small, stable planning window.
- Tactics: document decisions, ask for explicit tradeoffs, and propose a 2-4 week freeze.
- If change is real, I reflect it in scope, not by burning out the team.

### How do you balance being liked vs being effective?
I aim to be respected and trusted. Kindness without clarity is not leadership.
- Behaviors: be direct, keep promises, and explain the why.
- The goal is psychological safety plus high standards.

## Execution, roadmaps, and delivery

### How do you handle scope creep?
Scope creep is a prioritization failure. I reset on outcomes and tradeoffs.
- Tactics: change control in plain language, explicit tradeoffs, and an "if we add X, we remove Y" policy.
- I also keep a visible backlog of deferred work to avoid silent bloat.

### How do you say "no" to Product politely but firmly?
I say yes to the goal and no to the risk or scope. The conversation is about options.
- Example: "We can hit the launch if we cut these two features or move the date." Then I pick a recommendation.

### How do you estimate work when everything is ambiguous?
Break it down, run a spike, and estimate ranges, not points.
- Tactics: timebox discovery, build thin slices, and update estimates as facts change.
- I use confidence levels so leadership understands risk.
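The range-plus-confidence idea above can be sketched as a three-point (PERT-style) estimate; the function name and sample numbers here are illustrative, not a prescribed tool:

```python
def range_estimate(optimistic: float, likely: float, pessimistic: float):
    """Three-point (PERT-style) estimate: returns expected effort and a
    rough spread, so leadership sees a range instead of a single point."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6  # ~1 std dev under PERT assumptions
    return expected, spread

# Example: a discovery spike suggests 4-12 days, most likely 6.
expected, spread = range_estimate(4, 6, 12)
# Report the range (expected - spread, expected + spread), not a point.
```

As facts change after each thin slice, re-run the estimate with narrower inputs; the shrinking spread is itself a useful confidence signal.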

### What is the difference between a roadmap and a plan?
A roadmap is intent and sequencing of outcomes. A plan is a commitment with dates, dependencies, and owners.
- Roadmap: strategy and priorities.
- Plan: execution details and accountability.

### What makes a good engineering roadmap?
It is outcome-driven, capacity-aware, and explicit about tradeoffs.
- Must include: business goals, technical enablers, and risk reduction work.
- I keep it short, current, and revised on a predictable cadence.

### How do you choose what not to do this quarter?
I maintain a kill list based on ROI, urgency, and risk.
- I cut low-impact work first, then reduce scope before moving dates.
- I also protect foundational work that reduces long-term cost.

### How do you handle missed deadlines without blaming people?
I focus on system gaps, not individuals.
- Actions: re-estimate, identify the bottleneck, adjust scope, and communicate early.
- I treat it as a learning loop for planning and capacity.

### How do you keep teams aligned?
Alignment is a system: shared goals, shared context, and consistent cadence.
- Tactics: clear OKRs, weekly syncs, a written plan, and visible decision logs.
- I also invest in cross-team architecture and dependency reviews.

### How do you keep a program on track when dependencies slip?
I protect the critical path and reduce coupling.
- Actions: re-sequence work, build temporary shims, and escalate early on blocked dependencies.
- If needed, I trade scope for time to preserve quality.

### How do you measure delivery without creating a metric game?
Use a balanced scorecard and focus on trends, not targets.
- Metrics: cycle time, deploy frequency, change failure rate, and customer impact.
- I pair metrics with qualitative reviews to avoid gaming.
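A trend-focused scorecard can start from raw deploy records; this is a minimal sketch, and the field names are assumptions rather than any specific tool's schema:

```python
from datetime import datetime

def delivery_metrics(deploys):
    """deploys: dicts with 'started'/'deployed' datetimes and a 'failed' flag.
    Returns averages meant to be read as trend lines, not as targets."""
    if not deploys:
        return {"cycle_time_days": 0.0, "change_failure_rate": 0.0}
    total_seconds = sum(
        (d["deployed"] - d["started"]).total_seconds() for d in deploys
    )
    return {
        "cycle_time_days": round(total_seconds / len(deploys) / 86400, 2),
        "change_failure_rate": round(
            sum(d["failed"] for d in deploys) / len(deploys), 2
        ),
    }
```

Plotting these per week, rather than setting them as individual goals, is what keeps them from being gamed.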

### When is it worth taking on tech debt to ship faster?
When the debt is explicit, timeboxed, and paid back by measurable value.
- Principles: track debt like a financial liability and cap the interest.
- If debt threatens reliability, security, or team velocity, it is not worth it.

## Architecture, tech choices, and standards

### How do you choose languages?
I start with the problem constraints: performance, ecosystem, hiring market, and existing team skills.
- Principles: optimize for maintainability and speed of delivery, not novelty.
- If the team cannot operate it at 2 a.m., it is the wrong choice.

### How do you choose frameworks without chasing hype?
I use a decision matrix: maturity, community, security posture, and long-term cost.
- Tactics: pick a default stack, require a written justification to deviate, and review annually.

### When should you refactor?
When complexity is slowing delivery or increasing risk.
- Signals: change failure rate rising, PRs getting larger, or onboarding time increasing.
- I timebox refactors and tie them to outcomes like speed or reliability.

### When is a rewrite actually the right call?
When the architecture blocks key business outcomes and incremental change is too slow.
- Criteria: severe reliability gaps, security gaps, or cost that cannot be reduced.
- I still prefer strangler patterns and staged migrations.

### How do you decide between a monolith and microservices?
Start monolith unless you have clear service boundaries and operational maturity.
- Monolith: faster iteration and simpler operations.
- Microservices: only when team autonomy and scaling needs outweigh coordination cost.

### When do microservices become a tax instead of a benefit?
When you spend more time on platform plumbing than on product value.
- Signals: high on-call load, slow deployments due to dependencies, and duplicated logic.
- Fix: consolidate services or invest in strong platform tooling.

### How do you prevent architecture from drifting over time?
Make the desired architecture the easiest path.
- Tactics: reference architectures, automated linting, and lightweight design reviews.
- I also document "why" so teams understand tradeoffs.

### How do you set standards without becoming the bottleneck?
Create paved roads and automate enforcement.
- Standards live in templates, CI checks, and self-serve docs.
- Exceptions require a short written rationale, not a committee.

### How do you handle "platform vs product" tension?
I tie platform work to product outcomes and timebox shared investments.
- I allocate a fixed capacity for platform health and make the ROI visible.
- When conflict happens, I prioritize customer impact and long-term cost reduction.

## Code quality, reviews, and testing

### What does "good code" actually mean?
Good code is readable, correct, testable, and cheap to change.
- Principles: clarity over cleverness, small units, and explicit tradeoffs.
- I value code that a new engineer can understand in one sitting.

### How strict should code reviews be?
Strict on correctness and clarity, light on style.
- I automate style with linters and focus human review on logic, security, and edge cases.
- Reviews should be fast, kind, and specific.

### How do you handle a senior who refuses feedback on their code?
I address it directly in private. Seniority raises the bar for collaboration.
- Tactics: explain the impact, set expectations, and ask for change.
- If behavior persists, it becomes a performance issue.

### When should you add tests vs ship and monitor?
Use risk-based testing.
- High risk or high impact: write tests before shipping.
- Low risk: ship with strong monitoring and fast rollback.
- I also invest in integration tests for critical paths.

### What is your take on TDD in real-world teams?
TDD is a tool, not a religion. It is great for complex logic and stable requirements.
- I do not require it for exploratory work or UI-heavy tasks.
- The goal is confidence, not dogma.

### How do you reduce flaky tests?
Treat flakiness as a reliability bug.
- Actions: quarantine flaky tests, remove randomness, stabilize test data, and fix timing issues.
- I track flake rate and assign owners.
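Tracking flake rate can be as simple as comparing outcomes across reruns of the same commit; this sketch assumes you can export (test name, passed) pairs from CI:

```python
def flake_rate(results):
    """results: (test_name, passed) pairs collected across reruns of the
    same commit. A test that both passed and failed on identical code is flaky."""
    if not results:
        return [], 0.0
    outcomes = {}
    for name, passed in results:
        outcomes.setdefault(name, set()).add(passed)
    flaky = sorted(name for name, seen in outcomes.items() if len(seen) > 1)
    return flaky, len(flaky) / len(outcomes)
```

The returned list is the quarantine-and-assign-owners queue; the ratio is the number worth trending week over week.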

### How do you keep PRs small without slowing delivery?
Use feature flags, vertical slices, and short-lived branches.
- I encourage stacked PRs and clear review SLAs.
- Small PRs reduce risk and speed feedback.
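The feature-flag half of this can be tiny; the flag store and names below are hypothetical, and most teams graduate to a dedicated flag service:

```python
import hashlib

# Hypothetical in-memory flag store; real teams use a flag service or config.
FLAGS = {"new_checkout": {"on": False, "rollout_pct": 10}}

def is_enabled(flag_name: str, user_id: str) -> bool:
    cfg = FLAGS.get(flag_name)
    if cfg is None:
        return False
    if cfg["on"]:
        return True
    # Stable hash so a given user stays in the same bucket across deploys.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_pct"]
```

Because the incomplete path ships dark, each vertical slice can merge as a small PR without waiting for the whole feature.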

## Reliability, on-call, and incident response

### What is an acceptable on-call load?
On-call should be boring. If it is not, the system is broken.
- My target: no more than 1-2 pages per week and no chronic after-hours work.
- If the load is higher, I prioritize reliability work immediately.

### How do you handle recurring incidents that never get prioritized?
Make the cost visible and tie it to business outcomes.
- Tactics: quantify downtime, show customer impact, and tie fixes to an error budget.
- I also schedule reliability work as first-class roadmap items.

### What is your approach to postmortems that do not turn into blame?
Blameless does not mean consequence-free. It means system-focused.
- Format: timeline, contributing factors, and prevention actions.
- I look for systemic fixes, not individual scapegoats.

### How do you decide SLOs when Product wants everything fast?
SLOs are user promises. I negotiate them with data and impact.
- I align on which user journeys matter most and set SLOs around those.
- If Product wants faster, I ask what we will trade off.
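The tradeoff conversation gets easier when the SLO is translated into an error budget; a minimal sketch of that arithmetic:

```python
def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Allowed unavailability for a given SLO over a rolling window.
    E.g. a 99.9% SLO over 30 days leaves roughly 43 minutes of budget."""
    return (1 - slo) * period_days * 24 * 60
```

Spend that budget on launches when it is healthy; when it is exhausted, the next "faster" request has a concrete cost attached.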

### When is it okay to wake people up at night?
Only for high-severity, user-impacting incidents with no safe alternative.
- If it happens often, the problem is the system, not the people.
- I invest in automation, runbooks, and daylight fixes.

## AI use, safety, and productivity

### How should engineers use AI without shipping garbage?
Use AI for drafting and exploration, but keep humans accountable for correctness.
- Rules: verify outputs, add tests, and understand the code before merging.
- I treat AI like a junior engineer: helpful but not authoritative.

### How do you review AI-generated code safely?
The same way I review any code, with extra attention to edge cases and security.
- Require an explanation of the logic and tests that demonstrate behavior.
- If the author cannot explain it, it does not ship.

### What is a good policy for AI use in production code?
Clear guardrails plus room for experimentation.
- Require disclosure when AI is used, and keep audit trails in commits.
- Prohibit use of sensitive data in public models unless approved.
- Use vetted tools and approved models in regulated environments.

### How do you prevent secret leaks when using AI tools?
Assume prompts are a data exfiltration channel unless proven otherwise.
- Use redaction, tokenization, or internal models.
- Block secrets in prompts with scanning tools and policy enforcement.
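A pre-send scan can be as simple as pattern matching before the prompt leaves the network; the patterns below are illustrative only, and production setups rely on vetted secret scanners:

```python
import re

# Illustrative patterns only; real deployments use vetted secret scanners.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key id shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)\b(api[_-]?key|token|password)\b\s*[:=]\s*\S+"),
]

def redact(prompt: str) -> str:
    """Replace anything secret-shaped before the prompt leaves the network."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt
```

Redaction is a backstop, not the policy: the policy is internal models and approved tools for anything sensitive.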

### How do you measure whether AI is actually improving productivity?
Measure outcomes, not novelty.
- Metrics: cycle time, defect rates, and developer sentiment.
- Compare before and after on similar work, then adjust policy.
