Exit criteria: system passes smoke tests
You just spent three weeks rebuilding the invoicing pipeline. Your team says it's done. You deploy on Monday, and by Wednesday, Revenue Recognition is delayed because the new system silently drops line items over $10,000. Nobody defined what 'done' meant before work started - so nobody checked.
Exit criteria are the specific, testable conditions a piece of work must satisfy before it moves to the next stage. Without them, 'done' is just an opinion - and opinions don't protect your P&L.
Exit criteria are a predefined checklist of conditions that must be true before work leaves one stage and enters the next. Think of them as a Quality Gate with teeth.
In software, you already know this instinctively: a pull request isn't merged until tests pass. Exit criteria generalize that pattern to any operational process - hiring, Vendor Negotiations, product launches, Cost Reduction projects, even budgeting cycles.
The key properties:

- Predefined: written down before the work starts, not after.
- Binary: each condition is pass or fail - no 'mostly done.'
- Owned: a named person holds the gate and can say no.

If you can't write it as a yes/no check, it's not an exit criterion - it's a wish.
Every stage transition in your Operations that lacks exit criteria is a place where defects leak downstream. And defects get more expensive the further they travel.
Consider the P&L impact:

- A defect caught at the gate costs one rework cycle.
- The same defect caught in production adds Error Cost and Service Recovery on top.
- A defect caught by a customer costs Revenue, Churn risk, and trust.
Exit criteria convert vague milestones into enforceable checkpoints. For P&L ownership, this is how you make quality a system property instead of relying on individual heroics - which is to say, instead of relying on Tribal Knowledge that walks out the door when someone quits.
Step 1: Identify the stage boundaries.
Map your process into discrete stages. For a product launch, that might be: Design → Build → Test → Deploy → Monitor. For Full-Cycle Recruiting: Source → Screen → Interview → Offer → Onboard.
Step 2: Write binary conditions for each boundary.
Good exit criteria look like:

- 'All 500 regression tests pass with zero failures.'
- 'Output matches the legacy system to the penny on 1,000 sampled invoices.'
- 'Rollback to the old system demonstrated in under 30 minutes.'

Bad exit criteria look like:

- 'Testing is mostly complete.'
- 'The team feels confident.'
- 'Quality looks good.'
Step 3: Assign ownership.
Each gate needs a human who can say no. In a Quality Systems framework, this is the person accountable for the Execution standard at that boundary. Without an owner, the gate becomes decoration.
Step 4: Define what happens on failure.
If exit criteria aren't met, what's the decision rule? Rework? Escalation? Kill the project? This is where exit criteria connect to your risk appetite - some failures are 'fix and resubmit,' others are 'stop everything.'
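The four steps above can be sketched as a data structure. This is a minimal illustration, not a prescribed implementation - the class names, boundary labels, and checks are hypothetical:

```python
# Sketch: Steps 1-4 as a stage gate. All names and checks are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class ExitCriterion:
    description: str             # Step 2: a binary, testable condition
    check: Callable[[], bool]    # returns True only if the condition holds

@dataclass
class StageGate:
    boundary: str                # Step 1: the stage transition this gate protects
    criteria: List[ExitCriterion]
    owner: str                   # Step 3: the human who can say no
    on_fail: str                 # Step 4: decision rule - "rework", "escalate", or "kill"

    def evaluate(self) -> Tuple[bool, List[str]]:
        """All criteria must pass; return (passed, descriptions of failures)."""
        failed = [c.description for c in self.criteria if not c.check()]
        return (len(failed) == 0, failed)

gate = StageGate(
    boundary="Test -> Deploy",
    criteria=[
        ExitCriterion("all regression tests pass", lambda: True),
        ExitCriterion("rollback demonstrated", lambda: False),  # not yet done
    ],
    owner="QA lead",
    on_fail="rework",
)
passed, failed = gate.evaluate()
print(passed, failed)  # False ['rollback demonstrated']
```

The point of the structure: a gate with no owner or no `on_fail` rule is incomplete by construction, which mirrors Steps 3 and 4.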
Always use exit criteria when:

- The failure mode is expensive or irreversible - anything touching Revenue, Cash Flow, or customer data.
- Work crosses a team boundary, where shared standards can't be assumed.
- The process runs repeatedly, so a defined gate compounds in value.

You can skip formal exit criteria when:

- The work is low-stakes and cheap to redo.
- A failure is immediately visible and easily reversed.
- It's a one-off experiment where speed of learning matters more than defect prevention.
Scale the rigor to the stakes. A Spot-Check might be a sufficient exit criterion for a low-risk internal tool; a full audit pass with sign-off is appropriate before a system touching Revenue Recognition goes live. Match the cost of the gate to the cost of the failure mode it prevents.
Your company processes $2M/month in Revenue through its billing system. You're migrating to a new platform. Engineering estimates 6 weeks of work. The old system stays running in parallel during migration.
Identify failure modes. What can go wrong? Invoices could be wrong (amount errors), missing (dropped customers), or late (delayed processing). Each failure mode maps to a P&L line: wrong amounts hit Revenue accuracy, missing invoices hit Cash Flow, late invoices hit Collections and Payment History with vendors.
Write exit criteria against each failure mode. (1) Reconciliation check: new system output matches old system output to the penny for 1,000 randomly sampled invoices across 3 billing cycles. (2) Completeness check: new system generates invoices for 100% of active accounts - zero gaps. (3) Timing check: batch processing completes within 4 hours (contractual deadline is 6 hours, giving 2-hour buffer). (4) Rollback check: team demonstrates full rollback to old system in under 30 minutes.
Assign ownership and set the decision rule. Finance lead owns criteria 1 and 2 (they verify the numbers). Engineering lead owns criteria 3 and 4 (they verify performance). Rule: all four must pass. Any single failure means no cutover - rework and retest. The Error Cost of a billing failure ($2M/month at risk) vastly exceeds the cost of an extra week of testing.
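Under the thresholds defined in the four criteria above, the decision rule is a single conjunction - any one failure blocks cutover. A minimal sketch:

```python
# Cutover decision rule from the billing-migration example.
# Thresholds come from the four criteria above; the function name is illustrative.

def cutover_allowed(reconciliation_mismatches: int,
                    accounts_missing_invoices: int,
                    batch_hours: float,
                    rollback_minutes: float) -> bool:
    return (reconciliation_mismatches == 0       # (1) penny-perfect on the sample
            and accounts_missing_invoices == 0   # (2) 100% completeness, zero gaps
            and batch_hours <= 4.0               # (3) within the 4-hour window
            and rollback_minutes <= 30.0)        # (4) rollback demonstrated

print(cutover_allowed(0, 0, 3.5, 25.0))  # True: all four gates pass
print(cutover_allowed(0, 2, 3.5, 25.0))  # False: two accounts missing invoices
```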
Insight: Exit criteria should be derived from failure modes, not from the feature list. Engineers naturally write criteria around 'does the feature work.' Operators write criteria around 'what breaks if it doesn't work, and how would we know.'
You're hiring 3 backend engineers this quarter. Your current Time-to-Fill is 45 days. Last quarter, 2 out of 5 hires churned within 90 days - a defect rate of 40%. Each failed hire costs roughly $30,000 in recruiting, onboarding, and lost productivity (the Implementation Cost of a bad stage transition).
Current state: no exit criteria between stages. Recruiters pass candidates to hiring managers based on gut feel. Hiring managers make offers based on unstructured impressions. There's no consistent standard, so quality varies by who's involved - classic Tribal Knowledge dependency.
Define exit criteria for each stage. Screen → Interview: candidate has relevant project experience verified against resume (not self-reported), and completes a 30-minute technical screen scoring 70%+. Interview → Offer: minimum 3 interviewers, all score 3/5 or above on a standardized rubric covering technical skill and team collaboration. Offer → Onboard: reference check confirms tenure and role claims, background clear.
Expected impact. If exit criteria cut the defect rate from 40% to 25% (optimistic but plausible for adding structured gates to an unstructured process), a 5-hire quarterly cohort produces 0.75 fewer failed hires per quarter - roughly one. At $30,000 per failed hire, that's about $22,500 saved per quarter in direct costs (~$90,000 annually), plus the opportunity cost of each avoided empty seat sitting through another 45-day Time-to-Fill cycle. Note the caveat: 40% churn within 90 days often signals deeper structural problems - compensation misalignment, role misrepresentation, management gaps - that exit criteria alone won't fix. But structured gates are a necessary starting point, and the ROI holds even at modest improvement rates.
Insight: Exit criteria in hiring aren't bureaucracy - they're Unit Economics. Each gate you add costs maybe 2 hours of structured evaluation. Each defect you prevent saves $30,000 and 45 days. The ROI on the gate is positive even if it only catches one bad hire per year.
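The gate ROI arithmetic can be checked back-of-the-envelope. The figures come from the hiring scenario; the 5-hire cohort and the improved defect rate are the scenario's assumptions, not benchmarks:

```python
# Hiring-gate ROI estimate using the scenario's figures (illustrative only)
cost_per_failed_hire = 30_000     # recruiting + onboarding + lost productivity
hires_per_quarter = 5             # size of last quarter's cohort
defect_rate_before = 0.40         # 2 of 5 hires churned within 90 days
defect_rate_after = 0.25          # assumed improvement from structured gates

avoided_per_quarter = hires_per_quarter * (defect_rate_before - defect_rate_after)
annual_savings = avoided_per_quarter * cost_per_failed_hire * 4

print(round(avoided_per_quarter, 2))  # 0.75 failed hires avoided per quarter
print(round(annual_savings))          # 90000 in direct costs; rounding 0.75 up to a
                                      # full avoided hire per quarter would give 120000
```

Even the exact figure dwarfs the gate's cost: a few hours of structured evaluation per cohort against tens of thousands in avoided defects.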
Exit criteria are defined before work starts and are binary - pass or fail, no 'mostly done'
Derive exit criteria from failure modes (what breaks downstream), not from feature lists (what was built)
The rigor of the gate should be proportional to the Error Cost of the failure mode it prevents - match the cost of checking to the cost of missing
Writing exit criteria after the work is finished. This turns criteria into a rubber stamp. The whole point is to define 'done' when you're thinking clearly, before the Implementation Cost already spent tempts you to declare victory prematurely. The effort invested so far is irrelevant to whether the work meets the standard - only the exit criteria answer that question.
Making criteria so exhaustive that nothing ever ships. Exit criteria should target high-impact failure modes, not catalog every conceivable risk. If your gate costs more than the defect it prevents, you've over-engineered the checkpoint. Think break-even: the cost of the gate should be less than the Expected Value of the defects it catches.
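That break-even test can be made explicit. All the numbers below are illustrative assumptions - the defect probability and catch rate in particular are made up for the sketch:

```python
# Break-even check for a gate: run it only if its cost is below the
# expected cost of the defects it catches. Figures are assumptions.
gate_cost = 500              # e.g. a couple of hours of structured review
defect_probability = 0.10    # chance a defect reaches this gate
catch_rate = 0.8             # fraction of such defects the gate catches
defect_cost = 30_000         # downstream Error Cost if one slips through

expected_value_of_gate = defect_probability * catch_rate * defect_cost
print(round(expected_value_of_gate))            # 2400
print(expected_value_of_gate > gate_cost)       # True: keep the gate
```

When the inequality flips - a cheap, rare failure behind an expensive gate - that's the signal you've over-engineered the checkpoint.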
You run a 4-person customer support team. You're rolling out a new ticketing system next month. Write 3 exit criteria for the go-live decision. For each, identify the failure mode it prevents and estimate the cost if that failure mode hits production.
Hint: Think about what would make you regret going live. Common failure modes in system cutovers: data loss, workflow breaks, and team readiness gaps. Connect each to a P&L impact - lost Revenue from dropped tickets, increased Error Cost from misrouted issues, or extra Labor spent recovering CSAT.
Example criteria: (1) Data migration complete and verified - 100% of open tickets appear in new system with correct status and assignment. Failure mode: lost tickets → missed commitments → Churn risk on affected accounts. Cost estimate: if 5 open tickets are lost in the cutover and each represents $500 in at-risk Revenue, that's $2,500 of exposure. (2) All 4 team members complete workflow certification - each agent resolves 5 test tickets end-to-end in the new system without assistance. Failure mode: untrained team → slower resolution → lower CSAT. Cost: Service Recovery on degraded support could cost 2x normal resolution time for 2-4 weeks = ~$3,000 in extra Labor. (3) Rollback tested - team demonstrates reverting to old system in under 15 minutes. Failure mode: catastrophic failure with no escape route → extended outage. Cost: a full day of support downtime on a team handling 50 tickets/day at $50 average value = $2,500 in delayed resolution.
Your engineering team says a Cost Reduction project is 'done' - they've optimized a data pipeline that was costing $8,000/month in cloud compute. They claim it now costs $2,000/month. What exit criteria would you have wanted defined at the start, and what would you check right now before declaring the savings real?
Hint: Think about Revenue Recognition principles applied to cost savings - when is a saving real vs. projected? Consider: did they test at production scale? Did they measure over a full billing cycle? Are there hidden costs they shifted rather than eliminated (like increased Labor for manual monitoring)?
Exit criteria you'd want upfront: (1) Cost measured over a full billing cycle at production volume - not a dev environment estimate. Run both systems in parallel for one billing period and compare actual invoices. (2) No degradation in Throughput or defect rate - the pipeline still processes the same volume within the same time window. Cheaper but broken isn't a Cost Reduction, it's a new failure mode. (3) No hidden cost transfers - verify that compute savings didn't come from shifting work to a more expensive resource (e.g., now requires 4 hours/week of engineer monitoring that wasn't needed before, at $75/hr = $1,200/month, reducing true savings from $6,000 to $4,800). Right now, without these predefined, you'd need to retroactively verify all three before reporting the savings to your P&L. The lesson: exit criteria on cost projects prevent you from booking phantom savings.
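The hidden-cost check in (3) is simple arithmetic; a sketch using the example's hypothetical monitoring figures:

```python
# Phantom-savings check from the cost-reduction example (figures illustrative)
old_monthly_compute = 8_000
new_monthly_compute = 2_000
claimed_savings = old_monthly_compute - new_monthly_compute   # 6000 headline

# Hidden cost transfer: 4 hrs/week of engineer monitoring at $75/hr,
# using 4 weeks/month as in the example above
shifted_labor = 4 * 4 * 75                                    # 1200/month

true_savings = claimed_savings - shifted_labor
print(true_savings)  # 4800: the figure to book, not the 6000 headline
```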
Exit criteria are the atomic unit of Quality Gates - they're what make a gate enforceable rather than aspirational. When you define a decision rule, exit criteria supply the binary inputs that rule evaluates. Downstream, well-functioning exit criteria enable Graduated Autonomy: as team members consistently pass stage gates, you reduce oversight and shift toward Exception Review instead of checking everything.