
decision rule

Risk & Decision Science · Difficulty: ★★☆☆☆



Your team runs a two-week A/B test on a new checkout flow. Variant B shows a 2.1% lift in conversion rate. The PM says ship it. The engineer says that is noise. You are the operator - what is your call? If you are deciding now, after seeing the numbers, you have already failed. A decision rule is the threshold you commit to before the data arrives, so this argument never happens.

TL;DR:

A decision rule is a pre-committed threshold that maps an observed result to an action: change course if the result clears the bar, stay the course if it does not. You set it before you see data, anchored to your base case and risk appetite, so you do not rationalize your way into a bad call after the fact.

What It Is

A decision rule is a simple if-then statement you write down before you run a test, launch a pilot, or make an investment decision:

If [metric] exceeds [threshold], then [action A]. Otherwise, [action B].

The threshold is the line between "change course" and "stay the course." Two important properties:

  1. You set it before you see results. This prevents post-hoc rationalization - the human tendency to move the goalposts once you are emotionally invested in an outcome.
  2. "Stay the course" is not the same as "the current approach is good." It means you did not find enough evidence to justify the cost and risk of changing. Your base case might still be mediocre - you just do not have grounds to bet on the alternative yet.

In formal terms, this maps to the logic of hypothesis testing: you start by assuming the status quo holds (your base case), define a threshold that reflects how much evidence you need before abandoning it, then compare your observed result against that threshold. If the result clears the bar, you reject the status quo and act. If it does not, you fail to reject - meaning you lack sufficient evidence, not that the status quo is proven correct.
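The if-then structure can be sketched as a small function. This is an illustrative sketch, not a library API - the function name and the 10%-lift threshold are made up for the example:

```python
def decide(observed: float, threshold: float) -> str:
    """Pre-committed decision rule: compare an observed metric
    against a threshold fixed before the data arrived."""
    if observed > threshold:
        return "reject status quo: act on the alternative"
    return "fail to reject: stay the course (insufficient evidence)"

# Threshold set in advance: a 10% lift over a 3.4% base conversion rate
threshold = 0.034 * 1.10           # 0.0374
print(decide(0.041, threshold))    # clears the bar -> act
print(decide(0.035, threshold))    # does not -> stay the course
```

The point is that `threshold` is computed before any observed value exists; the decision at read-time is pure comparison, with no room for rationalization.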

Why Operators Care

Every operator makes dozens of go/no-go calls per quarter: ship a feature or kill it, hire or hold, expand into a market or wait. Without a decision rule, three things go wrong:

  • Meetings become debates. Without a pre-agreed threshold, every stakeholder interprets the same data differently. The optimist sees signal, the pessimist sees noise, and you burn an hour reaching no conclusion.
  • Sunk costs creep in. If you spent $30K on a pilot and the results are ambiguous, the temptation to "give it one more month" is enormous - unless your Exit Criteria were locked before the spend.
  • P&L impact compounds. A bad $5K/month decision left running for six months because nobody defined "when do we stop" costs $30K. A decision rule would have killed it at month two.

The rule itself is cheap. Writing "we pull this campaign if Cost Per Unit acquisition exceeds $18 after 1,000 impressions" takes five minutes. Not writing it can cost months of wasted Budget.

How It Works

Building a decision rule takes four steps:

1. State your base case

What does the world look like if you do nothing? This is the benchmark from your prerequisite work. For example: "Our current checkout converts at 3.4% and generates $12.50 Revenue per visitor."

2. Pick your metric and threshold

Choose one metric that captures the outcome you care about, and set a number that justifies the cost of changing. The threshold should reflect:

  • Implementation Cost of the change (if it costs $10K to rebuild the checkout, a 0.5% lift on $12.50/visitor earns only about $0.06 per visitor - roughly 160,000 visitors just to pay back the build)
  • Risk appetite (how much downside can you absorb if the change backfires?)
  • Time Horizon (a small lift matters more if you have 100K visitors/month than 1K)

3. Define the sample or observation window

How much data do you need before the result is meaningful? Running a test for two days on low traffic tells you almost nothing. You need enough observations that random variation will not fool you. Common mistake: stopping a test early because it looks good. That is exactly the rationalization the rule prevents.

4. Commit to the action

Write the full rule: "If Revenue per visitor on Variant B exceeds $13.75 (a 10% lift over our $12.50 base case) after 2,000 visitors per variant, ship Variant B. Otherwise, keep the current checkout and revisit in Q3."

The rule has two branches - one for each outcome. Both branches are actions, not open questions.
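Steps 1-3 reduce to a short calculation: the smallest lift that repays the cost of changing within your payback window. A minimal sketch, using the $10K rebuild and $12.50 base case from above; the 12-month payback window and 100K monthly visitors are assumptions, not figures from the text:

```python
def lift_threshold(base_rev_per_visitor: float, impl_cost: float,
                   payback_months: int, monthly_visitors: int) -> float:
    """Smallest revenue-per-visitor level that repays the
    implementation cost within the chosen payback window."""
    required_monthly = impl_cost / payback_months       # incremental $ needed per month
    per_visitor = required_monthly / monthly_visitors   # spread over traffic
    return base_rev_per_visitor + per_visitor

# $12.50 base case, $10K rebuild, 12-month payback, 100K visitors/month
t = lift_threshold(12.50, 10_000, 12, 100_000)
print(round(t, 4))  # 12.5083
```

Note that the worked rule's 10% bar ($13.75) sits well above this break-even floor - the gap is deliberate margin for risk appetite and noise, not sloppy math.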

When to Use It

Use a decision rule whenever you are about to spend money or time to learn something:

  • A/B tests and pilots - the classic case. Define the lift threshold and sample size before launch.
  • Hiring decisions - "Hire a second sales rep if Pipeline Volume exceeds 2x current capacity for 3 consecutive months." Otherwise you are hiring on vibes.
  • Capital Investment gates - "Proceed to Phase 2 if Phase 1 ROI exceeds the Hurdle Rate of 15% on $50K deployed."
  • Kill decisions - "Shut down this product line if it fails to reach break-even within 9 months of launch." This is the hardest rule to honor, and the most valuable.

You do not need a formal decision rule for every small call. Use it when the cost of being wrong is material to your P&L, or when multiple stakeholders need to agree on what "good enough" looks like.

Worked Examples (2)

A/B test on checkout pricing

You operate an e-commerce site doing $85K/month in Revenue. Current checkout converts at 3.4% with average order value of $62, yielding $2.11 Revenue per visitor. You want to test a simplified one-page checkout that costs $8K to build. Monthly traffic: 40,000 visitors.

  1. Set the base case. Status quo: $2.11 Revenue per visitor, or $84,400/month on 40K visitors.

  2. Calculate the threshold. The $8K Implementation Cost needs to pay back within 6 months. That means you need $1,333/month in incremental Revenue. On 40K visitors, that is $0.033 more per visitor - a 1.6% lift to $2.14. Round up to a 2% lift ($2.15) to give yourself a margin against noise.

  3. Define the observation window. At 40K visitors/month, split 50/50, each variant gets 20K visitors. Run for one full month to capture weekly buying patterns.

  4. Write the rule. If Variant B Revenue per visitor exceeds $2.15 after 20K visitors per variant, ship Variant B. If it falls below $2.00 (a 5% decline), kill the test early. Otherwise, keep current checkout.

  5. Results come in. Variant B: $2.09/visitor. That is a $0.02 decline from the $2.11 base case - nowhere near the $2.15 threshold.

  6. Apply the rule. Fail to reject - keep the current checkout. Note: this does NOT mean the new checkout is bad. It means the observed lift did not justify the $8K you spent building it. Log the result and revisit if traffic grows.

Insight: The 2% threshold was not arbitrary - it was derived from the Implementation Cost and payback period. Your decision rule is only as good as the math behind the threshold. Also notice: the team does not argue. The number spoke.
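The full checkout rule, with both branches plus the early-kill floor, can be written out as a sketch using the example's numbers (function and constant names are illustrative):

```python
def checkout_rule(rev_per_visitor: float) -> str:
    """Worked-example rule: ship, kill early, or keep the status quo."""
    SHIP_AT = 2.15   # ~2% lift over the $2.11 base case
    KILL_AT = 2.00   # ~5% decline: stop the test early
    if rev_per_visitor >= SHIP_AT:
        return "ship Variant B"
    if rev_per_visitor < KILL_AT:
        return "kill the test early"
    return "keep current checkout"

# The break-even math behind the $2.15 bar: $8K cost, 6-month payback, 40K visitors
breakeven = 2.11 + (8_000 / 6) / 40_000
print(round(breakeven, 3))   # 2.143 -> rounded up to 2.15 as a noise margin
print(checkout_rule(2.09))   # the observed result -> keep current checkout
```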

Hiring trigger for a sales role

Your single sales rep closes $42K/month with a Close Rate of 18% on a $233K monthly Pipeline Volume. A second rep costs $7,500/month fully loaded (salary plus Commissions). You want a decision rule for when to hire.

  1. Set the base case. One rep, $42K/month closed, pipeline at $233K. Rep is at roughly 85% of practical capacity (estimated 20 qualified conversations/month, currently handling 17).

  2. Calculate the threshold. The second rep needs to cover their $7,500/month cost. At an 18% Close Rate, they need $41,667 in pipeline to close $7,500. But you need surplus pipeline - volume your current rep cannot handle. So the trigger is: total Pipeline Volume must exceed current rep capacity ($233K / 0.85 = $274K) by at least $42K, meaning pipeline must hit $316K.

  3. Define the observation window. Pipeline fluctuates. Require 3 consecutive months above $316K to filter out one-time spikes.

  4. Write the rule. If Pipeline Volume exceeds $316K for 3 consecutive months, open the req. If it exceeds $316K for only 1-2 months, hold and reassess next month.

  5. Results. Month 1: $330K. Month 2: $345K. Month 3: $289K.

  6. Apply the rule. Fail to reject - do not hire. Month 3 broke the streak. The pipeline spike may have been seasonal. Reset the counter and watch.

Insight: Without this rule, a manager seeing two strong months in a row would have already posted the job listing. The decision rule saved you from a $90K/year commitment based on two data points. "Fail to reject" protected you from a premature hire - but it does not mean you will never hire. It means not yet.
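The consecutive-months condition is easy to get wrong by eyeballing a dashboard; a sketch makes the streak logic explicit (names and defaults are illustrative, taken from the example above):

```python
def hiring_rule(pipeline_months: list[float],
                trigger: float = 316_000, streak: int = 3) -> str:
    """Open the req only after `streak` consecutive months above `trigger`."""
    run = 0
    for p in pipeline_months:
        run = run + 1 if p > trigger else 0   # any miss resets the counter
        if run >= streak:
            return "open the req"
    return "fail to reject: hold and keep watching"

print(hiring_rule([330_000, 345_000, 289_000]))  # month 3 breaks the streak -> hold
print(hiring_rule([330_000, 345_000, 320_000]))  # three in a row -> hire
```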

Key Takeaways

  • A decision rule has two parts: a threshold (the number) and an action for each side of it (what you do if you clear it, what you do if you do not). Both parts must be defined before you see data.

  • "Fail to reject" means you lack evidence to change, not that the status quo is good. This asymmetry matters - do not confuse "we did not prove the new thing works" with "the old thing is fine."

  • The threshold is not a guess - derive it from your base case, the Implementation Cost of the change, your risk appetite, and the Time Horizon for payback. A number without economic reasoning behind it is just a number.

Common Mistakes

  • Moving the goalposts after seeing data. You set a 10% lift threshold, observe 7%, and suddenly decide 7% is "close enough." This is exactly the bias the rule exists to prevent. If 7% truly is acceptable, you set the wrong threshold - fix it for the next test, do not retroactively change this one.

  • Confusing "fail to reject" with "reject the alternative." Your test showed a 1% lift instead of the 5% you needed. That does not mean the new approach is bad - it means you do not have enough evidence that it is worth the cost of switching. Do not use a failed test to permanently kill an idea; log it and revisit when conditions change (more traffic, lower Implementation Cost, different variant).

Practice

easy

You run a paid ad campaign spending $4,500/month. Your base case Cost Per Unit acquisition is $22. You want to test a new ad creative. Write a decision rule that includes: (a) the metric, (b) the threshold with economic justification, (c) the observation window, and (d) both branches of the action.

Hint: Think about what "better" means here. A new creative that lowers Cost Per Unit acquisition to $21 saves $1 per acquisition - is that enough to justify the effort of creating and managing the new creative? What if you acquire 200 units/month?

Show solution

Metric: Cost Per Unit acquisition. Threshold: The new creative took $1,200 in design and copywriting time. At 200 acquisitions/month, a $1 improvement saves $200/month - payback in 6 months. Set threshold at $20 (a $2 improvement) for a 3-month payback. Window: Run both creatives for 30 days at equal Budget ($2,250 each), requiring at least 100 acquisitions per creative. Rule: If the new creative's Cost Per Unit acquisition is $20 or lower after 100+ acquisitions, replace the old creative. If it is above $22 (worse than base case), kill it immediately. If between $20 and $22, keep running the old creative and test a different variant next month.
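The threshold arithmetic in the solution can be checked with a few lines; the function is an illustrative helper, not part of any library:

```python
def cpa_threshold(base_cpa: float, creative_cost: float,
                  monthly_acqs: int, payback_months: int) -> float:
    """CPA the new creative must hit so its cost pays back in time."""
    saving_needed = creative_cost / payback_months   # $/month required
    per_unit = saving_needed / monthly_acqs          # $ saved per acquisition
    return base_cpa - per_unit

# $22 base CPA, $1,200 creative cost, 200 acquisitions/month, 3-month payback
print(cpa_threshold(22, 1_200, 200, 3))  # 20.0 -> matches the $20 bar
```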

medium

Your SaaS product has a Churn Rate of 6% monthly. You invest $15K in a customer success initiative (dedicated onboarding calls for the first 30 days). After 3 months, churn among onboarded customers is 5.1%. Did the initiative work? Before answering, write the decision rule you should have set before the initiative launched.

Hint: What is the dollar value of each percentage point of Churn reduction? If your average customer pays $200/month and you have 500 customers, a 1pp Churn reduction saves 5 customers/month - that is $1,000/month in retained Revenue. Now compare that to the $15K spend.

Show solution

Pre-launch rule: Average customer value is $200/month, 500 customers. Each 1pp churn reduction retains 5 customers/month = $1,000/month = $12,000/year. The $15K initiative needs to save at least $15K/year to break even in 12 months, which requires a 1.25pp reduction (from 6% to 4.75%). Threshold: Churn must drop to 4.75% or lower among onboarded cohort after 3 months. Observation window: 3 months post-onboarding, minimum 100 customers in the onboarded cohort. Result: 5.1% churn - a 0.9pp improvement, not the 1.25pp needed. Decision: Fail to reject. The initiative improved churn but not enough to justify $15K. Options: reduce the cost of the initiative (automate parts of onboarding), or accept a longer payback period if you believe the effect compounds. Do not simply declare victory because "it helped."
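The break-even churn target follows the same pattern; a sketch with the solution's numbers (the helper name and 12-month default are illustrative):

```python
def churn_target(base_churn_pct: float, customers: int, arpu: float,
                 spend: float, payback_months: int = 12) -> float:
    """Churn rate the initiative must reach to break even on `spend`."""
    value_per_pp = customers * 0.01 * arpu * payback_months  # $ retained per 1pp of churn
    pp_needed = spend / value_per_pp                         # reduction required
    return base_churn_pct - pp_needed

# 6% churn, 500 customers, $200/month ARPU, $15K spend, 12-month payback
print(churn_target(6.0, 500, 200, 15_000))  # 4.75 -> the pre-launch threshold
```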

Connections

A decision rule is the natural next step after building a base case. The base case gives you the benchmark - what does the world look like if you change nothing? The decision rule tells you how far reality must deviate from that benchmark before you act. Without a base case, your threshold is arbitrary; without a decision rule, your base case is just a number sitting in a spreadsheet.

Downstream, decision rules feed directly into Sensitivity Analysis (what happens to your threshold if your assumptions shift?), Exit Criteria (decision rules applied to ongoing commitments - when do you walk away?), and Expected Value calculations (where the threshold is derived from the probability-weighted outcomes of acting versus not acting). They also connect to risk appetite - two operators looking at the same data might set different thresholds because one can absorb more downside than the other. The rule makes that difference explicit rather than leaving it as an unspoken disagreement in a meeting room.

Disclaimer: This content is for educational and informational purposes only and does not constitute financial, investment, tax, or legal advice. It is not a recommendation to buy, sell, or hold any security or financial product. You should consult a qualified financial advisor, tax professional, or attorney before making financial decisions. Past performance is not indicative of future results. The author is not a registered investment advisor, broker-dealer, or financial planner.