Business Finance

Sensitivity Analysis

Risk & Decision Science
Difficulty: ★★☆☆☆

Sensitivity analysis: how changes in b or c affect the optimal solution

You built a base case for hiring two engineers at $150K each to automate a manual process that costs $400K/year in Labor. Your decision rule says go if NPV over 3 years exceeds $200K. It does - barely, at $210K. Your CFO asks: 'What if those engineers cost $170K? What if the Labor savings are only $350K?' You realize you have no idea which assumption, if wrong, would flip your decision from go to no-go.

TL;DR:

Sensitivity analysis systematically changes one assumption at a time in your base case to find which inputs have the power to flip your decision rule. It tells you where to spend your next hour of research and where uncertainty actually matters for your P&L.

What It Is

Sensitivity analysis takes your base case and asks: what happens to the outcome if I change this one input by some amount?

You hold everything else constant, move one variable - Revenue growth, Cost Per Unit, Time-to-Fill for a key hire, Churn Rate - and watch what happens to your decision metric (NPV, Profit, break-even timeline, whatever your decision rule references).

The output is a map of which assumptions matter and which don't. Some inputs can swing wildly without changing your decision. Others, if off by 5%, flip your answer entirely. The first kind you can estimate loosely. The second kind you need to nail down with evidence before committing Capital Investment.

Why Operators Care

Every P&L decision rests on assumptions. When you approve a Budget for a new product line, you're implicitly betting on a Revenue forecast, a Cost Structure, a Time Horizon to break-even, and a dozen other numbers.

Sensitivity analysis does three things for an Operator:

  1. Identifies which assumptions carry real risk. If your decision survives a 30% swing in Marketing Spend but flips on a 10% change in Churn Rate, you know where your Execution Risk lives.
  2. Focuses your research budget. Information has a cost (your time, your team's time). Sensitivity analysis tells you the Value of Information - if an input doesn't change the decision no matter what, stop researching it.
  3. Gives your decision rule teeth. A decision rule without sensitivity analysis is fragile. You committed to a threshold, but you don't know how close you are to the edge. Sensitivity analysis measures the distance between your base case and that threshold - showing you whether your 'yes' has room to absorb a miss, or whether it collapses on the first wrong assumption.

How It Works

Step 1: Start from your base case.

You need a working model with a clear output metric - NPV, Expected Value, Profit, Payback Period - and a decision rule tied to it.

Step 2: List your assumptions.

Every input that you estimated or forecasted is a candidate. Revenue growth, Cost Per Unit, Time-to-Fill, Close Rate, Churn Rate, Implementation Cost, Discount Rate - anything you typed in as a number.

Step 3: Vary one input at a time.

Pick a range for each input. A common approach: move it +/- 10%, +/- 20%, or to its plausible worst case and best case. Recalculate your output metric each time. Keep all other inputs at their base case values.

Step 4: Record which inputs change the decision.

If your decision rule says 'invest if NPV > $100K,' flag every input that can push NPV below $100K within a plausible range. These are your sensitive inputs.

Step 5: Rank by impact.

The input that causes the biggest swing in your output per percentage change is the one that matters most. Focus your diligence, your monitoring, and your risk appetite there.
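The five steps above can be sketched as a small one-at-a-time sweep. The model and base-case numbers below are hypothetical placeholders (a simple unit-economics profit), not from the article's examples - swap in your own metric and decision rule:

```python
# One-at-a-time sensitivity sweep. Model and numbers are hypothetical
# placeholders; substitute your own base case and decision rule.

def metric(inputs):
    # Step 1: base-case model with one output metric (here, profit)
    return inputs["units"] * (inputs["price"] - inputs["unit_cost"]) - inputs["overhead"]

# Step 2: every estimated input is a candidate
base = {"units": 500, "price": 500, "unit_cost": 200, "overhead": 80_000}
threshold = 40_000  # decision rule: go if metric > threshold

# Steps 3-4: vary one input +/- 10% and +/- 20%, holding the rest at base,
# and flag any input whose worst case breaches the threshold
ranking = []
for name in base:
    worst = min(metric({**base, name: base[name] * (1 + pct)})
                for pct in (-0.20, -0.10, 0.10, 0.20))
    ranking.append((name, worst, worst <= threshold))

# Step 5: rank by impact - the most sensitive input comes first
ranking.sort(key=lambda t: t[1])
```

Taking `min` over the whole range handles both directions automatically: revenue-like inputs hurt on the downside, cost-like inputs on the upside.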

Reading a Sensitivity Table

Here is a concrete model so you can verify every number. A 12-month project: $200K Implementation Cost upfront, $50K/month in Revenue, variable costs at 20% of Revenue ($10K/month), fixed costs of $15K/month. Monthly Profit: $25K. At a 10% annual Discount Rate, the present value of $25K/month for 12 months is approximately $285K. NPV = $285K - $200K = $85K. Decision rule: invest if NPV > $0.

Now vary one input at a time and recalculate:

| Input | Base | -20% | -10% | +10% | +20% | Flips? |
|---|---|---|---|---|---|---|
| Revenue/mo | $50K | $40K → NPV -$6K | $45K → NPV $39K | $55K → NPV $131K | $60K → NPV $176K | Yes, at -19% |
| Impl. Cost | $200K | $160K → NPV $125K | $180K → NPV $105K | $220K → NPV $65K | $240K → NPV $45K | No (needs +43%) |
| Fixed Costs/mo | $15K | $12K → NPV $119K | $13.5K → NPV $102K | $16.5K → NPV $68K | $18K → NPV $51K | No (needs +50%) |

Every cell follows the same formula: change the input, recalculate monthly Profit, multiply by 11.4 (the present value factor for this stream), subtract Implementation Cost. The deltas are constant within each row - about $46K per 10% swing in Revenue, exactly $20K per 10% swing in Implementation Cost - because NPV is a linear function of each input when the others are held constant.

Revenue is the most sensitive input here. A 19% miss - $50K dropping to roughly $41K/month - pushes NPV below zero. Implementation Cost would need to increase 43% to flip the decision, and fixed costs 50%. Revenue is where your research and monitoring should focus.

When to Use It

Before any Capital Investment decision. If you're committing Budget that can't easily be reversed - new hires, infrastructure builds, long-term contracts - run sensitivity on your base case first.

When your decision rule passes, but barely. A project that clears your Hurdle Rate by 2% is really a coin flip dressed up as a yes. Sensitivity analysis tells you how thin the ice is.

When stakeholders disagree on assumptions. Instead of arguing about whether Churn Rate will be 4% or 6%, show both scenarios. If the decision is the same either way, the argument is moot.

When you need to set monitoring triggers. After launching, sensitivity analysis tells you which metrics to watch. If Revenue per unit was the sensitive input, that's what goes on your weekly dashboard with a decision rule for course correction.

Skip it when: The decision is easily reversible, the stakes are low relative to your Budget, or you have so little data that even the ranges are guesses on top of guesses. In that case, a small pilot with real Feedback Loop data beats a spreadsheet.

Worked Examples

Build vs. Buy for an internal tool

You're evaluating whether to build an internal scheduling tool or buy a SaaS product at $4,000/month. Building costs $60K upfront (engineering Labor plus setup). Maintenance after launch: $1,000/month. Your base case uses a 3-year Time Horizon and a 10% Discount Rate. Decision rule: build if NPV advantage over buying exceeds $15K (to justify the Execution Risk of a custom build).

  1. Compute the base case advantage. Monthly savings from building: $4,000 - $1,000 = $3,000. Annual savings: $36,000. Present value of $36K/year for 3 years at 10%: $36K × 2.487 = $89,500. Subtract the $60K build cost: NPV advantage = $29,500. Passes the $15K threshold.

  2. Vary build cost. At $75K (+25%), advantage drops to $14,500 - just below the $15K threshold. Flips at approximately $74,500 (+24%). You have room, but a scope increase during the build could eat it.

  3. Vary SaaS cost. If the SaaS drops to $3,500/month (-12.5%), annual savings fall to $30K, present value = $74,600, advantage = $14,600 - below threshold. At $3,000/month (-25%), building actually loses by $300. Pricing trends in the SaaS market are a real risk. The same math applies in reverse to maintenance: if maintenance rises to $1,500/month (+50%), monthly savings drop the same way and the decision flips.

  4. Vary Time Horizon. At 2 years, present value of savings = $62,500, advantage = $2,500 - well below threshold. At 4 years, advantage = $54,100. The crossover is roughly 2.5 years. Time Horizon is the most sensitive input.

Insight: Two inputs dominate: how long you'll use the tool and whether SaaS Pricing stays at current levels. If there's any chance you'll replace this tool or pivot within 2.5 years, buy. If SaaS prices in this category are falling, the math shifts even faster. Your next research task: estimate the realistic lifespan of this tool. That's where the Value of Information is highest.
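The build-vs-buy arithmetic can be checked, and both crossovers solved in closed form, with a few lines. All dollar figures and the 10% rate come from the example above:

```python
import math

# Present-value annuity factor for n years at rate r
def af(n, r=0.10):
    return (1 - (1 + r) ** -n) / r

saas, maint, build = 4_000, 1_000, 60_000      # $/mo, $/mo, upfront build cost
annual_savings = (saas - maint) * 12           # $36K/yr from building

adv_3yr = annual_savings * af(3) - build       # ~= $29.5K, passes the $15K bar
build_flip = annual_savings * af(3) - 15_000   # ~= $74.5K build cost flips it

# Time-horizon crossover: solve af(n) = (build + 15_000) / annual_savings
target = (build + 15_000) / annual_savings     # ~= 2.083
n_star = -math.log(1 - 0.10 * target) / math.log(1.10)   # ~= 2.45 years
```

Solving the annuity factor for n analytically confirms the "roughly 2.5 years" crossover without trial and error.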

Hiring a sales rep to expand Revenue

A new sales rep costs $120K/year (base salary plus Commissions). Your base case assumes they close $350K in new ARR within 12 months, with a 15% Churn Rate on those accounts - so net Revenue is $350K × 0.85 = $297,500. Variable costs are 30% of Revenue, leaving 70% after variable costs. Decision rule: hire if first-year Profit exceeds $50K.

Base case: $297,500 × 0.70 - $120K = $88,250. Passes.

  1. Vary the rep's close amount. At $250K ARR (-29%), net Revenue after churn = $212,500. Profit = $212,500 × 0.70 - $120K = $28,750. Fails. Crossover: approximately $286K ARR (-18%). You have an 18% margin on close amount.

  2. Vary Churn Rate. At 30% churn (double the base case), net Revenue = $245,000. Profit = $245,000 × 0.70 - $120K = $51,500. Barely passes. At 35%, Profit = $39,250 - fails. Crossover: about 31%. Churn can more than double before flipping the decision.

  3. Factor in Time-to-Fill and the months before the rep reaches full productivity. If it takes 3 months to hire and another 3 months before the rep is closing at full capacity, effective first-year close drops to roughly $175K. Net Revenue = $148,750. Profit = $148,750 × 0.70 - $120K = -$15,875. Fails badly.

  4. The headline numbers (close amount, Churn Rate) have wide margins. But the time between starting the search and the rep actually producing Revenue - which the base case implicitly assumed was negligible - is what kills the case.

Insight: Close amount and Churn Rate look safe, but the hidden assumption is time. The base case quietly assumed a short Time-to-Fill and immediate productivity - both optimistic for a new hire. Sensitivity analysis caught an input that wasn't even on the original assumption list. Always test time-dependent inputs, not just dollar amounts.
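The hiring model is a one-line formula, so the crossovers can be solved directly. All numbers are from the example above:

```python
# First-year profit for the sales-rep hire.
def profit(arr=350_000, churn=0.15, contribution=0.70, cost=120_000):
    return arr * (1 - churn) * contribution - cost

base = profit()                                         # $88,250 - passes the $50K bar

# Crossovers against the decision rule profit > 50_000:
arr_flip = (50_000 + 120_000) / (0.85 * 0.70)           # ~= $286K ARR (-18%)
churn_flip = 1 - (50_000 + 120_000) / (350_000 * 0.70)  # ~= 31% churn

# Ramp scenario: 3-month Time-to-Fill plus 3-month ramp cuts the
# effective first-year close to roughly $175K
ramped = profit(arr=175_000)                            # -$15,875 - fails badly
```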

Key Takeaways

  • Sensitivity analysis answers 'which assumptions, if wrong, would change my decision?' - not 'what is the right number for each assumption.'

  • The most valuable output is knowing where NOT to spend research time: if an input can swing 30% without flipping your decision rule, stop worrying about it and focus on the inputs that flip at 5-10%.

  • A base case that passes your decision rule by a thin margin is not a green light - it's a signal to run sensitivity before committing resources.

Common Mistakes

  • Moving multiple inputs at once. If you change Revenue and Cost Structure simultaneously, you can't tell which one caused the outcome to shift. Vary one at a time. (The exception is when two inputs are genuinely linked - but that's a more advanced technique.)

  • Only testing small ranges like +/- 5% because bigger swings 'seem unlikely.' Plausible worst cases are often 20-40% off the base case, especially for Revenue forecasts and Time-to-Fill estimates. Test the range that reality might actually deliver, not the range that makes the spreadsheet look comfortable.

Practice

medium

You're deciding whether to invest $80K in a marketing campaign. Base case: the campaign generates 400 new customers, each with a Lifetime Value of $500 and variable costs of $200 per customer. Decision rule: invest if Profit exceeds $40K. Identify the two most sensitive inputs and find the exact value where each one flips the decision.

Hint: Base case Profit = 400 × ($500 - $200) - $80K = $120K - $80K = $40K. That's exactly on the threshold. What does that tell you about sensitivity? Try varying customer count and Lifetime Value independently.

Solution

Base case: 400 × ($500 - $200) - $80K = $120K - $80K = $40K. This is right at the decision rule threshold, which means ANY downward move in any input flips it.

Customer count: At 399 customers, Profit = 399 × $300 - $80K = $119,700 - $80K = $39,700. Fails. The crossover is exactly 400 - zero margin.

Lifetime Value: At $499, Profit = 400 × ($499 - $200) - $80K = 400 × $299 - $80K = $119,600 - $80K = $39,600. Fails. Also zero margin.

Variable cost per customer: At $201, Profit = 400 × $299 - $80K = $119,600 - $80K = $39,600. Same result.

All three inputs are maximally sensitive because the base case sits exactly on the threshold. This is the worst position: your 'yes' depends on every single assumption being exactly right. No input has any room to be wrong.

The correct Operator response: either reduce the $80K Implementation Cost to create a cushion above the threshold, negotiate a higher expected customer count through better targeting, or raise the decision threshold so you only proceed with a genuine buffer. A base case that equals the threshold is not a green light - it's a coin flip.
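A four-line check makes the zero-margin result concrete (all numbers from the problem statement):

```python
# The campaign base case sits exactly on the $40K threshold,
# so a one-unit miss on any input flips the decision.
def profit(customers=400, ltv=500, var_cost=200, spend=80_000):
    return customers * (ltv - var_cost) - spend

assert profit() == 40_000              # exactly at the threshold
assert profit(customers=399) < 40_000  # one customer short: no-go
assert profit(ltv=499) < 40_000        # $1 less LTV: no-go
assert profit(var_cost=201) < 40_000   # $1 more cost: no-go
```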

hard

Your SaaS product has 100 customers paying $1,000/month each ($100K monthly Revenue). Monthly fixed costs are $30K. Variable cost is $200 per customer per month. Monthly Profit: $50K. You're considering a 10% price increase to $1,100/customer. Your base case assumes this will increase monthly Churn Rate from 5% to 7%. Model each scenario as a decaying customer base over 12 months. At what post-increase Churn Rate does the price hike become a net negative compared to the status quo?

Hint: Per-customer contribution changes: $1,000 - $200 = $800 (before) vs. $1,100 - $200 = $900 (after). Customer count decays each month by the Churn Rate. Use the geometric series: starting contribution × (1 - retention^12) / (1 - retention), where retention = 1 - churn rate. Compare cumulative Profit across both paths, not cumulative Revenue - the crossover depends on contribution per customer, not just price.

Solution

Status quo (5% monthly churn, $1,000/customer):

Per-customer contribution: $1,000 - $200 = $800/month.

Starting monthly contribution: 100 × $800 = $80,000.

12-month cumulative: $80K × (1 - 0.95^12) / 0.05. Since 0.95^12 ≈ 0.5404, this is $80K × (1 - 0.5404) / 0.05 = $80K × 9.19 = $735K.

Fixed costs: $30K × 12 = $360K.

12-month Profit: $735K - $360K = $375K.

Price increase at 7% churn ($1,100/customer):

Per-customer contribution: $1,100 - $200 = $900/month.

Starting monthly contribution: $90K.

12-month cumulative: $90K × (1 - 0.93^12) / 0.07. Since 0.93^12 ≈ 0.4186, this is $90K × (1 - 0.4186) / 0.07 = $90K × 8.31 = $748K.

Fixed costs: $360K.

12-month Profit: $748K - $360K = $388K. Better than status quo by $13K.

Find the crossover: Set price-increase contribution equal to status quo:

$90K × (1 - (1-c)^12) / c = $735K.

Solving: (1 - (1-c)^12) / c = 8.17.

At c = 7.0%: left side = 8.31. Above 8.17 - still profitable.

At c = 7.3%: left side ≈ 8.18. Barely above.

At c = 7.5%: left side ≈ 8.10. Below.

Crossover: approximately 7.3% monthly Churn Rate. You have about 0.3 percentage points of margin on the churn assumption - essentially zero room for error. If your churn estimate is off by half a percent, the price increase destroys value. This is a decision where a small Pricing pilot on one customer segment would give you high Value of Information before committing to a company-wide rollout.
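The crossover can also be found numerically. The function below implements the hint's geometric series; the bisection bounds (5% to 15%) are a choice for this search, not part of the problem:

```python
# 12-month cumulative contribution for a customer base decaying at a
# constant monthly churn rate (geometric series from the hint).
def contrib_12mo(per_customer, churn, customers=100):
    retention = 1 - churn
    return customers * per_customer * (1 - retention ** 12) / (1 - retention)

status_quo = contrib_12mo(800, 0.05)       # ~= $735K over 12 months

# Bisect for the churn rate where the $900/customer path breaks even.
# Fixed costs are identical on both paths, so they cancel out.
lo, hi = 0.05, 0.15
for _ in range(60):
    mid = (lo + hi) / 2
    if contrib_12mo(900, mid) > status_quo:
        lo = mid                           # still profitable: churn can rise
    else:
        hi = mid
churn_flip = (lo + hi) / 2                 # ~= 7.3% monthly churn
```

Comparing contribution rather than profit is valid here because the $30K/month fixed costs are the same on both paths.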

Connections

Sensitivity analysis is the natural next step after building a base case and setting a decision rule. It tells you whether your 'yes' is robust or fragile. From here, two paths matter most. Value of Information tells you how much it's worth to reduce uncertainty on a sensitive input - if research can narrow the range on Revenue from ±20% to ±5%, sensitivity analysis tells you whether that precision is worth the time. A decision tree lets you branch on the sensitive inputs directly, assigning probabilities to each scenario rather than just asking 'what if' one variable at a time.

Disclaimer: This content is for educational and informational purposes only and does not constitute financial, investment, tax, or legal advice. It is not a recommendation to buy, sell, or hold any security or financial product. You should consult a qualified financial advisor, tax professional, or attorney before making financial decisions. Past performance is not indicative of future results. The author is not a registered investment advisor, broker-dealer, or financial planner.