Business Finance

Scoring Model

Risk & Decision Science · Difficulty: ★★☆☆☆

Build the scoring model you use to decide which AI bets to make

Your CEO gives you $400K in annual Budget for AI initiatives. Your team pitches eight projects - automated pricing, demand forecasting, churn prediction, inventory optimization, and four more. Each pitch includes an Expected Value estimate and a timeline. You cannot fund them all. You need a system that turns eight messy comparisons into a ranked list you can defend in a Budget review. That system is a Scoring Model.

TL;DR:

A Scoring Model translates your Utility Function into a weighted rubric, scores competing options against it, and produces a ranked list so you can make Capital Allocation decisions systematically instead of by gut feel.

What It Is

A Scoring Model is a decision rule that does three things:

  1. Decomposes your Utility Function into measurable criteria (Expected Return, Execution Risk, Time Horizon, strategic fit)
  2. Weights each criterion to reflect how much it matters to your P&L goals
  3. Scores every competing option against those criteria and sums the weighted scores into a single number

The output is a ranked list. Option A scores 7.8, Option B scores 6.1, Option C scores 8.3 - now you know where to Allocate first.

You already know Expected Value gives you a probability-weighted dollar figure. You already know your Utility Function encodes what you actually value beyond dollars. The Scoring Model is the instrument that makes both operational - something you run every quarter when deciding where Budget goes.

Why Operators Care

Operators face a specific problem: you always have more plausible investments than Budget to fund them. This creates opportunity cost pressure - every dollar allocated to Project A is a dollar not allocated to Project B.

Without a Scoring Model, allocation decisions happen one of two ways:

  • Loudest voice wins. Whoever pitches most passionately gets funded. This optimizes for presentation skill, not P&L impact.
  • Recency bias. The last project you heard about feels most urgent. Your Allocation drifts toward whatever crossed your desk this morning.

A Scoring Model forces you to evaluate every option against the same criteria with the same weights. It makes your decision rule explicit and auditable. When your CFO asks why you funded the churn model over the pricing model, you can point to the scores instead of saying "it felt right."

This matters even more in PE-Backed environments where Capital Allocation is scrutinized quarterly. Your Scoring Model is how you demonstrate capital discipline.

How It Works

Step 1: Choose criteria from your Utility Function

Your Utility Function tells you what you value. Translate that into 4-6 measurable criteria. For an AI investment portfolio, common ones are:

| Criterion | What it captures |
| --- | --- |
| Expected Return (annualized) | The Expected Value of the project divided by Implementation Cost |
| Execution Risk | Probability the team can actually ship it (data availability, talent, technical feasibility) |
| Time to Value | How many months until the project hits the P&L |
| Strategic fit | Does it build a Competitive Advantage or Data Moat? |
| Revenue impact vs. Cost Reduction | Does it grow top-line Revenue or shrink Cost Structure? |

Step 2: Assign weights that sum to 1.0

Weights reflect your risk appetite and Time Horizon. An Operator focused on near-term EBITDA Optimization might weight like this:

  • Expected Return: 0.30
  • Execution Risk: 0.25
  • Time to Value: 0.25
  • Strategic fit: 0.10
  • Revenue vs. Cost Reduction: 0.10

An Operator building a Compounder with a longer Investment Horizon might put 0.25 on strategic fit and 0.10 on Time to Value - the inverse.

Your weights ARE your Utility Function made numeric. Two Operators with different weights will rank the same projects differently, and both can be rational.
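To make that concrete, here is a minimal sketch with a hypothetical project (scores invented for illustration; criteria abbreviated) showing how the two weight profiles above produce different totals from identical inputs:

```python
# Hypothetical project: strong strategic fit (9), slow Time to Value (3).
# Criteria: Expected Return, Execution Risk, Time to Value, Strategic fit, Rev vs Cost.
project = {"return": 6, "risk": 7, "ttv": 3, "fit": 9, "rev_vs_cost": 5}

near_term  = {"return": 0.30, "risk": 0.25, "ttv": 0.25, "fit": 0.10, "rev_vs_cost": 0.10}
compounder = {"return": 0.30, "risk": 0.25, "ttv": 0.10, "fit": 0.25, "rev_vs_cost": 0.10}

def total(weights, scores):
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1.0
    return sum(weights[c] * scores[c] for c in weights)

# Same project, same scores: 5.70 under the near-term weights, 6.60 under the
# compounder weights. Neither Operator is wrong; their Utility Functions differ.
```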

Step 3: Score each option 1-10 on every criterion

Use anchors to keep scores consistent:

  • 10: Best realistic outcome (e.g., 5x ROI, near-zero Execution Risk)
  • 7: Strong (e.g., 2-3x ROI, team has done this before)
  • 4: Mediocre (e.g., 1x ROI, significant unknowns)
  • 1: Poor (e.g., negative Expected Value, unproven technology)

Score each project before looking at the weighted totals. This prevents you from reverse-engineering scores to justify a decision you already made.

Step 4: Multiply and sum

For each project: Total Score = Σ (weight_i × score_i)

Rank by total score. Fund from the top until Budget runs out.
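The four steps reduce to a few lines of Python. This is a generic sketch, not a library: the criterion names and helper functions are placeholders, and the funding rule stops at the first project that no longer fits rather than skipping down to a cheaper, lower-ranked bet.

```python
# Sketch of Steps 1-4, using the near-term EBITDA weights from Step 2.
WEIGHTS = {"return": 0.30, "risk": 0.25, "ttv": 0.25, "fit": 0.10, "rev_vs_cost": 0.10}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # Step 2: weights sum to 1.0

def total_score(scores: dict) -> float:
    # Step 4: Total Score = sum(weight_i * score_i)
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

def fund(projects: dict, costs: dict, budget: float) -> list:
    # Rank by total score, then fund from the top until Budget runs out.
    ranked = sorted(projects, key=lambda p: total_score(projects[p]), reverse=True)
    funded = []
    for p in ranked:
        if costs[p] > budget:
            break  # hold the remainder rather than skip down the list
        funded.append(p)
        budget -= costs[p]
    return funded
```

Swap in your own criteria and weights; the structure stays the same every cycle.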

When to Use It

Use a Scoring Model when:

  • You have 3+ competing options for the same pool of Budget or capacity. Two options you can compare directly. Eight options need a system.
  • The decision involves multiple criteria that trade off against each other. If only Expected Return mattered, just rank by Expected Value and you are done.
  • You need to defend the decision to someone else - your CFO, your board, your PE sponsors. The model is the audit trail.
  • You will repeat this decision quarterly or annually. The upfront cost of building the model pays off when you reuse it every cycle.

Skip it when:

  • One option clearly dominates on every criterion (Dominant Strategy - no model needed)
  • The decision is trivially small relative to your P&L
  • You lack data to score meaningfully - in that case, invest in the Value of Information first

Worked Examples

Scoring four AI initiatives for a $400K annual budget

You run Operations at a PE-Backed retailer. You have $400K in Budget for AI Capital Investment this year. Four projects survived initial Triage:

  • A: Demand Forecasting - Expected Value $280K/yr in Cost Reduction from better Inventory Control. Implementation Cost $180K. Team has done similar work. Ships in 4 months.
  • B: Dynamic Pricing - Expected Value $500K/yr in incremental Revenue. Implementation Cost $220K. Requires new data pipelines. Ships in 9 months.
  • C: Churn Prediction - Expected Value $150K/yr from reduced Churn. Implementation Cost $90K. Well-understood problem. Ships in 2 months.
  • D: AI-Powered Quality Control - Expected Value $120K/yr in defect rate reduction. Implementation Cost $160K. Novel approach, high Execution Risk. Ships in 12 months.
  1. Define weights (this Operator is PE-Backed, so near-term EBITDA matters most): Expected Return = 0.30, Execution Risk = 0.25, Time to Value = 0.25, Strategic fit = 0.10, Revenue vs. Cost Reduction = 0.10

  2. Score each project:

| Criterion (weight) | A: Demand | B: Pricing | C: Churn | D: QC |
| --- | --- | --- | --- | --- |
| Expected Return (0.30) | 7 (1.6x) | 9 (2.3x) | 7 (1.7x) | 4 (0.75x) |
| Execution Risk (0.25) | 8 (done before) | 5 (new pipes) | 9 (well-known) | 3 (novel) |
| Time to Value (0.25) | 7 (4 mo) | 4 (9 mo) | 9 (2 mo) | 2 (12 mo) |
| Strategic fit (0.10) | 6 | 8 | 5 | 7 |
| Revenue vs Cost (0.10) | 5 (cost) | 9 (revenue) | 6 (retention) | 4 (cost) |
  3. Calculate weighted totals:

    • A: (0.30×7)+(0.25×8)+(0.25×7)+(0.10×6)+(0.10×5) = 2.10+2.00+1.75+0.60+0.50 = 6.95
    • B: (0.30×9)+(0.25×5)+(0.25×4)+(0.10×8)+(0.10×9) = 2.70+1.25+1.00+0.80+0.90 = 6.65
    • C: (0.30×7)+(0.25×9)+(0.25×9)+(0.10×5)+(0.10×6) = 2.10+2.25+2.25+0.50+0.60 = 7.70
    • D: (0.30×4)+(0.25×3)+(0.25×2)+(0.10×7)+(0.10×4) = 1.20+0.75+0.50+0.70+0.40 = 3.55
  4. Rank and allocate: C (7.70, $90K) + A (6.95, $180K) + B (6.65, $220K) = $490K. That exceeds $400K. Fund C + A ($270K), then you have $130K left - not enough for B ($220K). You either negotiate B's scope down or hold the $130K for mid-year opportunities.

Insight: The Scoring Model flipped the naive ranking. Project B had the highest raw Expected Value ($500K/yr) but scored second because its Execution Risk and Time to Value dragged it down. Project C - the smallest dollar opportunity - ranked first because it was fast, low-risk, and cheap. The model surfaced that a quick win plus a solid mid-tier bet ($270K total) beats swinging for the fence on a single $220K project you might not ship on time.
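The totals above are easy to reproduce; a short sketch (weights and scores exactly as in the table) doubles as a check on the arithmetic:

```python
# Weights: Expected Return, Execution Risk, Time to Value, Strategic fit, Rev vs Cost.
weights = [0.30, 0.25, 0.25, 0.10, 0.10]
scores = {
    "A: Demand":  [7, 8, 7, 6, 5],
    "B: Pricing": [9, 5, 4, 8, 9],
    "C: Churn":   [7, 9, 9, 5, 6],
    "D: QC":      [4, 3, 2, 7, 4],
}
totals = {p: round(sum(w * s for w, s in zip(weights, v)), 2)
          for p, v in scores.items()}
# A=6.95, B=6.65, C=7.70, D=3.55 -- C ranks first, as in the walkthrough.
```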

Sensitivity Analysis - what if your weights are wrong?

Using the same four projects, you wonder: does the ranking change if you shift weight from Execution Risk toward Expected Return? This tests whether your conclusion is robust or fragile.

  1. Reweight: Move 0.10 from Execution Risk to Expected Return. New weights: Expected Return = 0.40, Execution Risk = 0.15, Time to Value = 0.25, Strategic fit = 0.10, Revenue vs. Cost = 0.10

  2. Recalculate:

    • A: (0.40×7)+(0.15×8)+(0.25×7)+(0.10×6)+(0.10×5) = 2.80+1.20+1.75+0.60+0.50 = 6.85
    • B: (0.40×9)+(0.15×5)+(0.25×4)+(0.10×8)+(0.10×9) = 3.60+0.75+1.00+0.80+0.90 = 7.05
    • C: (0.40×7)+(0.15×9)+(0.25×9)+(0.10×5)+(0.10×6) = 2.80+1.35+2.25+0.50+0.60 = 7.50
    • D: (0.40×4)+(0.15×3)+(0.25×2)+(0.10×7)+(0.10×4) = 1.60+0.45+0.50+0.70+0.40 = 3.65
  3. Compare rankings: C stays #1 (7.50). But B jumps to #2 (7.05) and A drops to #3 (6.85). Now B+C = $310K, which fits your $400K Budget comfortably. The Sensitivity Analysis reveals that if you are more risk-tolerant than you thought, the portfolio shifts toward the bigger Revenue play.

Insight: Run Sensitivity Analysis on your weights before committing Budget. If the top 2 stay the same across reasonable weight shifts, your decision is robust. If the ranking flips, you need to get more precise about what you actually value - go back to your Utility Function and pressure-test it.
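This kind of sensitivity check is mechanical to automate. A minimal sketch using the same scores as the example, where the perturbation is the single 0.10 shift described above:

```python
# Scores per project: Expected Return, Execution Risk, Time to Value, fit, Rev vs Cost.
scores = {
    "A": [7, 8, 7, 6, 5],
    "B": [9, 5, 4, 8, 9],
    "C": [7, 9, 9, 5, 6],
    "D": [4, 3, 2, 7, 4],
}

def totals(weights):
    return {p: round(sum(w * s for w, s in zip(weights, v)), 2)
            for p, v in scores.items()}

def ranking(weights):
    t = totals(weights)
    return sorted(t, key=t.get, reverse=True)

base    = ranking([0.30, 0.25, 0.25, 0.10, 0.10])  # C, A, B, D
shifted = ranking([0.40, 0.15, 0.25, 0.10, 0.10])  # C, B, A, D -- A and B swap
```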

Key Takeaways

  • A Scoring Model is your Utility Function made operational - it turns subjective preferences into a repeatable, auditable decision rule for Capital Allocation.

  • Weights matter more than scores. Two Operators with different weights will rationally fund different projects from the same list. Know why your weights are what they are.

  • Always run Sensitivity Analysis on your weights before committing Budget. If a 0.10 shift in one weight flips your ranking, the model is telling you to get sharper on your actual risk appetite and Time Horizon.

Common Mistakes

  • Equal-weighting all criteria. If everything matters equally, nothing matters more - and your Scoring Model degenerates into a simple average that obscures the tradeoffs your Utility Function is supposed to capture. If you catch yourself setting all weights to 0.20, you have not done the hard work of deciding what you value.

  • Scoring after you already know what you want to fund. The whole point is to score before you see totals. If you adjust a score from 6 to 8 because the total 'feels too low,' you have replaced a systematic decision rule with motivated reasoning wearing a spreadsheet as a costume. Score blind, then trust the output - or change the weights explicitly and re-run.

Practice

Difficulty: medium

You manage a software team with capacity for 2 new projects this quarter. Score these three options using your own weights:

  • Automated onboarding flow: Expected Value $80K/yr in Cost Reduction (less manual support), Implementation Cost $40K, ships in 6 weeks, low Execution Risk.
  • Upsell recommendation engine: Expected Value $200K/yr in Expansion Revenue, Implementation Cost $120K, ships in 4 months, moderate Execution Risk (needs ML pipeline).
  • Internal reporting dashboard: Expected Value $30K/yr (time savings), Implementation Cost $25K, ships in 3 weeks, near-zero Execution Risk.

Define 4 criteria, assign weights, score each project, and calculate totals. Which two do you fund?

Hint: Start by asking: is your P&L pressure on Revenue growth or Cost Reduction right now? That should drive whether Expected Return or Time to Value gets the highest weight.

Solution

One valid approach: Criteria = Expected Return (0.35), Execution Risk (0.20), Time to Value (0.25), Strategic fit (0.20).

Scores: Onboarding = (0.35×7)+(0.20×9)+(0.25×8)+(0.20×5) = 2.45+1.80+2.00+1.00 = 7.25. Upsell = (0.35×8)+(0.20×5)+(0.25×5)+(0.20×8) = 2.80+1.00+1.25+1.60 = 6.65. Dashboard = (0.35×5)+(0.20×10)+(0.25×9)+(0.20×3) = 1.75+2.00+2.25+0.60 = 6.60.

Fund Onboarding (#1) + Upsell (#2). Total Implementation Cost = $160K. The dashboard scores last despite being cheap and fast because its Expected Return is low and it builds no Competitive Advantage. If you weighted Time to Value at 0.40, the dashboard would jump to #1 - which is why your weights encode your actual priorities.
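The solution's arithmetic can be checked the same way (weights and scores as given above):

```python
# Weights: Expected Return, Execution Risk, Time to Value, Strategic fit.
weights = [0.35, 0.20, 0.25, 0.20]
scores = {
    "Onboarding": [7, 9, 8, 5],
    "Upsell":     [8, 5, 5, 8],
    "Dashboard":  [5, 10, 9, 3],
}
totals = {p: round(sum(w * s for w, s in zip(weights, v)), 2)
          for p, v in scores.items()}
# Onboarding 7.25, Upsell 6.65, Dashboard 6.60 -- fund the top two.
```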

Difficulty: easy

Your Scoring Model ranked Project X at 7.2 and Project Y at 7.0. Your VP insists Project Y is the better bet. Instead of arguing, what should you do with the model?

Hint: The gap between 7.2 and 7.0 is small. Think about what Sensitivity Analysis would reveal.

Solution

Run Sensitivity Analysis. A 0.2-point gap is within noise. Shift each weight by +/- 0.05 and see if Y ever overtakes X. If a small, defensible weight change flips the ranking, the model is telling you these two projects are effectively tied - and your VP's domain knowledge becomes the tiebreaker. If X wins across all reasonable weight shifts, show the VP the analysis and explain that the model is robust. Either way, the Scoring Model gives you a structured way to resolve the disagreement instead of a political one.
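One way to operationalize that advice. The scores for X and Y below are hypothetical, chosen only to produce a small gap like the one in the question; the perturbation loop is the generic pattern:

```python
weights = {"return": 0.30, "risk": 0.25, "ttv": 0.25, "fit": 0.20}
X = {"return": 9, "risk": 6, "ttv": 7, "fit": 7}  # base total 7.35
Y = {"return": 7, "risk": 8, "ttv": 7, "fit": 7}  # base total 7.25

def total(scores, w):
    return sum(w[c] * scores[c] for c in w)

def shifts(w, delta=0.05):
    # Every way to move `delta` of weight from one criterion to another.
    for a in w:
        for b in w:
            if a != b:
                w2 = dict(w)
                w2[a] -= delta
                w2[b] += delta
                yield w2

flips = sum(total(Y, w2) > total(X, w2) for w2 in shifts(weights))
# flips > 0 means a small, defensible weight change reverses the ranking:
# treat the projects as effectively tied and let domain judgment break the tie.
```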

Connections

A Scoring Model sits at the junction of Expected Value and Utility Function - it takes the dollar estimates from Expected Value calculations and filters them through the preference weights your Utility Function defines. Without Expected Value, you have no inputs to score. Without a Utility Function, you have no basis for setting weights. Downstream, your Scoring Model feeds directly into Capital Allocation and Capital Budgeting decisions - it is the mechanism by which you turn a pile of proposals into a funded Portfolio. It also connects to Sensitivity Analysis, which stress-tests whether your ranking is robust to uncertainty in the weights themselves. When your Scoring Model produces tight rankings (projects within 0.5 points of each other), that is a signal to invest in Value of Information - gather more data before committing Budget rather than trusting a razor-thin margin.

Disclaimer: This content is for educational and informational purposes only and does not constitute financial, investment, tax, or legal advice. It is not a recommendation to buy, sell, or hold any security or financial product. You should consult a qualified financial advisor, tax professional, or attorney before making financial decisions. Past performance is not indicative of future results. The author is not a registered investment advisor, broker-dealer, or financial planner.