Conjoint Analysis - Design experiments that directly probe your preferences with maximum information per question.
Your SaaS product has 200 customers at $49/month. You have Budget for two features next quarter but five candidates on the list. You send a survey asking customers to rate each feature's importance on a 1-10 scale. Every feature comes back between 7.2 and 8.4. You now know exactly nothing about what to build.
Conjoint Analysis reveals the weights inside a Utility Function by forcing trade-offs between options - not by asking people to rate things independently. The output is a set of part-worth utilities (the value each level of each dimension contributes) and derived importance weights. It works on customers, teams, and - with honest labeling - yourself.
Conjoint analysis is an experimental design that reveals the hidden weights inside a Utility Function. Instead of asking "how important is X?" (everyone says "very"), you present forced choices between options that differ on multiple dimensions simultaneously.
The core mechanic: when someone picks Option A (cheaper, fewer integrations) over Option B (pricier, more integrations), they are revealing the relative weight of price vs. integrations in their Utility Function. Repeat this across enough well-designed choice sets and a regression model can estimate the utility contribution of every level of every dimension - isolating each effect even when attributes are correlated across choice sets.
The reason rating scales fail is that nothing is at stake. When a customer rates "advanced reporting" at 8/10 and "API access" at 7.5/10, you cannot tell if reporting is slightly preferred or overwhelmingly preferred. Conjoint forces the respondent to give something up to get something else - which is how every real P&L decision works anyway.
This is a direct application of Value of Information: the experiment costs very little relative to the Budget you are about to commit, and it sharply reduces the probability of building the wrong thing. Getting priority right costs you a survey. Getting priority wrong costs you a quarter of capacity pointed at a feature that does not move Revenue.
Step 1: Pick 3-5 dimensions that matter for the decision. More than 5 creates noise - respondents cannot hold that many variables simultaneously.
Step 2: Define 2-4 levels for each dimension. For a Pricing decision, the dimensions might be: price ($29 / $49 / $79), support level (email / chat / phone), storage (10GB / 50GB / 100GB), and integrations (5 / 15 / unlimited).
Step 3: Generate choice sets. Each question shows two or three hypothetical configurations that differ on at least two dimensions. You do not need every possible combination - a fractional factorial design covers the space with 8-12 well-chosen choice sets. Conjoint tools (see next section) generate these designs for you.
Step 4: Present the choices. To customers for product decisions. To your leadership team for resource allocation. Delivery mechanisms: email survey, in-app prompt, or structured interview.
Step 5: Estimate part-worth utilities via regression. This is the step most informal descriptions get wrong. You do not count how often each attribute level "wins." Naive tallying confounds attribute effects and produces misleading weights when levels are correlated across choice sets. The actual method is a logistic regression where each attribute level is a predictor variable and the observed choice is the outcome. The model decomposes every choice into the contribution of each attribute level, controlling for what else was present in that choice set. This is what makes conjoint rigorous rather than a straw poll. You do not run this regression by hand - the tools in the next section handle it.
Step 6: Read the two layers of output. The model produces (a) part-worth utilities - a numerical score for each level of each dimension - and (b) importance weights derived from the range of part-worths within each dimension, normalized to sum to 1.0.
The part-worth utilities are the more actionable layer. Example: price might show a linear decline ($29: +1.4, $49: +0.2, $79: -1.6) while integrations shows a cliff (5: -0.9, 15: +0.7, unlimited: +0.9 - almost no gain from 15 to unlimited). Two dimensions with identical importance weights can have completely different level profiles. If you only look at importance weights, you lose the information that drives Pricing tier design, feature bundling, and Allocation decisions.
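The estimation in Steps 5 and 6 can be sketched end to end. A minimal simulation, using the illustrative part-worths above as assumed "true" values and scikit-learn's logistic regression in place of a full multinomial logit - real conjoint tools use more sophisticated estimators, but the mechanics are the same:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Assumed true part-worths (the illustrative numbers from the text):
# price $29 / $49 / $79, then integrations 5 / 15 / unlimited
levels = ["price_29", "price_49", "price_79", "int_5", "int_15", "int_unl"]
true_pw = np.array([1.4, 0.2, -1.6, -0.9, 0.7, 0.9])

def encode(price_idx, int_idx):
    """One-hot encode a product configuration over all attribute levels."""
    x = np.zeros(6)
    x[price_idx] = 1.0
    x[3 + int_idx] = 1.0
    return x

# Simulate 2,000 forced choices between two random configurations.
# P(choose A over B) follows a logit on the utility difference.
X, y = [], []
for _ in range(2000):
    a = encode(rng.integers(3), rng.integers(3))
    b = encode(rng.integers(3), rng.integers(3))
    p_choose_a = 1.0 / (1.0 + np.exp(-(a - b) @ true_pw))
    X.append(a - b)                      # predictor: level differences
    y.append(rng.random() < p_choose_a)  # outcome: observed choice

model = LogisticRegression(fit_intercept=False).fit(np.array(X), np.array(y))
pw = model.coef_.ravel()  # recovered part-worth utilities

# Importance weight per dimension = range of its part-worths, normalized
ranges = np.array([pw[:3].max() - pw[:3].min(), pw[3:].max() - pw[3:].min()])
weights = ranges / ranges.sum()
print(dict(zip(levels, pw.round(2))), weights.round(2))
```

The recovered coefficients reproduce the shape of the assumed part-worths - the linear price decline and the integrations cliff - which is exactly the two-layer output described above.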
Tools by scale: Conjointly's free tier for survey design and estimation, Sawtooth for enterprise-scale studies, or R's logitr / Python's pylogit if you want to run the estimation yourself.
Sample size rule of thumb: You need roughly 200-300 observed choices (not respondents) for stable aggregate-level estimates. With 8 choice sets per respondent, that is 25-40 respondents at minimum. More respondents let you detect segments via hierarchical Bayes.
Response rates: For SaaS email surveys, expect 10-30%. A 40% response rate is a good outcome, not a baseline expectation - do not design your outreach sample around it. In-app prompts typically run higher. If you need 40 respondents and expect a 20% response rate, email 200 people.
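The outreach arithmetic above is worth wiring into a small helper. A sketch - the default numbers are assumptions taken from the rules of thumb in this section, not universal constants:

```python
import math

def outreach_size(choices_needed=320, sets_per_respondent=8, response_rate=0.20):
    """How many people to contact to hit a target number of observed choices."""
    respondents = math.ceil(choices_needed / sets_per_respondent)
    emails = math.ceil(respondents / response_rate)
    return respondents, emails

# 320 observed choices at 8 sets each requires 40 respondents;
# at a 20% response rate, that means emailing 200 people.
print(outreach_size())  # (40, 200)
```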
Cost and time: A Conjointly study on the free tier costs nothing but your time to design choice sets and interpret the output - half a day for a first-time user, a few hours once you have done it before. Sawtooth licenses start around $20K/year, which is overkill unless conjoint is a regular part of your Operations.
Before major Pricing decisions. Which features justify a higher tier? Which bundle configuration delivers the most marginal value per Cost Per Unit of delivery? Conjoint on 50-200 customers gives you part-worth utilities you can plug directly into tier design.
Before resource allocation across competing initiatives. You have three possible Cost Reduction projects and Budget for one. Conjoint on your operations team reveals which cost dimension (Labor, material cost, tooling) they believe has the highest marginal value.
When customer segmentation reveals groups with different priorities. Running conjoint per segment lets you build differentiation into your tiers rather than guessing what separates a $49 customer from a $99 customer.
When Sensitivity Analysis shows the outcome hinges on preference weights you have not measured. If your decision tree flips depending on whether customers weight price at 0.3 or 0.5, that is a signal to measure the number instead of guessing it.
When you personally face a decision with 3+ dimensions. A structured preference exercise - six forced choices about job offers, capital investments, or rent-vs-buy decisions - is not statistically valid conjoint. You are one respondent generating six observations. But it is a disciplined way to surface your own preference pattern and cut through weeks of circular pros-and-cons thinking. Use the directional output, not precise weights.
Your SaaS product has 200 customers at $49/month ($117,600 ARR). You can ship 2 of 4 features next quarter. Engineering estimates roughly equal Implementation Cost for each. Features: (A) Slack integration, (B) advanced reporting dashboard, (C) API access, (D) priority support chat.
Set up a study in Conjointly's free tier. Define four dimensions (features A-D, each present or absent) and one price dimension ($49 / $69 / $89). The tool generates 8 balanced choice sets, each showing two hypothetical product packages. Example: 'Would you prefer Package 1 (Slack + API, $69/mo) or Package 2 (Reporting + Support, $49/mo)?'
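The design step can be sketched naively in a few lines. This is a random-sampling illustration only - real tools like Conjointly generate D-efficient balanced designs rather than sampling pairs at random, and the feature names here are the hypothetical ones from this example:

```python
from itertools import product
import random

random.seed(1)
features = ["slack", "reporting", "api", "support"]
prices = [49, 69, 89]

# Full factorial: every feature present/absent, crossed with each price
configs = [dict(zip(features, bits), price=p)
           for bits in product([0, 1], repeat=4) for p in prices]  # 48 configs

def differs(a, b):
    """Require the two packages to differ on at least two dimensions."""
    return sum(a[k] != b[k] for k in a) >= 2

# Naive sketch: draw 8 qualifying pairs to serve as choice sets
choice_sets = []
while len(choice_sets) < 8:
    a, b = random.sample(configs, 2)
    if differs(a, b):
        choice_sets.append((a, b))
```

Each element of `choice_sets` is one survey question: "Would you prefer Package 1 or Package 2?"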
Email the survey link to all 200 customers. 52 respond (26% response rate - typical for SaaS email surveys; always plan outreach assuming the low end of the 10-30% range). Each respondent completes 8 choice sets, giving you 416 observed choices - well above the 200-300 minimum for stable aggregate estimates.
Conjointly runs a multinomial logit model and produces part-worth utilities. Slack integration: +1.2 utils when present vs. absent. API access: +0.8 utils. Reporting dashboard: -0.1 utils (customers are roughly indifferent). Priority support: -0.4 utils when bundled with a price increase (actively avoided).
The derived importance weights: Slack (0.35), API access (0.28), reporting (0.21), priority support (0.16). But the part-worth utilities tell a richer story. Reporting is not merely 'less valued' - customers genuinely do not care whether it is there. Priority support hurts when it raises the price. These distinctions are invisible in importance weights alone.
Ship Slack integration and API access. If 25% of your base upgrades from $49 to $69 because of these features, that is 50 customers x $20/mo x 12 = $12,000 incremental ARR. The conjoint cost you one email and a few hours of setup in a free tool.
Insight: A rating survey would have shown all four features at 7-9 out of 10. The conjoint revealed a 2:1 gap between the most and least valued features - and that priority support actively destroys value when bundled with a price increase. That second insight only surfaces from part-worth utilities, not importance weights.
You are evaluating two offers. You have identified four dimensions from your personal Utility Function: base salary, Equity Compensation, scope of P&L ownership, and Time Horizon to a Liquidity event. Offer X: $180K base, 0.5% equity, full P&L ownership, 3-year horizon. Offer Y: $215K base, 0.1% equity, team lead (no P&L), 1-year horizon.
Important caveat: This is not a statistically valid conjoint. You are one respondent generating six data points. Treat the output as structured intuition that surfaces your own preference pattern - not as measured utility weights.
Create 6 forced-choice pairs that remix the dimension levels. Example: 'Would you prefer $180K / 0.5% equity / P&L ownership / 3-year horizon OR $215K / 0.1% equity / no P&L / 1-year horizon?' Then vary: '$195K / 0.3% / P&L / 1-year OR $215K / 0.5% / no P&L / 3-year?'
Answer honestly and quickly. The gut reaction IS the data. Do not deliberate - this exercise surfaces revealed preference, not calculated preference.
Tally the directional pattern. You chose the lower-salary, higher-scope option in 5 of 6 cases where P&L ownership appeared. You chose the longer Time Horizon in 4 of 6 cases where higher equity appeared alongside it.
Read the pattern as a rank order, not precise weights. Scope of role dominated your choices. Equity Compensation mattered next. Base salary and Time Horizon rarely swung the decision. Resist the temptation to compute weights to two decimal places - six observations do not support that precision.
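The tally itself is trivial to make explicit. A sketch with a hypothetical choice log matching the pattern above - for each forced choice, record which dimensions the chosen option was stronger on, then rank by count:

```python
from collections import Counter

# Hypothetical log of six forced choices: for each, the dimensions
# the CHOSEN option was stronger on (i.e., the trade you accepted)
chosen_stronger_on = [
    ["pnl", "equity"],
    ["pnl"],
    ["pnl", "horizon"],
    ["pnl", "equity"],
    ["salary"],
    ["pnl", "equity", "horizon"],
]

tally = Counter(d for dims in chosen_stronger_on for d in dims)
# Rank order only - six observations do not support numeric weights
ranking = [dim for dim, _ in tally.most_common()]
print(ranking)  # ['pnl', 'equity', 'horizon', 'salary']
```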
Your preference pattern is telling you something your spreadsheet did not: you value the ability to drive P&L outcomes far more than guaranteed cash. Offer X wins despite being $35K/year lower in base salary.
Insight: When you cannot decide between two options, it is usually because you have not surfaced your own weights. Six forced choices in ten minutes will not give you a valid Utility Function - but they surface a directional pattern that weeks of pros-and-cons lists cannot. Use the direction, not the decimals.
Conjoint analysis reveals preference weights by forcing trade-offs - rating scales produce useless data because nothing is at stake when every option gets an 8 out of 10.
The output has two layers: part-worth utilities (the value contribution of each level of each dimension) and derived importance weights (which dimensions matter most overall). The part-worth layer is more actionable because it shows where within each dimension the value lives - two dimensions with the same importance weight can have completely different level profiles.
For 30+ respondents, use a real tool (Conjointly free tier, Sawtooth, R or Python packages) that runs the regression properly. For smaller groups or personal decisions, a structured preference exercise gives useful directional input - but label it as a decision heuristic, not a measured Utility Function.
Using independent rating scales instead of forced choices. When nothing is at stake, respondents rate everything as important. You get a flat distribution that tells you nothing about relative priority - which is the only thing that matters for resource allocation.
Counting wins instead of running a regression. Tallying 'Feature A won 71% of the time it appeared' confounds attribute effects when levels are correlated across choice sets. A logistic regression isolates each dimension's contribution by controlling for what else was present. Use a tool that does this - the free tier of Conjointly handles it automatically.
Reporting only importance weights and ignoring part-worth utilities. Two dimensions with identical importance weights can have completely different level profiles - one might decline linearly in price while the other has a cliff between two levels. The level profiles drive Pricing tier design and feature bundling. Importance weights alone are not enough.
Including too many dimensions (more than 5) or too many levels per dimension. Respondents experience cognitive overload and their choices become inconsistent. Three to four dimensions with two to three levels each is the sweet spot for most operator decisions.
Treating a personal preference exercise (one respondent, six observations) as statistically valid conjoint. It is a useful decision heuristic - but computing weights to two decimal places implies precision that does not exist with six data points. Use the directional pattern, not the exact numbers.
Your team is debating three Cost Reduction initiatives: (A) automate invoice processing (saves $4,200/month, 8 weeks to build), (B) renegotiate a vendor contract (saves $2,800/month, 2 weeks of effort), (C) reduce cloud infrastructure waste (saves $3,500/month, 5 weeks to build). Design three conjoint-style forced-choice questions that would reveal whether your CFO weights dollar savings, speed of implementation, or disruption to current Operations most heavily.
Hint: Each question should present two of the three initiatives but frame them so the CFO must sacrifice one dimension to gain another. For example, pair the highest-savings option against the fastest option.
Question 1: 'Would you prefer Initiative A ($4,200/mo savings, 8-week timeline) or Initiative B ($2,800/mo savings, 2-week timeline)?' This forces a trade-off between dollar savings and speed. Question 2: 'Would you prefer Initiative C ($3,500/mo savings, moderate disruption to DevOps team) or Initiative B ($2,800/mo savings, zero disruption - just a contract negotiation)?' This forces a trade-off between savings and operational disruption. Question 3: 'Would you prefer Initiative A ($4,200/mo savings, significant build effort) or Initiative C ($3,500/mo savings, moderate build effort)?' This tests whether the CFO is sensitive to magnitude of disruption. If the CFO picks B in Q1 and Q2, speed and low disruption dominate - meaning you should lead with the vendor renegotiation regardless of its lower dollar value. Note: with one respondent and three questions, this is a structured preference exercise, not a conjoint. Read the pattern directionally.
You run a conjoint survey with 100 customers on four Pricing dimensions. The part-worth utilities show: feature breadth has a range of 2.8 utils across levels, price has a range of 2.1 utils, support level has a range of 0.9 utils, and brand identity has a range of 0.6 utils. The derived importance weights: feature breadth (0.44), price (0.33), support level (0.14), brand identity (0.09). Your current Pricing strategy is built around premium support as the key differentiation between your $49 and $89 tiers. Given these results, propose a revised tier structure and estimate the Revenue impact if 20% of your $49 customers upgrade.
Hint: Support level carries only 0.14 of the total utility weight - your current strategy optimizes for a low-weight dimension. The biggest lever is feature breadth at 0.44. Restructure tiers so the premium tier gates features, not support.
Current state: 100 customers, 70 at $49 (support: email, all features), 30 at $89 (support: priority chat, all features). Current ARR: (70 x $49 x 12) + (30 x $89 x 12) = $41,160 + $32,040 = $73,200. The problem: both tiers have identical features, so the 0.44-weight dimension creates zero incentive to upgrade. The part-worth utilities reinforce this: full feature breadth is worth +1.4 utils over the basic set, while priority support is only +0.45 utils over email - a 3:1 ratio in utility contribution. Revised structure: $49 tier gets core features + email support. $89 tier gets full feature breadth (advanced reporting, API, integrations) + email support. Support upgrade available as a $15/mo add-on for either tier. Now the upgrade decision is driven by the 0.44-weight dimension (features) instead of the 0.14-weight dimension (support). If 20% of the 70 lower-tier customers upgrade: 14 customers x ($89 - $49) x 12 = $6,720 incremental ARR, a 9.2% Revenue increase from restructuring alone - no new customers required.
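The revenue arithmetic above reduces to a few lines, with the customer counts and prices from this scenario:

```python
def arr(customers, monthly_price):
    """Annual recurring revenue for a cohort at a given monthly price."""
    return customers * monthly_price * 12

current = arr(70, 49) + arr(30, 89)       # 41,160 + 32,040 = 73,200
upgraders = round(70 * 0.20)              # 14 customers move up a tier
incremental = upgraders * (89 - 49) * 12  # 14 x $40 x 12 = 6,720
uplift = incremental / current            # ~9.2% revenue increase
print(current, incremental, round(uplift * 100, 1))
```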
Your conjoint survey on 150 customers reveals two distinct segments. Segment 1 (90 customers): importance weight 0.55 on integrations, 0.15 on price. Part-worth utilities show integrations has a cliff between 5 and 15 (most of the utility gain happens there). Segment 2 (60 customers): importance weight 0.10 on integrations, 0.50 on price. Part-worth utilities show price declines linearly. Design a two-tier Pricing structure that captures more total Revenue than a single $59/month plan. Calculate the Expected Value of your structure assuming a 70% Close Rate for each segment choosing its preferred tier.
Hint: Segment 1 does not care about price and wants integrations above the cliff point (15+). Segment 2 is price-sensitive and does not care about integrations. Set the premium tier high enough to capture Segment 1's surplus without losing Segment 2.
Single-tier baseline: 150 customers x $59/mo x 12 = $106,200 ARR (assuming all 150 choose the plan). Two-tier design: Basic tier at $39/mo (5 integrations - below the cliff, core features) targets Segment 2. Pro tier at $79/mo (unlimited integrations - above the cliff, full features) targets Segment 1. The part-worth cliff insight drives the design: Segment 1 gets a large utility jump from 5 to 15+ integrations, making $79 feel like strong value. Segment 2 barely notices integrations, so the stripped-down $39 tier matches their Utility Function. Expected Value calculation at 70% Close Rate: Segment 1 on Pro: 90 x 0.70 = 63 customers x $79 x 12 = $59,724. Segment 2 on Basic: 60 x 0.70 = 42 customers x $39 x 12 = $19,656. Total expected ARR: $79,380. Remaining 45 unconverted customers: assume 50% choose Basic as a fallback = 22.5 x $39 x 12 = $10,530. Adjusted total: $89,910. This looks lower than the single-tier baseline - but the single-tier at 70% Close Rate would be 105 x $59 x 12 = $74,340. The two-tier structure beats that by $15,570 (21% more Revenue). The key insight: tiered Pricing only wins when you have real conjoint data proving segments exist and showing where the part-worth cliffs are. Without it, you are splitting your market on gut feel.
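The expected-value comparison above can be checked directly, using the segment sizes, prices, and fallback assumption from this scenario:

```python
CLOSE = 0.70          # assumed close rate per segment
SEG1, SEG2 = 90, 60   # integration-driven vs. price-sensitive customers

def arr(customers, monthly_price):
    """Annual recurring revenue (customers may be fractional in an EV calc)."""
    return customers * monthly_price * 12

# Single $59 plan at the same 70% close rate
single = arr((SEG1 + SEG2) * CLOSE, 59)                    # ~74,340

# Two tiers: Pro $79 for Segment 1, Basic $39 for Segment 2,
# plus half of the 45 unconverted customers falling back to Basic
two_tier = (arr(SEG1 * CLOSE, 79)                          # ~59,724
            + arr(SEG2 * CLOSE, 39)                        # ~19,656
            + arr((SEG1 + SEG2) * (1 - CLOSE) * 0.5, 39))  # ~10,530

print(round(single), round(two_tier), round(two_tier - single))
```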
Conjoint Analysis is the measurement instrument for the Utility Function you learned earlier. Where that lesson taught you that people weight outcomes differently, conjoint gives you the method to discover those weights empirically rather than guessing. The output feeds directly into Expected Value calculations (now you have real part-worth utilities instead of assumed weights), Sensitivity Analysis (you know which variables to stress-test because you have measured which ones your customers weight heavily), and resource allocation decisions across your P&L. Combined with customer segmentation, conjoint becomes a Pricing instrument - the part-worth utilities per segment tell you exactly where to draw tier boundaries and what to gate behind each price point. It connects to Value of Information: the cost of running a conjoint (a free tool, a survey, a few hours of analysis) is almost always trivial compared to the Budget you are about to commit based on its results. The distinction between a full conjoint (30+ respondents, regression-based estimation, quantitative weights) and a structured preference exercise (small group or personal use, directional signal only) maps directly to your Risk Tolerance - the higher the stakes of the Allocation decision, the more you should invest in the rigor of the measurement.
Disclaimer: This content is for educational and informational purposes only and does not constitute financial, investment, tax, or legal advice. It is not a recommendation to buy, sell, or hold any security or financial product. You should consult a qualified financial advisor, tax professional, or attorney before making financial decisions. Past performance is not indicative of future results. The author is not a registered investment advisor, broker-dealer, or financial planner.