Mechanism Design

Game Theory · Difficulty: ████ · Depth: 8 · Unlocks: 3

Designing games to achieve desired outcomes. Incentive compatibility.


Core Concepts

  • Private type (agent's privately known information/preferences)
  • Mechanism (game form: strategy spaces and outcome rule)
  • Social choice function (desired mapping from type profiles to outcomes)
  • Incentive compatibility (truthful reporting or prescribed strategy is an equilibrium)
  • Individual rationality (participation constraint: agents prefer participating to their outside option)

Key Symbols & Notation

  • tᵢ — agent i's private type; T is the type profile space
  • M = (S, g) — mechanism as strategy spaces S and outcome function g
  • f: T → O — social choice function mapping type profiles to outcomes

Essential Relationships

  • Implementation: M implements f if equilibrium play of M yields outcome f(t) for every type profile t
  • Revelation principle: any f implementable by some mechanism is implementable by a truthful direct mechanism (truth-telling is an equilibrium)
  • Feasibility requires both Incentive Compatibility (IC) and Individual Rationality (IR) to hold for all agents

All Concepts (24)

  • Mechanism: a designer-specified game defined by rules that map players' reported types/strategies to outcomes and payments
  • Designer/Principal: the actor who chooses the mechanism to implement desired outcomes
  • Private information / types: each player has a private type (valuation, signal) unknown to others
  • Type space and type profile: the set of possible types Θ_i for each player and the vector of all players' types θ
  • Allocation/outcome space: the possible allocations or outcomes the mechanism can select
  • Transfer/payment rule: monetary transfers (payments) from/to players that the mechanism can impose
  • Allocation (or allocation rule): the mapping from reported types to who receives which outcome
  • Social choice function: the desired mapping from true type profiles to outcomes (designer's objective)
  • Direct mechanism: mechanism where players directly report their types and outcome/payment rules depend on reports
  • Revelation principle: the idea that any outcome achievable by some mechanism can be achieved by a truthful direct mechanism
  • Incentive compatibility (IC): property that truthful reporting is a best response under the mechanism (in the relevant sense)
  • Dominant-strategy incentive compatibility (DSIC) / strategy-proofness: truthful reporting is a best response regardless of others' reports
  • Bayesian incentive compatibility (BIC): truthful reporting is a best response in expectation over other players' types/beliefs
  • Bayes-Nash equilibrium (BNE): equilibrium concept for games of incomplete information where strategies are type-contingent
  • Truth-telling / truthful reporting as a strategy concept (reporting true type)
  • Individual rationality / participation constraint (IR): players' expected utility from participating must meet a reservation level
  • Ex-post, interim, and ex-ante distinctions: the timing/conditioning of expectations (after types realized, conditional on own type, before types realized)
  • Quasi-linear utility and utility with transfers: utility expressed as value of outcome minus payment (u_i = v_i(outcome) - payment)
  • Budget-balance constraint: requirement on aggregate transfers (e.g., sum of payments = 0) or allowable deficit/surplus
  • Implementability: whether a social choice function can be achieved by some mechanism satisfying IC and IR
  • Allocation monotonicity (single-parameter environments): a structural condition on allocation rules necessary for implementability
  • VCG/Groves family as canonical truthful, efficient mechanisms (examples of mechanisms that implement efficiency with transfers)
  • Designer objective tradeoffs (e.g., efficiency vs revenue) under IC and IR constraints
  • Interim expected utility: a player's expected utility conditional on her own type (used in BIC/IR conditions)

Teaching Strategy

Multi-session curriculum - substantial prior knowledge and complex material. Use mastery gates and deliberate practice.

Mechanism design flips game theory around: instead of analyzing a game you’re given, you design the game—rules, messages, and payments—so that self-interested agents, acting strategically, produce outcomes you want.

TL;DR:

A mechanism M = (S, g) specifies what agents can report/do (strategy spaces S) and how reports map to outcomes (and possibly payments) via g. A social choice function f: T → O is the outcome rule we wish we could implement if we knew everyone’s private types tᵢ. Mechanism design asks: can we build M so that when agents play a Nash equilibrium (often truthful reporting), the resulting outcome equals f(t)? Key constraints are incentive compatibility (IC: agents prefer to be truthful/obedient) and individual rationality (IR: agents prefer participating over their outside option). The workhorse example is Vickrey/VCG: choose the efficient outcome and set payments so truth-telling is a dominant strategy.

What Is Mechanism Design?

Why this idea exists (motivation first)

In many real systems, outcomes depend on information that is private:

  • A buyer knows how much they value an item.
  • A worker knows how hard a task will be for them.
  • A hospital knows its capacity constraints.
  • A firm knows its costs.

A central planner (the “designer”) would like to pick an outcome that depends on these private facts—e.g., allocate resources to whoever values them most, or assign tasks to those who can do them cheapest.

But the designer doesn’t observe private information directly. So the designer must elicit it using rules that make honesty (or some intended behavior) strategically optimal.

That is the core of mechanism design: designing games to achieve desired outcomes.

Core objects and notation

We will use the node’s symbols explicitly:

  • Agents: i ∈ {1, …, n}
  • Private type of agent i: tᵢ (their privately known information, e.g., value, cost, preference parameter)
  • Type profile space: T (so t = (t₁, …, tₙ) ∈ T)
  • Outcomes: O (could include allocations, decisions, plus payments)
  • Desired rule: a social choice function f: T → O
  • This is what we wish to do if we knew everyone’s types.
  • Mechanism: M = (S, g)
  • S = S₁ × … × Sₙ is the set of strategy profiles (messages, reports, bids, actions)
  • g: S → O maps strategies into an outcome

Mechanism design asks whether we can create M = (S, g) such that when agents play strategically (e.g., at Nash equilibrium), the resulting outcome matches f(t) (or at least some desirable property of it).

The “revelation principle” intuition (why reports matter)

A common design pattern is a direct-revelation mechanism, where each agent reports a type (possibly lying):

  • Strategy space: Sᵢ = Tᵢ
  • A report is rᵢ ∈ Tᵢ (sometimes written t̂ᵢ)
  • Outcome rule uses reports: g(r₁, …, rₙ)

Why focus on reporting mechanisms? Because of a powerful idea (informal here):

If some mechanism implements a social choice function in equilibrium, then there exists a direct mechanism that implements it with truthful reporting in equilibrium.

So we often aim for truthful mechanisms (incentive compatible) where “tell the truth” is an equilibrium strategy.

Two central constraints: IC and IR

1) Incentive compatibility (IC)

Informally: each agent prefers to report their true type (or follow the prescribed strategy) given others do so.

  • In dominant-strategy IC (DSIC), truth-telling is best regardless of others.
  • In Bayesian IC (BIC), truth-telling is best in expectation given beliefs about others’ types.

2) Individual rationality (IR) (participation constraint)

Agents can opt out. IR requires each agent’s utility from participating is at least their outside option (often normalized to 0).

Mechanism design is largely about the tension:

  • The designer wants efficiency/fairness/revenue/etc.
  • Agents want to maximize their own utility.
  • Private information means agents may manipulate.

The designer “programs” incentives so that self-interest aligns with the intended outcome.

Core Mechanic 1: Types, Outcomes, and Utility Models

Why you need a utility model

To talk about incentives, you need to formalize what each agent cares about. Mechanisms are not just maps from messages to outcomes; they are incentive environments. Utilities convert outcomes into payoffs, and payoffs drive strategy.

A common baseline is quasi-linear utility:

  • Outcome o ∈ O often decomposes into (x, p)
  • x: an allocation/decision
  • p = (p₁, …, pₙ): payments (positive means agent pays; negative means agent receives)
  • Agent i’s valuation for allocation x given type tᵢ: vᵢ(x, tᵢ)
  • Utility:

uᵢ((x, p), tᵢ) = vᵢ(x, tᵢ) − pᵢ

Quasi-linearity is popular because payments can be used cleanly to shape incentives.
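As a minimal sketch of the quasi-linear model above (the single-item setting and function names here are illustrative, not standard):

```python
# Quasi-linear utility: u_i((x, p), t_i) = v_i(x, t_i) - p_i.
# Illustrative single-item setting: v_i is t_i if i receives the item, else 0.

def value(alloc: int, i: int, t_i: float) -> float:
    """Valuation v_i(x, t_i): agent i values the item at t_i if allocated it."""
    return t_i if alloc == i else 0.0

def quasi_linear_utility(alloc: int, payment: float, i: int, t_i: float) -> float:
    """u_i((x, p), t_i) = v_i(x, t_i) - p_i."""
    return value(alloc, i, t_i) - payment

# Agent 0 with value 10 wins and pays 7: utility 3.
print(quasi_linear_utility(alloc=0, payment=7.0, i=0, t_i=10.0))  # 3.0
```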

Social choice function vs. mechanism

Keep these conceptually separate:

  • Social choice function f: T → O
  • “If we magically knew types, we would choose f(t).”
  • Mechanism M = (S, g)
  • “Given what people do (send messages), we output g(s).”

Implementation is about aligning equilibrium behavior with f.

Direct mechanisms and truthful reporting

In a direct mechanism:

  • Each agent reports rᵢ ∈ Tᵢ
  • The mechanism outputs o = g(r₁, …, rₙ)

Truthful reporting means rᵢ = tᵢ.

The ideal is:

g(t₁, …, tₙ) = f(t₁, …, tₙ)

and truth-telling is an equilibrium.

Expected utility and uncertainty about others

Often each agent knows only their own type tᵢ and has beliefs about t₋ᵢ (others’ types). Then incentives are about expected utility.

Let T₋ᵢ be the space of other agents’ types and let the agent have a belief distribution over t₋ᵢ. A Bayesian incentive constraint compares:

E[uᵢ(g(tᵢ, t₋ᵢ), tᵢ)] vs. E[uᵢ(g(rᵢ, t₋ᵢ), tᵢ)]

where the expectation is over t₋ᵢ.
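With a discrete belief, this interim comparison is just a weighted sum. A sketch for a two-bidder second-price rule (the belief and values are arbitrary illustrations):

```python
# Interim expected utility: E over t2 of u1(g(report, t2), true_type),
# under an assumed discrete belief over the rival's type.

def second_price(r1: float, r2: float):
    """Returns (winner, payment): highest report wins, pays the other report."""
    return (1, r2) if r1 >= r2 else (2, r1)

def expected_utility(report: float, true_type: float, belief: dict) -> float:
    """Belief maps each rival type t2 to its probability."""
    total = 0.0
    for t2, prob in belief.items():
        winner, pay = second_price(report, t2)
        u = (true_type - pay) if winner == 1 else 0.0
        total += prob * u
    return total

belief = {2.0: 0.5, 6.0: 0.5}               # assumed belief over the rival's type
t1 = 5.0
truthful = expected_utility(t1, t1, belief)  # report 5: wins vs 2, loses vs 6
deviate = expected_utility(8.0, t1, belief)  # overreport 8: also "wins" vs 6 at a loss
print(truthful, deviate)                     # 1.5 1.0
```

The BIC inequality holds here: truthful reporting gives a weakly higher expectation than the deviation.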

A compact taxonomy (to keep your bearings)

Mechanism design can feel like a forest of definitions. This table helps anchor the most common “axes”.

| Axis | Options | What changes conceptually? |
| --- | --- | --- |
| Information | Complete vs incomplete (types) | Whether incentives depend on beliefs/expectations |
| Mechanism type | Direct vs indirect | Whether strategies are type reports or richer actions |
| Incentive notion | DSIC vs BIC | Robustness of truthfulness to others' behavior |
| Participation | IR ex post vs IR in expectation | Whether each realized outcome is acceptable or only on average |
| Objective | Efficiency, revenue, fairness, etc. | What the designer tries to optimize |

Pacing note: why we linger on “types”

Most errors in mechanism design come from confusing:

  • Type tᵢ (private parameter)
  • Report rᵢ (strategic message)
  • Valuation vᵢ(·, tᵢ) (how type maps to preferences)
  • Utility uᵢ (valuation minus payment)

If you keep these separate, the IC and IR constraints become much easier to write—and to reason about.

Core Mechanic 2: Incentive Compatibility and Individual Rationality (as inequalities)

Why IC is the heart of design

A social choice rule f may be “perfect” from the designer’s perspective (efficient, fair), but useless if agents can profitably misreport. IC is the set of constraints that prevent profitable manipulation.

Mechanism design often looks like this:

  • Choose an outcome rule and payments.
  • Prove a set of inequalities (IC + IR).
  • Then optimize your objective subject to those inequalities.

Dominant-strategy incentive compatibility (DSIC)

Consider a direct mechanism where reports rᵢ ∈ Tᵢ and output is o = g(r).

DSIC requires that for every agent i, every true type tᵢ, every possible misreport rᵢ, and every profile of others' reports r₋ᵢ:

uᵢ(g(tᵢ, r₋ᵢ), tᵢ) ≥ uᵢ(g(rᵢ, r₋ᵢ), tᵢ)

Interpretation: even if others report arbitrarily, telling the truth is best.

This is a strong guarantee and leads to very robust mechanisms (e.g., Vickrey auction).
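Because DSIC is a finite family of inequalities on a discrete type space, it can be checked by brute force. A sketch, assuming a simple mechanism interface (reports in, `(winner, payment)` out — these names are illustrative):

```python
from itertools import product

def is_dsic(mechanism, utility, types, n_agents: int) -> bool:
    """Check u_i(g(t_i, r_-i), t_i) >= u_i(g(r_i, r_-i), t_i) for every
    agent i, true type t_i, deviation r_i, and rival report profile r_-i."""
    for i in range(n_agents):
        for others in product(types, repeat=n_agents - 1):
            for t_i, r_i in product(types, repeat=2):
                truthful = list(others); truthful.insert(i, t_i)
                deviant = list(others); deviant.insert(i, r_i)
                if utility(mechanism(deviant), i, t_i) > \
                   utility(mechanism(truthful), i, t_i) + 1e-12:
                    return False   # found a profitable deviation
    return True

# Illustrative single-item rules over a report vector r:
def vickrey(r):
    w = max(range(len(r)), key=lambda j: r[j])
    second = max(x for j, x in enumerate(r) if j != w)
    return (w, second)            # winner pays the second-highest report

def first_price(r):
    w = max(range(len(r)), key=lambda j: r[j])
    return (w, r[w])              # winner pays their own report

def u(outcome, i, t_i):
    winner, pay = outcome
    return (t_i - pay) if i == winner else 0.0

types = [0.0, 1.0, 2.0, 3.0]
print(is_dsic(vickrey, u, types, 2))      # True
print(is_dsic(first_price, u, types, 2))  # False
```

This mirrors the worked examples later in the section: the second-price payment rule passes every inequality, while the first-price rule admits profitable underbidding.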

Bayesian incentive compatibility (BIC)

BIC relaxes “regardless of others’ reports” to “in expectation over others’ types.”

Assume a common prior over types and that others report truthfully. Then BIC requires:

E_{t₋ᵢ}[uᵢ(g(tᵢ, t₋ᵢ), tᵢ)] ≥ E_{t₋ᵢ}[uᵢ(g(rᵢ, t₋ᵢ), tᵢ)]

BIC is weaker than DSIC, but can enable better objectives in some settings.

Individual rationality (IR)

Agents can refuse to participate and take an outside option (often 0 utility). IR ensures participation is (weakly) beneficial.

Common variants:

  • Ex post IR (stronger): for every realized type profile t,

uᵢ(g(t), tᵢ) ≥ 0

  • Interim IR (Bayesian): for each tᵢ,

E_{t₋ᵢ}[uᵢ(g(tᵢ, t₋ᵢ), tᵢ)] ≥ 0

The choice depends on whether you want guarantees pointwise or only in expectation.

Implementation: connecting equilibrium to f

A mechanism implements f if equilibrium play leads to outcome f(t).

In a direct truthful mechanism, implementation is:

  • (Truthful equilibrium) rᵢ = tᵢ is an equilibrium under your IC notion.
  • (Correct outcome) g(t) = f(t) for all t.

The second line is “easy” (just set g = f on truthful reports). The hard part is IC.

How payments create incentives (the key lever)

In quasi-linear environments, you typically:

1) Choose allocation rule x(r)

2) Choose payment rule p(r)

so that utility becomes:

uᵢ(r, tᵢ) = vᵢ(x(r), tᵢ) − pᵢ(r)

Then IC constraints become inequalities involving vᵢ and pᵢ.

This is where mechanism design becomes “engineering”: payments are like control inputs.

A small but important conceptual pause

IC and IR are not “features”; they are constraints.

  • If you demand DSIC + ex post IR + budget balance + efficiency in some environments, you may hit impossibility theorems.
  • Many real designs choose tradeoffs: e.g., accept BIC rather than DSIC, or accept approximate efficiency.

Mechanism design is not only about finding a clever rule—it’s about navigating what is possible.

Application/Connection: VCG Intuition, Auctions, and Task Decomposition

Why VCG keeps showing up

In many environments, the designer’s desired social choice function is efficient allocation:

x*(t) ∈ argmaxₓ ∑ᵢ vᵢ(x, tᵢ)

This says: pick the allocation maximizing total reported value (social welfare).

But welfare-maximizing allocations invite manipulation: “If I exaggerate my value, I might get the item.”

VCG mechanisms solve this by adding payments that make the agent internalize the externality they impose on others.
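The efficient-allocation rule x*(t) above can be sketched as a direct argmax over a finite allocation set (the allocation labels and values are illustrative placeholders):

```python
# Efficient allocation: pick x maximizing reported welfare sum_i v_i(x, r_i).

allocations = ["give_to_1", "give_to_2", "nobody"]
reports = {1: 5.0, 2: 8.0}    # reported values for receiving the item

def welfare(x, r):
    """Total reported value of allocation x under reports r."""
    return sum(v for i, v in r.items() if x == f"give_to_{i}")

x_star = max(allocations, key=lambda x: welfare(x, reports))
print(x_star)   # give_to_2
```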

VCG in one sentence (the intuition)

Allocate efficiently, and charge each agent i the harm they cause to everyone else by being present.

Formally, choose x*(r) maximizing ∑ᵢ vᵢ(x, rᵢ). Then set payment pᵢ(r) roughly as:

pᵢ(r) = (max welfare achievable by others without i)

− (welfare achieved by others with i in chosen allocation)

This often yields DSIC.

How this connects to Auction Theory

Auction theory is a specialized branch of mechanism design where:

  • Outcome is an allocation of goods + payments
  • Types are values (private valuations)
  • Objective might be welfare (efficiency) or revenue

Many canonical auctions (Vickrey second-price, Myerson optimal auction) are mechanisms designed for IC/IR.

Unlock connection: Auction Theory

How this connects to Task Discretization (LLM systems)

Task discretization is about turning a complex task into verifiable subtasks. Mechanism design enters because:

  • Workers/agents have private costs, effort levels, or capabilities (types)
  • The system designer wants truthful reporting of:
  • how long something takes
  • expected quality
  • confidence/calibration
  • Payments/rewards need to align incentives with truthful reports and high-quality work

In a multi-agent LLM pipeline, the “mechanism” can be:

  • What each agent is asked to output (strategy space)
  • How outputs are scored/verified (outcome rule)
  • How credit/reward is assigned (payments)

Incentive compatibility becomes: “Is it optimal to be honest about uncertainty?” IR becomes: “Is it worth participating given compute/time costs?”

Unlock connection: Task Discretization

Practical design checklist (conceptual, not a proof)

When you encounter a real design problem, ask:

1) What are the agents’ private types tᵢ?

2) What outcomes O matter (allocation, payment, reputation, access)?

3) What is your desired f: T → O (efficiency, revenue, fairness)?

4) What strategic behavior do you fear (misreporting, shirking, collusion)?

5) What IC notion fits (DSIC vs BIC) given your environment?

6) What participation constraints (IR) matter in practice?

This turns mechanism design from “symbol soup” into a structured process.

Worked Examples (3)

Example 1: A tiny direct mechanism and checking DSIC as inequalities

Single item, two bidders i ∈ {1,2}. Each has private value tᵢ ∈ ℝ₊ for receiving the item. Utility is quasi-linear: uᵢ = tᵢ·xᵢ − pᵢ where xᵢ ∈ {0,1} indicates if i gets the item.

Mechanism M (not Vickrey):

  • Reports r₁, r₂.
  • Allocate item to highest report (ties break arbitrarily).
  • Winner pays their own report: p_winner = r_winner (a first-price rule).
  • Loser pays 0.

Goal: test whether truthful reporting is DSIC.

  1. Fix agent 1 with true type t₁ and fix the other’s report r₂.

    We compare truthful report r₁ = t₁ to some deviation r₁ = d.

  2. Case A: If t₁ < r₂.

    • Truthful: agent 1 loses, so u₁ = 0.
    • Deviate to d > r₂ to win: u₁ = t₁ − d.

    Since d > r₂ > t₁, we have t₁ − d < 0.

    So in this case, deviating to win is worse than truthfulness (0 ≥ negative).

  3. Case B: If t₁ > r₂.

    • Truthful: agent 1 wins and pays r₁ = t₁, so

    u₁(truth) = t₁ − t₁ = 0.

    • Deviate to d such that d still > r₂ (still wins):

    u₁(deviate) = t₁ − d.

    If we choose d < t₁ (but still > r₂), then t₁ − d > 0.

    So deviating by underbidding (but remaining the highest) strictly increases utility.

  4. Therefore truthful reporting is NOT a dominant strategy.

    We found a profitable deviation whenever r₂ < d < t₁ and t₁ > r₂.

Insight: This shows what IC really is: a family of inequalities. A mechanism can have the ‘right’ allocation rule (highest value should win) but still fail IC because payments don’t align incentives. First-price auctions are strategic and typically not DSIC; Vickrey’s payment rule is designed specifically to remove the profitable underbidding deviation.
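The profitable underbid in Case B can be exhibited numerically (the values 10, 4, and 6 are arbitrary illustrations satisfying r₂ < d < t₁):

```python
# Example 1, Case B: t1 > r2 under the first-price rule. Truthful reporting
# wins but pays your own value; shading keeps the win and raises utility.

t1, r2 = 10.0, 4.0            # agent 1's true value and the rival's report

def u1_first_price(r1: float) -> float:
    """Agent 1's utility: win iff r1 > r2, then pay your own report r1."""
    return (t1 - r1) if r1 > r2 else 0.0

u_truth = u1_first_price(t1)   # wins, pays 10 -> utility 0
u_shade = u1_first_price(6.0)  # still wins (6 > 4), pays 6 -> utility 4
print(u_truth, u_shade)        # 0.0 4.0
```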

Example 2: Vickrey (second-price) auction is DSIC (single-item DSIC proof sketch)

Single item, n bidders. Type tᵢ is bidder i’s private value. Reports rᵢ.

Vickrey mechanism:

  • Allocation: give item to highest report.
  • Payment: winner pays the second-highest report; everyone else pays 0.

We show that reporting rᵢ = tᵢ is a dominant strategy.

  1. Fix an agent i with true value tᵢ and fix all other reports r₋ᵢ.

    Let m = max_{j≠i} rⱼ be the highest report among others.

  2. Consider truthful report rᵢ = tᵢ.

    There are two main cases depending on tᵢ vs m.

  3. Case 1: tᵢ < m.

    • If i reports truthfully (tᵢ), then rᵢ = tᵢ < m, so i loses.
    • Utility uᵢ = 0.

    If i deviates to some rᵢ' > m to win, then i pays price m (second price) and gets value tᵢ.

    So uᵢ(deviate) = tᵢ − m < 0.

    Thus deviation is worse than truthfulness.

  4. Case 2: tᵢ > m.

    • If i reports truthfully, i wins and pays m.

    So uᵢ(truth) = tᵢ − m > 0.

    If i deviates but still reports rᵢ' > m, i still wins and still pays m.

    So uᵢ(deviate, still win) = tᵢ − m = uᵢ(truth).

    If i deviates to rᵢ' < m (so i loses), uᵢ = 0 < tᵢ − m.

    Thus deviation cannot improve utility.

  5. In all cases, truthful reporting yields utility ≥ any deviation, for any r₋ᵢ.

    Therefore Vickrey is DSIC.

Insight: The payment is decoupled from your own report (except through whether you win). That removes the incentive to shade your bid. This ‘pay the external price’ idea is the seed of VCG: align incentives by charging the cost you impose on others.
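The two-case argument can be checked numerically: fix tᵢ and the rivals' maximum m, and compare truthful utility against every deviation on a grid (a sketch; ties are resolved against the deviator here for simplicity):

```python
# Numerical check of the Vickrey case analysis: for fixed t_i and rival
# maximum m, no deviation on the grid beats truthful reporting.

def vickrey_u(report: float, t_i: float, m: float) -> float:
    """Agent i's utility: win iff report > m, then pay the second price m."""
    return (t_i - m) if report > m else 0.0

grid = [x / 2 for x in range(0, 21)]          # deviations 0.0, 0.5, ..., 10.0
for t_i, m in [(3.0, 7.0), (7.0, 3.0)]:       # Case 1: t_i < m; Case 2: t_i > m
    truth = vickrey_u(t_i, t_i, m)
    assert all(vickrey_u(d, t_i, m) <= truth for d in grid)
print("truthful reporting is optimal in both cases")
```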

Example 3: A simple VCG-style payment computation (externality view)

Two agents compete for one identical resource. Allocation x ∈ {give to 1, give to 2, give to nobody}.

Types are values t₁, t₂. Valuations: v₁(x,t₁)=t₁ if 1 receives resource else 0; similarly for agent 2.

Efficient allocation chooses the higher value.

We compute VCG payments using the externality formula.

  1. Let reports be r₁, r₂. Suppose r₁ ≥ r₂ so the mechanism allocates to agent 1.

    Then others’ welfare with agent 1 present (i.e., agent 2’s value under chosen allocation) is:

    W_{-1}(with 1) = 0

    because agent 2 gets nothing.

  2. Now compute others’ maximum welfare if agent 1 were absent.

    If agent 1 is removed, the best allocation for agent 2 is to give the item to agent 2, yielding:

    W_{-1}(without 1) = r₂

  3. VCG payment for agent 1 is:

    p₁ = W_{-1}(without 1) − W_{-1}(with 1)

    = r₂ − 0

    = r₂

  4. Similarly, if agent 2 loses, p₂ = 0 because removing agent 2 doesn’t change others’ achievable welfare when agent 2 was already not receiving the item.

Insight: In the single-item case, VCG reduces exactly to second price: the winner pays the highest losing report. Framing it as ‘externality on others’ generalizes cleanly to many-allocation problems.
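The externality computation in steps 1–3 can be written as a short function; the single-item setup and the name `vcg_single_item` are illustrative:

```python
# VCG payments via the externality formula for a single item:
# p_i = (max welfare of others without i) - (others' welfare at the chosen outcome).

def vcg_single_item(reports):
    """Allocate to the highest report; charge each agent their externality."""
    n = len(reports)
    winner = max(range(n), key=lambda j: reports[j])
    payments = []
    for i in range(n):
        without_i = max(r for j, r in enumerate(reports) if j != i)
        others_at_chosen = sum(r for j, r in enumerate(reports)
                               if j != i and j == winner)
        payments.append(without_i - others_at_chosen)
    return winner, payments

winner, p = vcg_single_item([5.0, 3.0])
print(winner, p)    # 0 [3.0, 0.0] -- winner pays the losing report, as in Example 3
```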

Key Takeaways

  • Mechanism design is “reverse game theory”: design M = (S, g) so strategic behavior yields desired outcomes.

  • Types tᵢ are private information; reports/actions are strategic. Keep type, report, valuation, and utility conceptually separate.

  • A social choice function f: T → O is the target rule; a mechanism is what you can actually run with messages and an outcome function.

  • Incentive compatibility (DSIC or BIC) is a set of inequalities ensuring truth-telling/obedience is an equilibrium.

  • Individual rationality (IR) ensures participation: agents get utility ≥ outside option (ex post or in expectation).

  • With quasi-linear utilities, payments are the main lever to enforce IC while achieving allocations like efficiency.

  • Vickrey/VCG mechanisms implement welfare-maximizing allocations by charging externalities, making truth-telling optimal.

  • Many real systems (auctions, marketplaces, multi-agent pipelines) are mechanisms; designing scoring/reward rules is mechanism design.

Common Mistakes

  • Confusing the social choice function f (what you want if types were known) with the mechanism outcome rule g (what happens given strategic reports).

  • Writing an allocation rule that looks right (e.g., ‘highest value wins’) and assuming it is IC without checking payment-driven incentives.

  • Forgetting participation constraints: a mechanism can be IC but fail IR, causing agents to opt out.

  • Mixing DSIC and BIC reasoning: DSIC must hold for all other reports, while BIC is only in expectation under beliefs.

Practice

Difficulty: medium

Single-item auction: Consider a mechanism where the highest bidder wins and pays a fixed fee F (independent of bids), losers pay 0. Assume quasi-linear utility and values tᵢ ≥ 0. Is truthful bidding DSIC? Under what conditions on F is the mechanism ex post IR for the winner?

Hint: Fix others’ bids and compare winning vs losing utilities. Since payment doesn’t depend on your bid, the only incentive issue is whether you want to win.

Solution:

DSIC: No. The payment F does not depend on your bid, so your report only determines whether you win, and winning yields utility tᵢ − F. The dominant strategy is therefore a threshold: outbid everyone when tᵢ ≥ F and underbid everyone when tᵢ < F, regardless of your exact value. Truthful bidding does not implement this threshold. If tᵢ > F but some rival bids m > tᵢ, truth-telling loses (utility 0) while overbidding to win would yield tᵢ − F > 0; conversely, if tᵢ < F but tᵢ exceeds all rival bids, truth-telling wins and yields tᵢ − F < 0 while underbidding to lose would yield 0. So truthful reporting is not a dominant strategy for any F > 0, and even with F = 0 a truthful losing bidder with tᵢ > 0 could profitably overbid to win for free.

Ex post IR for the winner requires nonnegative utility whenever the mechanism assigns the item to i: tᵢ − F ≥ 0, i.e., tᵢ ≥ F. Since the highest bidder can have tᵢ < F (they only need to outbid others), ex post IR can fail for any F > 0; only F = 0 guarantees it for all type profiles.

Difficulty: easy

Write the DSIC constraint explicitly: For a direct mechanism g with quasi-linear utility uᵢ = vᵢ(x(r), tᵢ) − pᵢ(r), write the DSIC inequality for agent i comparing truthful report tᵢ to a deviation rᵢ, holding r₋ᵢ fixed.

Hint: DSIC means truthful is best for every r₋ᵢ, not just in expectation.

Solution:

For all i, for all true types tᵢ, for all deviations rᵢ, and for all others’ reports r₋ᵢ:

vᵢ(x(tᵢ, r₋ᵢ), tᵢ) − pᵢ(tᵢ, r₋ᵢ)

≥ vᵢ(x(rᵢ, r₋ᵢ), tᵢ) − pᵢ(rᵢ, r₋ᵢ).

Difficulty: hard

VCG externality payment (conceptual): In a welfare-maximizing allocation problem with reported valuations vᵢ(x, rᵢ), define x(r) ∈ argmaxₓ ∑ⱼ vⱼ(x, rⱼ). Show the standard VCG payment form: pᵢ(r) = hᵢ(r₋ᵢ) − ∑_{j≠i} vⱼ(x(r), rⱼ). What does hᵢ represent, and what common choice of hᵢ yields the ‘externality’ interpretation?

Hint: hᵢ can be any function of others’ reports; it shifts i’s utility without affecting incentives as long as it doesn’t depend on rᵢ.

Solution:

VCG payments have the form:

pᵢ(r) = hᵢ(r₋ᵢ) − ∑_{j≠i} vⱼ(x*(r), rⱼ)

where hᵢ is any function independent of rᵢ. It acts like a constant from agent i’s perspective (given r₋ᵢ), so it does not change which report maximizes i’s utility—hence it preserves IC.

A common choice is:

hᵢ(r₋ᵢ) = maxₓ ∑_{j≠i} vⱼ(x, rⱼ)

(the maximum welfare others could achieve if i were absent). Then:

pᵢ(r) = (max welfare achievable by others without i)

− (welfare achieved by others under the chosen outcome)

which is exactly the externality i imposes on others.
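The general form can be sketched for any finite allocation set; the function name `groves_payments` and the valuation interface are illustrative, with hᵢ chosen as the Clarke pivot:

```python
# Groves/VCG payment with the Clarke pivot h_i(r_-i) = max_x sum_{j != i} v_j(x, r_j).

def groves_payments(allocations, valuations, reports):
    """valuations[j](x, r_j) -> float; returns (x_star, payments)."""
    n = len(reports)
    def welfare(x, exclude=None):
        return sum(valuations[j](x, reports[j]) for j in range(n) if j != exclude)
    x_star = max(allocations, key=welfare)                      # efficient outcome
    payments = []
    for i in range(n):
        h_i = max(welfare(x, exclude=i) for x in allocations)   # Clarke pivot
        others_at_chosen = welfare(x_star, exclude=i)
        payments.append(h_i - others_at_chosen)                 # externality on others
    return x_star, payments

# Single-item sanity check: reduces to second price.
allocs = [0, 1, None]                  # who gets the item (None = nobody)
vals = [lambda x, r, i=i: r if x == i else 0.0 for i in range(2)]
x_star, p = groves_payments(allocs, vals, [5.0, 3.0])
print(x_star, p)    # 0 [3.0, 0.0]
```

Because hᵢ ignores rᵢ, shifting it changes each agent's payment level but not their incentives, exactly as the solution argues.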

Connections

Next nodes:

  • Auction Theory — applies IC/IR to selling goods, compares first-price vs second-price, and develops revenue-optimal design.
  • Task Discretization — uses mechanism design thinking to structure incentives for truthful reporting, verification, and reward allocation in multi-step LLM workflows.

Related background refreshers:

  • Nash equilibrium (implementation as equilibrium outcome)
  • Expected value (Bayesian IC and interim IR)