Designed Convergence
There is a fundamental difference between designing an experiment and designing a research program. An experiment can fail. A well-designed research program cannot - it can only take longer. Designed Convergence is the principle that turns this mathematical fact into an engineering discipline.
The Distinction
| | Experiment | Research Program |
|---|---|---|
| Example | A/B test: does this CTA color increase conversion? | CRO program: systematically increase conversion over 12 months |
| Outcome | Binary: yes/no | Continuous: how much, how fast |
| Failure mode | Wrong hypothesis | None - the program finds the right hypothesis |
| P(success) | p | 1 - (1-p)^n |
The key insight: if you design the system, not the experiment, success becomes a mathematical inevitability. The only variable is time.
The Math
An individual experiment has probability p of success. If p = 0.15, you have an 85% chance of failure. Most experiments fail. This is fine.
A program runs n independent experiments. The probability that at least one succeeds:

P(at least one success) = 1 - (1-p)^n
This is not a trick. It is the complement rule applied to independent trials. But the organizational implication is profound: the program-level probability of success is a design choice, not a hope.
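The complement rule is easy to check numerically. A minimal sketch, using the illustrative per-trial probability p = 0.15 from above (the function name is mine, not from the source):

```python
def program_success_probability(p: float, n: int) -> float:
    """Probability that at least one of n independent trials succeeds."""
    return 1 - (1 - p) ** n

# With p = 0.15 per trial, the program-level probability climbs fast
# as n grows; by n = 30 it exceeds 99%.
for n in (1, 5, 10, 20, 30):
    print(n, round(program_success_probability(0.15, n), 3))
```

Notice that the designer controls n directly: the program-level probability is a budget decision.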
With Bayesian updating between trials, the experiments are not truly independent - each one is informed by prior failures. The effective p increases over time because you are not randomly sampling the hypothesis space; you are doing informed search. The formula above is a lower bound on the true program success probability.
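The lower-bound claim can be illustrated with the weakest form of informed search: simply never repeating a failed hypothesis. A small simulation sketch (all names are mine; sampling without replacement stands in for the full Bayesian update, and structure-exploiting search would do better still):

```python
import random

def trials_to_success(space_size: int, target: int, informed: bool,
                      rng: random.Random) -> int:
    """Count trials until the target hypothesis is found.

    informed=True eliminates each failed hypothesis (sampling without
    replacement), so the effective per-trial p rises after every failure.
    informed=False resamples the full space every time.
    """
    remaining = list(range(space_size))
    trials = 0
    while True:
        trials += 1
        guess = rng.choice(remaining)
        if guess == target:
            return trials
        if informed:
            remaining.remove(guess)  # a failed trial is never repeated

rng = random.Random(0)
n, runs = 100, 1000
informed = sum(trials_to_success(n, 42, True, rng) for _ in range(runs)) / runs
blind = sum(trials_to_success(n, 42, False, rng) for _ in range(runs)) / runs
# informed averages (n+1)/2 and is capped at n trials;
# blind averages n trials with a long tail and no cap.
```

Even this minimal memory roughly halves the expected search time and bounds the worst case; the independent-trials formula is therefore pessimistic.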
The Four Conditions
A system exhibits designed convergence when four conditions hold:
The search space is finite
Or can be made finite by reasonable discretization. If the space is unbounded, you need to bound it before you can guarantee convergence.
Each trial eliminates at least one hypothesis
No wasted experiments. Every trial either finds success or narrows the remaining space. Testing blue vs. slightly-different blue teaches you nothing about copy, placement, or timing.
Trial generation is informed by prior results
Bayesian, not random. Each failed trial updates your posterior on the remaining hypothesis space. The next trial is chosen from the updated belief. Informed search is O(log N); random search is O(N).
The success predicate is well-defined
You know it when you see it. Not “improve conversion” but “find configuration C such that conversion(C) > conversion(baseline) with p < 0.05 significance.”
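The four conditions can be read as a loop invariant. A hypothetical sketch (the names `hypotheses`, `evaluate`, and `choose_next` are illustrative, not from the source):

```python
def ratchet_search(hypotheses, evaluate, choose_next):
    """Converging search under the four conditions.

    hypotheses:  a finite list (condition 1)
    evaluate:    hypothesis -> bool, the well-defined predicate (condition 4)
    choose_next: selection informed by the trial history (condition 3)
    """
    remaining = list(hypotheses)
    history = []
    while remaining:  # terminates: each pass shrinks the space (condition 2)
        candidate = choose_next(remaining, history)
        if evaluate(candidate):
            return candidate
        remaining.remove(candidate)   # eliminate the failed hypothesis
        history.append(candidate)     # prior failures inform the next choice
    return None  # the predicate is unsatisfiable in the bounded space
```

If all four conditions hold, the loop either returns a success or proves none exists in the bounded space - failure of the program is impossible, only failure of individual trials.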
The Designer's Seat
The four conditions above describe single-agent convergence - your own systematic search. But most real systems have multiple agents: vendors, operators, models, customers. Each has different objectives.
Every multi-agent system is a game. You can either:

1. Play the game as given - analyze the existing rules and optimize your own moves within them, or
2. Design the game - choose the rules so that the equilibrium is the outcome you want.

Most engineering is (1). Mechanism design is (2). The CTO's job is (2).
Backward mechanism design asks: “What game produces the desired equilibrium?” It produces a system - a set of rules under which the desired outcome is the natural resting state. Systems are robust because they do not depend on any particular path. They depend on the incentive structure, which is invariant to the specific sequence of events.
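The "desired equilibrium" claim can be checked mechanically: fix the target resting state, then verify that no agent gains by unilateral deviation. A toy sketch for a two-player game, assuming payoffs are given as a dictionary (all names and the example payoff values are illustrative):

```python
def is_nash_equilibrium(payoffs, profile):
    """payoffs: dict mapping (action_a, action_b) -> (payoff_a, payoff_b).
    profile: the candidate (action_a, action_b) resting state.
    True if neither player can strictly improve by deviating alone."""
    actions_a = {a for a, _ in payoffs}
    actions_b = {b for _, b in payoffs}
    a, b = profile
    pa, pb = payoffs[(a, b)]
    if any(payoffs[(a2, b)][0] > pa for a2 in actions_a):
        return False
    if any(payoffs[(a, b2)][1] > pb for b2 in actions_b):
        return False
    return True

# A designed game: the rules pay both agents more for the desired
# behaviour ("comply") than for any unilateral deviation from it.
designed = {
    ("comply", "comply"): (3, 3),
    ("comply", "defect"): (1, 2),
    ("defect", "comply"): (2, 1),
    ("defect", "defect"): (0, 0),
}
```

Here ("comply", "comply") is the only stable profile, so the desired outcome does not depend on how the agents get there - only on the payoff structure.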
The Composition
Designed Convergence = Mechanism Design + Bayesian Ratchet Search
If you use mechanism design to construct a multi-agent game, and that game has the four conditions - finite state space, hypothesis elimination, informed search, well-defined success predicate - then convergence is a theorem across the entire system, not just your own actions.
Every agent acting in self-interest produces trials. The ratchet locks in improvements. The search space shrinks. The system converges to the desired outcome because the incentive structure makes it the only stable equilibrium.
The difference between a good CTO and a great one is not making better decisions. It is designing systems where the quality of any individual decision matters less, because the system converges to the right answer regardless of the path.
Why Organizations Fail at This
Connection to Other Frameworks
Quality Hillclimb is the single-agent instance of Designed Convergence - quality gates create ascent without a plan.
The Promotion Protocol is mechanism design applied to AI autonomy - statistical graduation criteria are the incentive structure.
The Performance Frontier defines the success predicate - where does excellence live in the space you are searching?
The AI Operations Tools are the evaluation instruments for each trial in the program.