
TaskVector

Status: field-tested at scale across PE portfolio companies. Formal validation pending.

Before automating a task, score it across 9 dimensions. Each dimension measures a structural property that determines whether automation is viable. If any single dimension scores 1, it is a landmine - a hard no regardless of the composite score.

[Figure: radar chart of the 9 automation dimensions, with a landmine indicator on Bias Risk]
D1: Determinism

How predictable is the correct output given the input?

1: Chaotic, judgment-heavy, creative
5: Perfectly deterministic, rule-based
D2: Data Completeness

Does the system have access to everything it needs?

1: Requires tribal knowledge, undocumented context
5: All inputs available, fully documented
D3: Frequency

How often does this task occur?

1: Rare, ad-hoc, one-off
5: High volume, continuous, daily
D4: Verifiability

How easy is it to check whether the output is correct?

1: Impossible to verify without redoing the work
5: Trivially verifiable - pass/fail, compiles-or-not
D5: Reversibility

If the AI gets it wrong, how bad is the damage?

1: Irreversible - sent email, deleted data, harmed patient
5: Trivially reversible - draft, suggestion, undo
D6: User Acceptability

Will the people affected accept AI doing this?

1: Users will revolt - emotional, cultural, political weight
5: Users prefer automation - faster, less tedious
D7: Bias Risk

Could automation introduce or amplify systematic bias?

1: High bias potential - hiring, lending, content moderation
5: No bias concern - data formatting, calculations
D8: Context Dependency

How much surrounding context is needed?

1: Deep context required - "you had to be there"
5: Context-free - input fully determines output
D9: Consequence Severity

What is the worst-case outcome of an error?

1: Catastrophic - medical, legal, safety, financial ruin
5: Trivial - wrong playlist, bad suggestion, cosmetic

The Landmine Rule

A task with 5s across the board but a 1 on Consequence Severity - say, “AI decides whether to administer medication” - is not automatable. Period. The composite score is irrelevant when a single dimension represents a structural blocker.

The landmine rule exists because automation failures are not normally distributed. They are fat-tailed. The expected cost of a worst-case error on a landmine dimension dominates all other considerations.
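The scoring and landmine rule above can be sketched in code. This is a minimal illustration, not an official implementation: the `TaskScore` class and its method names are assumptions, and the final example reproduces the medication scenario from the text.

```python
from dataclasses import dataclass

# Dimension names from the TaskVector framework (D1-D9).
DIMENSIONS = [
    "Determinism", "Data Completeness", "Frequency",
    "Verifiability", "Reversibility", "User Acceptability",
    "Bias Risk", "Context Dependency", "Consequence Severity",
]

@dataclass
class TaskScore:
    """Scores for one task: each dimension rated 1 (worst) to 5 (best)."""
    scores: dict

    def landmines(self):
        # The landmine rule: any dimension scored 1 is a hard blocker,
        # regardless of how the other eight dimensions score.
        return [d for d, s in self.scores.items() if s == 1]

    def verdict(self):
        missing = [d for d in DIMENSIONS if d not in self.scores]
        if missing:
            return f"incomplete: {len(self.scores)}/9 scored"
        mines = self.landmines()
        if mines:
            return f"do not automate (landmine on {', '.join(mines)})"
        composite = sum(self.scores.values()) / len(self.scores)
        return f"candidate for automation (composite {composite:.1f}/5)"

# The medication example: 5s across the board, but a 1 on Consequence Severity.
task = TaskScore({d: 5 for d in DIMENSIONS} | {"Consequence Severity": 1})
print(task.verdict())  # do not automate (landmine on Consequence Severity)
```

Note that the landmine check runs before the composite is even computed: a single 1 short-circuits the averaging, which is exactly the point of the rule.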

What Comes Next

TaskVector tells you whether to automate. The rest of the toolkit tells you how:

Verification Quadrant - plot the task by calculation vs. verification difficulty. Calculate the Templeton Ratio.

Dollarized Confusion Matrix - price the error costs. Compute the optimal threshold.

The Promotion Protocol - deploy in HITL, gather evidence, promote to autonomous on statistical proof.
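As a preview of the Dollarized Confusion Matrix step, here is a minimal sketch of pricing error costs and picking a threshold. Everything specific in it is a hypothetical illustration: the invoice-approval scenario, the dollar figures, and the helper names (`expected_cost_per_item`, `optimal_threshold`) are not taken from the toolkit itself.

```python
# Hypothetical error costs for an invoice-approval automation (illustrative only).
COST_FALSE_POSITIVE = 500.0   # AI auto-approves a bad invoice: money lost
COST_FALSE_NEGATIVE = 25.0    # AI flags a good invoice: human review time

def expected_cost_per_item(threshold, scored_items):
    """Average dollar cost of acting at a given confidence threshold.

    scored_items: list of (model_confidence, is_actually_good) pairs.
    Items at or above the threshold are auto-approved; the rest go to a human.
    """
    total = 0.0
    for confidence, is_good in scored_items:
        if confidence >= threshold:
            if not is_good:
                total += COST_FALSE_POSITIVE   # approved a bad one
        elif is_good:
            total += COST_FALSE_NEGATIVE       # needlessly flagged a good one
    return total / len(scored_items)

def optimal_threshold(scored_items):
    """Sweep candidate thresholds and keep the one with the lowest expected cost."""
    candidates = [i / 20 for i in range(21)]   # 0.00, 0.05, ..., 1.00
    return min(candidates, key=lambda t: expected_cost_per_item(t, scored_items))
```

Because a false positive here costs 20x a false negative, the cost-minimizing threshold skews high: the system should prefer sending borderline items to a human over auto-approving them.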