TaskVector
Status: field-tested at scale across PE portfolio companies. Formal validation pending.
Before automating a task, score it across nine dimensions, each on a 1-to-5 scale. Each dimension measures a structural property that determines whether automation is viable. If any single dimension scores a 1, it is a landmine - a hard no regardless of the composite score.
1. How predictable is the correct output given the input?
2. Does the system have access to everything it needs?
3. How often does this task occur?
4. How easy is it to check whether the output is correct?
5. If the AI gets it wrong, how bad is the damage?
6. Will the people affected accept AI doing this?
7. Could automation introduce or amplify systematic bias?
8. How much surrounding context is needed?
9. What is the worst-case outcome of an error?
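The scoring rule above can be sketched in a few lines. This is a minimal illustration, not the published framework: the dimension keys, the composite-as-mean choice, and the 4.0 cutoff are all placeholder assumptions.

```python
# Hypothetical sketch of TaskVector scoring. Dimension names, the use of a
# simple mean for the composite, and the 4.0 cutoff are illustrative
# assumptions, not part of the framework itself.
from __future__ import annotations

LANDMINE = 1  # a score of 1 on any dimension is a hard no


def assess(scores: dict[str, int]) -> tuple[float | None, str]:
    """Return (composite, verdict). Composite is None when a landmine fires."""
    landmines = [d for d, s in scores.items() if s == LANDMINE]
    if landmines:
        # The landmine rule: composite score is irrelevant, so we don't compute it.
        return None, f"NO-GO: landmine on {', '.join(landmines)}"
    composite = sum(scores.values()) / len(scores)
    return composite, "candidate for automation" if composite >= 4.0 else "review further"


task = {
    "determinism": 5, "data_access": 5, "frequency": 5,
    "verifiability": 5, "error_cost": 5, "acceptance": 5,
    "bias_risk": 5, "context_needed": 5, "consequence_severity": 1,
}
print(assess(task))  # landmine on consequence_severity, despite 5s elsewhere
```

Note the design choice: a landmine short-circuits before any averaging, so there is no way for eight strong scores to "buy back" a structural blocker.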
The Landmine Rule
A task with 5s across the board but a 1 on Consequence Severity - say, “AI decides whether to administer medication” - is not automatable. Period. The composite score is irrelevant when a single dimension represents a structural blocker.
The landmine rule exists because automation failures are not normally distributed. They are fat-tailed. The expected cost of a worst-case error on a landmine dimension dominates all other considerations.
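A toy calculation makes the fat-tail point concrete. All figures here are invented for illustration: two tasks with identical routine-error profiles, where one carries a rare catastrophic outcome.

```python
# Invented numbers: task A and task B have the same routine-error profile,
# but task B adds a rare catastrophic outcome (a landmine dimension).
routine = 0.05 * 100           # 5% chance of a $100 cleanup  -> $5 expected
tail    = 0.001 * 1_000_000    # 0.1% chance of a $1M disaster -> $1,000 expected

task_a = routine               # no landmine
task_b = routine + tail        # landmine present

print(task_a, task_b)          # the tail term is 200x the routine term
```

Even at one-in-a-thousand odds, the tail term dominates the expected cost by two orders of magnitude, which is exactly why no composite score can offset it.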
What Comes Next
TaskVector tells you whether to automate. The rest of the toolkit tells you how:
Verification Quadrant - plot the task by calculation vs. verification difficulty. Calculate the Templeton Ratio.
Dollarized Confusion Matrix - price the error costs. Compute the optimal threshold.
The Promotion Protocol - deploy in HITL, gather evidence, promote to autonomous on statistical proof.
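As a hedged sketch of the Dollarized Confusion Matrix step: the dollar figures below are invented, and the cutoff formula is the standard cost-sensitive decision rule (act when the model's probability exceeds the cost ratio), not a formula taken from the toolkit itself.

```python
# Illustrative only: cost figures are invented. With false-positive cost C_fp
# and false-negative cost C_fn, acting when p >= C_fp / (C_fp + C_fn)
# minimizes expected dollar loss (standard cost-sensitive decision rule).
C_fp = 50    # cost of acting when the model is wrong
C_fn = 450   # cost of failing to act when it should have

threshold = C_fp / (C_fp + C_fn)   # 0.1: mistakes of commission are cheap, so act readily


def decide(p: float) -> str:
    """Route a model prediction given its probability p."""
    return "act" if p >= threshold else "route to human"


print(threshold, decide(0.08), decide(0.25))
```

Swapping the two costs flips the behavior: if acting wrongly were the expensive error, the threshold would rise to 0.9 and most cases would route to a human, which is the HITL posture the Promotion Protocol starts from.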