Drift Detector
A posterior predictive check that detects when the operator's preferences have drifted from the learned model. Computed as the fraction of recent decisions the model predicted incorrectly. When the drift score exceeds a threshold, the agent triggers re-elicitation.
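The definition above can be sketched in a few lines. This is a minimal illustration, not an implementation from the source: the class name `DriftDetector`, the parameters `window_size` and `threshold`, and their default values are all assumptions chosen for the example.

```python
from collections import deque

class DriftDetector:
    """Drift score over a moving window of recent decisions.

    The window size and threshold here are illustrative choices,
    not values given by the source.
    """

    def __init__(self, window_size=20, threshold=0.4):
        # Each entry is True when the model predicted the
        # operator's decision incorrectly.
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def record(self, predicted, actual):
        self.window.append(predicted != actual)

    @property
    def drift_score(self):
        # Fraction of recent decisions the model got wrong.
        if not self.window:
            return 0.0
        return sum(self.window) / len(self.window)

    def should_reelicit(self):
        # Trip the threshold, trigger re-elicitation.
        return self.drift_score > self.threshold
```

Because the window is bounded, old mistakes age out: a burst of early errors followed by a run of correct predictions lets the score decay back below the threshold without any manual reset.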
Why It Exists
Preferences are not static: people change their minds, priorities shift, new constraints emerge. The drift detector is how the agent notices that the operator's preferences have changed, without being told, and responds proportionally.
Rosetta Stone
Four circles, four readings of the same object. Each role reads the artifact through its own lens.
A risk sensor. Preferences drift; so does every input assumption. The detector is how you learn the model is stale before the bad trades pile up.
The mechanism that notices you changed your mind, without you having to announce it. "I have been thumbs-downing the last ten in a row" becomes a signal the agent acts on.
A monitoring rule on a moving window of rejections or corrections. Trip a threshold, trigger re-elicitation.
A posterior predictive p-value on recent outcomes under the current model. When the p-value is small enough for long enough, reject the current preference model and re-fit.
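The statistical reading above can be made concrete with a simple binomial tail test. This is a hedged sketch, not the source's method: the function names `drift_p_value` and `persistent_drift`, the parameter `expected_error_rate`, and the `alpha`/`run_length` defaults are assumptions; it models "small enough for long enough" as a run of consecutive sub-alpha windows.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def drift_p_value(errors, n, expected_error_rate):
    """Posterior predictive p-value: how surprising is seeing at
    least `errors` mistakes in `n` recent decisions if the current
    model's error rate really were `expected_error_rate`?
    (Assumed formulation, for illustration.)"""
    return binom_tail(errors, n, expected_error_rate)

def persistent_drift(p_values, alpha=0.05, run_length=3):
    """Reject the current preference model only when the p-value
    stays below `alpha` for `run_length` consecutive windows,
    so a single unlucky window does not trigger re-elicitation."""
    run = 0
    for p in p_values:
        run = run + 1 if p < alpha else 0
        if run >= run_length:
            return True
    return False
```

For instance, 8 errors in 10 decisions under a model that expects a 10% error rate yields a vanishingly small p-value, while 1 error in 10 is entirely unremarkable; the persistence check then demands that the surprise repeat before the model is re-fit.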
Related Terms
Autonomy State Machine - A graduated trust system for AI deployments with three states: Disabled, HITL (human verifies every output), and Autonomous (spot-check only).
Structured Elicitation - A controlled experiment designed to learn the operator's preferences.