
meta-learning

Visualizes MAML-style meta-learning as a repeating cycle: sample a task from a task distribution, perform a few inner-loop gradient steps that adapt θ to θ′ using only a small task dataset (few-shot), then run an outer-loop meta-update that nudges shared meta-parameters θ to minimize the post-adaptation loss L_T(θ′) across tasks.
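The cycle above can be sketched in a few lines. This is a minimal first-order MAML sketch (the second-order term is dropped for brevity) on a toy linear-regression task family; the slope range, step sizes, and task counts are illustrative choices, not values from the visualization.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(theta, x, y):
    # squared-error loss and gradient for the linear model y_hat = theta * x
    pred = theta * x
    loss = np.mean((pred - y) ** 2)
    grad = np.mean(2 * (pred - y) * x)
    return loss, grad

def maml_step(theta, alpha=0.05, beta=0.1, n_tasks=8, inner_steps=3):
    """One outer-loop meta-update (first-order approximation)."""
    meta_grad = 0.0
    for _ in range(n_tasks):
        a = rng.uniform(-2, 2)           # sample a task: target slope a
        x = rng.uniform(-1, 1, size=5)   # few-shot support set
        y = a * x
        theta_prime = theta
        for _ in range(inner_steps):     # inner loop: adapt theta -> theta'
            _, g = loss_grad(theta_prime, x, y)
            theta_prime -= alpha * g
        # gradient of the post-adaptation loss L_T(theta'), taken at theta'
        # (first-order MAML treats this as the meta-gradient)
        _, g_post = loss_grad(theta_prime, x, y)
        meta_grad += g_post / n_tasks
    return theta - beta * meta_grad      # outer meta-update of theta

theta = 3.0
for _ in range(200):
    theta = maml_step(theta)
```

Because the sampled slopes are symmetric around zero, the meta-update drives the initialization θ toward 0, the point from which a few inner steps reach any task fastest.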


practical uses

  • 01. Few-shot personalization (adapt quickly to a new user/device with little data)
  • 02. Fast adaptation for robotics/control tasks with varying dynamics
  • 03. Hyperparameter/initialization learning for rapid fine-tuning across many domains

technical notes

Blocky green-on-black panels show (left) task distribution, (center) parameter space with θ→θ′ step trail, and (right) loss before/after adaptation. Animation cycles every ~4.2s with discrete inner steps and a smooth outer meta-update; θ is stored in a closure and updated only during the outer-loop segment.
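The closure pattern described above can be sketched as follows. This is a hypothetical Python analogue (the actual visualization presumably runs in JavaScript); the 70/30 phase split and the placeholder meta-update are assumptions, not values from the source.

```python
def make_cycle(theta0=3.0, period=4.2, inner_steps=3):
    """Closure holding meta-parameters theta. Assumed phase split:
    inner-loop adaptation for the first 70% of each cycle, outer
    meta-update during the final 30%."""
    state = {"theta": theta0, "last_cycle": -1}

    def frame(t):
        cycle, phase = divmod(t, period)
        if phase / period < 0.7:
            # inner-loop segment: discrete adaptation steps; theta untouched
            step = int(inner_steps * phase / (0.7 * period))
            return ("inner", state["theta"], step)
        # outer segment: apply the meta-update exactly once per cycle
        if cycle != state["last_cycle"]:
            state["theta"] *= 0.9  # placeholder for the real meta-update
            state["last_cycle"] = cycle
        return ("outer", state["theta"], inner_steps)

    return frame
```

Guarding on `last_cycle` keeps the meta-update idempotent within a cycle, so rendering multiple frames during the outer segment does not apply the update repeatedly.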