
ensemble-methods

Visualizes how an ensemble combines multiple base learners hᵢ(x) into a single prediction H(x) using voting or averaging. The animation cycles through Bagging (bootstrap resampling and parallel learners), Boosting (sequential learners whose weights increasingly focus on past errors), and Aggregation (majority vote vs average), showing how H(x)=Σ wᵢ·hᵢ(x) emerges from the learners’ outputs.
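The two aggregation modes can be sketched directly from the formula above. This is a minimal sketch, not the visualization's actual code; the function names `combineAverage` and `combineVote` are assumptions for illustration.

```javascript
// Weighted average (regression / soft voting): H(x) = Σ wᵢ·hᵢ(x) / Σ wᵢ.
// `learners` is an array of base learner functions hᵢ; `weights` are wᵢ.
function combineAverage(learners, weights, x) {
  const totalW = weights.reduce((a, w) => a + w, 0);
  return learners.reduce((sum, h, i) => sum + weights[i] * h(x), 0) / totalW;
}

// Majority vote (hard classification): each hᵢ returns a class label,
// and the most frequent label wins.
function combineVote(learners, x) {
  const counts = new Map();
  for (const h of learners) {
    const label = h(x);
    counts.set(label, (counts.get(label) || 0) + 1);
  }
  let best = null, bestCount = -1;
  for (const [label, count] of counts) {
    if (count > bestCount) { best = label; bestCount = count; }
  }
  return best;
}
```

With equal weights the average reduces to the plain mean of the learners' outputs, which is exactly what the Aggregation stage contrasts against the vote.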


practical uses

  • 01. Random forests: bagging + feature subsampling to reduce variance and improve robustness
  • 02. Gradient boosting (XGBoost/LightGBM/CatBoost): sequentially reduce residual errors for strong predictive performance
  • 03. Model ensembling in production/competitions: average/vote across diverse models for better generalization
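The bagging and boosting mechanics behind uses 01 and 02 reduce to two small operations: sampling with replacement, and reweighting toward errors. A hedged sketch, with hypothetical helper names:

```javascript
// Bagging: draw a bootstrap sample (with replacement) per base learner,
// so parallel learners see different views of the same data.
function bootstrapSample(data, rng = Math.random) {
  const sample = [];
  for (let i = 0; i < data.length; i++) {
    sample.push(data[Math.floor(rng() * data.length)]);
  }
  return sample;
}

// Boosting (AdaBoost-style): upweight the points the last learner got
// wrong, then renormalize so the weights sum to 1.
function reweight(weights, correct, beta = 2) {
  const updated = weights.map((w, i) => (correct[i] ? w : w * beta));
  const total = updated.reduce((a, w) => a + w, 0);
  return updated.map(w => w / total);
}
```

Real libraries (XGBoost and friends) fit each new learner to gradient residuals rather than reweighting samples, but the sequential error-focusing idea is the same one the Boosting stage animates.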

technical notes

Pure Canvas2D, green-on-black blocky UI with grid snapping. Uses a time-based 3-stage cycle (4.5s) with ease() for smooth transitions. Learner predictions are deterministic via a small hash() for stable per-learner diversity; boosting uses normalized weights and a staged highlight to convey sequential training.
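The hash() and ease() mentioned above might look something like the following. The names come from the notes; the bodies are assumptions (a common shader-style sine hash and a smoothstep easing), not the visualization's actual implementation.

```javascript
// Deterministic per-learner hash mapped to [0, 1): gives each learner a
// stable pseudo-random offset, so predictions differ across learners but
// never flicker between frames.
function hash(i) {
  const x = Math.sin(i * 12.9898 + 78.233) * 43758.5453;
  return x - Math.floor(x); // keep the fractional part
}

// Smoothstep easing for the stage transitions in the 4.5s cycle:
// zero slope at both ends, so stages blend in and out smoothly.
function ease(t) {
  const u = Math.min(Math.max(t, 0), 1); // clamp to [0, 1]
  return u * u * (3 - 2 * u);
}
```

The key property for the animation is determinism: calling hash(i) twice for the same learner index yields the same value, so per-learner diversity is stable across redraws.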