
regularization

Visualizes regularization as an augmented objective (loss_total = loss_data + λ·penalty) and shows how different penalties affect parameters and generalization: L2 smoothly shrinks weights toward zero, L1 drives many weights to exactly zero (sparsity), and dropout randomly masks units during training to reduce co-adaptation (a form of implicit model averaging).
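The augmented objective can be sketched in a few lines of numpy. The weight values and λ below are hypothetical, chosen only to make the two penalty terms easy to compare:

```python
import numpy as np

# toy weight vector (hypothetical values for illustration)
w = np.array([1.5, -0.8, 0.05, 0.0, 2.0])
lam = 0.1  # regularization strength λ (assumed value)

# penalty terms added to the data loss: loss_total = loss_data + λ·penalty
l2_penalty = lam * np.sum(w ** 2)     # ridge: smooth, quadratic shrinkage
l1_penalty = lam * np.sum(np.abs(w))  # lasso: kinked at zero, promotes sparsity
```

The L1 term is non-differentiable at zero, which is exactly why its minimizers tend to land on exact zeros rather than merely small values.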


practical uses

  • 01. Reduce overfitting in linear/logistic regression with L2 weight decay
  • 02. Feature selection / sparse models with L1 (LASSO) regularization
  • 03. Improve neural network generalization with dropout during training
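The dropout case above is easy to demonstrate directly. This is a minimal sketch of inverted dropout in numpy, not the visualization's own implementation; the keep probability and array size are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)  # deterministic RNG, as in the visual
p_keep = 0.8                    # keep probability (assumed value)
activations = np.ones(1000)     # stand-in for a layer's outputs

# inverted dropout: randomly zero units, rescale survivors by 1/p_keep
# so the expected activation is unchanged between train and test
mask = rng.random(activations.shape) < p_keep
dropped = activations * mask / p_keep
```

At test time no mask is applied; the 1/p_keep rescaling during training is what keeps the two regimes consistent.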

technical notes

Time-cycled modes (3s each) animate λ and update a toy weight vector. L2 uses multiplicative shrink; L1 uses soft-thresholding with visual zero snapping; Dropout uses a deterministic per-step RNG mask. All geometry is snapped to a small grid for a blocky green-on-black aesthetic and scales with min(w,h)/240.
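The two weight updates named above can be sketched as standalone functions. This is an illustrative reconstruction, not the visualization's source; the learning rate is an assumed parameter:

```python
import numpy as np

def l2_shrink(w, lam, lr=0.1):
    # a gradient step on λ·‖w‖² alone multiplies every weight by the
    # same factor < 1: smooth, proportional shrinkage toward zero
    return w * (1.0 - 2.0 * lr * lam)

def soft_threshold(w, t):
    # proximal operator of the L1 penalty: shrink magnitudes by t and
    # snap anything that crosses zero to exactly zero (sparsity)
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
```

The "visual zero snapping" in the animation corresponds to the `np.maximum(..., 0.0)` clamp: once a weight's magnitude falls below the threshold, it stays at exactly zero rather than oscillating around it.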