
sequence-to-sequence-modeling

Visualizes an encoder-decoder seq2seq model in which the encoder produces hidden states H = (h_1, …, h_S). At each decoder step t (cycled automatically), attention weights α_t are computed over the source positions, giving a context vector c_t = Σ_s α_{t,s} h_s. The active decoder cell uses c_t (together with prior outputs) to shape an illustrated output distribution P(y_t | y_<t, x).
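The attention step above can be sketched in plain JavaScript. This is a minimal illustration, not the visualization's actual source: it assumes dot-product scoring between a decoder query and the encoder states, with `softmax` and `attend` as hypothetical helper names.

```javascript
// Numerically stable softmax over an array of scores.
function softmax(scores) {
  const m = Math.max(...scores);
  const exps = scores.map(s => Math.exp(s - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// Dot-product attention: encoderStates is an array of vectors h_1..h_S,
// query stands in for the decoder state at step t.
function attend(encoderStates, query) {
  // score each source position s by <h_s, query>
  const scores = encoderStates.map(h =>
    h.reduce((acc, v, i) => acc + v * query[i], 0));
  const alpha = softmax(scores); // attention weights α_t over positions
  // context vector c_t = Σ_s α_{t,s} h_s
  const context = encoderStates[0].map((_, i) =>
    encoderStates.reduce((acc, h, s) => acc + alpha[s] * h[i], 0));
  return { alpha, context };
}
```

The weights `alpha` sum to 1, so the context vector is a convex combination of the encoder states, which is what the weighted-connection rendering depicts.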


practical uses

  • 01. Machine translation (source sentence → target sentence)
  • 02. Abstractive summarization (document → summary)
  • 03. Speech-to-text / transcription (audio frames → tokens)

technical notes

Decode steps cycle on a 3.6 s loop. Attention weights are generated from a drifting peak and normalized via softmax, then rendered as weighted connections plus a bar chart. The context magnitude is shown as a context bar and modulates a small output-probability panel. Rendering uses a snapped pixel grid, a green-on-black palette, and only the Canvas 2D API.
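The drifting-peak weight generation described above could look roughly like this. The Gaussian shape, peak width, and function name `driftingWeights` are assumptions for illustration; only the 3.6 s loop and the softmax normalization come from the notes.

```javascript
const LOOP = 3.6; // seconds per decode cycle, as stated in the notes

// Returns S softmax-normalized attention weights whose peak drifts
// across the source positions as time t (in seconds) advances.
function driftingWeights(t, S) {
  const center = ((t % LOOP) / LOOP) * (S - 1); // peak position at time t
  const logits = Array.from({ length: S }, (_, s) => {
    const d = s - center;
    return -(d * d) / 2; // Gaussian-shaped score; unit width is an assumption
  });
  // numerically stable softmax
  const m = Math.max(...logits);
  const exps = logits.map(x => Math.exp(x - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}
```

Each animation frame, the weights for the current time drive both the connection opacities and the bar-chart heights, so the two views stay in sync by construction.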