Neural Nets

2 May 2026

05 — Blame It on the Weights

Loss functions, gradient descent, and backpropagation — how a neural network looks at its mistakes and figures out exactly who to blame.

2 May 2026

06 — Watch It Learn

Forward pass, backprop, gradient descent — assembled into a training loop. Watch a network learn to separate XOR, circles, and spirals in real time.

2 May 2026

07 — The Vanishing Act

Why deep networks went dark in the 90s — vanishing gradients, exploding gradients, and the tricks that finally made depth work: weight init, batch norm, and residual connections.

2 May 2026

08 — Just NumPy, No Magic

Stop reading about the concepts and start writing the code. A full neural network in pure Python and NumPy — the same thing PyTorch does internally, just slower.

2 May 2026

09 — Too Good to Be True

When your network aces the training data and fails at everything else — overfitting, regularisation, dropout, and how to actually tell if your model is learning.

2 May 2026

10 — Seeing with Filters

How convolutional neural networks see images — kernels, feature maps, pooling, and why a sliding 3×3 window beats a million fully-connected weights.

1 May 2026

01 — The Numbers That Run Everything

Before we touch a single neuron, we need to speak its language. Vectors, matrices, derivatives, and the chain rule — the math that makes neural networks tick.

1 May 2026

02 — Meet the World's Dumbest Brain Cell

One neuron. A handful of weights. A rule so simple it fits in one line. And yet — it learns. This is the perceptron, and it's where everything begins.

1 May 2026

03 — The Switch That Isn't Really a Switch

Sigmoid, ReLU, tanh, softmax — the nonlinear magic that makes depth actually mean something.

1 May 2026

04 — Dominoes All the Way Down

The forward pass: how a number enters one end of a neural network and a prediction falls out the other, layer by layer.