6

Neural Networks — Perceptrons, Backpropagation & MLPs

Summary

From single perceptrons to multi-layer networks: how neurons compute (weighted inputs, sigmoid activation), why random weight initialization matters, and how backpropagation actually works, traced through a manual four-step walkthrough. Builds a feedforward neural network from scratch to solve XOR — the classic problem a single-layer network cannot handle — then scales up to an MLP classifier on the Iris dataset, systematically varying the hidden-neuron count (1, 2, 4, 8, 16, 32) to see exactly how capacity affects learning.
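The from-scratch XOR network described above can be sketched roughly as follows. This is a minimal illustration, not the chapter's exact code: it assumes a 2-4-1 architecture with sigmoid activations, mean-squared-error loss, and gradient descent via manual backpropagation; the hidden-layer size, learning rate, and epoch count are arbitrary choices for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR truth table: inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initialization — with all-zero weights every hidden
# neuron would receive identical gradients and never diverge.
rng = np.random.default_rng(0)
W1 = rng.uniform(-1, 1, (2, 4)); b1 = np.zeros((1, 4))  # input -> hidden
W2 = rng.uniform(-1, 1, (4, 1)); b2 = np.zeros((1, 1))  # hidden -> output

lr = 1.0
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (sigmoid derivative is s * (1 - s))
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out.ravel()).astype(int))
```

A single-layer perceptron cannot fit this table because XOR is not linearly separable; the hidden layer lets the network compose two linear boundaries into one nonlinear decision. The same experiment with 1 hidden neuron fails for exactly that reason, which is the point of the chapter's capacity sweep.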