A two-layer neural network solves classification by learning a geometric transformation. It maps input points into a hidden representation where the two classes become linearly separable, then divides them with a single flat cut (a hyperplane).
This makes the learned algorithm unusually legible. Rather than opaque weight matrices, you get a concrete picture: the network is finding a way to bend space so that a single hyperplane can separate what wasn’t directly separable before.
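That two-step picture, bend space, then cut it with a hyperplane, can be sketched in a few lines. This is a minimal illustration, not the demo's actual implementation: the hidden size of 3, the tanh activation, and the random weights are all assumptions chosen so the hidden representation is 3D, matching the visualization described below.

```python
import numpy as np

# Illustrative sketch of a two-layer classifier; sizes, activation,
# and weights are assumptions, not this demo's actual configuration.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input (2D) -> hidden (3 neurons)
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))   # hidden (3D) -> one output logit
b2 = np.zeros(1)

def hidden(x):
    # The nonlinear "bend": each 2D input lands in 3D hidden space.
    return np.tanh(x @ W1 + b1)

def predict(x):
    # A single hyperplane (W2, b2) then splits the hidden points.
    logit = hidden(x) @ W2 + b2
    return (logit > 0).astype(int)
```

With random weights the hyperplane cuts the hidden points arbitrarily; training moves W1 and W2 until the cut agrees with the class labels.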
Choose a dataset below, then drag the neuron weight sliders or run gradient descent to watch the transformation take shape.
Input space shows the raw data and the decision boundary the network draws — the curve where class predictions flip. Hidden space projects each data point’s hidden-layer activations into 3D, revealing the shape of the representation the network has learned. Drag the 3D view to rotate it.
Switch datasets to see how the geometry changes — circles require a qualitatively different transformation than XOR. Change the activation function, or reduce the hidden layer to a single neuron, to watch representational capacity shrink. Open the training section to step through gradient descent and watch the loss landscape guide the weights toward a solution.
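The gradient-descent loop behind the training section can be sketched end to end. This is a hedged stand-in, assuming a 3-neuron tanh hidden layer, a sigmoid output trained with binary cross-entropy on the four XOR points, and a fixed learning rate; the demo's actual settings may differ.

```python
import numpy as np

# Assumed setup: XOR data, tanh hidden layer, sigmoid output,
# mean binary cross-entropy loss, plain gradient descent.
rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])   # XOR labels

W1 = rng.normal(size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1)); b2 = np.zeros(1)
lr = 0.5
losses = []

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(3000):
    # Forward pass: bend the inputs, then cut with a hyperplane.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
    # Backward pass: gradients of the mean binary cross-entropy.
    dlogit = (p - y) / len(X)
    dW2, db2 = h.T @ dlogit, dlogit.sum(0)
    dh = (dlogit @ W2.T) * (1.0 - h**2)   # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = X.T @ dh, dh.sum(0)
    # One gradient-descent step on every weight.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

Each iteration of this loop corresponds to one step along the loss landscape; early steps mostly reshape the hidden representation, and the loss falls as the four points become separable there.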