The Neuron & Forward Pass Explainer

by Ancil Cleetus

Every neural network, no matter how large, is built from a single repeating unit: the neuron. Understand one neuron and you understand the whole stack.

Biological vs. artificial neuron

The artificial neuron is loosely inspired by biology. Dendrites → inputs, synaptic strength → weights, cell body → weighted sum + activation, axon → output.

[Figure: side-by-side diagrams. Left, a biological neuron: dendrites, cell body, axon, terminals. Right, an artificial neuron: inputs x₁, x₂, x₃ scaled by weights w₁, w₂, w₃, summed with bias b, passed through activation f( ) to produce output y.]
Each input x corresponds to a dendrite, each weight w corresponds to synaptic strength, and the summation node is the cell body deciding whether to "fire."

A worked example: weights & inputs

To make this concrete, fix the input values, weights, and bias, and trace the numbers through the neuron:

Inputs (x): x₁ = 1.5, x₂ = −2.0, x₃ = 0.8
Weights (w): w₁ = 0.40, w₂ = −0.50, w₃ = 0.30
Bias: b = 0.10
Activation: ReLU

The weighted sum is z = 1.94, and because ReLU passes positive values through unchanged, the output is a = f(z) = 1.94. The neuron fired.

The forward pass — step by step

Data flows left to right through the neuron. Using the example values above (x₁ = 1.5, x₂ = −2.0, x₃ = 0.8, with w₁ = 0.40, w₂ = −0.50, w₃ = 0.30, b = 0.10, and ReLU), the pass breaks into four stages:

1. Multiply: scale each input by its weight: 0.40 · 1.5 = 0.60, (−0.50) · (−2.0) = 1.00, 0.30 · 0.8 = 0.24.
2. Sum: add the products: 0.60 + 1.00 + 0.24 = 1.84.
3. Shift: add the bias: z = 1.84 + 0.10 = 1.94.
4. Activate: apply ReLU: a = max(0, 1.94) = 1.94. The neuron fires.
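The same four stages in code: a minimal sketch in plain Python (no framework needed; the variable names mirror the notation above).

```python
# Forward pass for a single neuron, using the example values above.
x = [1.5, -2.0, 0.8]       # inputs
w = [0.40, -0.50, 0.30]    # weights
b = 0.10                   # bias

# Stages 1-3: weighted sum z = Σᵢ(wᵢ · xᵢ) + b
z = sum(w_i * x_i for w_i, x_i in zip(w, x)) + b   # 0.60 + 1.00 + 0.24 + 0.10 = 1.94

# Stage 4: ReLU activation
a = max(0.0, z)   # 1.94 (the neuron fires)

print(f"z = {z:.2f}, a = {a:.2f}")
```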

Key equations

Weighted sum
z = w₁x₁ + w₂x₂ + … + wₙxₙ + b = Σᵢ(wᵢ · xᵢ) + b
Each input multiplied by its weight, summed, then shifted by the bias.
ReLU activation
a = max(0, z)
Returns z if positive, 0 otherwise. Most widely used in deep networks.
Sigmoid activation
a = 1 / (1 + e⁻ᶻ)
Squashes output to (0, 1). Used in binary classification output layers.
Tanh activation
a = (eᶻ − e⁻ᶻ) / (eᶻ + e⁻ᶻ)
Squashes output to (−1, 1). Zero-centred — often preferred over sigmoid in hidden layers.
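To compare the three, here is a small sketch applying each activation to the example's weighted sum z = 1.94 (plain Python, standard library only):

```python
import math

def relu(z):
    """ReLU: returns z if positive, 0 otherwise."""
    return max(0.0, z)

def sigmoid(z):
    """Sigmoid: squashes any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    """Tanh: squashes into (−1, 1), zero-centred."""
    return math.tanh(z)

z = 1.94
print(f"ReLU:    {relu(z):.4f}")     # 1.9400
print(f"Sigmoid: {sigmoid(z):.4f}")  # ≈ 0.8744
print(f"Tanh:    {tanh(z):.4f}")     # ≈ 0.9595
```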
Intuition: Think of weights as "importance dials" and the bias as a "default activation level" that lets the neuron fire even when all inputs are zero.
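That last point is easy to verify: with every input at zero, the weighted sum collapses to the bias, so a sigmoid neuron still produces a nonzero "default" output. A minimal sketch:

```python
import math

x = [0.0, 0.0, 0.0]              # every input silent
w = [0.40, -0.50, 0.30]
b = 0.10

z = sum(w_i * x_i for w_i, x_i in zip(w, x)) + b   # z = b = 0.10
a = 1.0 / (1.0 + math.exp(-z))                     # sigmoid
print(f"a = {a:.3f}")   # ≈ 0.525, nonzero despite all-zero inputs
```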