Dissecting Forward Propagation in Neural Networks
Saurav Prateek
Introduction
Forward Propagation is the process where the input parameters are passed through the layers of a Neural Network to generate an output. We explained the layers of a neural net in the earlier section (Structure of a Neural Network).
Every Neuron has a set of weights and a bias, and it assigns a weight to each input parameter directed to it. For example, if a Neuron accepts 5 input parameters, it assigns 5 different weights to those parameters and adds a single bias to the calculation.
Note: The inputs can be the raw input data received by the Neurons in the Input Layer, or the outputs of the Neurons in the preceding layer, which are then consumed by the Neurons in the current layer.
Understanding the Mathematics involved
The output of a Neuron is computed via a simple mathematical recipe: each input parameter is multiplied by its weight, the products are summed, a bias is added, and the result is passed through an Activation Function to produce the Neuron's output.
Assigning a weight to every input parameter looks like this:
z = (x1·w1 + x2·w2 + x3·w3 + x4·w4 + x5·w5) + b
The resulting value z is then passed through an activation function, say f, to get the output of the Neuron.
y = f(z)
(Figure: a single Neuron combining its weighted inputs and bias, then applying the activation function f.)
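To make the two equations above concrete, here is a minimal plain-Python sketch of one Neuron's forward computation; the inputs, weights, and bias are made-up numbers for illustration:

```python
import math

# Made-up illustrative inputs, weights, and bias for a Neuron with 5 inputs.
xs = [0.5, -1.0, 2.0, 0.3, -0.7]   # input parameters x1..x5
ws = [0.1, 0.4, -0.2, 0.8, 0.05]   # one weight per input, w1..w5
b = 0.25                           # bias of the Neuron

# z = (x1·w1 + x2·w2 + x3·w3 + x4·w4 + x5·w5) + b
z = sum(x * w for x, w in zip(xs, ws)) + b

# y = f(z), using tanh as the activation function f
y = math.tanh(z)
print(f"z = {z:.4f}, y = {y:.4f}")
```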
Activation Functions
The Activation Function in an artificial Neural Network is the function that computes a Neuron's output. It takes the weighted combination of the inputs plus the bias as its input and returns the Neuron's final output.
The input given to an activation function can lie anywhere in the range from -Infinity to +Infinity. The activation function squashes the output of the Neuron into a much smaller window (e.g. from 0 to 1, or from -1 to 1).
Because the activation function is non-linear, stacking layers does more than compose linear maps: each layer effectively learns a more complex, higher-level function of the raw inputs, which lets us model very complicated relationships between the inputs and the predicted outputs.
In this repo we have used Tanh as the Activation Function, which is defined as:
tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))
The graph of tanh is an S-shaped curve: it grows steeply near zero and flattens out for large positive or negative inputs, squashing the output of a Neuron into the range [-1, 1].
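As a quick sanity check, the definition above can be implemented directly; this is a small illustrative snippet (not code from the repo) showing how the output stays within (-1, 1) for any input:

```python
import math

def tanh(z):
    # tanh(z) = (e^z - e^(-z)) / (e^z + e^(-z))
    return (math.exp(z) - math.exp(-z)) / (math.exp(z) + math.exp(-z))

# However large the input gets, the output stays within (-1, 1).
for z in [-5.0, -1.0, 0.0, 1.0, 5.0]:
    print(f"tanh({z}) = {tanh(z):.4f}")
```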
Forward Pass in action
We will use the Value class to witness the entire Forward Pass procedure in action. The code snippet below performs the Forward Pass for a single Neuron in a Neural Network.
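The repo's exact snippet is not reproduced here; the sketch below assumes a stripped-down, micrograd-style Value class (the class name comes from this article, but its internals and the concrete numbers are simplified assumptions for illustration):

```python
import math

class Value:
    """A stripped-down, micrograd-style scalar wrapper (illustrative sketch;
    the repo's actual Value class would also record the computation graph
    and gradients needed later for backpropagation)."""
    def __init__(self, data, label=''):
        self.data = data
        self.label = label

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data)

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data)

    def tanh(self):
        return Value(math.tanh(self.data))

# Forward pass through a single Neuron with two inputs.
# All concrete numbers here are illustrative.
x1, x2 = Value(2.0, label='x1'), Value(0.0, label='x2')
w1, w2 = Value(-3.0, label='w1'), Value(1.0, label='w2')
b = Value(1.5, label='b')

z = x1*w1 + x2*w2 + b   # weighted sum plus bias
y = z.tanh()            # activation
print(y.data)
```

The real Value class would additionally remember the operands and operation behind each result, so that the gradients can be traced backward through the same graph during backpropagation.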
The above forward pass can be visualized as a computation graph. (Figure: the inputs and weights feeding the weighted sum, the bias, and finally the tanh output.)
The Forward Pass happens in the same way for every Neuron, and the final output is produced by the Neurons in the Output Layer.
We then compute the loss by comparing the final output of the Neural Network with the actual output. Backpropagation works backward through the network, computing gradients that are used to update the weights and biases of every Neuron in order to reduce the loss.
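As a concrete example of that comparison, the loss could be a mean squared error over the output Neurons; the snippet below is an illustrative sketch, and both the numbers and the choice of MSE are assumptions rather than the repo's code:

```python
# Compare the network's final outputs with the actual (desired) outputs
# using mean squared error. All values here are made up for illustration.
predictions = [0.8, -0.2, 0.5]   # final outputs of the Output Layer
targets     = [1.0,  0.0, 0.5]   # actual (desired) outputs

mse = sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
print(f"loss = {mse:.4f}")
```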
Conclusion
You can check out my YouTube channel for more related technical content.
Meanwhile, you can Like and Share this edition with your peers, and Subscribe to this newsletter so you get notified when I publish more content in the future.
Until next time, Dive Deep and Keep Learning!