Perceptron in ML
Dr. A. Sumithra Gavaskar
Associate Professor at SNS College of Technology, Research Co-ordinator, Dept. of CSE
#snsinstitutions #snsdesignthinkers #designthinking
A perceptron, the basic unit of a neural network, comprises several essential components that work together to process information.
Input Features: The perceptron takes multiple input features; each one represents a characteristic or attribute of the input data.
Weights: Each input feature is associated with a weight that determines how strongly that feature influences the perceptron’s output. During training, these weights are adjusted toward their optimal values.
Summation Function: The perceptron computes the weighted sum of its inputs, multiplying each input by its corresponding weight and adding the results together.
Activation Function: The weighted sum is then passed through an activation function. The perceptron uses the Heaviside step function, which compares the weighted sum against a threshold and outputs 0 or 1.
Output: The final output of the perceptron is determined by the activation function’s result. In binary classification problems, for example, the output represents the predicted class (0 or 1).
Bias: A bias term is usually included in the perceptron model. It allows the model to shift its decision boundary independently of the inputs and is an additional parameter learned during training.
Learning Algorithm (Weight Update Rule): During training, the perceptron learns by adjusting its weights and bias according to a learning algorithm. A common approach is the perceptron learning algorithm, which updates the weights based on the difference between the predicted output and the true output (see the sketch after this list).
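Putting these components together, here is a minimal sketch in Python (using NumPy; the function names step, predict, and update are illustrative, not from any particular library) of the weighted sum, step activation, bias, and the perceptron weight-update rule:

```python
import numpy as np

def step(z):
    # Heaviside step activation: output 1 if the weighted sum reaches the threshold (0), else 0
    return 1 if z >= 0 else 0

def predict(x, w, b):
    # Weighted sum of inputs plus bias, passed through the step function
    return step(np.dot(w, x) + b)

def update(x, y_true, w, b, lr=0.1):
    # Perceptron learning rule: nudge weights and bias by the prediction error
    error = y_true - predict(x, w, b)
    w = w + lr * error * x
    b = b + lr * error
    return w, b
```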
Build the Single-Layer Perceptron Model
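As one possible way to build the model described above, the sketch below trains a single-layer perceptron on the logical AND function; the Perceptron class, its hyperparameters, and the toy dataset are assumptions chosen purely for illustration:

```python
import numpy as np

class Perceptron:
    """Single-layer perceptron with a Heaviside step activation (illustrative sketch)."""

    def __init__(self, n_features, lr=0.1, epochs=20):
        self.w = np.zeros(n_features)  # one weight per input feature
        self.b = 0.0                   # bias term
        self.lr = lr                   # learning rate
        self.epochs = epochs           # passes over the training data

    def predict(self, x):
        # Weighted sum plus bias, thresholded at 0
        return 1 if np.dot(self.w, x) + self.b >= 0 else 0

    def fit(self, X, y):
        # Perceptron learning algorithm: update weights and bias from the prediction error
        for _ in range(self.epochs):
            for xi, yi in zip(X, y):
                error = yi - self.predict(xi)
                self.w += self.lr * error * xi
                self.b += self.lr * error
        return self

# Example: learn the logical AND function (linearly separable, so the perceptron converges)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
model = Perceptron(n_features=2).fit(X, y)
print([model.predict(xi) for xi in X])  # expected: [0, 0, 0, 1]
```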