Rectified linear unit in deep learning
#snsinstitutions #snsdesignthinkers #designthinking


Artificial neural networks (ANNs) are inspired by the biological neurons in the human body, which activate under certain circumstances, resulting in a related action performed by the body in response. Artificial neural nets consist of several layers of interconnected artificial neurons powered by activation functions that switch them on and off. As with traditional ML algorithms, there are certain values that neural nets learn during the training phase.

Briefly, each neuron receives its inputs multiplied by weights, and the result is summed with a static bias value (unique to each neuron layer). This sum is then passed to an activation function, which decides the final value the neuron outputs. Various activation functions are available depending on the nature of the input values. Once the output is generated from the final layer, the loss (comparing prediction with target) is calculated and backpropagation is performed, adjusting the weights to minimize the loss. Finding optimal values for the weights is what the overall operation revolves around.
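The forward pass described above can be sketched minimally as follows (the function names and sample values here are illustrative, not from any particular library):

```python
import numpy as np

def relu(z):
    # ReLU activation: negatives become 0, positives pass through
    return np.maximum(0, z)

def neuron_output(inputs, weights, bias, activation):
    # Weighted sum of inputs plus bias, then the activation function
    z = np.dot(inputs, weights) + bias
    return activation(z)

x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.4, 0.3, -0.2])   # weights
b = 0.1                          # bias
print(neuron_output(x, w, b, relu))
```

Here the weighted sum is 0.5·0.4 − 1.0·0.3 − 2.0·0.2 + 0.1 = −0.4, which ReLU clips to 0.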

What is an activation function?

As mentioned above, an activation function determines the final value a neuron outputs. But what exactly is an activation function, and why do we need it?

An activation function is simply a function that transforms its inputs into outputs within a certain range. Different types of activation functions perform this task in different ways. For example, the sigmoid activation function maps its input to a value between 0 and 1.
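As a quick sketch of that squashing behavior, sigmoid maps any real input into the open interval (0, 1):

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: 1 / (1 + e^-z), output always in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

for z in (-10.0, 0.0, 10.0):
    print(z, sigmoid(z))
```

Large negative inputs land near 0, large positive inputs near 1, and 0 maps exactly to 0.5.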

One reason this function is added to an artificial neural network is to help the network learn complex patterns in the data. These functions introduce the nonlinear properties of real-world data into artificial neural networks. In a simple neural network, x denotes the inputs and w the weights; f(x) is the value passed to the output of the network, which then becomes either the final output or the input to another layer.

If no activation function is applied, the output signal is a simple linear function. A neural network without activation functions acts as a linear regression model with limited learning power. But we want our neural network to learn non-linear relationships, because we feed it complex real-world data such as images, video, text, and sound.
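The collapse to a linear function can be demonstrated directly: stacking two weight matrices without an activation between them is equivalent to a single linear map, so depth adds nothing (the matrices below are arbitrary random illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))  # first "layer"
W2 = rng.standard_normal((2, 4))  # second "layer"
x = rng.standard_normal(3)        # input vector

# Two linear layers applied in sequence...
two_layer = W2 @ (W1 @ x)
# ...equal one precomputed linear map
collapsed = (W2 @ W1) @ x

print(np.allclose(two_layer, collapsed))
```

Inserting a nonlinearity such as ReLU between the two matrix multiplications breaks this equivalence, which is exactly what gives deep networks their expressive power.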

What is ReLU Activation Function?

ReLU stands for rectified linear unit, and it is considered one of the milestones of the deep learning revolution. It is simple, yet markedly better than predecessor activation functions such as sigmoid or tanh.

Both the ReLU function and its derivative are monotonic. The function returns 0 for any negative input, but for any positive value x it returns that value unchanged. It therefore produces outputs in the range 0 to infinity.

Now let us give some inputs to the ReLU activation function, see how it transforms them, and plot the results.
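A minimal sketch of that experiment, with a few sample inputs and the derivative included to show that both are monotonic (the values chosen are illustrative):

```python
import numpy as np

def relu(z):
    # ReLU: negatives map to 0, positives pass through unchanged
    return np.maximum(0, z)

def relu_derivative(z):
    # Derivative is 0 for negative inputs and 1 for positive inputs
    return np.where(z > 0, 1.0, 0.0)

inputs = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
outputs = relu(inputs)
print(outputs)            # negatives clipped to 0, positives unchanged
print(relu_derivative(inputs))
```

To visualize the characteristic hinge shape, these outputs can be plotted against the inputs with any plotting library, e.g. matplotlib's `plt.plot(inputs, outputs)`.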

