20 Deep Learning Terminologies

Introduction to Deep Learning Terminologies

a. Recurrent Neuron

A recurrent neuron is one whose output is sent back to itself as input for t timesteps. If we unroll this loop, the single neuron looks like a chain of connected copies of itself, one per timestep. An important property is that, by accumulating information across timesteps, it produces a more generalized output.

b. RNN (Recurrent Neural Network)

We use recurrent neural networks (RNNs) especially for sequential data, because the previous output is used to help predict the next one. An RNN contains loops within the network: a hidden neuron with a loop can store information from earlier steps, for example retaining previous words in order to predict the next one.

The output of the hidden layer is fed back for t timesteps, so the unfolded neuron looks like a chain of layers. Once the neuron completes all its timesteps, the result passes to the next layer. The output is therefore more generalized, and information fetched earlier is retained over long spans. A minimal sketch of one such recurrence appears below.
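As a rough sketch of this recurrence (sizes, weights, and names here are illustrative, not from the article), each step mixes the current input with the previous hidden state, so earlier inputs influence later outputs:

```python
import numpy as np

# Minimal sketch of an RNN hidden state unrolled over t timesteps.
# All sizes and weights are illustrative.
rng = np.random.default_rng(0)
input_size, hidden_size, t = 3, 4, 5

W_xh = rng.random((hidden_size, input_size)) * 0.1  # input-to-hidden weights
W_hh = rng.random((hidden_size, hidden_size)) * 0.1 # hidden-to-hidden (the loop)
b_h = np.zeros(hidden_size)

h = np.zeros(hidden_size)             # initial hidden state
inputs = rng.random((t, input_size))  # a sequence of t inputs

for x in inputs:                      # the loop, unrolled over t timesteps
    h = np.tanh(W_xh @ x + W_hh @ h + b_h)

print(h)  # final hidden state, influenced by the whole sequence
```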

Moreover, to update the weights of the unfolded network, we have to propagate the error backward through every timestep. This is called backpropagation through time (BPTT).

c. Vanishing Gradient Problem

The vanishing gradient problem arises when the gradient of the activation function is very small. During backpropagation, weight updates are computed by multiplying these small gradients together layer by layer, so they shrink toward zero as the error travels deeper into the network. As a result, the network forgets long-range dependencies, which is a serious problem, because remembering those dependencies is exactly what the network needs to do.

A common fix is to use an activation function such as ReLU, whose gradient does not shrink in this way.
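As a rough illustration (the depth and input value below are made up), compare how a product of sigmoid derivatives collapses over 20 layers while the ReLU derivative does not:

```python
import numpy as np

# The sigmoid derivative is at most 0.25, so multiplying it across many
# layers drives the gradient toward zero. The ReLU derivative is 1 for
# positive inputs, so the product does not shrink.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

depth = 20
x = 0.5  # an arbitrary pre-activation value

sigmoid_grad = sigmoid(x) * (1.0 - sigmoid(x))  # about 0.235 here
relu_grad = 1.0 if x > 0 else 0.0               # 1 for positive inputs

print(sigmoid_grad ** depth)  # vanishingly small (on the order of 1e-13)
print(relu_grad ** depth)     # 1.0
```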

d. Exploding Gradient Problem

We can say this is the opposite of the vanishing gradient problem: it occurs when the gradients of the activation function are too large, so the backpropagated error grows and makes the weights of particular nodes very high. We can solve it by clipping the gradient so that it does not exceed a certain value.
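A minimal sketch of gradient clipping by norm (the function name and the threshold are illustrative, not from the article): if the gradient's norm exceeds the limit, rescale it while keeping its direction.

```python
import numpy as np

# Rescale a gradient whose norm exceeds max_norm so that its norm
# equals max_norm; its direction is preserved.
def clip_by_norm(grad, max_norm=5.0):
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

exploding = np.array([300.0, -400.0])           # norm = 500, far too large
clipped = clip_by_norm(exploding, max_norm=5.0)
print(clipped, np.linalg.norm(clipped))         # direction kept, norm = 5.0
```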

e. Pooling

We can introduce pooling layers between the convolution layers. Pooling is used to reduce the number of parameters and help prevent overfitting. The most common type is a max pooling layer with a filter of size (2, 2): using the MAX operation, it takes the maximum of each 2x2 block of the input feature map.

Other kinds of pooling, such as average pooling, are also used.
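A minimal sketch of 2x2 max pooling with stride 2 (illustrative; it assumes the input height and width are even):

```python
import numpy as np

# Reshape so each 2x2 block gets its own pair of axes, then take the
# maximum over those axes.
def max_pool_2x2(x):
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.array([[1, 3, 2, 0],
                        [4, 2, 1, 5],
                        [6, 1, 0, 2],
                        [3, 7, 4, 8]])
print(max_pool_2x2(feature_map))
# [[4 5]
#  [7 8]]
```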

f. Padding

In this process, we add an extra border of zeros around the image so that the output image has the same size as the input; this is called (same) padding. When no zeros are added and the filter only covers actual, valid pixels of the image, we call it valid padding.
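An illustrative sketch: zero-pad a 3x3 image with a one-pixel border. With a 3x3 filter, the padded ("same") input yields a 3x3 output, while the unpadded ("valid") input would yield only a 1x1 output.

```python
import numpy as np

# Add a one-pixel border of zeros around a small image.
image = np.arange(1, 10).reshape(3, 3)
padded = np.pad(image, pad_width=1, mode="constant", constant_values=0)
print(padded)
# [[0 0 0 0 0]
#  [0 1 2 3 0]
#  [0 4 5 6 0]
#  [0 7 8 9 0]
#  [0 0 0 0 0]]
```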

g. Data Augmentation

It refers to creating additional training data from the given data, which can prove beneficial for prediction.

For example:

Let us assume we have an image of the digit "9". If the image is rotated or tilted, it still shows a 9, but the model may fail to recognize it. Adding rotated copies of the image to the training data therefore improves the quality of the data and helps increase the accuracy of our model. This is what we call data augmentation.
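A minimal augmentation sketch (illustrative; it assumes scipy is available, and the random array stands in for a real digit image): generate rotated copies of an image so the model also sees tilted versions of the same digit.

```python
import numpy as np
from scipy import ndimage

# A stand-in for a 28x28 grayscale digit image.
rng = np.random.default_rng(0)
image = rng.random((28, 28))

# Rotate the image by a few small angles; reshape=False keeps the
# output the same size as the input.
augmented = [ndimage.rotate(image, angle, reshape=False)
             for angle in (-15, -10, 10, 15)]
print(len(augmented), augmented[0].shape)  # 4 (28, 28)
```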

h. Softmax

We use the softmax activation function in the output layer for classification problems. It is similar to the sigmoid function, with the only difference being that the outputs are normalized to sum to 1. The sigmoid function works when we have a binary output; for a multiclass classification problem, softmax makes it easy to assign a value to each class, and those values can be interpreted as probabilities.

It is easy to see it this way: suppose you are trying to identify a 6 that also looks a bit like an 8. Softmax would assign a probability to each digit, with the highest probability assigned to 6, the next highest to 8, and so on. A small sketch of this is given below.
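A minimal, numerically stable softmax sketch. The scores below are made-up logits for the digits 0 through 9, chosen so that 6 gets the highest probability and 8 the next highest, matching the example above.

```python
import numpy as np

# Subtracting the max before exponentiating avoids overflow and does
# not change the result.
def softmax(logits):
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / exps.sum()

logits = np.array([0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 3.0, 0.1, 2.0, 0.2])
probs = softmax(logits)
print(probs.round(3))  # probabilities that sum to 1
print(probs.argmax())  # 6 -> the most likely digit
```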

i. Neural Network

Neural networks form the backbone of deep learning. The goal of a neural network is to find an approximation of an unknown function. It is a combination of interconnected neurons, each with weights and a bias that are updated during training depending on the error. The activation function applies a nonlinear transformation to the linear combination of inputs, and the combination of these activated neurons generates the output.
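A minimal sketch of a single neuron (all values are illustrative): a linear combination of the inputs plus a bias, passed through a nonlinear activation.

```python
import numpy as np

# One neuron: weighted sum of inputs plus bias, then a nonlinearity.
def neuron(x, w, b):
    z = np.dot(w, x) + b   # linear combination
    return np.tanh(z)      # nonlinear activation

x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.2, 0.4, -0.1])   # weights (updated during training)
b = 0.05                         # bias (updated during training)
print(neuron(x, w, b))
```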

j. Input layer/ Output layer / Hidden layer

The input layer receives the input and is the first layer of the network; the output layer is the final layer. The layers in between are the hidden layers, which perform transformations on the incoming data and pass the generated output to the next layer. The input and output layers are visible, while the intermediate layers are hidden. A small stack of such layers is sketched below.
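A minimal sketch of the three kinds of layers (sizes and weights are illustrative): an input layer of 3 units, one hidden layer of 4 units, and an output layer of 2 units.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(3)                           # input layer: raw features
W1, b1 = rng.random((4, 3)), rng.random(4)  # input -> hidden weights
W2, b2 = rng.random((2, 4)), rng.random(2)  # hidden -> output weights

hidden = np.tanh(W1 @ x + b1)  # hidden layer transforms the input
output = W2 @ hidden + b2      # output layer produces the result
print(output)
```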
