Why should you learn 'Deep Learning'?

Recently, the term 'Deep Learning' has been very much in vogue, and rightfully so. Why? Because it is set to be one of the most sought-after skills in the near future. What is deep learning? Deep learning is essentially another name for artificial neural networks (ANNs) in a more refined and more capable form. ANNs have existed for more than 40 years. In the 2000s, NVIDIA's GPUs, originally built for graphics, were repurposed for scientific computing, which dramatically accelerated ANN training, and the use of neural networks has picked up ever since.

In machine learning and cognitive science, artificial neural networks (ANNs) are a family of models inspired by biological neural networks (the central nervous systems of animals, in particular the brain) and are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are generally presented as systems of interconnected "neurons" which exchange messages between each other. The connections have numeric weights that can be tuned based on experience, making neural nets adaptive to inputs and capable of learning.

Why is 'Deep Learning' called deep? Because of the structure of the ANNs. Forty years ago, neural networks were only two layers deep, since it was not computationally feasible to build larger networks. Today, networks with 10+ layers are common, and even networks with 100+ layers are being experimented with.

You can essentially stack layers of neurons on top of each other. The lowest layer takes in raw data such as images, text, or sound, and each neuron stores some information about the data it encounters. Each neuron in a layer sends information up to the next layer of neurons, which learns a more abstract version of the data below it. So the higher you go, the more abstract the features become. The picture below shows a network with 5 layers, 3 of which are hidden layers.
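The stacking idea can be sketched in a few lines of NumPy. This is a hypothetical toy network, not from the article: a 5-layer architecture (input, 3 hidden layers, output) with random, untrained weights, just to show how each layer's output becomes the next layer's input.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0.0, x)

# 5 layers total: input, 3 hidden layers, output (sizes are arbitrary)
layer_sizes = [8, 16, 16, 16, 4]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    # Pass the raw input up through the stack; each layer feeds the next
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # final (output) layer, no activation

raw_input = rng.normal(size=(1, 8))  # stands in for raw data (pixels, text, sound)
output = forward(raw_input)
print(output.shape)  # (1, 4)
```

Each matrix multiplication plus nonlinearity is one layer; training would tune the weights, but the flow of information from raw input to abstract output is the same.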

Why feature engineering may turn obsolete?

Today, as a data scientist, you probably learn feature engineering as part of your machine learning skill set. Feature engineering means transforming your data into a form the computer can understand. You might use R, Python, or spreadsheet software to translate your data into a large table of numbers, with rows for instances and columns for features. You then feed this table into a machine learning algorithm, which tries to learn from it. Because engineering features is time consuming, you want to extract only the features that actually improve your model. But since you cannot know how useful a feature is until you train and test the model, you are caught in a vicious cycle: develop new features, rebuild the model, measure the results, and repeat until you are satisfied. This eats up a great deal of your time.
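To make this concrete, here is a hypothetical sketch of the manual step described above: hand-computed summary features (the feature names and the edge threshold are my assumptions, not from the article) turn raw arrays into the familiar instances-by-features table.

```python
import numpy as np

rng = np.random.default_rng(1)
raw_images = rng.random((5, 8, 8))  # 5 tiny 8x8 grayscale "images"

def hand_features(img):
    # Features chosen by hand, before any model ever sees the data
    return [
        img.mean(),                                          # average brightness
        img.std(),                                           # rough contrast
        float((np.abs(np.diff(img, axis=1)) > 0.5).sum()),   # crude edge count
    ]

# One row per instance, one column per feature --
# exactly the spreadsheet of numbers the text describes
feature_table = np.array([hand_features(img) for img in raw_images])
print(feature_table.shape)  # (5, 3)
```

Every change to `hand_features` means rebuilding the table and retraining the model, which is the repeat-until-satisfied cycle in action.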

How deep learning may save your time?

In deep learning, the ANN extracts features automatically, instead of relying on the manual extraction of feature engineering. Take an image as input. Instead of hand-computing features such as color distributions, image histograms, or distinct color counts, we simply feed the raw images into the ANN. ANNs have already proved their worth on images, and they are now being applied to all kinds of other data, such as raw text and numbers. This frees the data scientist to concentrate on building the deep learning models themselves.
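As a rough illustration of what a single convolutional layer does with raw pixels, here is a minimal 2-D convolution in plain NumPy. The edge-detecting filter is hard-coded here for clarity; in a real CNN, the network would learn such filters from the data on its own.

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image and record its response at each position
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                     # raw pixels containing a vertical edge
edge_kernel = np.array([[-1.0, 1.0]])  # responds to left-to-right brightness changes

feature_map = conv2d(image, edge_kernel)
print(feature_map.shape)  # (6, 5)
```

The feature map lights up exactly where the edge sits, with no histogram or color statistic computed by hand; stacking many such learned filters in layers is what gives a CNN its increasingly abstract features.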

Big Data is required for deep learning

Feature engineering may soon become obsolete, but deep learning algorithms require massive amounts of data to feed our models. Fortunately, we now have big data sources that were not available two decades ago: Facebook, Twitter, Wikipedia, Project Gutenberg, and so on. The bottleneck, however, remains in cleaning and processing these data into the format required to power machine learning models. More and more big data will be made available for public consumption in the near future.

Where can you get help in deep learning?

Open source is predominant in deep learning: TensorFlow, Torch, Keras, the Big Sur hardware design, DIGITS, and Caffe are some of the major deep learning projects. In academic research, many papers publish their algorithms' source code along with their findings. arXiv.org offers open access to over a million papers across many fields, including deep learning.

Do you need a computer expertise to learn deep learning?

You do not need deep computer science expertise to learn deep learning. Domain knowledge of your own discipline will help you build deep learning models. If you have learned a data science language such as R or Python, can use a spreadsheet such as Excel or do some other basic programming, and have studied a STEM subject, then you are well prepared to delve into deep learning.

What are deep learning advantages?

One advantage we have already covered: you do not have to figure out the features ahead of time. Another is that the same neural network approach can be applied to many different problems that were traditionally handled by methods such as Support Vector Machines, linear classifiers, regression, Bayesian models, decision trees, clustering, and association rules. Neural networks are also fault tolerant and scale well. Many perceptual tasks have been performed with the help of CNNs. Some case studies are given below:

How does classical machine learning with feature engineering compare with deep learning using CNNs? The comparison can be seen with the help of these pictures.

From these pictures, you can see the wide applicability of deep learning in all aspects of life. I hope this is motivation enough for you to start learning deep learning and excel in your data science and machine learning skills.
