Machine Learning vs Deep Learning

Machine learning and deep learning are two subsets of artificial intelligence that have garnered a lot of attention in recent years. When solving a data science problem, it is often daunting to choose between machine learning and deep learning algorithms, and many of us end up trying both, which costs a lot of time.

When to apply Machine Learning

Machine Learning is a set of algorithms that parse data, learn from that data, and then apply what they’ve learned to make informed decisions.

An easy example of a machine learning algorithm in action is an on-demand video streaming service such as Netflix or Amazon Prime Video.

For the service to decide which new movies to recommend to a viewer, machine learning algorithms associate the viewer's preferences with those of other viewers who have similar movie taste. This technique, often simply touted as AI, is used in many services that offer automated recommendations.
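For intuition, here is a minimal, purely illustrative sketch of a similarity-based recommender. The ratings matrix, viewer indices, and rating threshold are all made up for this example; real services use far more sophisticated models.

import numpy as np

# Toy viewer-by-movie ratings matrix (0 = not rated). The data is made up
# purely to illustrate the idea of "viewers with similar taste".
ratings = np.array([
    [5, 4, 0, 0],   # viewer 0: the viewer we want recommendations for
    [5, 4, 5, 1],   # viewer 1: very similar taste to viewer 0
    [1, 0, 4, 5],   # viewer 2: quite different taste
], dtype=float)

def cosine_similarity(a, b):
    """Similarity between two rating vectors (1.0 = identical direction)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = 0
# Find the other viewer whose ratings are most similar to the target's.
others = [v for v in range(len(ratings)) if v != target]
most_similar = max(others, key=lambda v: cosine_similarity(ratings[target], ratings[v]))

# Recommend movies the similar viewer rated highly that the target has not seen.
recommendations = [m for m in range(ratings.shape[1])
                   if ratings[target, m] == 0 and ratings[most_similar, m] >= 4]
print("Recommend movie indices:", recommendations)   # -> [2]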


Hence, machine learning is generally used when the dataset has relatively few features, so the data can be parsed easily and patterns can be learned.

When to apply Deep Learning

In practical terms, deep learning is a subset of machine learning. Deep learning is technically similar to machine learning and functions in a similar way, but its capabilities are different.

Basic machine learning models do become progressively better at whatever their function is, but they still need some guidance.

If the algorithm returns an inaccurate prediction, the designer has to step in and make adjustments.

With a deep learning model, the algorithm can determine on its own, through its hidden layers, whether a prediction is accurate.
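As a rough illustration, here is a minimal feed-forward network with two hidden layers, sketched in Keras (assuming TensorFlow is installed). The random data and layer sizes are placeholders; the point is that the loss provides the feedback signal the network uses to correct itself during training.

import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 20).astype("float32")       # 1000 samples, 20 features (made up)
y = np.random.randint(0, 2, size=(1000,))              # binary labels (made up)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),      # hidden layer 1
    tf.keras.layers.Dense(32, activation="relu"),      # hidden layer 2
    tf.keras.layers.Dense(1, activation="sigmoid"),    # output: probability
])

# During training the network compares its predictions with the labels via the
# loss and updates its own weights, without the designer stepping in each time.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)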



Deep learning is generally used when we have numerous features or multidimensional data, and extracting features by hand would be very tedious and time-consuming.

Let us take an example to understand.

Say we have a coloured image of a cat that is 1080 x 720 pixels. Accounting for the 3 RGB channels, that is 1080 x 720 x 3 = 2,332,800 values per image, and we are building a cat vs dog classifier.


For training purposes, suppose we take 10,000 images initially.

So the total number of input values in the training process is 1080 x 720 x 3 x 10,000 = 2,332,800 x 10,000 = 23,328,000,000, or roughly 23 billion.
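The same arithmetic in a few lines of Python:

# Back-of-the-envelope check of the numbers above.
height, width, channels = 1080, 720, 3
features_per_image = height * width * channels          # 2,332,800
num_images = 10_000
total_values = features_per_image * num_images           # 23,328,000,000
print(f"{features_per_image:,} values per image")
print(f"{total_values:,} raw input values in the training set (~23 billion)")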

An algorithm working on nearly 23 billion raw input values may take an enormous amount of time to train, or may even fail, because of the huge dimensionality of the data.

So we can apply deep learning models to automatically extract useful features from this raw data, which helps in classifying dog vs cat.

We generally use Convolutional Neural Networks (CNNs) to process such high-dimensional input, because a CNN works by progressively reducing the number of features and keeping only the relevant, distinctive ones that can tell whether the image is of a cat or a dog.
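Here is a minimal, illustrative CNN sketch in Keras (assuming TensorFlow is installed). The layer choices and the 180 x 180 input size are assumptions for this example, not a prescribed architecture; in practice images are usually resized to something manageable before training.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(180, 180, 3)),          # resized RGB image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),     # learn local features
    tf.keras.layers.MaxPooling2D(),                       # halve height and width
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),             # one value per filter
    tf.keras.layers.Dense(1, activation="sigmoid"),       # cat vs dog probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()   # shows the feature dimensions shrinking layer by layer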


Sometimes, features can be reduced by using PCA (Principal Component Analysis), which combines correlated features.

For example, the number of bedrooms, carpet area, number of floors, garden area, and so on can all be merged into a smaller set of combined features describing a house.

So here we use PCA to reduce the features and then apply machine learning algorithms. The key point is that PCA is only useful when the features are correlated; it cannot meaningfully be applied to a case like the image example above, where we are dealing with millions of raw pixel values. Applying PCA there would be of little significance and could produce misleading results.
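A minimal scikit-learn sketch of this idea, with made-up house data, might look like this (the feature names and the random values are purely illustrative):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_houses = 200
bedrooms = rng.integers(1, 6, n_houses)
carpet_area = bedrooms * 400 + rng.normal(0, 50, n_houses)   # correlated with bedrooms
floors = bedrooms // 2 + 1                                    # also correlated
garden_area = rng.normal(300, 100, n_houses)                  # mostly independent

X = np.column_stack([bedrooms, carpet_area, floors, garden_area])

# Combine the four (partly correlated) features into two principal components.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                    # (200, 2)
print(pca.explained_variance_ratio_)      # fraction of variance each component keeps

In practice the features are usually standardised (zero mean, unit variance) before PCA so that a large-valued feature such as carpet area does not dominate the components.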

Thus, if the amount of data increases and the desired accuracy cannot be achieved even after applying dimensionality reduction, then we should move towards deep learning.

For smaller amounts of data, either machine learning or deep learning can be used, but as the volume of data keeps increasing, deep learning becomes more useful.


Therefore, we can conclude that when we have fewer features or a smaller volume of data, machine learning algorithms can be used easily. When the volume of data increases and extracting features becomes arduous even after applying dimensionality-reduction algorithms such as PCA (Principal Component Analysis), deep learning can be used, since the network learns useful features on its own rather than requiring the designer to engineer them by hand.
