FACE RECOGNITION USING TRANSFER LEARNING (task-4)

We know that Convolutional Neural Networks (CNNs) can be used in almost every area that has big data, and they help a lot in making systems AI-enabled. However, as with every new technology in the market, there are some limitations and challenges that need to be resolved.

ResNet, VGG and Inception are some of the architectures that newer CNN research produced to meet the challenges of the older CNNs. These three architectures were arrived at after many trials of different filter and CRP (Convolution-ReLU-Pooling) combinations in the search for better accuracy. They are trained on the standard ImageNet dataset, and as pre-trained models they give us weights and biases that we can reuse to perform TRANSFER LEARNING.
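
As a quick illustration, here is a minimal sketch of loading these pre-trained ImageNet weights in Keras (assuming TensorFlow 2.x; the `include_top=False` flag drops each model's original classifier so that only the reusable convolutional base remains):

```python
# Minimal sketch: loading pre-trained ImageNet weights in Keras (TensorFlow 2.x assumed).
from tensorflow.keras.applications import ResNet50, VGG16, InceptionV3

# Each call downloads weights learned on the ImageNet dataset.
# include_top=False drops the final fully connected classifier and keeps
# only the convolutional base whose weights and biases we will reuse.
resnet = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
vgg = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
inception = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))

print(resnet.name, "has", len(resnet.layers), "layers")
```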

WHAT IS TRANSFER LEARNING?

The pre-trained models given above are trained for their own categories, and there are many categories for which they are not trained. For example, the image categories provided by ImageNet will not include any image with your name tag, so if you give one of your own images to the model for prediction, it won't be able to identify it.

Now, if we need to create a CNN model for a new image category from scratch, there are many issues that we face: we need a big dataset of images, and we need to build a deep network containing 10-18 CRP blocks plus fully connected layers, which requires a lot of computing power. So, we need to find a solution that meets the needs of this model without paying that cost.

--> WHAT CAN WE DO?

We'll use the pre-trained networks stated above and utilize their learning. We know that all the objects in the world share basic low-level similarities such as edges, textures and shapes. So, we'll take these models' ability to differentiate between objects and transfer it to the detection of new images.

This knowledge that the new model acquires and uses in the prediction of new images is called "TRANSFER LEARNING".

CONCEPT OF FREEZING AND FINE-TUNING

We already know that training a complete model requires a lot of computing power. This is the reason why we'll not retrain the complete convolutional base. Instead, we'll freeze the layers that we don't want to train and add one more fully connected layer whose job will be to learn the main features of the new image category you want to add. Freezing doesn't mean that the model is trained by bypassing the frozen layers. It means that those layers already have their weights and biases and already know which features to extract, so the expensive part of training, updating their weights through backpropagation, is no longer a headache.
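
A minimal sketch of this freezing step, assuming the Keras VGG16 base from the earlier snippet and a hypothetical count of new categories, could look like this:

```python
# Sketch: freeze the pre-trained convolutional base and add a new FC head.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

num_classes = 5  # hypothetical number of new image categories

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze every layer: their ImageNet weights and biases stay fixed, so
# backpropagation never updates them (forward passes still run through them).
for layer in base.layers:
    layer.trainable = False

# New fully connected head that learns the features of the new categories.
x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)
output = Dense(num_classes, activation="softmax")(x)

model = Model(inputs=base.input, outputs=output)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```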

Now, in order to get better accuracy on a new image category, instead of freezing all the layers as we did in plain transfer learning, we'll keep most of the layers frozen but let the last few continue training from the initial weights of ResNet, VGG or Inception, which trains the model to better accuracy than transfer learning alone. This variant of transfer learning is called FINE-TUNING.
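
Continuing the sketch above (reusing the hypothetical `base` and `model` objects), fine-tuning might look like this; unfreezing exactly four layers is purely illustrative:

```python
# Sketch: fine-tuning - unfreeze only the last few layers of the pre-trained base.
# Assumes the `base` and `model` objects from the previous freezing sketch.
from tensorflow.keras.optimizers import Adam

for layer in base.layers[:-4]:   # keep the earlier layers frozen
    layer.trainable = False
for layer in base.layers[-4:]:   # let the last four layers adapt to the new data
    layer.trainable = True

# A small learning rate only nudges the pre-trained weights instead of destroying them.
model.compile(optimizer=Adam(learning_rate=1e-5),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```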

Below is a task that shows the code for face recognition using TRANSFER LEARNING with the MobileNet model, a lightweight architecture similar to the ones given above.

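A minimal sketch of such a MobileNet pipeline, assuming a TensorFlow 2.x environment and face images arranged in hypothetical `faces/train` and `faces/val` folders (one sub-folder per person), might look like this; it is a sketch of the approach, not the exact original code:

```python
# Sketch: face recognition via transfer learning on MobileNet (TensorFlow 2.x assumed).
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)
NUM_PEOPLE = 2  # hypothetical: number of faces to recognise

# 1. Load MobileNet pre-trained on ImageNet, without its classifier head.
base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# 2. Freeze the convolutional base so its ImageNet weights stay fixed.
for layer in base.layers:
    layer.trainable = False

# 3. Add a new fully connected head for the face categories.
x = GlobalAveragePooling2D()(base.output)
x = Dense(512, activation="relu")(x)
output = Dense(NUM_PEOPLE, activation="softmax")(x)
model = Model(inputs=base.input, outputs=output)

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# 4. Load face images; the directory paths are hypothetical placeholders.
train_gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                               horizontal_flip=True).flow_from_directory(
    "faces/train", target_size=IMG_SIZE, batch_size=16, class_mode="categorical")
val_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "faces/val", target_size=IMG_SIZE, batch_size=16, class_mode="categorical")

# 5. Train only the new head, then save the model.
model.fit(train_gen, validation_data=val_gen, epochs=5)
model.save("face_recognition_mobilenet.h5")
```

Once trained, predicting a new face is a single `model.predict` call on a preprocessed 224x224 image.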

And then finally, with an accuracy of approximately 99%, the model gave correct results.

Happy to help you! Keep learning, keep growing.
