Transfer Learning for Face Recognition
Our task is to create a face recognition model using transfer learning.
First of all, I imported the pre-trained MobileNet model, which was trained on the ImageNet image dataset. This pre-trained model has many layers; we freeze all of them (that is what we do in transfer learning) and we do not include the model's last layer, which has a softmax activation function.
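A minimal sketch of this step, assuming the standard Keras MobileNet API and a 224×224 input size (the input size is my assumption, it is not stated above):

```python
from keras.applications import MobileNet

# Load MobileNet pre-trained on the ImageNet dataset, without its final
# softmax classification layer (include_top=False).
base_model = MobileNet(weights='imagenet',
                       include_top=False,
                       input_shape=(224, 224, 3))

# Freeze every pre-trained layer so only the new layers we add will be trained.
for layer in base_model.layers:
    layer.trainable = False
```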
After this we add additional layers on top of the pre-trained architecture. First we add an average pooling layer, then three dense layers, and finally an output layer with a softmax activation function for prediction. We add this output layer because we did not include it when we imported the model, and this new layer is what gives the network the ability to predict our faces.
Now, from Keras, we import Model and layers such as Dense, Flatten and average pooling. Then we initialise the number of classes we have, connect the new output layers to the model's input, and print the model summary, as sketched below.
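Roughly, the new head can be built like this; the exact dense-layer sizes (1024, 512, 256) are assumptions for illustration, and base_model is the frozen MobileNet from the sketch above:

```python
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

num_classes = 2  # assumption: two people, matching the folders 'n0' and 'n1'

# New classification head on top of the frozen MobileNet base.
x = base_model.output
x = GlobalAveragePooling2D()(x)        # average pooling layer
x = Dense(1024, activation='relu')(x)  # three dense layers
x = Dense(512, activation='relu')(x)
x = Dense(256, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)  # new output layer

model = Model(inputs=base_model.input, outputs=predictions)
model.summary()
```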
After all this, we load our dataset for face recognition and apply augmentation to it. Augmentation means tilting the images, zooming them, and rotating them by some angle. Once augmentation is done, we load the dataset again, now containing the augmented images.
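A sketch of the augmentation and loading step using Keras' ImageDataGenerator; the directory names and the exact augmentation ranges are my assumptions:

```python
from keras.preprocessing.image import ImageDataGenerator

train_dir = 'dataset/train'        # assumed paths
val_dir = 'dataset/validation'

# Augment the training images: rotate, zoom, shear ("tilt") and flip them.
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=20,
                                   zoom_range=0.2,
                                   shear_range=0.2,
                                   horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)

# Reload the dataset through the generators, now yielding augmented images.
train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(224, 224), batch_size=16, class_mode='categorical')
validation_generator = val_datagen.flow_from_directory(
    val_dir, target_size=(224, 224), batch_size=16, class_mode='categorical')
```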
After augmentation we ended up with around 280 images for training the model and 64 images for validation.
Now it's time to train the model with the changes we have made. We use the RMSprop optimizer with categorical_crossentropy as the loss function.
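Compiling the model might look like this (leaving the learning rate at Keras' default is an assumption on my part):

```python
from keras.optimizers import RMSprop

# RMSprop optimizer with categorical cross-entropy loss, tracking accuracy.
model.compile(optimizer=RMSprop(),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```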
We set the number of epochs to 3 and batch_size to 16. The batch size means the model trains on 16 images in a single go, then the next 16, and so on until the epoch is complete. I got around 87.5% val_accuracy in the 3rd epoch, but 91.6% val_accuracy in the 2nd epoch. It's our choice how many epochs we want; looking at these numbers, stopping at 2 epochs would be better. We save the model as 'facerecog_mobilenet.h5'.
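A sketch of the training and saving step, assuming the generators defined earlier; fit_generator is the older Keras API (newer versions accept generators directly in model.fit):

```python
# Train for 3 epochs; the batch size of 16 is set on the generators above.
history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples // 16,
    epochs=3,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // 16)

# Save the customised model for later use.
model.save('facerecog_mobilenet.h5')
```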
Now we import our newly trained model that we customised for face recognition and start testing. Our dataset has two folders, 'train' and 'validation'. Inside 'train' there are two folders, 'n0' and 'n1', and 'validation' contains the same two folders. We assign person names to these folders.
Then we make predictions. We randomly take 10 images from the validation set to check whether the predictions are right or wrong, and we use the cv2 module to display the images. A rough sketch of this testing step is shown below; let's see our predictions.
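In this sketch, the mapping of the folders 'n0'/'n1' to the person names is my assumption, as are the paths:

```python
import os
import random

import cv2
import numpy as np
from keras.models import load_model
from keras.preprocessing import image

model = load_model('facerecog_mobilenet.h5')

# Assumed mapping of class indices to the folder/person names.
face_labels = {0: 'Anurag_Mittal', 1: 'Aamir_Khan'}
val_dir = 'dataset/validation'

# Collect all validation image paths, then pick 10 at random.
all_images = [os.path.join(val_dir, folder, fname)
              for folder in os.listdir(val_dir)
              for fname in os.listdir(os.path.join(val_dir, folder))]

for path in random.sample(all_images, 10):
    img = image.load_img(path, target_size=(224, 224))
    arr = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
    name = face_labels[int(np.argmax(model.predict(arr)))]

    # Show the image with the predicted name as a header, using cv2.
    frame = cv2.imread(path)
    cv2.putText(frame, name, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1, (0, 255, 0), 2)
    cv2.imshow('Prediction', frame)
    cv2.waitKey(0)

cv2.destroyAllWindows()
```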
See, it shows the correct image (my face) with the header Anurag_Mittal. You can match it with the output shown on the left side. Let's check one more.
Yes, it works fine again. It predicts the right person: the image is shown with the header Aamir Khan, and the output also shows Aamir Khan. It means our model is working well.
The task/project is complete.
GitHub URL - https://github.com/anurag08-git/mlopstask2.git
Thank you for reading and for giving your valuable time.