Face recognition using transfer learning
In this blog post, you will learn how to build a face recognition model using transfer learning.
First, let's understand what face detection and face recognition are.
Face Detection: An artificial intelligence (AI)-based computer technology used to find and locate human faces in digital images or videos. It now plays an important role as the first step in many key applications, including face tracking, face analysis, and facial recognition.
Face Recognition: A method of identifying or verifying an individual's identity using their face.
Every project like this requires a team, and I'd like to thank my team members for their contributions.
The implementation part of this project consists of several steps; some of them are as follows:
- Data generation using OpenCV to extract faces for training the model (a sketch of this step follows the list).
- After extracting the faces, we divided the generated images into two parts, a training set and a testing set, in a 7:3 ratio.
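My full data-generation script isn't included here, but a minimal sketch of the face-extraction step, assuming OpenCV's bundled Haar cascade and a placeholder folder layout like `Datasets/Train/person_1`, could look like this:

```python
# Hypothetical sketch: extract face crops from a webcam feed with OpenCV's
# Haar cascade and save them for training. Paths and counts are placeholders.
import cv2
import os

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

save_dir = 'Datasets/Train/person_1'   # one folder per person (assumption)
os.makedirs(save_dir, exist_ok=True)

cap = cv2.VideoCapture(0)              # default webcam
count = 0
while count < 100:                     # collect ~100 face crops per person
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        count += 1
        face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
        cv2.imwrite(os.path.join(save_dir, f'{count}.jpg'), face)
cap.release()
```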
In the next step we will use a pre-trained deep learning model called VGG16. We could also use a ResNet model for this, but in this article I'm using VGG16, which is readily available in the Keras module.
For reference, I'm providing some of my code snippets from this implementation.
```python
from keras.layers import Input, Lambda, Dense, Flatten
from keras.models import Model
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
import numpy as np
import matplotlib.pyplot as plt
from glob import glob
```
The modules above are used to build this kind of DL model. You can also refer to my GitHub link here.
The VGG16 architecture consists of thirteen convolutional layers, some of which are followed by max-pooling layers, then three fully-connected layers, and finally a 1000-way softmax classifier. Since VGG16 is a pretrained model, we don't have to train those layers again; they have already been trained for days on ImageNet, so there is no need to spend that time ourselves. That's why we freeze all of the pretrained layers, add a new output layer at the end, and train only that layer. This concept is known as transfer learning.
```python
# don't train existing weights
for layer in vgg.layers:
    layer.trainable = False
```
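For the snippet above to run, the `vgg` base has to be created from the pretrained ImageNet weights, and a new output layer (the `model` that is trained below) attached on top of it. That part isn't shown above, so here is a minimal sketch, assuming one sub-folder per person under `Datasets/Train`:

```python
# Load the VGG16 base without its original 1000-way classifier head.
vgg = VGG16(input_shape=[224, 224, 3], weights='imagenet', include_top=False)

# Freeze the pretrained layers (as in the snippet above).
for layer in vgg.layers:
    layer.trainable = False

# One class per sub-folder of the training directory (assumed layout).
folders = glob('Datasets/Train/*')

x = Flatten()(vgg.output)                                   # flatten conv features
prediction = Dense(len(folders), activation='softmax')(x)   # new output layer

model = Model(inputs=vgg.input, outputs=prediction)
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
```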
After initializing the model, we can use data augmentation to increase the effective size of our dataset.
```python
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('Datasets/Train',
                                                 target_size = (224, 224),
                                                 batch_size = 32,
                                                 class_mode = 'categorical')

test_set = test_datagen.flow_from_directory('Datasets/Test',
                                            target_size = (224, 224),
                                            batch_size = 32,
                                            class_mode = 'categorical')

# fit the model
r = model.fit_generator(
    training_set,
    validation_data=test_set,
    epochs=5,
    steps_per_epoch=len(training_set),
    validation_steps=len(test_set)
)
```
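Training returns a history object `r`, so if you want to check how the model is learning, you can plot the curves with matplotlib (already imported above). A small sketch:

```python
# Plot training/validation loss and accuracy from the history object.
# Note: older Keras versions use the keys 'acc' / 'val_acc' instead.
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.show()

plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.show()
```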
Now, save the model for predictions.
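A single call does the job; the file name below is just a placeholder:

```python
# Save the trained model to an HDF5 file (the file name is an assumption).
model.save('facefeatures_model.h5')
```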
And there you go: just load the model, and it will recognize the faces of the people it was trained on.
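A rough sketch of loading the saved model and running a prediction on one image (the image path is a placeholder, and the rescaling matches the generators above):

```python
from keras.models import load_model
from keras.preprocessing import image
import numpy as np

# Load the model saved in the previous step.
model = load_model('facefeatures_model.h5')

# Prepare a single test image the same way the training images were prepared.
img = image.load_img('test_face.jpg', target_size=(224, 224))  # placeholder path
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = x / 255.0  # same rescaling as the ImageDataGenerator above

pred = model.predict(x)
class_index = np.argmax(pred, axis=1)[0]
# training_set.class_indices (from the generator above) maps folder names to indices
print('Predicted class index:', class_index)
```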
Thank you.
Good work, keep learning!