Face Recognition by Transfer Learning
Anudeep Nalla
In this project, I have created a face-recognition model using transfer learning: the features learned by a pre-trained network are reused, and only the newly added layers are tuned.
Step 1: We start off by collecting our dataset. For this, I have used the Haar cascade frontal-face detector. I collected 200 images of myself and my friend for training the model and 100 images each for testing it. You can use the following code to collect the images and prepare the dataset.
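A minimal sketch of such a collection script is shown below. The save directory (./faces/train/anudeep/), the 224x224 crop size, and the Enter-key exit are illustrative assumptions, not necessarily the exact values used in the repository.

import cv2
import os

# Haar cascade face detector shipped with OpenCV
face_classifier = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def extract_face(img):
    # Detect the first face in the frame and return the cropped region, or None.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        return img[y:y + h, x:x + w]
    return None

save_dir = './faces/train/anudeep/'   # change per person and per train/test split
os.makedirs(save_dir, exist_ok=True)

cap = cv2.VideoCapture(0)
count = 0
while count < 200:                     # 200 training images per person
    ret, frame = cap.read()
    if not ret:
        break
    face = extract_face(frame)
    if face is not None:
        count += 1
        face = cv2.resize(face, (224, 224))          # MobileNet input size
        cv2.imwrite(os.path.join(save_dir, f'{count}.jpg'), face)
        cv2.imshow('Face Collector', face)
    if cv2.waitKey(1) == 13:           # press Enter to stop early
        break

cap.release()
cv2.destroyAllWindows()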
I used the same block of code multiple times to collect all the training and testing images of me and my friend. You can collect images of more people as per your requirement; for more details, check the code in the repository.
Step 2: Now, we import the pre-trained MobileNet model from keras.applications. We freeze the already-trained layers by setting layer.trainable = False.
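A short sketch of this step, assuming ImageNet weights and the standard 224x224x3 input shape:

from keras.applications import MobileNet

img_rows, img_cols = 224, 224          # MobileNet's default input resolution

# Load MobileNet without its classification head, with ImageNet weights.
base_model = MobileNet(weights='imagenet',
                       include_top=False,
                       input_shape=(img_rows, img_cols, 3))

# Freeze every pre-trained layer so only the new head gets trained.
for layer in base_model.layers:
    layer.trainable = False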
Step 3: We add new layers on top as per our requirement. Here, I have used the softmax activation function for the output layer.
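One possible head is sketched below; the dense-layer sizes are illustrative choices, and num_classes is 2 here (me and my friend), but it can be increased for more people.

from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

num_classes = 2

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
x = Dense(512, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)   # softmax over the people

model = Model(inputs=base_model.input, outputs=predictions)
model.summary()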
Step 4: Next, we load our dataset. We use data augmentation to expand it, since the original dataset is too small on its own to achieve good accuracy.
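A sketch of the loading step using Keras's legacy ImageDataGenerator API; the directory names and the exact augmentation values are my assumptions.

from keras.preprocessing.image import ImageDataGenerator

train_dir = './faces/train/'
test_dir = './faces/test/'

# Augment only the training images; the test images are just rescaled.
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=30,
                                   width_shift_range=0.3,
                                   height_shift_range=0.3,
                                   horizontal_flip=True,
                                   fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(224, 224),
                                                    batch_size=32,
                                                    class_mode='categorical')
test_generator = test_datagen.flow_from_directory(test_dir,
                                                  target_size=(224, 224),
                                                  batch_size=32,
                                                  class_mode='categorical')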
Step 5: Now, we begin training our model.
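A minimal training sketch follows; the optimizer, learning rate, epoch count, and checkpoint file name are illustrative assumptions rather than the exact settings from the repository.

from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, EarlyStopping

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(learning_rate=0.001),
              metrics=['accuracy'])

callbacks = [
    # Keep the best weights seen on the validation set.
    ModelCheckpoint('face_recognition_mobilenet.h5',
                    monitor='val_loss', save_best_only=True, verbose=1),
    # Stop early if validation loss stops improving.
    EarlyStopping(monitor='val_loss', patience=3,
                  restore_best_weights=True, verbose=1),
]

history = model.fit(train_generator,
                    epochs=5,
                    validation_data=test_generator,
                    callbacks=callbacks)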
The model has now been trained and is ready to use for prediction.
I got about 99% accuracy with this model, though that figure should be taken with caution, largely because the dataset is so small.
Step 6: Finally, I loaded the saved model for prediction and predicted my face and my friend's face.
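A sketch of the prediction step; the model file name, the test image path, and the class-label order are assumptions (the label order should match what flow_from_directory reports).

import numpy as np
from keras.models import load_model
from keras.preprocessing import image

classifier = load_model('face_recognition_mobilenet.h5')
class_labels = ['anudeep', 'friend']             # order as reported by the generator

# Load one test image and apply the same rescaling used during training.
img = image.load_img('./faces/test/anudeep/1.jpg', target_size=(224, 224))
x = image.img_to_array(img) / 255.0
x = np.expand_dims(x, axis=0)

pred = classifier.predict(x)
print('Predicted person:', class_labels[np.argmax(pred)])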
The output of the prediction:
GitHub link: https://github.com/Anuddeeph/FaceRecognisation-Mobilenet-.git
Thanks to Vimal Daga sir for guiding us.
#artificialintelligence #facedetection #mlops #project #python #devops #github #vimaldaga #git