#21 Face Detection with OpenCV
Today we will use OpenCV to detect faces and draw a bounding box around them.
What is OpenCV?
OpenCV is an open-source computer vision library. It was created by Intel in 1999 and later released to the public. Since human faces are so diverse, face detection models are trained on large and diverse datasets in order to detect them accurately.
Haar Cascade Classifiers
This method was first introduced in the paper Rapid Object Detection using a Boosted Cascade of Simple Features, written by Paul Viola and Michael Jones. The paper introduces a method for quickly and accurately detecting objects in images, particularly faces, through a three-step process:
1. Haar-like features are extracted from the image, and an integral image makes computing them extremely fast.
2. AdaBoost selects a small set of the most informative features and combines them into a strong classifier.
3. The classifiers are arranged in a cascade, so clearly non-face regions are rejected early with very little computation.
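To get an intuition for the first step, here is a small sketch of my own (not code from the paper) showing how an integral image lets rectangle sums, the building blocks of Haar-like features, be computed from just four lookups:
# Minimal illustration of the integral image (my own example, not from the paper)
import numpy as np
import cv2
patch = np.arange(16, dtype=np.uint8).reshape(4, 4)   # a tiny 4x4 "image"
integral = cv2.integral(patch)                         # (5, 5) cumulative-sum table
# Sum of the rectangle covering rows 1-2, cols 1-2, from just 4 lookups:
r1, c1, r2, c2 = 1, 1, 3, 3
rect_sum = integral[r2, c2] - integral[r1, c2] - integral[r2, c1] + integral[r1, c1]
print(rect_sum, patch[1:3, 1:3].sum())                 # both print the same value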
Code:
# !pip install opencv-python
import cv2
import matplotlib.pyplot as plt
# Read the image.
# Returns the image in the form of a Numpy array
imagePath = '/content/test_image_family.jpg'
img = cv2.imread(imagePath)
# Let's look at the dimensions of the image
img.shape
The image is a 3-dimensional array, and the three values of its shape represent the height, width and number of channels respectively. Since this is a colour image, there are 3 colour channels. Note that OpenCV stores them in BGR (Blue, Green, Red) order, the opposite of the usual RGB layout.
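As a quick sanity check of the channel order, we can inspect a single pixel (the position (0, 0) below is just an example I picked):
# Inspecting one pixel: OpenCV stores its values as (Blue, Green, Red)
b, g, r = img[0, 0]
print(b, g, r)
# cv2.split also separates the three channels, again in BGR order
blue, green, red = cv2.split(img)
print(blue.shape, green.shape, red.shape)   # each channel is a 2-D array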
# Convert the image to Grayscale
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray_image.shape
Since the grayscale image no longer has 3 channels, its shape contains only two values: height and width.
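Printing both shapes side by side makes the difference explicit (the numbers in the comments are only illustrative):
# Comparing the two shapes (the exact numbers depend on your image)
print("Colour:   ", img.shape)         # e.g. (720, 1280, 3) -> height, width, channels
print("Grayscale:", gray_image.shape)  # e.g. (720, 1280)    -> height, width only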
# Load the pre-trained Haar Cascade Classifier
face_classifier = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
# Performing the Face Detection
face = face_classifier.detectMultiScale(
    gray_image, scaleFactor=1.1, minNeighbors=20, minSize=(40, 40)
)
The detectMultiScale() method identifies faces of different sizes in the input image and returns one bounding box (x, y, w, h) per detection.
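To see what it actually returns, we can print the detections; this snippet is my own addition and simply lists each (x, y, w, h) box:
# Each row is one detection: top-left corner (x, y) plus width and height
print("Faces found:", len(face))
for (x, y, w, h) in face:
    print("Box:", x, y, w, h)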
# Drawing the bounding boxes
for (x, y, w, h) in face:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 4)
# Displaying the Image
# we first need to convert the image from the BGR format to RGB:
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20,10))
plt.imshow(img_rgb)
plt.axis('off')
plt.show()
Input Image
For the input image, I intentionally chose one with blurry faces; I wanted to see how much the model over-detects when it tries to pick up blurry faces.
Results:
Changing the minNeighbors parameter
# This is the base code for performing the face detection
# I will vary minNeighbors while keeping everything else constant
face = face_classifier.detectMultiScale(
    gray_image, scaleFactor=1.1, minNeighbors=0, minSize=(40, 40)
)
As the images show, there is a trade-off: a higher minNeighbors suppresses spurious detections (noise), while a lower value is needed to catch the blurred faces.
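To reproduce this comparison programmatically, a small sweep over minNeighbors (my own sketch; the specific values are arbitrary) shows how the number of detections changes:
# Sweep over a few minNeighbors values and count the detections for each
for n in (0, 5, 10, 20, 40):
    boxes = face_classifier.detectMultiScale(
        gray_image, scaleFactor=1.1, minNeighbors=n, minSize=(40, 40)
    )
    print(f"minNeighbors={n}: {len(boxes)} detections")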
Full Code: https://github.com/RiyaChhikara/100daysofComputerVision/blob/main/Day21_face_recognition.ipynb