Transform your face using AI in just one tap!
AYUSH GUPTA
AI Consultant | Lead AI & ML Expert | GenAI Innovator | Proficient in LLMs & Multimodal AI Models | Strategic Advisor in AI-Driven Transformations | Empowering Businesses with Intelligent Solutions!
AI-powered face-swapping technology has awed the internet with its fantastic results and has become one of the most interesting things to try. With recent advances in computer vision and graphics, it is now possible to generate extremely realistic synthetic faces, even in real time, and countless techniques are available on the internet. In fact, distinguishing between original and manipulated images can be a challenge for humans and computers alike, especially when swapped images circulate on social networks or are produced in real time.
Face swapping refers to an activity in which a person’s face is swapped with the face of another person, an animal, or an inanimate object. It is an area where the technology is getting more and more advanced, and more and more creepy.
“So, can you hijack someone else’s face onto your own in real time? It is one of the most interesting things to try, and it will surely blow your mind.”
In essence, this technology lets you take over the face of anyone in real time, with results so convincing and realistic that they can blow your mind. The face swap is done by tracking the facial expressions of both images, the source and the target, performing a super-fast “deformation transfer” between the two, warping all of the features to produce an accurate fit, and finally generating the synthesised face.
The specific technique at work here is image processing with a GAN (Generative Adversarial Network). Image processing is the transformation of an image by performing mathematical operations on each individual pixel of the provided picture; the transformed pixels are then used to generate another image.
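To make the idea of pixel-wise operations concrete, here is a minimal NumPy sketch (purely illustrative; the tiny array and its values are made up for the example):

```python
import numpy as np

# A toy 2x2 grayscale "image" with intensities in [0, 255].
img = np.array([[ 10,  50],
                [200, 120]], dtype=np.uint8)

# A pixel-wise mathematical operation: invert every pixel.
inverted = 255 - img

# Another common one: normalise pixels to [0, 1] floats,
# as is typically done before feeding an image to a network.
normalised = img.astype(np.float32) / 255.0
```

Every real image filter, from brightness adjustment to the convolutions inside a GAN, is built from operations of this kind applied across all pixels.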
Module Overview:-
A GAN (Generative Adversarial Network) based module to swap them all.
The module requires one source face photo and one target face photo. The input image passes through the model's convolution layers, which generate a new image based on the features of the source and reference images. Any remaining mismatch is possibly due to the limited representational ability of the feature extractor.
Defining Module:-
The image illustrates our generator, an encoder-decoder network, at test time; the module is a GAN conditioned on a latent embedding extracted from a pre-trained face recognition model. During the training phase the input face is heavily blurred; the training objective is to improve translation performance while keeping semantic consistency, e.g. by minimising a perceptual loss on the RGB output.
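A perceptual loss compares the generated and target images in the feature space of a fixed, pre-trained network rather than in raw pixels. A minimal sketch, with NumPy arrays standing in for feature maps that would really come from a pre-trained model:

```python
import numpy as np

def perceptual_loss(real_feats, fake_feats):
    # Mean absolute difference between corresponding feature maps,
    # averaged over however many layers were extracted.
    return float(np.mean([np.mean(np.abs(r - f))
                          for r, f in zip(real_feats, fake_feats)]))

# Identical features give zero loss; differing features give a positive loss.
a = [np.ones((4, 4)), np.zeros((2, 2))]
b = [np.ones((4, 4)), np.ones((2, 2))]
loss_same = perceptual_loss(a, a)   # 0.0
loss_diff = perceptual_loss(a, b)   # 0.5
```

Minimising this quantity pushes the generator to match the target's high-level structure, not just its colours.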
I also tried down-sampling the input image, as in faceswap-GAN, instead of masking it, but the model did not learn a proper identity translation and instead output a face similar to its input.
A GAN comprises a generator and a discriminator.
Generator:- The generator is composed of an encoder, a convolutional network that turns an image into a small feature representation, and a decoder, a transpose-convolutional network that is responsible for turning that feature representation back into a transformed image.
Discriminator:- The discriminator in this GAN is a convolutional neural network that sees an image and attempts to classify it as real or face-swapped.
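To illustrate only the shape flow of the encoder-decoder (this is not the actual model: the real network uses learned strided and transpose convolutions, which this sketch replaces with block-averaging and nearest-neighbour upsampling):

```python
import numpy as np

def encode(img, factor=4):
    # Stand-in for the encoder's strided convolutions: shrink the image
    # to a small feature representation by averaging each factor x factor block.
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(feat, factor=4):
    # Stand-in for the decoder's transpose convolutions: upsample the
    # feature map back to image resolution.
    return np.repeat(np.repeat(feat, factor, axis=0), factor, axis=1)

img = np.arange(64, dtype=np.float32).reshape(8, 8)
feat = encode(img)   # shape (2, 2): the bottleneck representation
out = decode(feat)   # shape (8, 8): back to image resolution
```

The real generator learns what to keep in the bottleneck; here the "features" are just block means, which is enough to see why information must be compressed and then re-expanded.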
Module Code-Sample:
```python
from models import FaceTranslationGANInferenceModel
from face_toolbox_keras.models.verifier.face_verifier import FaceVerifier
from face_toolbox_keras.models.parser import face_parser
from face_toolbox_keras.models.detector import face_detector
from face_toolbox_keras.models.detector.iris_detector import IrisDetector
```
Mask & Align Images:
```python
# Detect, mask, and align the source face; extract the target identity embedding.
src, mask, aligned_im, (x0, y0, x1, y1), landmarks = utils.get_src_inputs(fn_src, fd, fp, idet)
tar, emb_tar = utils.get_tar_inputs(fns_tar, fd, fv)

# Run the GAN inference model on the aligned inputs.
out = model.inference(src, mask, tar, emb_tar)
```
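After inference, the generated face crop still has to be placed back into the original frame using the detector's bounding box `(x0, y0, x1, y1)`. The helper below is hypothetical (it is not part of the module's API) and pastes a hard rectangle, whereas a real pipeline would blend along the parsing mask:

```python
import numpy as np

def paste_back(frame, face, box):
    # Hypothetical helper: insert the generated face crop into the frame
    # at the detector's bounding box (x0, y0, x1, y1).
    x0, y0, x1, y1 = box
    out = frame.copy()
    out[y0:y1, x0:x1] = face
    return out

# Toy example: an 8x8 black frame and a 4x4 white "face" crop.
frame = np.zeros((8, 8), dtype=np.uint8)
face = np.full((4, 4), 255, dtype=np.uint8)
result = paste_back(frame, face, (2, 2, 6, 6))
```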
Visualise Results:
Random results at low resolution:
Extensions:-
This section lists some ideas for extending this article that you may wish to explore.
- Different Datasets - Update the example to run on other datasets.
- Without Identity Mapping - Update the example to train the generator models without the identity mapping, then compare results.
- Complete Software Package - Extend this module into a complete software package.
If you explore any of these extensions, I’d love to know.
Thank you.