Deepfake Interviews: An Ongoing Issue for Remote Positions
Over 50% of people hired since early 2020 have reportedly never met their colleagues in person

There is growing concern over criminals using deepfake technology to impersonate job candidates in remote interviews, aiming to gain access to sensitive company data and systems. In 2022, the FBI warned of a rise in complaints involving stolen personally identifiable information (PII) combined with deepfake video to apply for remote positions, particularly IT, programming, database management, and software development roles that grant access to customer data, financial details, IT infrastructure, and proprietary information. A telltale sign in these fraudulent interviews is that the on-camera lip movements and actions do not always coordinate with the audio: sounds such as coughing or sneezing may not match what appears on screen. However, the technology is advancing quickly and becoming harder to detect.
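
As a rough illustration of the lip-sync cue the FBI describes, here is a toy sketch that correlates mouth openness in the video with the loudness of the audio track. It assumes the OpenCV, MediaPipe, librosa, and NumPy Python packages, and the file names are hypothetical; real detectors are far more sophisticated than this.

```python
# Toy lip-sync consistency check: correlate a mouth-openness signal from the
# video with the audio loudness envelope. Low correlation can hint at dubbed
# or generated video, but this is only a weak heuristic, not a detector.
import cv2                # pip install opencv-python
import librosa            # pip install librosa
import numpy as np
import mediapipe as mp    # pip install mediapipe

VIDEO = "interview.mp4"   # hypothetical recording of the candidate
AUDIO = "interview.wav"   # its audio track, extracted beforehand (e.g. via ffmpeg)

# 1. Mouth openness per frame: vertical gap between the inner-lip landmarks
#    (indices 13 and 14 in MediaPipe's face mesh).
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)
cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
openness = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_face_landmarks:
        lm = result.multi_face_landmarks[0].landmark
        openness.append(abs(lm[13].y - lm[14].y))
    else:
        openness.append(0.0)   # no face found this frame
cap.release()

# 2. Audio energy envelope, hopped so there is one value per video frame.
y, sr = librosa.load(AUDIO, sr=None)
rms = librosa.feature.rms(y=y, hop_length=int(sr / fps))[0]
n = min(len(openness), len(rms))

# 3. Natural speech shows mouth motion loosely tracking loudness.
corr = np.corrcoef(openness[:n], rms[:n])[0, 1]
print(f"lip/audio correlation: {corr:.2f}")
if corr < 0.2:   # arbitrary cutoff; would need tuning on real data
    print("Weak lip/audio coupling - review this recording manually.")
```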

How Fraudsters Conduct Deepfake Interviews

Scammers, armed with stolen personal information and deepfake videos, apply for remote IT and programming jobs to gain access to sensitive company data, financial information, and IT systems. While creating deepfakes once required real technical know-how, a growing array of apps and web services, including FaceSwap, DeepFaceLive, Zao, and Wombo, is putting the technology within reach of the general public. These tools let users generate deepfakes simply by uploading source videos or images. Fraudsters then configure them to project the deepfake video stream through a virtual camera, which is selected as the video source in conferencing tools such as Zoom.

Here's an example from DeepFaceLive's GitHub page: a person's face is swapped with Keanu Reeves's face in real time.
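
To make the mechanics concrete, below is a minimal sketch of just the virtual-camera step, assuming the pyvirtualcam and OpenCV Python packages and a hypothetical prepared clip; the face-swap model itself is deliberately omitted. Knowing this plumbing is useful defensively too: the device name exposed to the conferencing app ("OBS Virtual Camera" and the like) is itself a signal that platforms could surface to interviewers.

```python
# Minimal sketch of the virtual-camera step: frames from an arbitrary source
# are pushed to a virtual webcam device, which Zoom/Teams/Webex then list as
# an ordinary camera. The face-swap model itself is deliberately omitted.
# Requires OBS (or another virtual camera backend) to be installed.
import cv2            # pip install opencv-python
import pyvirtualcam   # pip install pyvirtualcam

cap = cv2.VideoCapture("source_clip.mp4")  # hypothetical prepared video
with pyvirtualcam.Camera(width=1280, height=720, fps=30) as cam:
    print(f"Streaming to virtual device: {cam.device}")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (cam.width, cam.height))
        # pyvirtualcam expects RGB frames; OpenCV decodes to BGR.
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()
cap.release()
```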

Protection Against Fake Candidates

To combat fake candidates, companies can adopt several strategies: asking candidates to turn sideways (many deepfake models handle profile views poorly), scrutinizing digital footprints and social media presence, verifying identity documents, and posing probing questions to catch impostors off guard. However, no single approach is foolproof.
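
The profile check in particular lends itself to partial automation. Here is a heuristic sketch using OpenCV's bundled Haar cascades on a hypothetical recorded segment where the candidate was asked to turn sideways; the threshold is a placeholder, not a validated value.

```python
# Heuristic sketch of the "turn sideways" check: many face-swap pipelines
# track frontal faces well but degrade or drop out on profile views, so a
# genuine head turn should register profile-face frames.
import cv2  # pip install opencv-python

frontal = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_profileface.xml")

# Hypothetical clip of the segment where the candidate was asked to turn.
cap = cv2.VideoCapture("sideways_segment.mp4")
frontal_hits = profile_hits = total = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    total += 1
    if len(frontal.detectMultiScale(gray, 1.1, 5)) > 0:
        frontal_hits += 1
    # The profile cascade is one-sided, so also test the mirrored frame.
    if (len(profile.detectMultiScale(gray, 1.1, 5)) > 0
            or len(profile.detectMultiScale(cv2.flip(gray, 1), 1.1, 5)) > 0):
        profile_hits += 1
cap.release()

# Placeholder threshold: if the candidate "turned" but no profile view was
# ever detected, something may have broken mid-turn (or the turn was evaded).
if total and profile_hits / total < 0.1:
    print("No clear profile view detected - flag for manual review.")
```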

A handful of remote-interview platforms have added features to deter deepfake candidates. HirePro and FloCareer's Live Interview, for example, integrate AI-powered impersonation detection that analyzes an interviewee's behavior, gestures, body language, and facial expressions for inconsistencies that could suggest deceit. Some platforms also employ voice analysis to spot changes in pitch, speed, or hesitations. Yet these tools hold only a minor share of the market; most interviews are still conducted on major platforms like Zoom, Teams, and Webex.
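
The voice cues mentioned above can be approximated with standard audio tooling. The following sketch, assuming the librosa package and a hypothetical clip of one interview answer, computes two toy features, pitch variability and pause ratio; it illustrates the idea, not any vendor's actual method.

```python
# Toy versions of the voice cues above: pitch variability and pause ratio.
# Real platforms use far richer models; this only illustrates the idea.
import numpy as np
import librosa  # pip install librosa

# Hypothetical clip of a single interview answer.
y, sr = librosa.load("answer.wav")

# Fundamental-frequency track; unvoiced frames come back as NaN.
f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                        fmax=librosa.note_to_hz("C6"), sr=sr)
pitch = f0[~np.isnan(f0)]
pitch_cv = np.std(pitch) / np.mean(pitch)   # coefficient of variation

# Hesitation proxy: fraction of low-energy frames (silence and pauses).
rms = librosa.feature.rms(y=y)[0]
pause_ratio = float(np.mean(rms < 0.1 * rms.max()))

print(f"pitch variability: {pitch_cv:.2f}, pause ratio: {pause_ratio:.2f}")
# Unusually flat pitch or long pauses can warrant a closer look, but both
# thresholds would have to be calibrated against genuine interviews.
```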

In conclusion, while manual detection techniques and specialized interview software offer some protection, the rapid improvement of deepfake technology means the challenge is far from resolved. The onus now falls on the major video conferencing platforms to take a more active role in this fight, or they risk losing ground to startups in the remote-interview market. We may well see significant changes in this area in the near future.


I'm not sure you even need deepfakes; sometimes plain impostors work, at least back in the day before Zoom/video interviews. We once interviewed a decent candidate over the phone, through several levels of interviews, and a month later he was working with us, but he seemed clueless. I had my suspicions, and one day another co-worker said to me, "I don't think this is the same guy we interviewed." The sad thing was the resolution: it was too hard to fire him at that point, so he was moved to another group, and as far as I know he is still there.
