Deepfake scams and their implications for Cyber-Security

The malicious use of Artificial Intelligence (AI) to introduce new types of attacks is a topic on which I have written extensively as AI adoption continues to rise across the globe. Today's cyber-criminals are an inventive bunch, always on the lookout for new vectors to try against a company's defense systems, and they seem to have found one with deepfakes.

A deepfake is defined as:

synthetic media in which a person in an existing image or video is replaced with someone else's likeness. While the act of creating fake content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content that can more easily deceive.

People have been playing around with deepfakes for many years, superimposing the faces of politicians and actors onto their own videos with uncanny results (the Tom Cruise one being my personal favorite).

However, deepfakes are not just an amusing pastime for creating viral YouTube videos; they can have serious trust implications if misused. People generally believe what they see, especially if it comes from a person in a position of authority, so the potential of deepfake technology to spread misinformation is quite severe.

Let us look at one recent and alarming trend:

Malicious use of Deepfakes for remote work positions

Remote/hybrid working has become a reality for most companies nowadays, and interviews for such positions are also happening remotely. The FBI Internet Crime Complaint Center (IC3) issued a warning on June 28, 2022 about an increase in criminals using deepfakes to apply for sensitive remote positions that would grant them access to PII. The jobs range from programming and database administration to other sensitive positions, with the attackers using stolen PII to make their applications look more convincing, e.g. using stolen identities to pass pre-employment background checks with the companies being none the wiser.

Companies usually detected something fishy (phishy?) when they noticed that the person's lips were not fully in sync with their speech during interviews, or picked up other audio clues. However, the speed at which AI is developing makes this only a minor hurdle for cyber-criminals to overcome. Criminals have the tools and knowledge to keep refining this attack until it becomes virtually undetectable, and the lure of gaining access to sensitive PII databases, corporate reports, etc. can be quite the motivational factor!

The FBI last year released a private industry notification (accessible here) warning about deepfakes being used for a new type of attack referred to as Business Identity Compromise (BIC). As per the notification:

BIC will represent an evolution in Business Email Compromise (BEC) tradecraft by leveraging advanced techniques and new tools. Whereas BEC primarily includes the compromise of corporate email accounts to conduct fraudulent financial activities, BIC will involve the use of content generation and manipulation tools to develop synthetic corporate personas or to create a sophisticated emulation of an existing employee. This emerging attack vector will likely have very significant financial and reputational impacts to victim businesses and organizations.

Fighting fire with fire (or AI with AI)

As these types of attacks evolve, the best way to counter them is not to view AI technologies with mistrust but for companies to adopt AI and machine learning for authentication purposes. A simple Zoom call for a sensitive position might no longer cut it, and companies will need to invest in machine learning solutions that leverage automation and "liveness detection" to catch attempts by attackers to pose as someone else. What the human eye can miss as deepfakes become more and more lifelike can be caught by AI systems powered by machine learning.
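To make "liveness detection" concrete, below is a minimal sketch of one crude signal such systems can use: real humans blink every few seconds, whereas early deepfake video feeds often did not. The sketch relies only on OpenCV's bundled Haar cascades; the 10-second window, the camera index, and the pass/flag threshold are assumed placeholder values for illustration, not a production detector.

```python
# Minimal blink-based liveness probe (a sketch, not a production detector).
# Assumption: a webcam at index 0 and OpenCV installed (pip install opencv-python).
import time
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def count_blinks(duration_s: float = 10.0, camera_index: int = 0) -> int:
    """Count rough blink events: frames where a face is visible but the eyes
    momentarily are not (Haar eye detection drops out during a blink)."""
    cap = cv2.VideoCapture(camera_index)
    blinks, eyes_were_open = 0, False
    deadline = time.time() + duration_s
    while time.time() < deadline:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces[:1]:               # track the first face only
            eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 10)
            if len(eyes) >= 1:
                eyes_were_open = True
            elif eyes_were_open:                     # eyes just disappeared: blink
                blinks += 1
                eyes_were_open = False
    cap.release()
    return blinks

if __name__ == "__main__":
    n = count_blinks()
    # Humans blink roughly every 2-10 seconds; zero blinks in 10s is suspicious.
    print(f"{n} blink(s) detected:", "pass" if n >= 1 else "flag for manual review")
```

Commercial liveness solutions go much further (challenge-response prompts, depth cues, texture analysis), but the principle is the same: test for behavior that is cheap for a live human and expensive for a synthetic feed to fake.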

In addition to investing in AI for cyber-security, companies should also follow the best practices below:

· Assess your current procedure for how interviews for sensitive positions are carried out and what other authentication protocols can be introduced. Carry out a thorough risk assessment of the process.

· Add deepfake awareness to your cyber-security program to ensure that awareness of these risks exists at all levels (especially within HR). Train your staff to detect deepfakes using websites such as these, which highlight telltale visual inconsistencies in deepfake videos such as glare, lip movement, facial hair, etc. (a crude automated complement to this training is sketched after this list).

· Update your incident response processes to cover deepfake scenarios, in which attackers impersonate potential applicants or even a C-level executive in a deepfake video, and make sure your legal and media teams are included in those processes.
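To complement the awareness training mentioned above, here is a crude, hypothetical sketch of the kind of automated screening that can back up the human eye. Researchers have observed that GAN-generated imagery often carries an unusual energy distribution in the 2D frequency spectrum; the sketch below computes the share of spectral energy outside the low-frequency center of a submitted headshot. The file name is hypothetical, and any threshold would have to be calibrated against headshots you know to be genuine.

```python
# Hypothetical frequency-domain screening heuristic for submitted headshots.
# It measures high-frequency energy share, to be compared against a baseline
# you would first establish on known-genuine photos. A heuristic, not proof.
import numpy as np
import cv2

def high_freq_ratio(image_path: str) -> float:
    """Share of spectral energy outside the central (low-frequency) region."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(np.float64))))
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4                       # central half in each axis
    low = spectrum[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw].sum()
    return 1.0 - low / spectrum.sum()

if __name__ == "__main__":
    ratio = high_freq_ratio("applicant_headshot.jpg")   # hypothetical file
    print(f"high-frequency energy share: {ratio:.4f}")
    # Compare against the distribution measured on verified-genuine headshots;
    # an outlier is a cue for closer human review, not an automatic rejection.
```

A score far outside the range you measure on known-genuine images is simply a flag for closer human review; treat it as one signal among many, never as proof on its own.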

We are entering uncharted territory here with AI-based attacks, and companies and their cyber-security teams will truly need to step up their game to keep pace with attackers going forward. It seems the days of simple email-based social engineering are behind us, and cyber-security teams will need to adapt or get left behind by AI-powered attacks.

Good luck on your AI journey!

NOTE: if the topic of AI and cyber-security interests you, then do check out my course on AI Governance and Cybersecurity OR my recently published book, available on Amazon.
