Social Engineering Evolution: The Rise of Deepfake Phishing
Dan D'Augelli, MS
Helping organizations make their cybersecurity a catalyst for transformation
Cybersecurity has always been an arms race between cybercriminals and defenders. Defenders improve their protections to counter new threats, and attackers respond by refining their tactics to find the next vulnerability. It's one of the most dynamic environments in computer science.
And one of the most successful and increasingly prevalent attack vectors is social engineering, in which criminals manipulate humans directly to gain access to confidential information. Social engineering is more sophisticated than ever, and its most advanced iteration is the topic of today's discussion: deepfakes.
"Deepfake" is a portmanteau of "deep learning" and "fake." The "deep learning" half refers to the AI and machine learning (ML) algorithms behind the technique. Using AI/ML, deepfake tools can generate audio, video, or photographic content that imitates real people with frightening accuracy.
Originally, the technology gained its reputation from its use in entertainment and media. Fake YouTube and TikTok videos are already a common sight. That said, its implications for cybersecurity are much more alarming. Cybercriminals have been quick to recognize and take advantage of these new capabilities, giving birth to a new epoch of phishing: "deepfake phishing."
The mechanics of deepfake phishing
Traditional phishing is rather simple. The phisher sends emails that masquerade as legitimate messages to lure victims into handing over sensitive information such as login credentials or financial details. Commonly, this involves scare tactics that bypass the victim's rational mind and emotionally manipulate them into acting without second-guessing the authenticity of the request.
Deepfake phishing is so effective because it amplifies this emotional manipulation. The faked material is accurate enough to catch more people off guard, making it that much easier to bypass their rational judgment.
Imagine getting a video call from your CEO, complete with all of his/her familiar gestures and tone of voice, asking you to access certain data on the company network. It sounds like science fiction, but it's not. And it's not hard to see how devastating such a scenario could be if replicated.
Barriers to entry
The quality of deepfake footage will only improve as the AI industry grows. At the mention of AI, most cybersecurity experts get excited about threat detection, automated incident reports, and easy discovery of polymorphic code.
However, the fact that deepfake phishing will require next to no technical effort, thanks to AI, is a big problem. Today, being a successful "black hat" takes real work. To even catch wind of a potential profit, criminals must find a weakness in a company's internal software, which is often difficult. And even if they manage to find a weak point, actually exploiting it is another matter entirely.
Now, consider using deepfake content instead. With a few photos or voice clips and a subscription to AI tools, hackers will be able to, for example, jump on a video call with a company's CFO to authorize a large payment to a fraudulent account with ease. No skills are needed, and everything seems 100% legitimate to the victim.
Bypassing traditional security
Possibly the biggest strength of using deepfakes for phishing is the ability to bypass conventional security measures. Most modern cybersecurity systems are geared against malware, ransomware, and brute-force attacks. Email filters have a chance at blocking traditional phishing attempts, but they're not equipped to handle a legitimate-seeming video call if it seems to originate from a trusted source.
What's worse, the human factor plays a huge role here. Technology can aid us in detecting deepfakes, but in the end it comes down to the person in front of the computer to make the right call.
Adapting to the threat: detection and prevention
Combating the threat of deepfakes will require many changes across the cybersecurity arena, combining technology with training and procedural changes. That's right, relying on technology alone isn't enough. Businesses will have to adjust their practices to the new threat, pairing detection tools with employee awareness training and stricter verification procedures for sensitive requests.
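One procedural change can be made concrete. The sketch below is a hypothetical Python policy check (the `Request` type, channel names, and approval rule are all illustrative assumptions, not something the article specifies): it approves a sensitive request only when it has been re-confirmed on a channel other than the one it arrived on, the classic out-of-band callback that a single deepfaked video call cannot satisfy on its own.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """A sensitive request (e.g., a payment authorization) as received."""
    requester: str      # who the request claims to come from
    channel: str        # channel it arrived on, e.g. "video_call"
    amount_usd: float

def approve(req: Request, confirmed_on: list[str]) -> bool:
    """Approve only if the request was re-confirmed on at least one
    pre-registered channel *different* from the one it arrived on,
    so the originating channel can never authorize itself."""
    return any(channel != req.channel for channel in confirmed_on)

# A deepfaked video call alone is never enough:
cfo_request = Request("CFO", "video_call", 250_000)
print(approve(cfo_request, ["video_call"]))                    # the call can't vouch for itself
print(approve(cfo_request, ["video_call", "callback_phone"]))  # confirmed via known number
```

The point of the design is that even a perfect deepfake fails the check unless the attacker also controls a second, pre-registered channel such as a callback number on file.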
Real-world implications and cases
The threat of deepfake phishing is not just theoretical. There are already notable cases that showcase its real-world implications. One example is the Brazilian crypto exchange BlueBenx, which criminals effectively ruined using an AI-generated impersonation of Binance COO Patrick Hillmann. The exchange was scammed into sending $200,000 and 25 million BNX tokens, all on the strength of a convincing Zoom call.
If scammers can fool a crypto exchange, despite all the safety features involved, they can fool anyone. Incidents like these should be a bright red warning sign and serve as a wake-up call for any business not yet alert to this threat.
Staying ahead: proactive measures for tomorrow
According to research, the global AI market is expected to balloon to more than $300 billion by 2025, and a large part of that will be cybersecurity companies providing AI-driven deepfake-busting software.
White hats are already working on defensive algorithms that can detect artificial videos, pinpoint anomalies, and even trace the source and maker of deepfake content. But that's tomorrow; for now, businesses have to fend for themselves and stay proactive as deepfake phishing becomes more sophisticated.
Conclusion
Cybercriminals are cunning and adaptable, which is why they are unfortunately often successful. Deepfake phishing is simply their newest way of deploying their scams.
That said, organizations and individuals are not helpless against them. If they make use of technological advances and strengthen their protocols to close the gaps that human psychology leaves open, they can stop many of these attacks before they succeed.
Finally, by understanding the threat, investing in measures that keep them ahead of the curve, and building a work culture of awareness and healthy skepticism, they can genuinely blunt the success of these attacks.
Source: SecureWorld | Nahla Davies
###
In 2023, the average cost of a data breach ballooned to a record high of $4.45 million globally. But there are security strategies organizations can adopt to decrease this cost, according to the latest research published by the Ponemon Institute.
Don't become a victim. IBM Security can help: provide a Zero Trust security strategy to support your business initiatives; protect your users, data, and applications; proactively manage your defenses against sophisticated threats; and modernize your security infrastructure with an open hybrid cloud platform, saving you time and money.