The growing use of AI in BEC scams
Thomas Murray
Global Risk Intelligence | Safeguarding clients and their communities since 1994
Advances in artificial intelligence (AI) create as many opportunities as they do dangers, though it pays to keep a sense of perspective. While it’s important to be aware of how threat actors could use AI to harm your organisation, it’s also important to think strategically about how to manage these risks.
The evolution of business email compromise (BEC) scams is a case in point.
Phishing scams and their variants have been around for a long time. By now, most people with an email account will know it’s very unlikely that an exiled prince needs the help of a total stranger to launder millions of dollars. What they may not be ready for, however, are the increasingly sophisticated descendants of this type of scam.
WormGPT: Irresistible bait or scammers getting scammed?
Described as the ‘evil’ cousins of ChatGPT, WormGPT and FraudGPT are versions of generative AI unfettered by the ethical constraints built into their mainstream counterparts. Just how dangerous this development is seems to depend on your point of view.
Some cybersecurity experts have found the BEC emails generated by WormGPT to be 'unsettling', 'remarkably persuasive' and 'cunning'. Others have been far less impressed, describing the results of their experiments with WormGPT as 'not especially convincing', 'rudimentary' and 'generic in a way that should ring alarm bells'.
One aspiring BEC scammer complained on a Dark Web forum that WormGPT’s code is “broken most of the time” and cannot generate “simple stuff.” In fact, suspicions have already emerged that FraudGPT and WormGPT are, in and of themselves, scams.
Asia becomes new focal point for whaling attacks
Unlike cruder forms of phishing, a BEC scam will often purport to be from someone known to the recipient. This more targeted approach is called 'spear phishing'. The scale of its impact is hard to gauge – many organisations may not even know when they're targeted, as one survey found that an astonishing 98% of employees admit to deleting suspect emails without reporting them to their IT security teams.
Multinational operators and those reliant on international supply chains need to be aware that Asia is an emerging hotspot for BEC attacks that target the accounts of high-level executives (also known as ‘whaling attacks’). In Singapore, successful BEC scams of this type defrauded 93 victims of US$41.3m in the first three months of 2022 alone.
In Japan, recent targets have included three companies, all of which suffered multi-billion yen losses as a result.
In a Singaporean case, IBI Group Hellas Single Member Société Anonyme v Saber Holdings Pte Ltd [2023] SGDC 95, the threat actor posed as Saber's CEO on WhatsApp and duped an employee into making a large cash transfer, ostensibly for an acquisition. The threat actor even compromised the CEO's email address to send further instructions.
The employee transferred €700,000 to the bank account of a Hong Kong company. Saber was eventually able to recoup the money, though presumably less the legal fees required to recover it.
The personal touch: moving beyond the written word
WormGPT purports to have a high success rate in bypassing email filters and anti-spam engines, thanks to targeting and personalisation crafted for the would-be victim organisation. BEC attacks ultimately rely on convincing the recipient to interact with the message, and this level of sophistication and personalisation helps attackers do exactly that.
However, a new and perhaps more worrying trend has emerged: deepfake voice cloning. While the whaling scam is not necessarily new, AI allows attackers to take it to the next level by cloning the voices of senior executives. Microsoft's VALL-E AI engine reportedly needs just three seconds of audio to clone a voice.
Consider for a second how many conferences, webinars and other media organisations put out to the world featuring their CEO, COO and other high-profile figures. This is a clear opportunity for attackers to elevate their scams, as evidenced by the case of the Hong Kong executive who was caught up in a deepfake voice scam.
Sounds familiar
Following an increase in the number of BEC complaints involving the use of virtual meeting platforms, the FBI issued an alert identifying multiple ways in which threat actors abuse virtual meetings, webinars and video conferencing tools to advance their schemes.
Regular training will help nip many BEC attempts in the bud. It's also essential to ensure that robust procedures are followed when authorising large financial transactions.
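As a rough illustration of what such a procedure might look like in practice, the sketch below models a dual-control rule: transfers above a threshold require two approvers who are independent of the requester, plus out-of-band verification via a known phone number rather than the email thread itself. All names, thresholds and fields here are hypothetical, not a description of any real payment system.

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 50_000  # hypothetical limit above which dual control applies

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    requested_by: str
    approvals: set = field(default_factory=set)
    callback_verified: bool = False  # confirmed via a known phone number, not the email thread

def can_execute(req: PaymentRequest) -> bool:
    """Dual control: large transfers need two approvers distinct from the
    requester, plus out-of-band verification of the instruction."""
    if req.amount < APPROVAL_THRESHOLD:
        return len(req.approvals) >= 1
    independent = req.approvals - {req.requested_by}  # requester cannot approve their own payment
    return len(independent) >= 2 and req.callback_verified

# Example: a €700,000 instruction arriving "from the CEO" over email/WhatsApp
req = PaymentRequest(amount=700_000, beneficiary="HK account", requested_by="ceo@example.com")
req.approvals.add("cfo@example.com")
print(can_execute(req))  # False: only one independent approver and no callback yet
```

The point of the design is that a convincing email (or cloned voice) compromises only one channel, while the rule demands agreement across several.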
But the key takeaway is that your organisation needs to prepare its people to combat this new generation of multichannel and AI-driven BEC tactics.
The solution: Orbit Security
Orbit Security provides security ratings for enhanced attack surface management and third-party risk, monitoring for breaches and vulnerabilities that could be exploited by threat actors.