The flipside of ChatGPT – A much more believable way to compromise us via email
October was Cybersecurity Awareness Month across the world, and at OMERS, we used the month to bring our teams up to date on the latest trends in the industry, including the threats we all face.
Everywhere we've looked in 2023, we've heard about the daily benefits now being enjoyed by the average person using ChatGPT and other large language models (LLMs). Whether it's to decipher the meaning of a long paper, get ideas flowing on a new project or even to help us write clearer and more concise emails, the rapid changes these LLMs have brought about have been discussed extensively. The generative artificial intelligence (AI) behind them, capable of creating text, images and other media, has the potential to do so much for us, both in the short and long term.
Email remains one of the primary attack vectors for most organizations, and while many people look to generative AI, and LLMs in particular, to help them write better emails, so do threat actors. Why? Because LLMs make it possible to improve their malicious business email compromise (BEC) campaigns.
Looking for signs
Until now, we've all been taught telltale signs to look out for in suspicious emails: spelling errors, weird grammatical slip-ups, odd calls to action ("click here to receive your refund from Amazon"). The more our cybersecurity teams test us on these, the better we seem to get. Yet what generative AI offers malicious actors is the ability to drastically improve these campaigns. How? By greatly reducing, if not eliminating, the written mistakes our brains are now trained to jump on. LLMs can already write smoother prose than your typical cyber thief, and beyond that, they mean threat actors no longer need fluency in a given language. Generative AI handles that for them.
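To make the shift concrete, here is a toy sketch of the kind of keyword-and-misspelling check those telltale signs imply. The phrase lists are purely hypothetical, and real email filters use far richer signals; the point is that an LLM-polished lure trips none of these rules.

```python
import re

# Hypothetical examples of the classic tells: odd calls to action
# and common misspellings.
URGENT_PHRASES = [
    "click here to receive your refund",
    "verify your account immediately",
    "your account will be suspended",
]
COMMON_MISSPELLINGS = ["recieve", "acount", "immediatly", "verifcation"]

def legacy_phishing_score(body: str) -> int:
    """Count the classic telltale signs present in an email body."""
    text = body.lower()
    score = sum(phrase in text for phrase in URGENT_PHRASES)
    score += sum(bool(re.search(rf"\b{word}\b", text)) for word in COMMON_MISSPELLINGS)
    return score

# A clumsy, old-style lure gets flagged:
print(legacy_phishing_score(
    "Please click here to receive your refund from Amazon once we recieve payment"))  # 2
# A fluent, LLM-written lure scores zero despite being just as malicious:
print(legacy_phishing_score(
    "Hi Dana, following up on the Q3 invoice we discussed on Tuesday."))  # 0
```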
ChatGPT’s misbehaving brother
It's an overused trope on pretty much every soap opera ever created: the evil twin wreaking havoc at the expense of the "good one." And while ChatGPT undoubtedly can be, and often is, used by these same bad actors (pun intended), it is tools developed and sold on the dark web, such as WormGPT, that currently pose the most credible threat. On dark web forums, WormGPT is even promoted in soap opera language, billed as the "biggest enemy of the well-known ChatGPT that lets you do all sorts of illegal stuff." From a BEC point of view, it automates the creation of incredibly realistic, yet fake, emails personalized to their recipients, greatly increasing the odds that someone will be tricked into disclosing private information or installing malware. Organizations across Europe, including in Spain, France, Germany, the Netherlands and Sweden, have increasingly experienced BEC attacks in their own languages.
So what do we do?
In the cybersecurity field, we believe that AI-aided threats will continue to evolve, and the way AI is being used to sharpen business email compromise techniques is just the tip of the iceberg. Organizations will need to keep improving their own AI-aided defense capabilities, and many of our partners already offer machine learning models tailored for and specialized in processing language and text (a simple sketch of that idea follows below).

One of our mantras at OMERS this Cybersecurity Awareness Month was that "It's better to be Cyber Safe than Cyber Sorry," and with the proliferation of LLMs and the advancement of threat actor tools, techniques and practices, this has never been more true.
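To give a feel for those language-focused defenses, here is a minimal sketch of a text classifier that scores email wording for BEC hallmarks. The training examples, labels and model choice are invented for illustration; production tools learn from enormous real-world corpora and combine many more signals than message text alone.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled examples: 1 = BEC/phishing, 0 = benign.
emails = [
    "Urgent wire transfer needed before end of day, reply with the account details",
    "CEO here, buy gift cards and send me the codes, keep this confidential",
    "Attached is the agenda for Thursday's project sync",
    "Thanks for the update, the revised budget looks good to me",
]
labels = [1, 1, 0, 0]

# TF-IDF features over word unigrams and bigrams feeding a logistic
# regression classifier: a bare-bones stand-in for the specialized
# language models vendors offer.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = "Quick favour: process this supplier payment today and confirm once sent"
# Estimated probability that the message is malicious.
print(model.predict_proba([suspect])[0][1])
```

Unlike the keyword rules earlier, a learned model keys on patterns of wording (urgency, payment instructions, secrecy) rather than surface errors, which is why this class of defense holds up better against fluent, LLM-written lures.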