Going Phishing with ChatGPT

Imagine you’re a system administrator at a tech company. You’re working late on a critical project. Suddenly, you get an urgent email from the CIO insisting you shut down the entire system immediately -- there’s a hacker on the network, and the business is at risk of being compromised. Your boss is copied on the email, but almost certainly asleep.

Or perhaps you get an email from your boss. She notes she’s testing a new tool and asks you to click on a link to see what you think.

What would you do?

Both of these requests could be legitimate. But they could also be attempts by bad actors to compromise your company. These types of attempts are not new, but generative AI tools like ChatGPT will make them more frequent, more convincing, and more successful.

ChatGPT will not explicitly write a phishing email. In fact, as others have noted, it will scold you for making such a request.

But if you give ChatGPT a more generic prompt, it will happily create an email for you that could be used in any phishing campaign.

We had some fun with ChatGPT to see exactly what would happen when we asked it to write emails in the voices of various well-known executives. None of these prompts mentioned phishing; rather, they asked ChatGPT to do things like include a link to a survey or ask recipients to verify personal details. And while the analysis we did isn’t particularly scientific, a few patterns quickly emerge (see the sketch after this list). Namely:

  • The emails are generically professional, with common opening and closing lines like “I hope this email finds you well”
  • The emails aim to make the recipient feel important and central to the request, e.g. “I am asking for your help”
  • The emails emphasize the urgency of the requested task
  • The emails thank the recipient and express appreciation for their help
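
For illustration only, here is a minimal sketch (in Python) of how those verbal cues might be turned into a crude screening heuristic. The phrase list and the count_phishing_cues function are invented for this example, and -- as discussed below -- the same phrases appear in plenty of legitimate corporate email, so wording alone is a weak signal at best.

import re

# Invented, illustrative list of verbal cues drawn from the patterns above.
SUSPICIOUS_PHRASES = [
    r"i hope this email finds you well",
    r"i am asking for your help",
    r"as soon as possible",
    r"urgent(ly)?",
    r"immediately",
    r"thank you (so much )?for your (help|assistance)",
]

def count_phishing_cues(body: str) -> int:
    """Count how many of the listed verbal cues appear in an email body."""
    text = body.lower()
    return sum(1 for pattern in SUSPICIOUS_PHRASES if re.search(pattern, text))

sample = (
    "I hope this email finds you well. I am asking for your help: "
    "please verify your details via the link below as soon as possible. "
    "Thank you so much for your assistance."
)
print(count_phishing_cues(sample))  # 4 cues in this sample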

These emails show how easily more sophisticated and targeted phishing lures can now be created -- they were simple and quick to produce, and there are endless ways to personalize and customize the prompts to target specific companies, departments, or individuals.

At the same time, many of the tells we usually rely on to spot phishing emails -- unfamiliar greetings, spelling and grammar errors, and so on -- become far less common. And the verbal patterns ChatGPT does rely on are themselves common (if overused and stereotypical) in legitimate corporate email.

This little experiment makes it obvious that we cannot rely on training alone to prevent employees from falling for these ruses -- a phishing lure created by ChatGPT will be difficult for even seasoned security professionals to spot. It will take both training and technology tools to thwart attackers and keep enterprises secure.

Training on how to spot phishing attempts is effective against the less sophisticated lures that will remain in attackers’ arsenals. But, as we’ve seen, generative AI tools can write lures that are highly personalized and much more difficult to spot.

So long as the tool can train on writing samples from the person it’s mimicking, it can produce phishing lures that sound convincingly like that person. Mostly, these samples come from the public domain; many people, and certainly many senior executives, now have significant amounts of writing publicly available. Moreover, it increasingly seems that tools such as ChatGPT have been able to train on internal company communications. Imagine, then, a phishing lure that references internal project code names and uses catchphrases that are part of the company culture.

That doesn’t mean employee training should go out the window, but we need to be realistic about its impact. We can help people spot suspicious links, reinforce company protocol for authenticating critical requests, and outline reporting processes. But people will make mistakes. And with highly convincing lures coming from generative AI tools, those mistakes become more and more likely.

Companies must pair training with tools that can help identify phishing lures and block access to phishing sites, as well as protections -- including microsegmentation and zero trust network access -- that can help block the spread of malware if and when it does make it past that first line of defense.

Beyond that, perhaps the most important lesson for enterprises is the importance of authentication and authorization in business processes. Ask yourself: if, as in our first example, someone claiming to be the CIO sent an urgent message to the right person asking them to shut down a critical service, would they do it? No critical business process should be triggered by an email, or by anything else that cannot be reliably authenticated.
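
To make “reliably authenticated” concrete, here is a minimal sketch assuming a hypothetical ticket-plus-signature workflow; the shared secret, ticket IDs, and function names below are invented for illustration, and a real deployment would lean on existing SSO or change-management tooling rather than a hand-rolled secret. The point is simply that the action proceeds only when the request carries something verifiable, not merely a convincing email.

import hmac
import hashlib

# Illustrative shared secret; in practice this would live in a secrets manager.
APPROVAL_SECRET = b"example-only-secret"

def sign_request(ticket_id: str, action: str) -> str:
    """Signature the approver's tooling would attach to a critical request."""
    message = f"{ticket_id}:{action}".encode()
    return hmac.new(APPROVAL_SECRET, message, hashlib.sha256).hexdigest()

def verify_shutdown_request(ticket_id: str, action: str, signature: str) -> bool:
    """Refuse to act on any request whose signature cannot be verified."""
    expected = sign_request(ticket_id, action)
    return hmac.compare_digest(expected, signature)

# A properly signed request passes; an emailed demand with no verifiable
# signature fails, no matter how urgent or convincing the wording is.
good_sig = sign_request("CHG-1234", "shutdown-core-service")
print(verify_shutdown_request("CHG-1234", "shutdown-core-service", good_sig))  # True
print(verify_shutdown_request("CHG-1234", "shutdown-core-service", "forged"))  # False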

Companies must also educate employees about how to use these tools safely and ethically. Data entered into ChatGPT -- even proprietary company data -- may be retained and used to train the model. So by being smart about what information is entered into ChatGPT, you can help limit how legitimate a phishing lure can sound.

In the end, generative AI is neither good nor bad. But we need to be smart about how we use these new technologies, and we need to be prepared for bad actors to leverage them for their own gain.
