Four Ways ChatGPT Is Reshaping the Email Threat Landscape
Hornetsecurity
Leading cloud security and compliance SaaS provider, protecting 125,000 organizations globally.
As artificial intelligence technologies gain momentum worldwide, we can't help but wonder about their possible impact on the security threat landscape. Specifically, we're talking about ChatGPT and email.
Will artificial intelligence-powered chatbots, such as ChatGPT, help cybersecurity professionals or hinder them?
While ChatGPT can assist cybersecurity professionals with training, analysis, and threat insight, its accessibility can also help novice threat actors create and distribute malicious software and spam, and conduct attacks with minimal effort. For this reason, the AI platform has become a hot topic among cybersecurity professionals, who are debating the benefits and ramifications of this powerful tool.
Here are four ways ChatGPT is impacting the threat landscape:
1. Accessibility and ease of use
One of the most significant aspects of ChatGPT is that anyone can use it, unlike other AI tools that require some technical expertise.
However, this ease of use also means amateur cybercriminals can access and use it to create malicious code, even without much coding or programming knowledge. This is a major concern because once a hacker identifies a vulnerability, they can use ChatGPT to assist in crafting an exploit and learn how to insert it within an attack chain.
Sure, there are some controls on malicious content generated via the ChatGPT web interface, but those controls are easily circumvented with carefully crafted prompts or via the ChatGPT API, which is accessible with a modicum of technical ability and a few Google searches. Even if the API is out of reach for an aspiring threat actor, there are services on the dark web selling access to it.
2. Machine-learning capabilities
For the moment, ChatGPT is based on a static data set, but that is set to change imminently. ChatGPT's machine-learning capabilities mean that the more queries it encounters, the more it learns, becoming more accurate, more capable of answering technical queries effectively, and thus a more formidable weapon for cybercriminals.
For cybersecurity professionals, this will make cyberattacks that use the tool harder to detect and counter. Given this, one may ask: will OpenAI add restrictions to its capabilities for specific requests? At this point, there has been a lot of concern but little information on what, if anything, OpenAI will do to gate access to specific queries. Time will tell, but at this stage, the security space will need to adjust accordingly.
3. Social engineering
ChatGPT is capable of imitating human writing, which means it can quickly and easily generate spear phishing emails, making it a powerful social engineering tool. Unfortunately, the closer it gets to imitating human behavior, the more difficult it becomes for recipients to discern that they are dealing with a fraudulent email. In fact, several news stories over the past few months point out that ChatGPT-generated content is challenging to spot, even with other AI tools.
Scammers typically use poor grammar, but with ChatGPT, they're more likely to craft a perfect-looking email. Pairing this with the ability to look up information on a potential target and fashion an email specifically for that person creates a grim picture for security teams to consider as we proceed into the AI generation.
4. Exploitation of popularity
Another concern is how cybercriminals are taking advantage of ChatGPT's popularity and exploiting the public's eagerness to try it out. Due to the high search volume for ChatGPT, attackers are creating fake pages to trick users into downloading malware, which can have serious consequences. In fact, there have been reports of "ChatGPT Addons" for popular web browsers that were actually malware designed to harvest credentials.
How far will ChatGPT go?
Again, although ChatGPT is designed to refuse illegal requests, researchers are finding ways to trick the system into developing malicious code. This is not to mention the accessibility of the API, which has no such restrictions at the time of writing.
It is important to note, however, that although ChatGPT might help a user with malicious intent write harmful code, it still requires some technical knowledge to adjust that code and create functional malware for a given situation (something ChatGPT can also provide input on, given the right prompts). For example, if a novice threat actor knows the target system is an Apache web server, they can use ChatGPT to learn about common Apache vulnerabilities and exploitation methods and ultimately have it generate code to help conduct the attack.
That said, the threat actor still needs to know where and how to use that exploit, something AI-based tools can only provide so much information on at this point in time.
Interested in a concrete example? Here's how our Security Lab at Hornetsecurity used ChatGPT to create a ransomware attack.
So, what can we do to counter these threats?
At Hornetsecurity, our cybersecurity professionals constantly monitor ChatGPT's capabilities and implement the most up-to-date detection mechanisms in our Advanced Threat Protection to stay ahead of the game.
Our service counters threats with multiple layers of protection to play it safe on all three email threat vectors: attachments, links, and content.
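To make the layered idea concrete, here is a deliberately simplified sketch of a defense-in-depth email filter spanning those three vectors. This is purely illustrative and not Hornetsecurity's actual detection logic; the extension list, urgency phrases, function names, and blocklist are hypothetical placeholders for real, far more sophisticated analysis engines:

```python
# Toy layered email filter: one check per threat vector, quarantine if any trips.
# All lists below are hypothetical examples, not a real detection ruleset.

SUSPICIOUS_EXTENSIONS = (".exe", ".js", ".vbs", ".scr", ".iso")
URGENCY_PHRASES = ("verify your account", "payment overdue", "act immediately")


def scan_attachments(filenames):
    """Layer 1: flag attachments with commonly abused file extensions."""
    return any(name.lower().endswith(SUSPICIOUS_EXTENSIONS) for name in filenames)


def scan_links(urls, blocklist):
    """Layer 2: flag URLs found on a known-bad blocklist."""
    return any(url in blocklist for url in urls)


def scan_content(body):
    """Layer 3: flag message bodies containing urgency-style phishing phrases."""
    lowered = body.lower()
    return any(phrase in lowered for phrase in URGENCY_PHRASES)


def classify_email(filenames, urls, body, blocklist):
    """Defense in depth: a hit on any single layer quarantines the message."""
    if (scan_attachments(filenames)
            or scan_links(urls, blocklist)
            or scan_content(body)):
        return "quarantine"
    return "deliver"
```

The design point the sketch illustrates is that the layers are independent: an AI-written phishing email with flawless grammar may slip past content analysis, yet still be caught by the link or attachment layer.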
Advanced Threat Protection is also part of our Next-Gen Security for Microsoft 365 – learn more!