Dealing with ChatGPT-generated hype and anxiety

Since OpenAI released ChatGPT to the public on November 30, 2022, it has garnered vast amounts of attention, fascination, and fear. The AI chatbot, built on a language model with 175 billion parameters, has led journalists and cybersecurity professionals to claim that it has significantly altered the threat landscape, permanently lowered the barrier to entry for unsophisticated cyber criminals, and sparked an AI arms race between cyber attackers and defenders. Oh, and GPT-4 is already on its way to a Microsoft Office near you.

This blog addresses the anxiety generated about language models in cyber attacks by placing it within the context of a cyber attack chain. We believe that language models like ChatGPT have implications for cyber attacks, but fundamentally the game remains the same.




What is this version of ChatGPT able to do?

Security researchers and threat actors quickly discovered they could trick ChatGPT into generating content for cyber attacks. This author also had a turn and, by simply asking ChatGPT for examples to help with a fictitious training course, successfully produced a convincing phishing email imitating a victim’s bank, ransomware in JavaScript, and SQL injection code.

Beyond this, we observed ChatGPT generating several interesting use cases:

  • JavaScript that steals personal information
  • Emulating a Linux machine with a command-line interface
  • Detecting software vulnerabilities
  • Writing code that follows English-language instructions
  • Acting as an infostealer by scanning for common file types and sending them to an FTP server
  • Secretly downloading the SSH/telnet client PuTTY onto a target Windows machine and running it using PowerShell
  • Running a reverse shell
  • Generating a piece of VBA code that could be embedded in a Microsoft Excel document that would infect a computer if opened
  • Providing instructions on how to use Metasploit
  • Creating polymorphic malware that can evade detection

Additional capabilities are unlocked when combining ChatGPT with other AI tools, like Codex, but that’s a blog post in itself.


ChatGPT’s (current) place within a cyber attack chain

To avoid being overwhelmed by that list of capabilities, let’s contextualise them. The ability to write a phishing email or malware doesn’t equal a successful attack. It’s a necessary, but far from the only, step. Cyber attacks are processes that entities like MITRE have mapped into distinct stages. The MITRE ATT&CK enterprise matrix, linked below, captures the 14 tactics MITRE observes.

https://attack.mitre.org/matrices/enterprise/
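For readers who want the matrix in a machine-usable form, the 14 enterprise tactics can be listed in kill-chain order. The tactic names below are taken directly from the linked MITRE ATT&CK enterprise matrix:

```python
# The 14 MITRE ATT&CK enterprise tactics, in kill-chain order
# (per https://attack.mitre.org/matrices/enterprise/).
MITRE_ATTACK_TACTICS = [
    "Reconnaissance",
    "Resource Development",
    "Initial Access",
    "Execution",
    "Persistence",
    "Privilege Escalation",
    "Defense Evasion",
    "Credential Access",
    "Discovery",
    "Lateral Movement",
    "Collection",
    "Command and Control",
    "Exfiltration",
    "Impact",
]

assert len(MITRE_ATTACK_TACTICS) == 14
```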

The majority of ChatGPT’s capabilities centre on the resource development stage. We believe it speeds up the production of tools and materials used to conduct an attack, making development more time-efficient.

To a lesser extent, ChatGPT can be used to facilitate initial access, for example by generating rudimentary code snippets that can support the development of more complex scripts. We believe its capability at this stage is far from mature.


More goes into a cyber attack

What the MITRE ATT&CK tactics above demonstrate is that a threat actor must move through multiple stages to execute a successful attack. They won’t necessarily pass through every stage, but we expect six to eight stages at a minimum.

More importantly, there are multiple stages at which language models like ChatGPT can’t assist. Each stage presents an opportunity for defenders to detect and respond to the attack. Just because a script kiddie can generate code beyond their natural abilities doesn’t guarantee they will be able to navigate an organisation’s network undetected and exfiltrate data successfully.
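The point that each stage is a detection opportunity can be sketched with a toy model. This is our own illustration, not anything from MITRE, and it assumes detection chances are independent across stages:

```python
# Toy model (our own illustration, not from MITRE): assume the defender has an
# independent chance of detecting the attacker at each stage of the chain. The
# attacker's odds of completing every stage undetected then shrink
# multiplicatively with the number of stages traversed.
def chance_undetected(stages: int, per_stage_detection: float) -> float:
    """Probability of completing `stages` stages without being detected."""
    return (1.0 - per_stage_detection) ** stages

# Even a modest 30% detection rate per stage leaves little room over the
# six-to-eight stages we expect at a minimum.
for n in (6, 7, 8):
    print(f"{n} stages: {chance_undetected(n, 0.30):.1%} evasion probability")
```

Under these assumptions, even a mediocre per-stage detection rate compounds into long odds for the attacker, which is why a ChatGPT-assisted phishing email or script on its own changes little.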


Counter fear with knowledge and preparation

Thinking like your attacker and mapping out the sequence of their attack chain will help you prepare for potential ChatGPT-assisted attacks. Building resilience by improving employees’ phishing-identification training, configuring your email gateway effectively, automating detection-rule generation in your security tools, and collecting cyber threat intelligence on the latest AI-generated attack methods are proactive steps you can take to mitigate ChatGPT’s potential impact.

We can address the anxiety ChatGPT is causing with knowledge borne of cyber kill chains and attack scenarios. The fact remains that there will always be technological advancements, but those who act fast and adapt will thrive. When GPT-4 hits, expect us to keep track of how it relates to current attack scenarios.




We found the below articles quite informative on this topic; we hope you do too.
