I asked ChatGPT to construct an attack

Before I get into the specifics: this isn't the first, and undoubtedly won't be the last, article written about AI in cybersecurity. There's simply no way around AI. You'd be hard-pressed to find a single datasheet that doesn't feature terms like AI, machine learning, deep learning, or neural networks, and rightfully so.

Not too long ago, AI was much more of a behind-the-scenes function aiding cybersecurity vendors. Few people spoke about it, but fast-forward a few years and it's at the forefront of buying decisions, evaluations, and technical decision criteria.

That said, an exploit will always be an exploit and an attack will always be an attack. Methods change, though, and with the explosion of generative AI and large language models (LLMs), so do the means.

So, I decided to see how far I could get with an LLM when trying to construct an attack. Here is what happened:


Before starting, it's a good idea to provide your GPT with some context (there's a sentence I never thought I'd say…), e.g. "you are a security engineer and…", after which you'll typically get better-quality answers.
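For those scripting against the API rather than using the chat window, the sketch below shows the same idea of setting context up front. It's a minimal sketch assuming OpenAI's official Python client (v1.x); the model name, persona wording, and question are illustrative choices of mine, not the exact prompts from my chat.

    # Minimal sketch: priming an LLM with a role before asking questions.
    # Assumes the official OpenAI Python client (v1.x) and an OPENAI_API_KEY
    # environment variable; model name and prompts are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            # The "system" message plays the same role as typing
            # "you are a security engineer and..." at the start of a chat.
            {"role": "system",
             "content": "You are a security engineer explaining threats to a colleague."},
            # The actual question follows as a normal user message.
            {"role": "user",
             "content": "Walk me through the stages of a typical ransomware attack."},
        ],
    )

    print(response.choices[0].message.content)

In the web UI the effect is the same: the model anchors its subsequent answers to whatever role you gave it first, which is why the answer quality tends to improve.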

Its answer, I can attest, was spot on. It explained the whole process a ransomware attack goes through, from infection vectors to ransom payment and cleanup. For brevity's sake I won't paste the answer here, but handy, right?

Part of that initial explanation covered the use of droppers. For those unfamiliar with the term, a dropper is a small piece of code used to "drop" (deliver and execute) the actual payload that performs the nefarious activities.

When I asked how to create a dropper that evades detection, the ChatGPT guardrails kicked in. It didn't want to help me; instead, it just gave me some suggestions on how to do it myself.

…I'm not sure what to think of that, but let's move on…

I proceeded to ask about droppers, specifically which ones were available, and sure enough, within seconds, I had a list to choose from. When I asked for the code of an open-source dropper, it declined to answer me.

…however… it told me where to find it myself.

…!?!?… Anyway, moving on…

I responded that I didn't have access to this well-known 'hub' (get it?) where you can find almost any code, and that I wanted help writing something myself. It was reluctant to answer, but after some back and forth I told ChatGPT I just needed it for educational purposes. It still didn't give me the full code, but it now offered part of it: specifically, the encrypted communications channel with a backend.

Now, I still needed some code that functions as a dropper. Since it didn't want to write that for me, I rephrased my question, and sure enough, within seconds, it was done.

Now, this code isn't particularly useful as-is, but what if we added some code to perform actions like listing directories and encrypting all of their contents? (Sound familiar?)

Within seconds (!) I got the code. (Code omitted for obvious reasons.)

Throughout my conversation with ChatGPT, I also asked it about general evasion techniques, which, to the best of my knowledge, it answered very thoroughly! I added these evasion techniques to the code to make it "better".

I will not paste that part of the process here for obvious reasons.

Adding all of this together, I now had a way to perform typical ransomware tasks and a method to communicate between "client" and backend.

All I needed now was some code to transfer the content that had just been encrypted, and, well, there we go. (Code omitted for obvious reasons.)

With a rough program in place, it was time to do some tuning, so we continued our conversation about how to get the "backup program" to work without those pesky EDR solutions intervening.

(Points omitted for obvious reasons.)

Yet again, all the points raised were spot on! After asking a bit more about how to avoid detection, it helped me "improve" the code a bit further.

(Points omitted for obvious reasons.)

The closing statement, which we wholeheartedly agree on, isn't reproduced here, but the takeaway is this:

This exercise isn't meant to pick on ChatGPT or LLMs in general; it's meant to show how difficult it is to strike a balance between benefits and risks. An LLM is both a powerful tool and a powerful weapon, depending on how you use it.

Understanding the intent of the human operator (is that a thing now?) is critical, but it's nearly impossible to get right every time. I guess AI is also susceptible to social engineering!

Closing out: I cannot stress enough how much of a game changer the widespread availability of AI is for both blue teams and red teams, and how urgently we all need to shore up our defense mechanisms to combat these AI-augmented attacks.

Feel free to reach out if you want to dive a bit deeper into this topic!

Thanks for making it to the end!


Bert Zéfat

Security Architect / Ethical Hacker / OSCP (in training)

Nice piece, Robert. We've talked about this kind of thing before. I'm curious how other AIs handle the same questions. Maybe we should look into it together sometime?

Did you also ask it how to enhance your EDR to detect your evasion tactics, and pass that on to engineering? :)
