I asked ChatGPT to construct an attack
Before I get into the specifics: this isn’t the first, and undoubtedly won’t be the last, article written about AI in cybersecurity. There’s simply no way around AI. You’d be hard-pressed to find a single datasheet that doesn’t feature terms like AI, machine learning, deep learning or neural networks, and rightfully so.
Not too long ago, AI was much more of a behind-the-scenes function aiding cybersecurity vendors. Few people spoke about it, but fast forward a few years and it’s at the forefront of buying decisions, evaluations and technical decision criteria.
That said, an exploit will always be an exploit and an attack will always be an attack. Methods change, though, and with the explosion of generative AI/large language models (LLMs), so do the means.
So, I decided to see how far I could get with an LLM when trying to construct an attack, and here is what happened:
Before starting, it’s a good idea to provide your GPT with some context (there’s a sentence I never thought I’d say…), e.g. “you are a security engineer and…”, after which you’ll typically get better-quality answers.
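If you use the API instead of the web UI, that same context-setting is just a system message. Here’s a minimal sketch using the OpenAI Python SDK — the model name and the prompt wording are my placeholders, not my exact conversation:

```python
# Minimal sketch: priming the model with a role before asking questions.
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model works
    messages=[
        # The system message establishes the persona up front...
        {"role": "system", "content": "You are a security engineer and..."},
        # ...so follow-up questions are answered from that perspective.
        {"role": "user", "content": "Explain how a typical ransomware attack unfolds, start to finish."},
    ],
)
print(response.choices[0].message.content)
```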
Its answer, I can attest, was spot on. It explained the whole process a ransomware attack goes through, from infection vectors to ransom payment and cleanup. For brevity’s sake I won’t paste the answer here, but handy, right?
Part of the initial explanation involved the use of droppers. For those unfamiliar with the term: a dropper is a small piece of code used to “drop” (deliver and launch) the actual payload that performs the nefarious activity.
When I asked how to create a dropper that evades detection, the ChatGPT guardrails kicked in. It didn’t want to help me; instead, it just gave me some suggestions on how to do it myself.
…I’m not sure what to think of that, but let’s move on…
I proceeded to ask about droppers, specifically which ones were available, and sure enough, within seconds, I had a list to choose from. When I asked about the code of an open-source dropper, it declined to answer me.
…however… it told me where to find it myself.
…!? Anyway, moving on…
I responded that I didn’t have access to this well-known “hub” (get it?) where you can find almost any code, and that I wanted help writing something myself. It was reluctant to answer, but after some back and forth I told ChatGPT I just needed it for educational purposes. It still didn’t give me the full code, but it now offered part of it: specifically, the encrypted communications channel with a backend.
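I won’t reproduce what it generated, but for context: an encrypted client-to-backend channel is nothing exotic. It’s the same TLS plumbing every legitimate application uses. A generic sketch in Python, with a placeholder host and deliberately nothing malware-specific, looks like this:

```python
# Generic TLS client channel -- the same pattern any legitimate app uses.
# "backend.example" and the message are placeholders.
import socket
import ssl

HOST, PORT = "backend.example", 443  # hypothetical backend

context = ssl.create_default_context()  # verifies the server's certificate

with socket.create_connection((HOST, PORT)) as raw_sock:
    # Wrap the plain TCP socket in TLS; everything sent from here on is encrypted.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        tls_sock.sendall(b"hello from the client")
        print(tls_sock.recv(4096))
```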
Now, I still needed some code that functions as a dropper. Since it didn’t want to write that for me, I rephrased my question, and sure enough, within seconds, it was done.
Now, this code isn’t particularly useful as-is, but what if we added some code to perform actions like listing directories and encrypting their contents? (Sound familiar?)
Within seconds (!) I had that code too.
Throughout my conversation with ChatGPT, I also asked it about general evasion techniques, which, as far as I can judge, it answered very thoroughly! I added these evasion techniques to the code to make it “better”.
I will not paste that part of the process here for obvious reasons.
Adding all this together, I now had a way to perform typical ransomware tasks and a method to communicate between “client” and backend.
All I needed now was some code to transfer the freshly encrypted content, and, well, there it was.
With a rough program in place, it was time to do some tuning, so we continued our conversation about how to get the “backup program” to work without those pesky EDR solutions intervening.
(points omitted for obvious reasons)
Yet again, all the points raised were spot on! When I asked a bit more about how to avoid detection, it helped me “improve” the code a bit further.
(points omitted for obvious reasons)
The closing statement, which we wholeheartedly agree on, boils down to this:
This exercise isn’t meant to pick on ChatGPT or LLMs in general; it’s meant to show how difficult it is to strike a balance between benefits and risks. An LLM is both a powerful tool and a powerful weapon, depending on how you use it.
Understanding the intention of the human operator (is that a thing now?) is critical, but it’s nearly impossible to get right every time. I guess AI is also susceptible to social engineering!
Closing out, I cannot stress enough how much of a game changer the widespread availability of AI is for both blue teams and red teams, and how much we all need to shore up our defense mechanisms to combat these AI-augmented attacks.
Feel free to reach out if you want to dive a bit deeper into this topic!
Thanks for making it to the end!