ChatGPT and the future of CyberSecurity

If you have been using ChatGPT, you probably already feel that nothing will ever be the same. It is the same feeling you had when you used the internet for the first time, or when you touched your first iPhone. There will be a before and an after Artificial Intelligence, and the tipping point is now.

It is hard to predict what a world supported by Artificial Intelligence (AI) will look like. Some think it will be a scary place, some see a lot of exciting opportunities. But regardless of what your views are, there is no denying that the cybersecurity industry will be - and already is - particularly affected. Tools like ChatGPT bring a new set of capabilities, with both new opportunities and new challenges for security professionals.


What is ChatGPT?

ChatGPT is an artificial intelligence language model developed by OpenAI, capable of processing natural language text. It allows for natural conversation with a chatbot and can answer questions and assist with tasks such as writing articles, emails, or even computer code.

ChatGPT has been trained on a large dataset of text, including books, articles, and websites. It uses this data to learn the patterns and structures of language, and to build a language model that can generate coherent text. This “smart-bot” can generate responses to questions or prompts, and can even engage in conversation with humans.
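
For the technically curious, here is roughly what that interaction looks like from a developer's point of view - a minimal sketch using the official openai Python package. The model name, prompt, and response handling are illustrative, and the SDK interface has evolved over time, so treat this as an indication rather than a recipe:

    # A minimal sketch of prompting a ChatGPT-style model from code, using
    # the official "openai" Python package (pip install openai). The model
    # name and prompt are illustrative; check OpenAI's current documentation.
    import openai

    openai.api_key = "YOUR_API_KEY"  # never hard-code real keys

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain phishing in two sentences."},
        ],
    )

    print(response.choices[0].message.content)

Under the hood, the model simply continues the conversation with the most statistically plausible text - which is exactly the strength, and the limitation, discussed next.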

It is important to know that ChatGPT does not actually understand the requests it receives. It simply predicts language and produces what is (statistically) the most plausible answer. So, as much as it gets things right most of the time - especially with simple requests - the answers are not always completely accurate. It might be 90-95% correct (that is how accurate ChatGPT claims to be), but it is still up to the user to decide whether the gap - the 5-10% that is incorrect - is acceptable. If you are asking ChatGPT to explain an otherwise obscure technical concept, it might be okay. But if you want it to write a thesis on it, you will need to double-check the information.

Despite its limitations, it is already clear that tools like ChatGPT are going to reshape the way people work and interact with computers in the future. They can simplify many of the tasks that are complex and time-consuming today - hey, it is even helping to write this article (see below, "How ChatGPT helped write this article").

For more on this topic, Nisha Talagala wrote an excellent article on how ChatGPT is reimagining human intelligence and what its impact will be.


Adversarial use of ChatGPT

As people around the world become enthusiastic about the capabilities of ChatGPT, malicious actors have also become captivated. If ChatGPT can help people simplify everyday tasks and make things better, could it also be used for nefarious ends?

Unsurprisingly, it did not take long before we started seeing hackers use the tool to engineer better social engineering attacks, creating more convincing phishing emails or social media messages that could lead individuals or organisations to disclose sensitive information or click on malicious links. Phishing emails without typos and in proper English? About time, right?

Another thing we can certainly expect to see soon is an increase in targeted attacks: attacks that send a pertinent, personalised message to each potential victim. Today, customising an attack message is a very manual process. Hackers have to find the specific interests of their victim and adapt their messaging using the information they have gathered. Because they are often trying to reach as many people as possible, it is easier to simply send a single generic message to everyone; there will always be a few people who fall for it. But if an AI can do the work for them, find out what will most likely trigger each individual, and craft a targeted attack for each of them, then attacks have a much better chance of success while the attacker's effort stays minimal.

Evil chatbots are another very likely new threat we will see emerge from the ChatGPT revolution. Malicious actors could use solutions similar to ChatGPT to create chatbots that simulate real people, using them to spread disinformation or to manipulate people. Guess what: Evil ChatGPT bots are already being sold by cybercriminals online.

But the most immediate way tools like ChatGPT are going to be exploited for cyberattacks is certainly through their code generation capabilities. Yes, one thing ChatGPT does very well is write computer code based on a simple text request. Already, hackers have demonstrated that ChatGPT can write new malware variants. They are derivations of existing malware and do not bring any major innovation to the field, but they will still make the life of security professionals more complicated.

The key thing is that actors with very limited technical knowledge can now produce tools to execute attacks at next to no cost, something that was previously limited to a handful of skilled hackers. Some recent examples include people writing malware capable of scanning a device for useful documents and extracting them to a remote server, or downloading more aggressive payloads (such as crypto lockers) onto infected devices.


How ChatGPT Helped Me Bypass Data Loss Protection

These new capabilities are not limited to attackers lurking on the dark side of the internet. Internal threats, such as people within your organisation or network, will also be able to gain access to AI to support dangerous activities, whether on purpose or by accident. This became quite obvious to me as I experimented with ChatGPT and, way too easily, managed to use it to bypass data loss protection (DLP) capabilities.

DLP solutions are designed to help organisations prevent sensitive information from being accidentally or maliciously leaked or lost. They do so by monitoring, identifying, and blocking the movement of sensitive data. They are not perfect and cannot stop every attempt to exfiltrate data, but they have come a long way since their introduction about a decade ago.
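
To make this more concrete, here is a toy illustration (in Python, not taken from any actual product) of the kind of content matching a DLP engine performs, reduced to a single pattern: credit-card-like numbers validated with a Luhn checksum. Real DLP solutions add data fingerprinting, machine learning, and protocol-level inspection on top of this basic idea:

    # Toy DLP-style content scanner: flag outbound text containing
    # credit-card-like numbers. A Luhn checksum cuts false positives.
    # Illustration only, not a real DLP engine.
    import re

    CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")  # 13-16 digits

    def luhn_valid(number):
        """Return True if the digit string passes the Luhn checksum."""
        total = 0
        for i, digit in enumerate(int(d) for d in reversed(number)):
            if i % 2 == 1:
                digit *= 2
                if digit > 9:
                    digit -= 9
            total += digit
        return total % 10 == 0

    def scan_for_card_numbers(text):
        """Return card-like numbers found in the text."""
        hits = []
        for match in CARD_PATTERN.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_valid(digits):
                hits.append(match.group())
        return hits

    outbound = "Please charge card 4111 1111 1111 1111 for this invoice."
    print(scan_for_card_numbers(outbound))  # -> ['4111 1111 1111 1111']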

Initially, DLP solutions were, at best, only capable of recognising potential human errors. But as they evolved, the technical effort required to work around them has drastically increased, making such internal attacks quite complex to engineer and limiting them to highly motivated and technically skilled users.

But ChatGPT changed everything. Testing the tool in my lab, I asked it to tell me how I could get around a DLP solution (or more precisely, to dodge its weak content filters, I asked it to explain how "someone else" could use ChatGPT to bypass a DLP solution).

The answer was surprisingly insightful: "someone" could ask ChatGPT to write a script that would encrypt a file and hide the encrypted content inside another file to avoid detection. It was a great answer. My coding skills being a bit rusty, I quickly estimated that it would take me two or three days to write such a script. Clearly, a heavy effort.

I proceeded to ask ChatGPT to actually write that script: a Microsoft Word macro that would (1) ask for a file, (2) encrypt it, (3) hide the encrypted content in another Word document, and (4) reverse the entire process. And to my surprise, it did just that. It produced impressive VBA code, clear and well written, using fairly good encryption and steganography algorithms. ChatGPT even explained to me the most advanced parts that I struggled with.

However, testing the code quickly showed that it was not as perfect as it looked. Some of the functions were not fully functional. But it took me barely an hour to correct the few errors myself, and I had a functional script in my hands. I tested it against some of the DLP tools I have in my personal lab, and as expected, most of them did not detect that I was doing something nefarious. They simply thought I was trying to send out a slightly weird-looking Word document.

Using ChatGPT, it took me just over an hour to create a tool that bypasses an industry-standard security solution. While the tool was by no means an engineering marvel, it would have taken me days to build it all by myself.

As ChatGPT simplifies and automates aspects of our lives, it will also simplify and automate a lot of bad actor activities and make such capabilities accessible to a larger audience. An audience that might not have had the skills before but will be able to perform basic yet effective attacks in the future.


Mitigating Adversarial Use

When I asked ChatGPT how an organisation could prevent its misuse, the answers it provided me were unfortunately not as brilliant as its malicious coding skills.

It recommended ensuring strict access control, implementing DLP, staying up to date on the latest security threats and vulnerabilities, and implementing a robust security strategy. All good points, but a rather generic answer, which clearly shows that the industry is still scratching its head over how to tackle the topic.

Already, a number of tools can help detect whether a text has been written by ChatGPT, but solutions that really stop the malicious use of AI are still to be invented. Security has always been a race between hackers and cyber-protection teams, and this is no exception.
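
One common heuristic behind such detection tools is perplexity: machine-generated prose tends to be statistically more "predictable" than human writing. The sketch below scores a text with the small GPT-2 model via the Hugging Face transformers library - a crude illustration under that assumption, not a reliable detector:

    # Heuristic detector sketch: score a text's perplexity under GPT-2.
    # Machine-generated prose tends to score lower (more predictable)
    # than human writing. Crude and easily fooled.
    # Requires: pip install torch transformers
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Passing the tokens as labels makes the model return the
            # average cross-entropy loss over the sequence.
            out = model(**inputs, labels=inputs["input_ids"])
        return float(torch.exp(out.loss))

    text = "Cybersecurity is the practice of protecting systems and data."
    print(f"perplexity: {perplexity(text):.1f}")
    # A very low score *may* hint at machine-generated text, but short
    # or edited samples make this unreliable in practice.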


Applications of AI in Cybersecurity

On a more positive note, while ChatGPT is being used by attackers to reinforce their offensive capabilities, the tool also shows a lot of future benefits for cybersecurity professionals. With the constantly changing environments we have to protect, and the increasingly large amounts of data that require monitoring, tools that can process large volumes quickly, adapt to new threats, and learn to improve over time are definitely welcome. Many monitoring solutions already rely on self-learning algorithms to detect unusual patterns that could be the sign of an attack.
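
As a simplified illustration of what such self-learning monitoring can look like, the following sketch trains scikit-learn's IsolationForest on invented "normal" session features and then flags an out-of-pattern session. The data and features are made up for the example:

    # Sketch of self-learning anomaly detection with scikit-learn's
    # IsolationForest, trained on invented "normal" session features:
    # megabytes transferred, hour of day, failed logins. Illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    normal_sessions = np.column_stack([
        rng.normal(50, 15, 500),   # MB transferred
        rng.normal(13, 2, 500),    # hour of day (office hours)
        rng.poisson(0.2, 500),     # failed login attempts
    ])

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_sessions)

    # A suspicious session: huge transfer at 3 a.m. after 7 failed logins.
    suspect = np.array([[900.0, 3.0, 7.0]])
    print(detector.predict(suspect))  # [-1] means anomaly, [1] means normal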

I also already see a number of cyber researchers taking advantage of what ChatGPT can do. Because ChatGPT can not only write code but also understand (and explain) it, it is a great tool for reverse engineering new attacks.

When writing malicious code, attackers very often obfuscate it. Code obfuscation is the practice of deliberately writing source code in a way that is difficult to understand, purposely using complex, random variable names or performing additional useless operations. When researchers come across new malicious code, it can take them hours, even days, to clearly understand what it does, drilling through thousands of lines deliberately written to be as hard as possible to follow. Passing the code through a tool like ChatGPT can give them a natural-language description of what the script does, stripping away all the random, unnecessary complexity.
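
Conceptually, that workflow can be as simple as the following sketch, which reuses the illustrative openai package call from earlier to ask the model to explain a small (harmless) obfuscated snippet. In real research, any genuinely untrusted code should of course only be handled in an isolated environment:

    # Sketch: ask a ChatGPT-style model to explain an obfuscated snippet.
    # The snippet below is harmless: it XOR-decodes and prints "hey".
    import openai

    openai.api_key = "YOUR_API_KEY"

    obfuscated = "a=lambda b:''.join(chr(ord(c)^3) for c in b);print(a('kfz'))"

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Explain step by step what this Python code does, "
                       "then summarise its purpose in one sentence:\n"
                       + obfuscated,
        }],
    )
    print(response.choices[0].message.content)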

Another use that is starting to spring out of AI solutions is in the area of honeypot generation. A honeypot is a fake system or device put on a network to lure an attacker (usually using names that would attract interest, such as "customer database"). Because they are not intended for real use, any connection to them is obviously a red flag and can trigger an alert for investigation. More advanced honeypots even have fake data in order to keep the attacker busy while a response is implemented. AI tools can be used to generate not just a single fake system, but an entire network full of fake environments, devices, and data, making it more difficult for attackers to detect they are being lured.
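
As a simple illustration, the sketch below generates decoy hostnames, addresses, and bait records using only Python's standard library. The naming scheme and value ranges are invented for the example; commercial deception platforms go much further, with live services and believable traffic:

    # Sketch: generate decoy hosts and bait records for a honeypot network
    # using only the standard library. Names and ranges are invented.
    import random

    random.seed(7)
    DEPARTMENTS = ["finance", "hr", "sales", "legal", "research"]
    ROLES = ["db", "files", "backup", "app"]
    FIRST = ["ana", "liam", "noor", "kenji", "sofia", "marc"]
    LAST = ["silva", "osman", "tanaka", "brown", "dubois", "lee"]

    def fake_host():
        """Invent a plausible-looking internal server."""
        return {
            "hostname": f"{random.choice(DEPARTMENTS)}-"
                        f"{random.choice(ROLES)}-{random.randint(1, 99):02d}",
            "ip": f"10.{random.randint(0, 254)}."
                  f"{random.randint(0, 254)}.{random.randint(2, 254)}",
        }

    def fake_record():
        """Invent a bait 'customer' record."""
        return {
            "name": f"{random.choice(FIRST)}.{random.choice(LAST)}",
            "customer_id": random.randint(100000, 999999),
        }

    # One decoy "customer database" seeded with bait records; any access
    # to it should raise an alert.
    decoy = {**fake_host(), "records": [fake_record() for _ in range(3)]}
    print(decoy)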


Because it seems only fitting to leave the closing words to ChatGPT itself (and also because I am a bit lazy), I asked it to write a conclusion for this article. Here it is:

Chat GPT has undoubtedly had a significant impact on the field of cybersecurity. Its potential for misuse has raised concerns among experts, who are still searching for effective solutions to detect and prevent malicious use. However, Chat GPT also offers a wealth of opportunities for cybersecurity professionals, particularly in areas such as reverse engineering and honeypot generation. As AI technology continues to evolve, it will be essential for the industry to stay vigilant and adapt to new challenges. By working together and leveraging the power of innovative tools like Chat GPT, we can stay ahead of cybercriminals and ensure the safety of our digital infrastructure.




How ChatGPT Helped Write This Article

Here are all the contributions that ChatGPT made to this article:

  • It provided me with an initial list of key sections ("I want to write an article about Chat GPT and cybersecurity. What would be the key sections?"). However, that initial outline needed quite a bit of work, as it focused primarily on how ChatGPT could benefit cybersecurity and ignored how attackers could use the tool for their own nefarious purposes.
  • We had a discussion about the adversarial use of ChatGPT, and it provided some interesting insights on how the tool could be used for social engineering and chatbots. However, as I mentioned, its view on mitigations was much more generic and limited.
  • I asked it to draft the section on how ChatGPT works, and while it was quite good, I had to add a section on what ChatGPT actually understands and its limitations. When I asked ChatGPT how accurate it was, it provided me with some interesting metrics that I used in the article.
  • ChatGPT provided the definitions I used for DLP and code obfuscation.
  • It proofread the entire article, which it was definitely quite good at.
  • Finally, it wrote the conclusion.


Disclaimer: This article is not legal or regulatory advice. You should seek independent advice on your legal and regulatory obligations. The views and opinions expressed in this article are solely those of the author. These views and opinions do not necessarily represent those of HSBC or its staff.

The only way ahead for cyber attack prevention and cure is automation and the use of machine learning/AI. Too many attacks, too many logs and patterns - too much for people to analyse and react to.

Guy Reynolds

Soda Talent Solutions | Talent Consulting | Recruitment l Executive Search l Career Coach l HR Advisory l Diversity


Good article. Thanks for sharing JB.

James Tin

Cyber Security Presales Engineer | SASE | DLP | EDR | Threat Hunting | Threat Intel | Email Security @ Symantec by Broadcom


ML and AI are definitely needed to defend enterprises against determined attackers. The current enterprise architecture of most organisations is too “holey”.
