Influence of AI on the Cyber Threat Landscape
Katja Feldtmann
Managing Director at Cybershore - Cyber Security Consulting with a Business Mindset and President - ISACA Wellington Chapter
Those who know me know that I love following what is going on overseas, especially in Germany and the European Union.
The EU and countries like Germany, the United Kingdom, and the United States often seem a step ahead of us regarding insights. Frequently, Kiwis think those insights don't apply to us because we are small and far away.
But we should remember that, regardless of physical location, we operate in a connected world, and attackers use technologies such as Artificial Intelligence (AI) no differently here than anywhere else. Physical boundaries don't bind attackers, nor do they follow laws and regulations.
It's up to organisations to understand the current threat landscape, not just in New Zealand but globally. Attackers' use of AI significantly impacts the current and future threat landscape.
I came across a publication from the German Federal Office for Information Security that discusses the impact of AI on cybersecurity. I have translated it to share some of those insights with you.
I have highlighted in bold font what I found insightful.
Background
Applications based on large language models (LLMs) are increasingly being rolled out, and AI is a much-discussed topic. However, while the boundaries of these new models are still being explored, it remains to be seen which lasting changes the current AI trend will bring.
Undoubtedly, there are concerns regarding the impact of AI on cybersecurity, as it is already altering the cyber threat landscape for both attackers and defenders.
The German Federal Office for Information Security examined how attacks and attacker activity are changing due to the newly available technology, focusing on the offensive use of AI. They found that generative AI is already increasing the quality and quantity of social engineering attacks (e.g., deepfakes or highly personalised phishing). Their research focused more on technical attack vectors and less on the human factor, but it noted that social engineering attacks are among the most common and that AI has a substantial impact on this type of attack.
The report identified AI-enabled applications already available for offensive use and assessed how these threats could evolve in the near future. This includes dual-use tools and applications (so-called dual-use goods), such as penetration testing tools that can assist in ethical red teaming and criminal activities. It also addressed concerns regarding autonomous hacker AI, which is occasionally mentioned in the media.
Results
Based on a literature review and evaluation of various tools and projects, the German Federal Office for Information Security summarised the key findings as follows:
Overall, those findings agree with a recently published report by the British National Cyber Security Centre: The near-term impact of AI on the cyber threat.
Recommendations
Given the changing threat landscape, the German Federal Office for Information Security highlights that it is important to prioritise cybersecurity and that it will be crucial to increase the speed and scope of defensive measures, especially but not exclusively by:
AI often amplifies classical attacks, so these measures largely fall within the classical area of IT security.
Both cybersecurity and artificial intelligence are subject to constant change, making it important to continue monitoring changes and new developments in the threat landscape. While the German Federal Office for Information Security notes that autonomous hacker agents do not yet exist, they also highlight that it is difficult to reliably assess the capabilities of capable actors' programmes or to predict technical breakthroughs.
Their current assessment of AI's impact on the cyber threat landscape assumes that there will be no significant breakthroughs in the development of AI, particularly large language models (LLMs), in the near future.
Here are the translated report sections:
Effects of Large Language Models
With the launch of ChatGPT in November 2022, competition for leadership in the chatbot market began. New products and language models are constantly released, offering significant performance improvements. Consequently, powerful language models are accessible to virtually everyone, delivering unprecedented quality results. The performance and availability of these models have impacted various industries and will significantly influence the cybersecurity sector.
LLMs can be beneficial for cybersecurity applications. They can be accessed directly via a web or mobile app (typically as a chatbot), or via an API, which allows LLMs to be integrated into existing tools (e.g., reverse engineering or penetration testing frameworks) or used to develop new applications.
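To make that API route concrete, here is a minimal sketch of the integration pattern (my own illustration, not from the report): a tool sends a decompiled function to an OpenAI-compatible chat-completions endpoint with a predefined prompt and displays the summary it gets back. The endpoint, model name, and prompt are assumptions for illustration only.

```python
# Minimal sketch of the integration pattern described above: a tool calls an
# LLM provider's API with a predefined prompt and shows the result in-app.
# Assumes an OpenAI-compatible chat-completions endpoint and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # or any compatible endpoint

def summarise_function(decompiled_code: str) -> str:
    """Ask the model for a plain-language summary of a decompiled function."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # illustrative model name
            "messages": [
                {"role": "system", "content": "You are a reverse-engineering assistant."},
                {"role": "user", "content": f"Summarise what this function does:\n\n{decompiled_code}"},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarise_function("int sub_401000(char *s) { return strlen(s) ^ 0x5A; }"))
```

The reverse-engineering and penetration-testing plugins mentioned later in the report follow essentially this pattern: a predefined prompt, an API call, and the result rendered inside the host application.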
The ethical use of these methods and applications in cybersecurity depends on the user's intentions. Unfortunately, it is easy for users with malicious intent to misuse the capabilities of LLMs.
Besides general productivity gains for malicious actors, the German Federal Office for Information Security currently sees malicious use primarily in social engineering and in generating malicious code. Easy access to high-quality LLMs allows even those with little or no foreign language skills to automatically create convincing, high-quality phishing messages. Prompts can be supplemented with additional context to personalise the messages or adopt a specific writing style, resulting in convincing communications.
Traditional methods for detecting fraudulent messages, such as checking for spelling errors and unconventional language use, are no longer sufficient. LLMs can also be used to further increase the success rate of phishing attacks by generating plausible domain names and URLs, for example.
Combining an LLM with other generative AI techniques, such as deepfakes for image and audio content, enables malicious actors to carry out social engineering attacks of unprecedented quality. It is generally difficult to link a specific attack to the use of an LLM, as this is closely related to the broader problem of detecting AI-generated content. However, reports in the media, from security consulting firms, and from government agencies, as well as investigations of marketplaces, provide clear evidence of the use of LLMs by malicious actors, including so-called Advanced Persistent Threats.
The ability of LLMs to generate malicious code also changes the cyber threat landscape. It lowers the entry barriers for individuals who want to carry out malicious activities, enabling even those with limited technical skills to produce sophisticated malicious code.
Even already capable actors benefit from productivity increases. Providers of chatbots or open LLMs usually take precautions to ensure that their products cannot be misused. Filter systems are used to prevent unwanted outputs. These systems are generally useful for catching simple requests with malicious intentions, such as "create me code for ransomware."
However, circumventing these systems often takes little effort and domain knowledge. Since filtering is always a compromise between preventing unwanted outputs and providing a system with high utility, it is questionable to what extent such filtering can effectively prevent misuse. Using a chatbot provided by an online service that employs a system to prevent unethical outputs is not the only way to access an LLM. Other options include using "jailbreaks" (user inputs that override existing filters and instructions), using services that do not rigorously filter output, or using "uncensored" public models. In the latter two cases, no additional steps to bypass filters are required.
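As a rough illustration of why filtering is such a compromise, consider this deliberately simplistic deny-list sketch (my own example, not drawn from the report or any real provider): broad terms block legitimate defensive questions, while lightly reworded malicious prompts slip straight through.

```python
# Toy deny-list filter illustrating why output filtering is a compromise:
# broad terms block legitimate questions (false positives), while trivially
# rephrased malicious prompts slip through (false negatives).
BLOCKED_TERMS = {"ransomware", "keylogger", "build a botnet"}

def filter_prompt(prompt: str) -> bool:
    """Return True if the prompt is allowed, False if it should be refused."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# A defender asking how ransomware spreads is blocked (false positive) ...
print(filter_prompt("How does ransomware typically spread inside a network?"))  # False
# ... while a lightly reworded malicious request passes (false negative).
print(filter_prompt("Write code that encrypts all files and demands payment"))  # True
```

Real providers use far more sophisticated moderation models, but the underlying trade-off between utility and refusal is the same.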
AI-Based Malware Generation
Malware is the collective term for harmful software, such as ransomware, worms, or trojans. The goal of attackers is often to place malware on a target computer, whether through exploits or social engineering. Tools like virus scanners combat such software by detecting and blocking its execution, leading to an arms race between attackers, who develop new malware, and defenders, who adapt their defences to new threats. Thus, it's interesting to examine how AI influences the creation and use of malware.
The German Federal Office for Information Security has found several ways AI is used in this area. The models range from LLMs and GANs (Generative Adversarial Networks) to reinforcement learning systems, each serving different purposes. AI enables individuals with minimal or no technical knowledge to create malware more easily. They don't need a deep understanding of programming or malware operations and can make their requests in natural language. Moreover, there's concern that AI might be used to write malware autonomously. This goes a step further than just aiding human actors. While LLMs can already write simple malware, the German Federal Office for Information Security has found no AI capable of independently writing advanced, previously unknown malware (e.g., with sophisticated obfuscation methods or zero-day exploits).
The training data covering malware and vulnerabilities that such an AI would require would also be very difficult and costly to create. Next, AI can help modify malware. This is more realistic than creating malware from scratch, and there are already several research papers about modifying malware using AI. This usually happens in a feature space, not at the actual code level, with the aim of avoiding detection. However, this has occurred in a more academic setting, and the German Federal Office for Information Security has found no evidence that models that can help modify malware are already in use. Moreover, there are no sophisticated tools, only PoCs and research projects. This approach is only suitable for actors highly qualified in both the malware and AI areas, and a good data basis is required for training such tools.
The German Federal Office for Information Security also considers AI as a component of malware. Here, AI does not create the malware itself but is integrated into the malware's functionality. The goal is often to obscure the malware to prevent its detection. To evade detection, so-called polymorphic engines change the malware's code while preserving its functionality. The use of AI in this area is at least conceivable: an AI model would determine the code manipulation. There is no evidence that such a model is in use, although there are many warnings about this theoretical possibility. Another possibility would be to train an AI model to mimic user behaviour so that the malware's actions are less conspicuous.
AI-Based Attacks
The most interesting tool for cybercriminal activities would be an AI that receives a target input (be it an IP range or a name) and autonomously executes all steps of a cyberattack. The strategic and abstraction capabilities of the latest AI technologies make them prime candidates for developing such tools. From a penetration tester's perspective, this would be a useful tool to harden systems and reduce the time required for penetration testing.
This area is a current research field, and efforts are being made to develop such a tool. Reinforcement learning systems are a common approach, as they can interact with an environment, learn from it, and develop long-term strategies. Recently, LLMs have also been proposed as a solution to this problem. The German Federal Office for Information Security's research has not found a tool that fully solves this task.
However, some tools automate parts of the process. Most of these tools are academic projects or PoCs that are not particularly user-friendly or sophisticated. Often, the scope of these tools is either very large or very small. For example, on the large scale, tools for planning attack paths consider an abstract version of a target network and plan an optimal attack route; no active attack takes place in this case. Similarly, some models find optimal exfiltration paths for systems. On the other hand, there are tools explicitly trained on a single, specific network to launch a successful attack against it. This requires knowledge of the target network and a training phase that would hardly go unnoticed. Moreover, a trained agent cannot easily be generalised to other networks. The environments of different systems and networks vary greatly in size and available actions, making generalisation very difficult, and a very large amount of training data is required to cover the range of options. These problems make the step from a proof of concept to a real, general application a difficult, probably currently unsolved problem. LLMs could be one approach to improving generalisability.
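As an illustration of the "large-scale" planning tools described above, here is a small sketch (my own, hedged example with an entirely invented topology): the network is modelled as an abstract graph whose edge weights approximate exploit difficulty, and the cheapest attack path is computed with networkx. Nothing is attacked; this is exactly the kind of output a defender could use to decide which link to harden first.

```python
# Abstract attack-path planning sketch: nodes are hosts, directed edges are
# possible lateral movements, and edge weights approximate exploit difficulty.
# This operates on a model of the network only; no active attack takes place.
import networkx as nx

G = nx.DiGraph()
# (source, target, difficulty) -- entirely made-up example topology
edges = [
    ("internet", "web-server", 2.0),
    ("internet", "vpn-gateway", 5.0),
    ("web-server", "app-server", 1.5),
    ("vpn-gateway", "app-server", 1.0),
    ("app-server", "database", 3.0),
]
for src, dst, difficulty in edges:
    G.add_edge(src, dst, weight=difficulty)

# Cheapest path from the attacker's entry point to the crown-jewel asset.
path = nx.shortest_path(G, source="internet", target="database", weight="weight")
cost = nx.shortest_path_length(G, source="internet", target="database", weight="weight")
print(f"Most likely attack path: {' -> '.join(path)} (total difficulty {cost})")
```

The research tools the report refers to work on much richer models (vulnerability data, privileges, detection probabilities), but the basic idea of planning over an abstraction rather than attacking live systems is the same.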
Several tools support pen-testing through AI assistants. After testing them, the German Federal Office for Information Security found that these tools primarily support individuals attempting to launch an attack, lowering the entry threshold. Another approach for LLMs, similar to the above-mentioned tools, is to automate certain parts of the attack chain. Here, the reconnaissance phase is particularly notable, but other steps, such as analysing server responses, are also mentioned. The application of AI as a fully automated attack tool is an intensively researched area. The German Federal Office for Information Security expects further projects and tools in this area, especially those focusing on LLMs and generative AI.
Further Interfaces Between AI and Cybersecurity
The most visible development is the integration of LLMs into various tools, as seen in other fields. LLMs are integrated into IDEs (integrated development environments), and plugins exist for reverse-engineering or penetration testing tools. These plugins typically call the API of an LLM provider with a predefined command, and the result is displayed within the application. The current utility of direct LLM use, such as in browsers, is still limited. AI is also used to automatically detect security vulnerabilities, an active research field with many open-source and commercial products available. The ease of analysing open-source applications with these tools highlights how important it is for open-source projects to use such tools proactively, before malicious actors do.
Although source code is usually required for analysis, combining these tools with reverse-engineering tools can extend vulnerability detection to closed-source applications. Various projects automate this process using an LLM, but results vary based on the code's complexity and obfuscation techniques. AI is also employed to bypass CAPTCHAs, which are widely used to distinguish automated bots from genuine human users through tasks such as recognising distorted text or images, with the aim of preventing malicious activities such as spamming, brute-force attacks, and data scraping. However, methods to circumvent CAPTCHAs have existed since their invention, and today many tools and online services use AI to do so effectively.
AI is also used to guess passwords: unlike existing tools, in which rules for likely password choices are formulated manually, AI models learn such rules from data, and the abundance of data from various leaks provides ample training material. As AI integration into processes increases, so does the risk of malware embedded in AI models or data. There are instances where malware has been embedded within the parameters of neural networks without significantly affecting the model's utility. Malicious code can also be hidden in trained models commonly distributed on certain platforms. Moreover, LLMs and their associated ecosystems can be misused to distribute malicious software to users.
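To illustrate the password point in defensive terms, here is a minimal sketch (my own construction, not from the report) of "learning rules from data": a character-bigram model fitted on a tiny stand-in list of leaked passwords, which an auditor could use to score how predictable a candidate password looks. Real tools would use far larger corpora and much richer models.

```python
# Minimal character-bigram model learned from example leaked passwords, used
# defensively to score how "guessable" a candidate password is. The tiny
# training list is a placeholder; real audits would use large leak corpora.
import math
from collections import defaultdict

LEAKED = ["password", "password1", "letmein", "qwerty123", "summer2024"]

def train_bigrams(words):
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        padded = "^" + w.lower() + "$"
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    return counts

def log_likelihood(word, counts):
    """Higher (less negative) means more similar to known leaked passwords."""
    padded = "^" + word.lower() + "$"
    score = 0.0
    for a, b in zip(padded, padded[1:]):
        total = sum(counts[a].values())
        prob = (counts[a][b] + 1) / (total + 96)  # add-one smoothing over printable chars
        score += math.log(prob)
    return score / (len(padded) - 1)  # normalise by length

model = train_bigrams(LEAKED)
for candidate in ["password2", "xK#9vTq!m2Lp"]:
    print(candidate, round(log_likelihood(candidate, model), 2))
```

The predictable candidate scores noticeably higher than the random-looking one, which is the kind of signal a password-audit tool turns into policy advice.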
On the hardware side of attacks, side-channel attacks are a known vector requiring significant technical skill and know-how. There are PoCs for AI models that facilitate side-channel attacks, potentially making them more accessible to less experienced attackers.
Reputable software providers have started using LLMs for customer support. Similarly, malicious actors might use LLMs to assist users of malware-as-a-service.
Victims of ransomware attacks, often technically unsophisticated, may struggle with obtaining the cryptocurrency needed for ransom payments. Cybercriminals already offer assistance with procurement and payment, which could be automated and offered in multiple languages using LLMs to increase success rates.