- IBM's 2023 Cost of a Data Breach Global survey found that "almost all organizations use or want to use AI for cybersecurity operations" but "only 28% of them use AI extensively, meaning most organizations (72%) have not broadly or fully deployed it enough to realize its significant benefits".
- That same year, an IBM Global Security Operations Center Study determined that SOC professionals "waste nearly 33% of their time each day investigating and validating false positives".
- SOC professionals in the same report also stated that "manual investigation of threats slows down their overall threat response times (80% of respondents), with 38% saying manual investigation slows them down 'a lot'".
AI has significantly transformed the landscape of cybersecurity, affecting both defenders and attackers in a multitude of ways. In this article, we will look at AI's impact on cybersecurity through the lenses of cyber defenders and attackers, and discuss the future of cybersecurity now that AI is being adopted by both parties. For the sake of simplicity, we will refer to "traditional AI", large language models (LLMs) and generative AI, and machine learning (ML) collectively as AI.
"Invincibility Lies in the Defence" ~Sun Tzu
Let's face it, defenders have it hard. Between competing priorities, short-staffed teams, and the need to sleep, defenders have a lot to deal with. If only there were a tool that defenders could use to bolster their defenses. Well, have no fear: AI is here. From a defender's standpoint, AI can be used for:
- Threat Detection and Prevention: AI-powered tools have improved threat detection by analyzing vast amounts of data in real time, helping security teams identify anomalies and potential threats more accurately than traditional rule-based methods.
- Automated Incident Response: AI-driven incident response can help in rapidly containing and mitigating threats. Automated systems can execute predefined responses to certain types of attacks, reducing response times and potentially minimizing damage.
- Phishing Detection: AI can analyze emails and URLs for signs of phishing attacks, helping to filter out malicious content and reduce the risk of successful social engineering attacks.
- Vulnerability Management: AI can assist in identifying and prioritizing vulnerabilities by analyzing vast databases of vulnerabilities, patch information, and threat intelligence.
- Pattern Recognition: Machine learning algorithms can identify patterns in data that might be too complex for human analysts to discern, aiding in the identification of sophisticated attacks.
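To make the anomaly-detection idea concrete, here is a deliberately tiny sketch: it flags a metric (failed logins per hour, in this hypothetical) that deviates sharply from a learned baseline using a z-score. Real AI-driven detection uses far richer models and features; the data and threshold below are invented for illustration:

```python
import statistics

def is_anomalous(baseline, current, threshold=3.0):
    """Flag `current` if it deviates from the baseline by more than
    `threshold` standard deviations (a simple z-score test)."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

# Hypothetical baseline: failed logins per hour during a normal week.
normal_hours = [3, 5, 2, 4, 6, 3, 5, 4, 2, 3]

print(is_anomalous(normal_hours, 4))    # typical hour -> False
print(is_anomalous(normal_hours, 250))  # credential-stuffing spike -> True
```

The point is not the statistics but the workflow: learn what "normal" looks like from historical data, then surface deviations automatically instead of writing a static rule for every attack.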
"The Possibility of Victory in the Attack" ~Sun Tzu
The defender's dilemma goes something along the lines of "security incidents are essentially inevitable because defenders have to be right 100% of the time, whereas the bad guys and gals only have to be right once". While there is plenty of debate about whether defenders truly need to be right 100% of the time or just often enough, it's hard to ignore the fact that attackers have a plethora of tools they can use to wreak havoc. From an attacker's standpoint, AI can be used for:
- Automated Attacks: Attackers can leverage AI to create automated attack tools that can adapt and evolve in response to defensive measures. This makes attacks more sophisticated and harder to detect.
- Evasion Techniques: AI can be used to generate evasion techniques that enable malware to bypass traditional security measures, making it challenging for defenders to identify and block malicious activities.
- Impersonation and Social Engineering: AI can be employed to create more convincing phishing emails, deepfake voice recordings, or even chatbots that convincingly imitate human communication.
- Exploiting AI-Based Defenses: Attackers can study and reverse-engineer AI-based defenses to find weaknesses, allowing them to design attacks that exploit the blind spots or limitations of these systems.
- Data Poisoning: Attackers can manipulate the training data of AI systems to introduce bias, degrade performance, or cause misclassification of inputs.
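Data poisoning, from the last bullet, can be shown with a toy example: a 1-nearest-neighbor classifier is fooled when an attacker injects a single mislabeled sample near the region of feature space they want to hide in. The one-dimensional features, labels, and classifier here are simplified stand-ins, not a real detection model:

```python
def nearest_neighbor(train, point):
    """Classify `point` by the label of the closest training sample (1-NN)."""
    return min(train, key=lambda sample: abs(sample[0] - point))[1]

# Clean training data: one numeric feature, low = benign, high = malicious.
clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (8, "malicious"), (9, "malicious"), (10, "malicious")]

# Poisoned copy: the attacker slips one mislabeled sample into the
# training set, right next to the behavior they want to disguise.
poisoned = clean + [(8.4, "benign")]

print(nearest_neighbor(clean, 8.3))     # -> "malicious"
print(nearest_neighbor(poisoned, 8.3))  # -> "benign": the poisoned label wins
```

Production models are harder to sway with one sample, but the principle scales: if attackers can influence the training data, they can influence the decisions the model makes.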
"The future belongs to those who prepare for it today." ~Malcolm X
The future of cybersecurity in the context of widespread AI adoption by both defenders and attackers is likely to be characterized by a dynamic and constantly evolving landscape. Here are some key trends and possibilities that one can expect to see in the near future:
- AI-Powered Autonomous Systems: Defenders will increasingly rely on AI-driven autonomous systems that can detect, respond to, and mitigate threats in real-time. These systems will learn from ongoing data, adapt to new attack vectors, and make decisions without human intervention, significantly reducing response times.
- Enhanced Threat Intelligence: AI will enable the aggregation and analysis of massive amounts of threat intelligence data, allowing organizations to proactively identify emerging threats, vulnerabilities, and attack patterns.
- Hyper-Personalized Attacks: Attackers will use AI to generate highly personalized attacks that leverage publicly available information to create convincing social engineering attempts, making them even harder to detect.
- Privacy, Ethics Concerns, and Regulatory Changes: The use of AI in cybersecurity will raise privacy and ethics concerns, especially when it comes to collecting and analyzing user data for threat detection. Striking the right balance between security and individual rights will be a challenge. Additionally, as AI becomes a fundamental tool in cybersecurity, regulatory bodies may introduce new standards and requirements to ensure responsible AI usage and protection against AI-driven attacks.
- Adversarial Machine Learning: The arms race between defenders and attackers will intensify in the realm of adversarial machine learning. Defenders will work to create AI models that are robust against adversarial attacks, while attackers will develop increasingly sophisticated techniques to bypass AI defenses.
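The adversarial machine learning arms race can be sketched with a toy linear "spam score": an attacker who knows, or has reverse-engineered, the model's weights can nudge each feature against its weight to push a malicious input across the decision boundary, in the spirit of gradient-based evasion attacks such as FGSM. The weights and feature values below are hypothetical:

```python
def spam_score(features, weights):
    """Toy linear model: a positive score means 'classify as spam'."""
    return sum(f * w for f, w in zip(features, weights))

def evade(features, weights, step=1.0):
    """Shift each feature against the sign of its weight, pushing the
    score toward the benign side of the boundary (FGSM-style evasion)."""
    return [f - step * (1 if w > 0 else -1) for f, w in zip(features, weights)]

weights = [2.0, -1.0, 1.5]   # hypothetical learned weights
email = [1.0, 0.2, 0.8]      # feature vector of a spam email

print(spam_score(email, weights))                  # positive: flagged as spam
print(spam_score(evade(email, weights), weights))  # negative: slips through
```

Defending against this means building models whose decisions don't flip under small, deliberate perturbations, which is exactly the robustness research the bullet above describes.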
AI has brought both advantages and challenges to the realm of cybersecurity. Defenders benefit from enhanced threat detection and response capabilities, while attackers can exploit AI's capabilities to create more sophisticated and adaptive attacks. In this cold war between attackers and defenders, innocent bystanders, known as end-users, are caught in the crossfire. To prevent, or at the very least minimize, collateral damage, AI regulation, policy frameworks, and governance are needed. The fun is only starting; let's see what the future holds for AI, especially when it comes to its adoption in cyberspace.