A CISO's Perspective - AI for Bad Actors
Andrew Smeaton
Chief Information Security Officer @ Jamf CISSP, CISA, CISM, CRISC, CCISO, CGEIT
In recent years, the threat landscape has changed drastically, creating a need for AI to protect the dynamic enterprise attack surface competently. With AI, enterprises can effectively identify, analyze, prioritize, and respond to risk; rapidly detect malware on a network; streamline incident response; and detect intrusions. In essence, instead of continually chasing after malicious activity, AI provides much-needed analysis and threat identification that helps cybersecurity professionals reduce risk and improve their security posture.
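To make the defensive side concrete, here is a minimal sketch of AI-assisted threat identification: an unsupervised model flagging unusual network events. The feature set, traffic values, and contamination rate are invented for illustration and are not a production detection pipeline.

```python
# Sketch: surfacing anomalous network events with an unsupervised model.
# All data here is synthetic; features and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: (bytes transferred, connection duration in s)
normal = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(500, 2))
# Two synthetic outliers standing in for suspicious activity
outliers = np.array([[5000.0, 30.0], [4500.0, 25.0]])
events = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(events)  # -1 = anomaly, 1 = normal

print("events flagged as anomalous:", int((labels == -1).sum()))
```

Rather than chasing individual signatures, the model learns what "normal" looks like and surfaces deviations for an analyst to triage.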
However, it is crucial to understand that as organizations bring AI into their operational and cybersecurity systems, cybercriminals (Advanced Persistent Threats (APTs), other cyber threat actors, ideological hackers, etc.) can employ those same AI techniques, or even build their own AI systems, to defeat defenses and avoid detection. This is what we call the AI/cybersecurity conundrum (get the whiskey out).
I expect the threat landscape to change both through the expansion of existing threats and the emergence of new ones. The typical character of threats will also shift in a few distinct ways. In particular, we can assume that attacks supported and enabled by progress in AI will be especially effective, finely targeted, and difficult to attribute to their sources.
Some of the ways in which AI can be used against information security are as follows.
- Data Poisoning/Manipulation Attacks
The heart of AI is data.
Machine learning and artificial intelligence tools need data to function correctly and accurately. Algorithms make better predictions when trained on large volumes of data and events. If the data is insufficient, the AI system's output is inaccurate and contains many false positives.
Thus, the primary method attackers can use to compromise AI systems is the data poisoning/manipulation attack. Here, cybercriminals focus on foiling security algorithms by targeting the training data. If a data manipulation attack succeeds, attackers could damage or even destroy an organization's entire information system.
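A toy experiment shows why poisoned training data matters: flipping a fraction of training labels degrades a model that is evaluated on clean data. The dataset, model, and flip rate below are illustrative assumptions, not a real attack.

```python
# Toy label-flipping poisoning demo on synthetic data.
# The 40% flip rate and logistic regression model are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# "Attacker" flips 40% of the class-1 training labels to class 0,
# biasing the learned decision boundary.
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
ones_idx = np.where(y_tr == 1)[0]
flip = rng.choice(ones_idx, size=int(0.4 * len(ones_idx)), replace=False)
poisoned[flip] = 0

poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, poisoned).score(X_te, y_te)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

The same mechanism applied to a security classifier's training pipeline is what makes poisoning attractive to an attacker: the model still trains and deploys, but it quietly misses what the attacker wants it to miss.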
- Large-scale Automated Attacks
Today, the most sophisticated cyber-attacks require a very skilled workforce to research targets and develop the corresponding attack payloads, which is both time-consuming and resource-intensive. With AI, the same level of sophistication can be achieved in a shorter time and on a much larger scale.
AI systems are both efficient and scalable: efficient in the sense that they can execute tasks more quickly than humans, and scalable because, with increased computing power, they can effectively carry out many instances of a nefarious task at once.
Denial-of-service (DoS), distributed denial-of-service (DDoS), ransomware, and other malware attacks will become more prevalent and easier to mount with the use of AI.
Thus, AI systems can be trained to perform large-scale coordinated cyber-attacks without much human interference.
- Phishing Attacks
Phishing messages are often sent out as massive spam email campaigns. Spearphishing, a more targeted form of phishing, focuses on specific individuals and is typically custom-designed. For these attacks to succeed, the messages must be carefully crafted for their target audiences, which requires relevant research.
With AI, attackers can easily automate the research and synthesis tasks to create highly effective and tailored messages and engage in mass phishing/spearphishing attacks in a manner that is currently infeasible.
- Impersonation Attacks
AI simulates human intelligence in learning systems, enabling them to reason like humans and imitate their actions. As we train AI systems to replicate human behavior ever more capably, they may become able to convincingly replicate specific individuals' characteristics. In one reported case, a hacker used voice-mimicking software to imitate the voice of a European company executive and fraudulently transfer around $240,000.
We can expect AI attacks to be highly tailored. An attacker can use AI systems to learn the nuances of human behavior by analyzing readily available information such as email, social media communications, and phone conversations. The knowledge gathered can then be used in impersonation attacks that are almost impossible to distinguish from genuine activity.
- Exploitation of AI and Other Vulnerabilities
AI is a relatively new, sophisticated, and complex technology that has become increasingly pervasive. However, increased complexity brings increased vulnerability, so AI also introduces a range of security weaknesses that can substantially exacerbate an organization's risk exposure. Attackers can be expected to focus on exploiting these vulnerabilities.
Attackers will also use AI to find and exploit zero-day and other vulnerabilities in network devices.
- Expansion and Evolution of Threats
With the increasing use of AI systems, the cost of attacks will fall, because AI reduces the need for human labor, intelligence, and expertise. As a result, we can expect an expansion of the set of threat actors, the set of potential targets, and the rate of cyberattacks.
The typical nature of threats may also change, as attacks are expected to become highly effective, finely targeted, and difficult to attribute. Moreover, new attacks may arise through the use of AI systems to complete tasks that would otherwise be impractical for humans.
- Attack Automation
Just as cyber defenders use AI to automate repetitive and mundane tasks, attackers can use similar AI systems to automate the tasks involved in carrying out cyberattacks. Examples include labor-intensive attacks such as spear phishing; password guessing, brute-forcing, and credential theft; impersonation through speech synthesis; exploitation of existing software or AI vulnerabilities; and even payment processing or dialogue with ransomware victims.
- Target Prioritization
For an attack to succeed, cyber attackers must identify and prioritize victims. Here, attackers can employ AI to analyze large datasets effectively and prioritize their targets.
- Social Engineering Attacks
These days, a great deal of personal information is freely available on social media and other platforms. Attackers can use AI to analyze such data and automatically launch targeted attacks against victims with scams, phishing, or disinformation.
- Disinformation and Deepfakes
AI has made it possible to generate realistic images, human voices, and videos, which has opened new avenues of attack.
Attackers can use these techniques for various malicious purposes: to create convincing fake content, fool facial recognition security systems, attack speech applications and subvert biometric voice systems, or trick an information system into classifying a malware file as safe.
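The "trick a classifier" idea is the textbook adversarial-example attack: tiny, targeted input changes that flip a model's prediction. Below is a minimal toy sketch on synthetic data; the linear model, step size, and data are all invented for illustration, and real detection systems are far more complex.

```python
# Toy evasion sketch: small perturbations flip a linear model's prediction.
# Synthetic data only; step size 0.1 is an arbitrary illustrative choice.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[clf.predict(X) == 1][0]  # a sample the model labels as class 1
w = clf.coef_[0]               # the model's weight vector

x_adv = x.copy()
steps = 0
while clf.predict([x_adv])[0] == 1:
    # Nudge the input against the weight vector (sign-gradient style)
    x_adv = x_adv - 0.1 * np.sign(w)
    steps += 1

print("prediction flipped after", steps, "small perturbation steps")
```

The same principle, applied to a malware classifier or a face recognition model, is what lets an attacker craft inputs that look benign to the AI while remaining malicious in effect.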
Carefully crafted AI-powered disinformation can have a huge impact. For example, opinion manipulation through social media is widely believed to have influenced the 2016 US elections and the UK's 2016 Brexit referendum. Moreover, attackers can use disinformation to create social divides, sow confusion, push victims toward more extreme views and opinions, and misrepresent both facts and the perceived support that a particular opinion has.
Conclusion
While advancements in AI systems have significantly improved organizations' security posture, they have equally presented new challenges, new risks, and new avenues for attackers. Therefore, it is essential for cyber defenders to first understand AI-associated risks, understand how AI-based applications work and their susceptibility to adversarial attacks, and become well-versed in AI technologies.