AI Attacks
Raghuveeran Sowmyanarayanan
What is an AI Attack?
Adversaries can manipulate AI systems in order to change their behaviour and serve a malicious end goal. As AI systems are integrated into critical components of society, these artificial intelligence attacks represent an emerging and systematic vulnerability with the potential to cause significant harm.
How are they different from cyberattacks?
Unlike conventional cyberattacks, which exploit "bugs" or human errors in code, AI attacks exploit inherent limitations in the underlying AI algorithms that are difficult to fix. Physical objects can now become attack vectors: for example, an AI attack can transform a red light into a green light in the eyes of a self-driving car through small physical alterations, such as placing a few stickers on it. Machine learning algorithms powering AI systems "learn" by extracting patterns from data, and these patterns are tied to higher-level concepts relevant to the task at hand, such as which objects are present in an image. These attacks also allow data to be misused in new ways, requiring changes in the way data is collected, stored, and used.
Are there multiple forms of AI attacks?
Yes. These attacks can take different forms that strike at different weaknesses in the underlying algorithms.
Input Attacks: Manipulating what is fed into the AI system in order to alter its output to serve the attacker's goal. Because at its core every AI system is a simple machine (it takes an input, performs some calculations, and returns an output), manipulating the input allows attackers to influence the output of the system.
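To make this concrete, here is a minimal, self-contained sketch of an input attack in Python. The "model" is a toy logistic regression standing in for any trained classifier; the weights, data, and perturbation budget are all invented for illustration, not taken from any real system.

```python
# Minimal sketch of an input (evasion) attack using only NumPy.
import numpy as np

rng = np.random.default_rng(0)
n_features = 100

# Stand-in model weights (think: a trained road-sign recogniser).
w = rng.normal(size=n_features)

def predict(x, b):
    """Probability the model assigns to class 1 ("stop sign")."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A legitimate input (pixel-like features in [0, 1]) that the model
# classifies confidently as a stop sign.
x = rng.uniform(size=n_features)
b = 3.0 - x @ w                 # choose the bias so the clean logit is +3
print(f"clean score:    {predict(x, b):.3f}")      # ~0.95: "stop sign"

# FGSM-style perturbation: nudge every feature slightly against the
# gradient of the class-1 logit (for a linear model that gradient is w).
epsilon = 0.05                  # at most a 5% change per feature
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)
print(f"attacked score: {predict(x_adv, b):.3f}")  # falls below 0.5
print(f"largest per-feature change: {np.abs(x_adv - x).max():.3f}")
```

The same principle scales to deep networks, where the gradient is computed by backpropagation: each individual feature changes only a little, yet the prediction flips.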
Poisoning Attacks: Corrupting the process by which the AI system is created so that the resulting system behaves as the attacker desires. One way to execute a poisoning attack is to corrupt the data used during this process, because the ML algorithms powering AI "learn" how to do a task from one source and one source only: data. Data is their water, food and air. Poison the data, and you poison the AI system. Poisoning attacks can also compromise the learning process itself.
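Below is a minimal sketch of the simplest form of poisoning, label flipping, assuming NumPy and scikit-learn are available. The dataset, model, and 40% flip rate are illustrative assumptions.

```python
# Minimal sketch of a label-flipping poisoning attack.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class data: two Gaussian blobs (class 1 = "malicious").
n = 1000
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(+1.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean data.
clean = LogisticRegression().fit(X_train, y_train)
print(f"clean accuracy:    {clean.score(X_test, y_test):.3f}")

# The attacker corrupts the training pipeline: 40% of the examples that
# are truly class 1 get relabelled as harmless class 0.
ones = np.flatnonzero(y_train == 1)
flip = rng.choice(ones, size=int(0.4 * len(ones)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 0

# The model trained on poisoned data now lets class-1 inputs through.
poisoned = LogisticRegression().fit(X_train, y_poisoned)
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
print(f"poisoned class-1 recall: "
      f"{recall_score(y_test, poisoned.predict(X_test)):.3f}")
```

On a typical run the poisoned model's recall on the "malicious" class drops well below the clean model's while overall accuracy still looks plausible, which illustrates why a corrupted pipeline can quietly degrade exactly the behaviour the attacker cares about.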
AI attacks can be used in a number of ways to achieve a malicious end goal.
Causing Damage: The attacker wants to cause damage by making the AI system malfunction. An example is an attack that causes an autonomous vehicle to ignore stop signs: by attacking the AI system so that it recognizes a stop sign as a different sign, the attacker can make the vehicle drive through the stop sign and crash into other vehicles or pedestrians.
Hiding Something: The attacker wants to evade detection by an AI system. An example is an attack that causes a content filter, tasked with blocking terrorist propaganda on a social network, to malfunction, letting the material propagate unencumbered.
Losing Faith in the AI System: The attacker wants an operator to lose faith in the AI system, leading to the system being shut down. An example is an attack that causes an automated security alarm to misclassify regular events as security threats, triggering a barrage of false alarms. For instance, attacking a video-based security system so that it classifies a passing stray dog as a security threat may lead to the system being taken offline, allowing a true threat to then evade detection.
How can we protect against AI attacks?
An AI security compliance program could be one mechanism for protecting against AI attacks. The goals of such a program are to 1) reduce the risk of attacks on AI systems and 2) mitigate the impact of successful attacks.
The recent news of the US President convening the leaders of seven AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI) and securing voluntary commitments on security, information sharing, third-party discovery and reporting of vulnerabilities, and watermarking of AI-generated content is along these lines.
AI Suitability assessments
Conducting "AI suitability assessments" helps evaluate the risks of current and future applications of AI. Each assessment should result in a decision on the acceptable level of AI use within a given application, weighing the application's vulnerability to attack, the consequences of an attack, and the availability of alternative non-AI methods.
AI suitability assessments should focus on answering five questions (a rough scoring sketch in Python follows the list):
- Value: What social and economic value does the AI system add?
- Ease of Attack: How easy would it be for an adversary to attack the AI system (e.g., are the datasets publicly available, or easy to reconstruct)?
- Damage: What damage would an attack on the AI system cause (e.g., how likely is an attack, and what are its ramifications)?
- Opportunity Cost: What are the costs of not implementing the AI system (e.g., forgone societal benefits)?
- Alternatives: Are there alternatives to the AI system?
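One illustrative way to turn these five questions into a rough, comparable score is sketched below. The 1-5 scales, weights, and threshold are invented for illustration; they are not part of any standard.

```python
# Illustrative scoring rubric for an AI suitability assessment.
from dataclasses import dataclass

@dataclass
class SuitabilityAssessment:
    value: int             # social & economic value added (1 low .. 5 high)
    ease_of_attack: int    # how easy an attack would be   (1 hard .. 5 easy)
    damage: int            # harm a successful attack does (1 low .. 5 high)
    opportunity_cost: int  # cost of NOT deploying the AI  (1 low .. 5 high)
    alternatives: int      # quality of non-AI alternatives (1 poor .. 5 good)

    def risk_adjusted_score(self) -> float:
        """Benefit minus risk; a positive score favours deployment."""
        benefit = self.value + self.opportunity_cost
        risk = self.ease_of_attack * self.damage / 5.0
        # Strong non-AI alternatives argue against accepting AI risk.
        return benefit - risk - 0.5 * self.alternatives

# Hypothetical example: a content filter whose training data is public,
# making attacks comparatively easy to mount.
content_filter = SuitabilityAssessment(
    value=4, ease_of_attack=5, damage=4, opportunity_cost=4, alternatives=2)
score = content_filter.risk_adjusted_score()
print(f"score: {score:+.1f} -> {'acceptable' if score > 0 else 'needs review'}")
```

The point of a rubric like this is not the exact arithmetic but forcing each dimension to be assessed explicitly and recorded before deployment.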
In terms of implementing these suitability assessments, regulators should play a supportive role, providing guidelines on best practices for how to perform the tests. In areas requiring more regulatory oversight, regulators should write domain-specific tests and evaluation metrics to be used.
Beyond this supportive role, regulators should affirm that an entity's effort in executing a suitability assessment will inform decisions about culpability and responsibility if attacks do occur. As with other compliance efforts, a company that demonstrates a good-faith effort to reach an informed decision via a suitability assessment may face more lenient consequences from regulators in the event of an attack than one that disregarded the assessments.
Mitigating potential AI attacks
Stakeholders must determine how AI attacks are likely to be used against their AI systems and then craft response plans to mitigate their effects. Questions worth asking include:
- How could adversaries have manipulated the data being collected? If the adversary controls the entities on which data is being collected, they can manipulate them to influence the data collected.
- Is an adversary aware that data is being collected?
- How could we improve detection of intrusion and attack formulation? Improve intrusion detection systems to better detect when assets have been compromised and to detect patterns of behaviour indicative of an adversary formulating an attack (a toy detection sketch follows this list).
- Are we creating attack response plans? Determine how AI attacks are most likely to be used, and craft response plans for these scenarios.
- Are we increasing research funding for methods to defend against AI attacks and for the creation of new, robust AI algorithms? Mandate the inclusion of a security assessment on all AI-related research grants.
- Are regulators alerting stakeholders to the existence of AI attacks and the preventative measures that can be taken?
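As a toy sketch of the detection idea above, the snippet below scores how far an incoming input sits from the training distribution, which can reveal probing or crudely perturbed inputs. The data, statistics, and threshold are all assumptions for illustration; real deployments use far richer detectors.

```python
# Toy out-of-distribution check for incoming inputs, NumPy only.
import numpy as np

rng = np.random.default_rng(0)

# Per-feature statistics of "normal" inputs, estimated from trusted data.
train = rng.normal(0.0, 1.0, size=(10_000, 20))
mu, sigma = train.mean(axis=0), train.std(axis=0)

def suspicion_score(x):
    """Mean absolute z-score of an input; high means out-of-distribution."""
    return float(np.abs((x - mu) / sigma).mean())

THRESHOLD = 2.0  # invented cut-off; in practice tuned on held-out clean data

normal_input = rng.normal(0.0, 1.0, size=20)
odd_input = normal_input + 3.0 * np.sign(rng.normal(size=20))

for name, x in [("normal", normal_input), ("suspicious", odd_input)]:
    s = suspicion_score(x)
    print(f"{name:>10}: score={s:.2f} flagged={s > THRESHOLD}")
```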
Conclusion
Artificial intelligence today presents seismic unknowns. Technical advancements may one day help us better understand how machines learn, and even how to embed qualities such as thought and intelligence in technology. But we are not there yet.
Current AI algorithms are intrinsically vulnerable to manipulation and poisoning at every stage of their use: how they learn, what they learn from, and how they operate. This is not an accidental mistake that can easily be fixed; it is embedded deep within their DNA.
The unchecked building of AI into critical aspects of society is weaving a fabric of future vulnerability. In high-risk application areas of AI, compliance can be mandatory and enforced by the appropriate regulatory bodies. In low-risk application areas of AI, compliance can be optional so that we don’t discourage innovation in this rapidly changing field.
The warning signs of AI attacks may be written in bytes, but we can see them and anticipate their impact. We would be wise not to ignore them.
About the Author
Raghuveeran Sowmyanarayanan is an Artificial Intelligence & Analytics leader heading the Differentiating Delivery Office at Cognizant. He previously headed the AI&A Healthcare practice and has personally led very large and complex enterprise data lake and AI/ML implementations. He can be reached at [email protected]
#AI #AIA #Cognizant #ArtificialIntelligence #ML #MachineLearning #EthicalAI #ResponsibleAI #Amazon #Anthropic #GoogleCloud #InflectionAI #Meta #MicrosoftAzure #OpenAI