The New Frontier
Marcel Lammerse GAICD
Automates up to 90% of security compliance | Cybersecurity Leader and Strategic Advisor | IRAP Assessor | Neurodiversity Advocate | Founder, Cyber Sense | CISM, CISA, CRISC, CISSP
Artificial Intelligence (AI) and Machine Learning (ML) have revolutionized the way we interact with technology, enabling unprecedented levels of automation, personalization, and efficiency. However, adopting this powerful and unregulated dual-use technology brings unique cybersecurity risks that are sometimes poorly understood.
Conversely, trying to avoid these risks through non-adoption in a highly competitive global marketplace comes with its own set of business and strategic risks. So how can organisations successfully navigate the fast-evolving AI/ML technology landscape and minimise exposure?
In this article, we'll touch on some of the risks associated with AI/ML and highlight resources that can help risk practitioners, as well as those without specialized knowledge or training in this field, better manage AI/ML risks.
The AI/ML Risk Continuum
The spectrum of AI/ML risks can be referred to as the "risk landscape" or the "risk continuum." These terms describe the range of potential risks that an organization may face, spanning from low likelihood and low impact risks to high likelihood and high impact risks. Depending on their origin, risks can be categorized as either internal or external to the organisation.
Internal Risk: Internal risks are factors or events that arise within an organization or entity. These risks are typically under the control or influence of the organization and can be managed through internal measures.
External Risk: External risks, on the other hand, are factors or events that arise from sources outside the control of an organization. These risks may pose a significant challenge to an organization's operations.
It's important for organizations to identify and assess both internal and external risks to develop effective risk management strategies. Internal risks can often be mitigated through internal controls and organizational practices, while external risks may require advance planning and governance to minimize their impact.
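As a minimal illustration of that continuum, the Python sketch below scores a handful of hypothetical AI/ML risks by likelihood and impact and tags each as internal or external. The example risks, the 1-5 scales and the banding thresholds are illustrative assumptions only, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    origin: str        # "internal" or "external"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact rating, as used in many risk matrices
        return self.likelihood * self.impact

def band(score: int) -> str:
    # Illustrative thresholds only; tune to your organisation's risk appetite
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Hypothetical example risks for illustration
risks = [
    Risk("Training data poisoning by insider", "internal", 2, 4),
    Risk("Deepfake-enabled CEO fraud", "external", 3, 5),
    Risk("Model drift degrading accuracy", "internal", 4, 2),
]

for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: {r.origin}, score={r.score} ({band(r.score)})")
```

In practice, the scales and thresholds would be aligned to the organisation's existing risk matrix and risk appetite.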
Social Engineering Attacks
In his written testimony and during the U.S. Senate Judiciary Committee hearing on May 16th, OpenAI co-founder and CEO Sam Altman called for increased government regulation and oversight, expressing concern that while tools like ChatGPT have great potential to benefit society, they can also be used for widespread misinformation and cyber attacks.
AI has been used to generate audio and video deepfakes for years. Sometimes the intent is benign entertainment, as in Chris Ume's viral video showing Hollywood actor Tom Cruise running for president in the 2020 presidential election; other times deepfakes carry political messages with the potential to mass-manipulate the public, which is particularly concerning in light of the upcoming 2024 U.S. presidential election.
Indeed, Verizon highlighted in its recently released 2023 Data Breach Investigations Report (DBIR) that 74% of all breaches include the human element through error, privilege misuse, use of stolen credentials or social engineering. And a quick Google search will show that there is no shortage of AI tools, often free or at low cost, available on the Internet.
AI startup voice.ai develops software that, according to its website, enables you to "analyze, modulate and correct anyone's voice before turning it into a real-time impression for a target voice in real-time" on platforms such as Skype, Zoom, Discord and Google Meet, to name but a few.
During an incident in China last month, a man fell victim to a deepfake scam in which hackers used AI software to create a video call imitating his friend's appearance and voice. The scam convinced the man to transfer a staggering 4.3 million yuan (US$622,000 or A$863,500) intended as his friend's bidding deposit, but the money was diverted to a fraudulent account instead. Similar cases were reported here and here.
Although most of the stolen funds have been recovered, the incident underscores the escalating global trend of AI-enabled scams, prompting discussions on the potential vulnerabilities posed by deepfake technology.
Detecting Deepfakes
If humans cannot tell that they are interacting with AI because of our limited ability to perceive differences between generated and real content, then we need a computer to assist. Ironically, our best defense against AI/ML-driven attacks is to develop AI/ML-driven defenses:
Intel's FakeCatcher [research paper here] looks for biological blood-flow signals in video content that are not preserved in fake content and is reportedly able to detect fake videos in real time with 96% accuracy.
Microsoft’s Video Authenticator Tool detects blending boundaries and grayscale elements that are undetectable to the human eye.
AWS, Facebook, Microsoft, the Partnership on AI's Media Integrity Steering Committee, and academics created the 2020 Kaggle Deepfake Detection Challenge. The winning model, released as open source, detected deepfakes from Facebook's collection about 82% of the time; when the same algorithm was run against previously unseen deepfakes, it detected about 65%. Important research in this field is ongoing.
Machine Learning Attacks
Adversarial attacks on machine learning algorithms can be performed with the goal of stealing information, manipulating the decision-making process, or causing harm. These attacks can be classified into two main categories: training-time attacks and evasion attacks.
Training-time attacks involve manipulating the training data used to develop the machine learning algorithm. For example, an attacker may inject misleading or malicious data into the training set, causing the algorithm to learn incorrect patterns.
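To make this concrete, here is a minimal sketch of a label-flipping poisoning attack against a toy scikit-learn classifier. The synthetic dataset, flip rates and logistic-regression model are arbitrary assumptions chosen only to show how corrupted training labels degrade accuracy on clean test data.

```python
# A minimal label-flipping data-poisoning demo, assuming scikit-learn is available.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_rate: float) -> float:
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    # Attacker flips the labels of a fraction of the training set
    n_flip = int(flip_rate * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    # The defender unknowingly trains on the poisoned labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    # ...and is evaluated on clean, untouched test data
    return model.score(X_test, y_test)

for rate in (0.0, 0.1, 0.3):
    print(f"flip rate {rate:.0%}: test accuracy {accuracy_with_poisoning(rate):.2f}")
```

Even modest flip rates typically produce a visible drop in test accuracy, which is why provenance and validation of training data matter.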
Evasion attacks involve manipulating the input data to the machine learning algorithm at runtime.
For example, adversarial attacks on autonomous vehicles involve modifying the input data in a way that causes the machine learning algorithm to make incorrect decisions. These attacks are particularly concerning as they could lead to accidents and potential loss of human life.
One way that attackers can perform adversarial attacks on autonomous vehicles is by using digital images or videos. By adding specific patterns to the digital image or video, attackers can cause the machine learning algorithm to misclassify objects or even see objects that are not there. An attacker could create a digital image of a stop sign with a particular pattern that causes the machine learning algorithm to misclassify it as a speed limit sign.
Adversarial attacks on autonomous vehicles are particularly concerning because they can be performed from a distance without needing to have physical access to the vehicle. Additionally, these attacks can be challenging to detect, as they may not be visible to human observers.
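One common way to craft such perturbations is the Fast Gradient Sign Method (FGSM). The sketch below applies FGSM to a toy PyTorch model standing in for a traffic-sign classifier; the randomly initialised model, the random input "image", the class id and the epsilon value are all placeholder assumptions, so the prediction may or may not flip here, whereas against a real trained classifier even imperceptible perturbations can reliably change the output.

```python
# Minimal FGSM evasion-attack sketch using PyTorch; the tiny model and random
# "image" stand in for a real traffic-sign classifier and camera frame.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # benign input
true_label = torch.tensor([3])                         # illustrative "stop sign" class id

# Forward pass and loss with respect to the correct label
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: take a small step in the input direction that increases the loss
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction on clean input:      ", model(image).argmax(dim=1).item())
print("prediction on adversarial input:", model(adversarial).argmax(dim=1).item())
```

The key point is that the perturbation is bounded and often invisible to a human observer, which is what makes these attacks hard to spot in the field.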
MITRE ATT&CK
The MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) Matrix is a framework for describing and categorizing adversarial tactics and techniques based on real-world observations of cyber attacks. The framework was originally developed for traditional cyberattacks, but in recent years, MITRE has extended the framework to include machine learning-specific threats.
The resulting MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Matrix is a knowledge base of adversary tactics and techniques targeting machine learning systems, together with mitigations that can be used to defend against them. The matrix consists of multiple tactics, each of which is further divided into multiple techniques.
For example, the matrix covers threats such as training-data poisoning and model evasion, with techniques such as crafting adversarial examples and membership inference. Each technique is described in detail, including its purpose, relevant examples, and potential mitigations.
The MITRE ATLAS Matrix is a valuable resource for organizations that are developing or deploying machine learning models, as it provides a structured way to identify and understand the various threats that can be used against those models. By understanding these threats, organizations can develop better defenses and improve the security and reliability of their machine-learning systems. More information is available here.
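For teams that want to put the matrix to work, one lightweight option is to record, per model, which tactics and techniques apply and which mitigations are planned. The sketch below shows one such record; the model name, tactic, technique and mitigation entries are simplified placeholders inspired by the ATLAS tactic-to-technique structure, not official ATLAS identifiers.

```python
# Hypothetical per-model threat record loosely modelled on the ATLAS
# tactic -> technique structure; all names are illustrative placeholders.
threat_model = {
    "model": "customer-churn-classifier",
    "tactics": {
        "poison training data": {
            "techniques": ["inject mislabelled records into the feature store"],
            "mitigations": ["validate and version training data", "monitor label distributions"],
        },
        "evade the model": {
            "techniques": ["craft adversarial inputs against the scoring API"],
            "mitigations": ["rate-limit queries", "adversarial training"],
        },
        "infer membership": {
            "techniques": ["probe prediction confidence to detect training records"],
            "mitigations": ["limit confidence output", "apply differential privacy"],
        },
    },
}

# Quick summary of mitigation coverage per tactic
for tactic, detail in threat_model["tactics"].items():
    print(f"{tactic}: {len(detail['mitigations'])} candidate mitigation(s)")
```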
NIST's draft whitepaper, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, provides a framework that can be used to formulate policy and standards, and to standardize the way we communicate about vulnerabilities that adversaries may exploit to compromise the privacy (confidentiality), integrity or availability of information systems.
A Risk-Based Approach
While the terms AI and ML are sometimes used synonymously, they represent different fields in computer science and different types of risks depending on context and origin.
Artificial Intelligence refers to the development of computer systems and algorithms that can perform tasks that typically require human intelligence. AI aims to simulate and replicate human cognitive abilities, such as learning, reasoning, problem-solving, perception, and decision-making, using computational methods.
Risks unique to AI:
"AI risks – and benefits can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed." - NIST
Managing the risks associated with AI requires a multi-faceted approach that encompasses technical, ethical, legal, and societal considerations.
Governments around the world are now racing to work out how to regulate AI, and significant work has already been done in this space.
The U.S. established its Blueprint for an AI Bill of Rights; the Australian government released a discussion paper on responsible AI and established the National AI Centre's Responsible AI Network; the EU formed the European AI Alliance; and the Chinese government released its proposal to regulate generative AI.
Machine Learning is a subset of artificial intelligence (AI) that focuses on the development of algorithms and models that enable computers to learn from data and make predictions or take actions without being explicitly programmed. ML systems are designed to automatically learn and improve from experience or training, allowing them to adapt to new situations and handle complex tasks.
Risks unique to ML:
It's worth noting that AI and ML are closely related fields, and many risks can apply to both. However, the risks listed above are generally more closely associated with the unique characteristics and challenges of AI or ML, respectively.
NIST AI Risk Management Framework
The National Institute of Standards and Technology's (NIST) AI Risk Management Framework (AI RMF) is a collaborative effort between the private and public sectors, designed to manage the risks that artificial intelligence (AI) poses to individuals, organizations, and society. The framework is voluntary and aims to enhance the ability to integrate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
The framework aids organizations in better identifying, managing, and mitigating AI risks while creating more trustworthy AI systems. The AI RMF is designed to be adaptable across the AI lifecycle and emphasizes both minimizing negative impacts and maximizing positive outcomes, which enhances the reliability of AI systems and cultivates public trust.
A companion playbook offers further implementation guidelines, and a 'crosswalk' shows how the AI RMF aligns with other international AI guidelines and regulations.
Conclusion
While artificial intelligence and machine learning bring many benefits and advantages, they also introduce risks and vulnerabilities to be aware of. Adversarial attacks on machine learning algorithms can have serious consequences, and protecting against them requires a multi-faceted approach involving both technical and organizational measures. Careful attention to data quality, transparency, and accountability, together with advanced defense techniques, can help mitigate the risks of adversarial attacks on machine learning algorithms.