AI Incidents: Essential Steps for Managing an AI Incident

[Chart: AI incidents by year. Source: OECD AI Incidents Monitor]

"It's harder to repair a damaged reputation than to build a good one in the first place." — Frank Sonnenberg

AI strategy is not solely about the positives and benefits of AI but also about the negatives. It is crucial to ensure accountability is maintained and preparations are made for an AI incident, not if it happens, but when it happens.

(1) Introduction:

AI incidents are increasingly prevalent as AI is integrated into daily life, from chatbots and robotics to the Internet of Things, and into business domains such as healthcare, finance, manufacturing, and logistics, with predictions indicating that this trend will accelerate annually. While AI is employed to gain competitive advantage, reduce costs, and enhance productivity, it also carries unforeseen repercussions. AI-related incidents range from minor issues, such as loan refusals or malfunctioning chatbots, to severe outcomes, including injuries or even fatalities involving autonomous vehicles. This underscores the unintended consequences of technology utilisation and the need for careful management and oversight.

(2) What is an AI incident?

An AI incident refers to any unexpected or unintended outcome arising from the use of artificial intelligence systems, which can negatively impact individuals, groups, or broader societal structures. These incidents can manifest in various forms, including algorithmic biases leading to discriminatory practices, system failures resulting in significant financial losses, accidents involving autonomous vehicles causing injuries or deaths, and violations of ethical guidelines (as mentioned in the introduction above). Both direct outcomes, such as discrimination or bias, and precursor events like data mismanagement or opaque decision-making processes can contribute to these incidents.

It is crucial to distinguish between intentional and unintentional AI misuse. Intentional misuse involves the deliberate exploitation of AI systems for harmful purposes, whereas unintentional errors may stem from flaws in design or implementation, or from unforeseen interactions with users or other systems. Regardless of the cause, the outcome for a company can be the same or worse: an AI system that is hacked and causes an incident is just as damaging as an incident caused by a design flaw, even though the reasons differ.

This points to two distinct concepts: events that have already happened, called “AI incidents,” and events with the potential to develop into one, called “AI hazards.”

(3) Why do AI incidents happen?

As discussed, AI incidents arise due to various reasons linked to the rapid implementation of new technologies, which inherently carry risks. Here, we can summarise the primary causes of AI incidents in seven key points:

  • Design and Development Flaws: AI systems often exhibit inherent flaws due to biases in the algorithms or the training data, leading to unfair or biased outcomes if the data set lacks diversity or comprehensiveness.
  • Lack of Transparency: Many AI systems function as "black boxes," with opaque decision-making processes that are difficult for users and regulators to understand, complicating efforts to identify or correct potential issues.
  • Inadequate Testing and Validation: AI systems are frequently deployed without adequate testing across diverse real-world scenarios, resulting in unexpected behaviours in untested conditions.
  • Complex Integration: The integration of AI into complex environments involving multiple technologies and human interactions can lead to unpredictabilities, particularly when unforeseen interactions with other systems or human inputs occur.
  • Regulatory and Ethical Shortcomings: The rapid advancement and deployment of AI technologies often surpass existing regulatory and ethical frameworks, leading to actions that, while technically feasible, are not legally or ethically constrained.
  • Human Error and Manipulation: AI systems can be prone to errors introduced by human operators, whether unintentionally through poor setup and management or intentionally through manipulation of AI data inputs or operational parameters.
  • External Threats and Adversarial Attacks: AI systems face significant risks from external threats like cyber-attacks and adversarial AI attacks, which include exploiting model vulnerabilities, data poisoning, evasion attacks, and model inversion and extraction, aiming to disrupt system integrity or privacy (a minimal sketch follows this list).
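To make the last point concrete, below is a minimal sketch of a gradient-based evasion attack in the style of the Fast Gradient Sign Method, assuming a PyTorch classifier; the model, inputs, and epsilon value are illustrative assumptions, not details from any specific incident.

    import torch

    def fgsm_evasion(model, x, label, epsilon=0.03):
        """Craft an adversarial input: a small, gradient-aligned perturbation
        that can flip the prediction of a vulnerable classifier."""
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), label)
        loss.backward()
        # Step in the direction that maximally increases the model's loss.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Defences such as adversarial training and input sanitisation target exactly this class of manipulation.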

(4) Categories of AI incidents:

"An event where the development or use of an AI system results in actual harm is termed an 'AI incident', while an event where the development or use of an AI system is potentially harmful is termed an 'AI hazard'. " OECD

An AI incident is an event, circumstance or series of events where the development, use, or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:

(a) injury or harm to the health of a person or groups of people;

(b) disruption of the management and operation of critical infrastructure;

(c) violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour, and intellectual property rights;

(d) harm to property, communities, or the environment.

An AI hazard is an event, circumstance, or series of events where the development, use, or malfunction of one or more AI systems could plausibly lead to an AI incident, i.e., to any of the four harms listed above.


Examples of AI incidents and AI hazards:

  • AI Incident Examples:

(a) Injury or Harm to Health: An autonomous vehicle malfunctions due to a software error, resulting in a collision that causes injuries to pedestrians.

(b) Disruption of Critical Infrastructure: An AI-controlled power grid system fails due to a bug, leading to a widespread power outage affecting hospitals, security systems, and other essential services.

(c) Violations of Human Rights: An AI surveillance system used in public areas incorrectly identifies and tracks individuals, leading to false arrests and breaches of privacy rights.

(d) Harm to Property, Communities, or the Environment: An AI system managing a chemical plant fails to detect abnormal conditions, resulting in a chemical spill that harms the local community and environment.

  • AI Hazard Examples:

(a) Injury or Harm to Health: An AI medical diagnostic system shows a pattern of misdiagnosing a serious condition, which could potentially lead to untreated health issues if not addressed.

(b) Disruption of Critical Infrastructure: An AI system designed to control traffic lights shows intermittent glitches that could potentially cause traffic accidents or gridlocks.

(c) Violations of Human Rights: An AI recruitment tool exhibits bias in screening candidates, potentially leading to discriminatory hiring practices if used unchecked.

(d) Harm to Property, Communities, or the Environment: An AI-driven investment model starts to show signs of erratic behaviour, risking significant financial losses for individuals and institutions, potentially impacting the wider economy.


The challenge is that AI hazards are difficult to detect and may be dismissed as one-off glitches, which makes it likely that they will eventually manifest as incidents.
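One way to keep hazards from being lost as one-offs is to log hazards and incidents in a single structure keyed to the four OECD harm categories, so recurring hazards become visible before they escalate. The sketch below is a minimal illustration; the class and field names are assumptions, not a standard schema.

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class Harm(Enum):  # the four OECD harm categories (a)-(d)
        HEALTH = "injury or harm to health"
        INFRASTRUCTURE = "disruption of critical infrastructure"
        RIGHTS = "violation of human, labour, or IP rights"
        PROPERTY = "harm to property, communities, or the environment"

    @dataclass
    class AIEvent:
        system: str             # e.g. "loan-approval model"
        description: str
        harms: list[Harm]       # realised harms, or plausible ones for a hazard
        observed_on: date
        realised: bool = False  # False = AI hazard, True = AI incident

    def is_incident(event: AIEvent) -> bool:
        """An event counts as an incident only once a harm has materialised."""
        return event.realised and bool(event.harms)

Querying such a log for repeated hazards against the same system is a cheap early-warning signal.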


(5) AI Incidents in Numbers:

As the table and the graph show, there has been an exponential increase in AI incidents, primarily due to increased implementation. This trend is likely to continue year on year until the technology matures and government regulations and standards are established to measure, manage, and prevent incidents.


[Chart: AI incidents since 2014, cumulative up to February 2024. Source: OECD]


[Chart: percentage of incidents attributed to technical failures in a specific domain]


As the data and charts above show, AI incidents are concentrated in certain technologies and use cases, such as chatbots, deepfakes, and voice systems, which can help us focus preventative actions.

(6) Why doesn't standard incident management work for AI incidents?

There are several reasons why standard incident management might not work for AI incidents, so incident management and problem management processes may need adjustments to account for them. The table below lists some of these challenges (the list is not exhaustive):

[Table: why standard incident management doesn't work for AI incidents]


(7) Impact of AI incidents:

AI incidents can have profound impacts on enterprises that go beyond immediate technical challenges and extend into broader organisational, ethical, and societal realms. Below are some of the impacts of AI incidents on an enterprise:

Economic Impact:

  • Direct Costs: Financial losses due to AI incidents can include the costs of incident response, investigation, system downtime, and repair or replacement of the AI system.
  • Indirect Costs: There may be long-term financial repercussions such as loss of market share, increased insurance premiums, and the need for additional training or hiring specialised personnel.
  • Brand and Reputation: Consumer trust can be significantly damaged by AI incidents, especially if they result in privacy breaches or other harms to customers, leading to brand devaluation.
  • Third-Party Relationships: Incidents can strain relationships with vendors and partners and put commercial engagements at risk.

Ethical Impact:

  • Fairness and Bias: AI systems can perpetuate or amplify biases present in their training data, leading to unfair treatment or discrimination against certain groups or individuals.
  • Privacy Concerns: AI systems that process personal data may inadvertently expose sensitive information, raising concerns about individuals’ privacy rights.
  • Autonomy and Manipulation: AI that influences human decision-making, such as recommendation algorithms, can undermine personal autonomy or manipulate behaviours, leading to ethical dilemmas.

Legal Impact:

  • Liability: Determining liability for AI incidents is complex, especially when multiple parties are involved in the development, deployment, and operation of AI systems.
  • Regulatory Compliance: Enterprises must navigate a rapidly evolving landscape of AI regulations and standards, and incidents can highlight gaps in compliance.
  • Intellectual Property: AI can generate outputs that challenge existing intellectual property laws, creating legal uncertainties for enterprises.

Organisational Impact:

  • Governance: Organisations must establish clear AI governance structures to manage risks, ensure ethical use of AI, and respond effectively to incidents.
  • Workforce: AI incidents can necessitate a reevaluation of workforce skills and competencies, highlighting the need for ongoing education and training in AI literacy and ethics.
  • Strategic Decision-Making: High-profile AI incidents can force an enterprise to reconsider its strategic use of AI and potentially scale back or alter its AI initiatives.

Societal Impact:

  • Public Trust: Repeated AI incidents can erode public trust in technology and institutions, leading to resistance against AI deployments.
  • Social Harmony: AI incidents that affect social dynamics, such as those involving surveillance or social scoring, can impact societal norms and expectations.
  • Global Competition: Enterprises in regions with strict AI incident reporting and response requirements may find themselves at a competitive disadvantage to those in less regulated environments.

There is also the impact of AI incidents on an enterprise's ESG (Environmental, Social, and Governance) strategy, which can severely damage the enterprise's reputation and market standing.

Addressing these impacts requires a multi-faceted approach that includes robust risk management, ethical AI frameworks, compliance with legal standards, and strong organisational strategies for AI governance. It also underscores the importance of building AI systems with accountability and transparency at their core, ensuring they are equitable, respect privacy, and are aligned with societal values.

(8) Preparation for an AI incident:

“It wasn't raining when Noah built the ark.” — Howard Ruff

Preparing for an AI incident is critical to managing the repercussions responsibly and minimising impact. Here is an elaboration of the steps essential for robust preparation:


RACI and Team Composition:

Define Roles and Responsibilities: Use a RACI (Responsible, Accountable, Consulted, Informed) matrix to clearly assign roles and responsibilities within the incident response team.

Cross-Functional Team: Ensure the team includes members from various departments, such as IT, legal, PR, HR, ESG, MLOps, third-party management, and operations, to address all facets of an incident; a minimal RACI sketch follows below.
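As a minimal illustration, a RACI matrix can be kept as simple structured data so it is easy to audit and keep current; the roles and activities below are placeholder assumptions, not a recommended assignment.

    # One code per role: R(esponsible), A(ccountable), C(onsulted), I(nformed).
    RACI = {
        "detect_and_triage":   {"IT": "R", "MLOps": "A", "Legal": "I", "PR": "I"},
        "root_cause_analysis": {"MLOps": "R", "IT": "C", "Legal": "I", "PR": "I"},
        "external_comms":      {"PR": "R", "Legal": "A", "IT": "C", "MLOps": "I"},
    }

    def accountable_for(activity: str) -> list[str]:
        """Return the role(s) that must own this activity end to end."""
        return [role for role, code in RACI[activity].items() if code == "A"]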

Scenario Planning and Contingency:

Simulate Various Scenarios: Consider different types of AI incidents, from data breaches to biased outputs, and plan for each scenario.

Develop Contingency Plans: Have backup systems and manual processes in place in case the AI system needs to be taken offline.

Communication Plans:

Internal Communication: Create protocols for informing all stakeholders within the organisation swiftly and clearly.

External Communication: Prepare templates for press releases and customer notifications to ensure prompt and accurate dissemination of information; each AI system implemented may need a different kind of communication.

Social Media Strategy: Plan for active monitoring and engagement on social media to manage public perception and counter misinformation.

Testing and Exercises:

Regular Drills: Conduct drills simulating AI incidents at different times, including after hours and holidays, to ensure readiness.

Continuous Improvement: Use insights from these exercises to refine and improve the incident response plan.

Operational Readiness:

24/7 Response: Since AI incidents can happen at any time, ensure there is always a team ready to respond.

Holiday Schedules: Have a special protocol for incidents that occur during public holidays when staff might be reduced.

Fallback Plans:

Virtual Systems: Establish clear procedures for switching to backup systems if the primary AI systems become compromised (MLOps); a minimal fallback sketch follows below.
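A minimal version of such a fallback is a circuit breaker around the model endpoint: after repeated failures, calls are routed to a backup (a simpler model, a cached answer, or a manual-review queue). The class name and thresholds below are illustrative assumptions.

    class ModelCircuitBreaker:
        """Route predictions to a fallback once the primary model misbehaves."""

        def __init__(self, primary, fallback, max_failures=3):
            self.primary, self.fallback = primary, fallback
            self.failures, self.max_failures = 0, max_failures

        def predict(self, features):
            if self.failures >= self.max_failures:
                return self.fallback(features)  # breaker open: use the backup
            try:
                result = self.primary(features)
                self.failures = 0               # a healthy call resets the count
                return result
            except Exception:
                self.failures += 1              # count the failure, degrade safely
                return self.fallback(features)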

Third-Party Coordination:

Communication Channels: Set up dedicated lines of communication with third-party vendors who manage your AI systems.

Inclusion in Drills: Involve third-party providers in your incident response exercises to ensure seamless coordination.

Technical Alerts and Forewarnings:

Monitoring Systems: Implement advanced monitoring for unusual activity (anomaly detection) in AI systems that might indicate an impending incident; a minimal sketch follows below.
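As a simple starting point, the sketch below flags unusual behaviour with a rolling z-score over a scalar health metric (for example, a daily error rate); the window and threshold are assumptions to be tuned per system.

    from collections import deque
    from statistics import mean, stdev

    class AnomalyMonitor:
        """Alert when the latest metric drifts far from its recent history."""

        def __init__(self, window=30, threshold=3.0):
            self.history = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, value: float) -> bool:
            alert = False
            if len(self.history) >= 10:  # wait for a minimal baseline
                mu, sigma = mean(self.history), stdev(self.history)
                if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                    alert = True         # forewarning: investigate before it escalates
            self.history.append(value)
            return alert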

Customer Feedback:

Have a system in place to quickly evaluate and act on reports from customers regarding AI system performance; a first-pass triage sketch follows below.
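A first pass at such a system can be a simple keyword triage that escalates the most serious reports immediately; the keywords and priority levels below are assumptions to adapt to the systems in scope.

    # Assumed severity keywords; tune these to your own AI systems and domains.
    SEVERITY_KEYWORDS = {
        "critical": ("injury", "outage", "data leak", "crash"),
        "high": ("wrong decision", "bias", "discriminat", "refused"),
    }

    def triage(report_text: str) -> str:
        """Assign a first-pass priority to a customer report about an AI system."""
        text = report_text.lower()
        for level, keywords in SEVERITY_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                return level
        return "normal"  # still queued for human review, just not escalated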

Security Measures:

Ethical Hacking: Employ white-hat hackers (including adversarial AI testing) to regularly test AI systems for vulnerabilities.

Continuous Learning:

Stay informed about the latest AI security practices and implement them.

This preparation enables an enterprise to respond to AI incidents with the necessary speed and efficiency, maintaining control over the situation and upholding the confidence of customers and stakeholders.

(9) AI incident documentation at a global level:

There is an urgent requirement for enhanced documentation and analysis of AI incidents to promote safer and fairer AI technologies. The current limitations of AI incident databases mainly involve recording AI failures without sufficiently exploring their causes or effects. This deficiency may obstruct the ability to derive lessons from previous incidents and to develop best practices aimed at averting similar future issues.

To rectify these shortcomings, several policy recommendations have been suggested to improve the effectiveness of AI incident documentation. These suggestions include establishing a government-managed database for standardised incident reporting, allowing for anonymous submissions to shield contributors from possible adverse consequences, and creating proactive databases that monitor AI systems before any incidents arise. Researchers stress that these enhancements in AI incident documentation are crucial for comprehending the dynamics of AI system failures and for formulating effective strategies to reduce these failures, thus facilitating the progression of safer AI technologies.
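A standardised, anonymisable report could be as simple as a fixed JSON structure; the fields below are assumptions loosely modelled on the OECD harm categories, not an official reporting format.

    import json
    from datetime import datetime, timezone

    def build_incident_report(system, domain, harms, summary, reporter=None):
        """Produce a standardised, optionally anonymous, incident record."""
        return json.dumps({
            "reported_at": datetime.now(timezone.utc).isoformat(),
            "system": system,        # e.g. "chatbot", "autonomous vehicle"
            "domain": domain,        # e.g. "healthcare", "finance"
            "harms": harms,          # subset of the OECD categories (a)-(d)
            "summary": summary,
            "reporter": reporter or "anonymous",  # shields contributors
        }, indent=2)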

(10) Conclusions:

In conclusion, the safe and ethical use of AI is a dynamic and multifaceted challenge that requires ongoing commitment and adaptation. As AI technologies become increasingly integrated into every aspect of society, the need for comprehensive AI risk management and governance becomes more critical. The inherent evolution of AI systems over time, due to changes in the data they process or as a result of continuous learning and adaptation, presents unique challenges. This evolution can lead to model drift, where an AI's performance begins to degrade because the live environment no longer reflects the training environment. Regular updates and retraining, managed through MLOps practices, are necessary to maintain the accuracy and relevance of AI models but can introduce new vulnerabilities.
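Drift of this kind can often be caught early by comparing live input distributions against the training distribution. A minimal sketch, assuming SciPy is available and using a two-sample Kolmogorov-Smirnov test with an assumed 0.05 significance level:

    import numpy as np
    from scipy.stats import ks_2samp

    def detect_drift(train_feature: np.ndarray, live_feature: np.ndarray,
                     alpha: float = 0.05) -> bool:
        """True if live data for one feature no longer resembles training data."""
        result = ks_2samp(train_feature, live_feature)
        return result.pvalue < alpha  # distributions differ: investigate and retrain

Running such a check per feature on a schedule turns silent drift into an explicit AI hazard that can be tracked and remediated.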


The process of constantly updating and retraining AI systems introduces a layer of complexity, where even well-tested updates may contain hidden flaws that only manifest under specific conditions or in live environments. Moreover, changes made by MLOps can introduce AI hazards—potential sources of future incidents—that may not be immediately evident. These hazards could alter decision-making processes in critical areas such as healthcare and finance, potentially leading to significant incidents. The manifestation of such AI incidents can be unpredictable, linked to changes made weeks or months earlier, and may complicate troubleshooting and mitigation efforts due to their delayed and dispersed effects.

Through collaborative efforts among all stakeholders—including policymakers, industry leaders, researchers, and civil society—we can ensure that AI serves the public good, advancing a future where technology respects human rights and promotes global well-being. Addressing these challenges requires not only continuous monitoring and adaptive response strategies but also a commitment to developing robust testing regimes and maintaining regular communication with all stakeholders to understand and effectively manage the evolving landscape of AI technology. Emphasising transparency in the changes made by MLOps and evaluating their potential to introduce AI hazards are crucial steps in preventing these hazards from escalating into full-blown incidents.


Sources:

https://arxiv.org/pdf/2310.06269.pdf

https://incidentdatabase.ai/

https://arxiv.org/pdf/2403.07879.pdf

https://oecd.ai/en/

https://dl.acm.org/doi/pdf/10.1145/3600211.3604700





