Pre-Mortem of an A.I. Scandal(s): Anticipation of Future Hazards
Sean Lyons

1. The Post-Mortem of a Corporate Scandal

In recent times corporate scandals have become an all-too-frequent, almost everyday occurrence. Their causes are also far too predictable (e.g. Carillion: A Case Study of a Corporate Defense Fiasco).

Not surprisingly it is common for post-mortem investigations into the causes of corporate scandals to typically identify deficiencies and weaknesses in the corporate defense programs of the organization(s) in question ... Typically, examples of these issues include failures in corporate governance, poor risk management, compliance failures, unreliable intelligence, inadequate security, insufficient resilience, ineffective controls, and failures by assurance providers. The existence of more than one of these issues in any given organization tends to exacerbate the initial problem and can eventually result in exponential collateral damage to stakeholder value. When these types of issues become systemic within an industry or business sector, it will very often result in some form of a broader crisis within the industry or sector, and, in some cases, this will spill over into the broader economy. - Corporate Defense and the Value Preservation Imperative: Bulletproof Your Corporate Defense Program

A forensic post-mortem investigation into the cause of any corporate scandal or failure will identify a number (or perhaps all) of these deficiencies and weaknesses.

2. Is A.I. a Scandal(s) in the Making?

Artificial Intelligence (A.I.) technology as it evolves (e.g. Narrow A.I., General A.I., and Interactive A.I.) is certain to contribute to the creation, preservation, and destruction of stakeholder value in the coming weeks, months, and years. In terms of value creation, digital and smart technologies are already pervasive, and A.I. in its many forms (e.g. machine learning, natural language processing, and computer vision) has the potential to leverage this in order to add significant value, to make enormous contributions, and to create long-term positive impacts for society, the economy, and the environment. It has the potential to solve complex problems and create opportunities that benefit all human beings and their ecosystems. Unfortunately, A.I. systems also have the potential for tremendous value destruction, and to cause an unimaginable level of harm and damage to human ecosystems (business, society, and planet).

Given the [value preservation] deficiencies and weaknesses described above in relation to everyday corporate scandals, one does not have to be a rocket scientist at NASA to predict that these same issues are also likely to arise in relation to A.I. technology. It is therefore incumbent upon our leaders to consider the potentially serious impact, consequences, and repercussions which could emerge in relation to the development, deployment, use, and management of A.I. systems.

3. Anticipation of Future A.I. Hazards

An A.I. defense cycle can be viewed in terms of the corporate defense cycle, with the same unifying defense objectives representing the four cornerstones of a robust A.I. defense program.

Anticipation refers to the timely identification and assessment of existing threats and vulnerabilities and the prediction of future threats and vulnerabilities. - Sean Lyons, An Executive Guide to Corporate Defence Management (2006)

Prudence and common sense suggest that it is both logical and rational to anticipate the following deficiencies and weaknesses in relation to A.I. technology, and to fully consider their potential for value destruction.

Failures in A.I. Governance

The current lack of a single comprehensive global A.I. governance framework has already led to inconsistencies and differences in approach across various jurisdictions and regions. This is likely to result in conflicts between stakeholder groups with different priorities. The lack of a unified approach to A.I. governance can result in a lack of transparency, responsibility, and accountability, which raises serious concerns about the social, moral, and ethical development and use of A.I. technologies. The ever-diminishing degree of human oversight resulting from the development of autonomous A.I. systems simply reinforces these growing concerns. Prevailing planet governance issues are also likely to negatively impact A.I. governance. (Example of A.I. Governance framework: Singapore Model Artificial Intelligence Governance Framework)

Poor A.I. Risk Management

Currently there appears to be a fragmented global approach to A.I. risk management. Some suggest that this approach overemphasizes risk detection and reaction while underemphasizing risk anticipation and prevention. It can also tend to focus on addressing very specific risks (e.g. bias, privacy, and security) without giving due consideration to the broader systemic implications of A.I. development and use. Such a narrow focus fails to address the broader societal and economic impacts of A.I. and overlooks the interconnectedness of A.I. risks and their potential long-term consequences. Such short-sightedness is potentially very dangerous, as it fails to keep pace with the potential damage of emerging risks while also failing to prepare for already-flagged longer-term risks such as those posed by superintelligence or autonomous weapons systems. (Example of A.I. Risk Management framework: NIST Artificial Intelligence Risk Management Framework)
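
To make the anticipation-versus-reaction point concrete, the following minimal sketch (in Python) shows the kind of risk register that ranks long-horizon, systemic risks alongside near-term ones using a simple likelihood x impact rating. The risk names and scores are hypothetical placeholders for illustration, not actual assessments.

```python
# Hypothetical A.I. risk register: each entry is scored on 1-5 scales
# for likelihood and impact, and tagged with its time horizon, so that
# long-term systemic risks are weighed alongside near-term ones.
risks = [
    # (risk, likelihood, impact, horizon) -- illustrative values only
    ("Algorithmic bias",               4, 3, "near-term"),
    ("Privacy / data breach",          4, 4, "near-term"),
    ("Critical infrastructure attack", 2, 5, "near-term"),
    ("Autonomous weapons misuse",      2, 5, "long-term"),
    ("Superintelligence misalignment", 1, 5, "long-term"),
]

# Rank by simple exposure (likelihood x impact), highest first, so the
# register surfaces long-term systemic risks instead of deferring them.
for risk, likelihood, impact, horizon in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{risk:<34} exposure={likelihood * impact:>2}  ({horizon})")
```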

A.I. Compliance Failures

A.I. compliance consists of a patchwork of A.I. laws, regulations, standards, and guidelines at national and international levels. This lack of harmonization means that laws and regulations are often misaligned and inconsistent, making them confusing and ineffective: difficult for stakeholders to comply with, and difficult for regulators to supervise and enforce (especially across borders). This lack of clear regulation and of appropriate enforcement mechanisms makes it difficult to hold actors to account for their actions, and can encourage non-compliance, violations, and serious misconduct, leading to the potentially unsafe, unethical, and illegal use of A.I. technology. The existence of algorithmic bias can result in a lack of fairness and lead to an exacerbation of existing inequality, prejudice, and discrimination. A major concern is that the current voluntary nature of A.I. compliance and an over-reliance on self-regulation are not sufficient to address these potentially systemic issues. (Example of A.I. Legislation: EU Artificial Intelligence Act)

Unreliable A.I. Intelligence

Unreliable intelligence can ultimately result in poor decision making in its many forms. Many A.I. algorithms are opaque in nature and are often referred to as a "black box", which hinders the clarity and transparency of the development and deployment of A.I. systems. Their complexity makes it difficult to interpret or fully comprehend their algorithmic decision-making and other outputs. It is therefore difficult for stakeholders to understand and mitigate their limitations, potential risks, and biases. This can further contribute to accountability gaps and make it difficult to hold A.I. developers and users accountable for their actions. A.I. development can also lack the necessary stakeholder engagement and public participation, which can mean a lack of the diversity of thought needed for alignment with social, moral, and ethical values. This lack of transparency and understanding can expose the A.I. industry to the threat of clandestine influence. (Example of A.I. Intelligence guidance: UK ICO Explaining Decisions Made with Artificial Intelligence)

Inadequate A.I. Security

The global approach to A.I. security also appears to be somewhat disjointed. Data is one of the primary resources of the A.I. industry, and A.I. systems collect and process vast amounts of it. A.I. technologies can be vulnerable to cyberattacks which can compromise assets (including sensitive data), disrupt operations, or even cause physical harm. If A.I. systems are not properly protected and secured, they could be infiltrated or hacked, resulting in unauthorized access to data that could be used for malicious purposes such as data manipulation, identity theft, or fraud. This raises concerns about data breaches, data security, and personal privacy. Indeed, A.I.-powered malware could help malicious actors to evade existing cyber defenses, thereby enabling them to inflict significant damage on supply chains and critical infrastructure (e.g. damage to power grids and disruption of financial systems). (Example of A.I. Security framework: ISO/IEC - Cybersecurity - Artificial Intelligence)

Insufficient A.I. Resilience

The global approach to A.I. resilience is naturally impacted by the chaotic approach to some of the other areas noted above. Where A.I. systems are vulnerable to cyberattacks, hackers can disrupt operations, leading to possible unforeseen circumstances which are difficult (if not impossible) to prepare for. This can impact the reliability and robustness of an A.I. system: its ability to perform as intended in real-world conditions and to withstand, rebound, or recover from a shock, disturbance, or disruption. A.I. systems can of course also make errors, incorrect diagnoses, faulty predictions, or other mistakes. Where an A.I. system malfunctions or fails for whatever reason, this can lead to unintended consequences or safety hazards that could negatively impact individuals, society, and the environment. This may be of particular concern in critical domains such as power, transportation, health, and finance. (Example of A.I. Resilience guidance: UK CETaS Strengthening Resilience to AI Risk)

Ineffective A.I. Controls

The global approach to A.I. controls also seems to be somewhat disorganized. Once A.I. systems are deployed, it can be difficult to change them, which makes it difficult to adapt to new circumstances or to correct mistakes. There are therefore concerns that an overemphasis on automated technical controls (such as bias detection and mitigation), with insufficient attention given to the importance of human control, can create a false sense of security and mask the need for human control mechanisms. As A.I. systems become more sophisticated, there is a real risk that humans will lose control over A.I., leading to situations where A.I. may make decisions with unintended consequences that significantly, and potentially harmfully, impact individuals' lives. Increasing the autonomy of A.I. systems without the appropriate safeguards and controls in place raises valid concerns about issues such as ethics, responsibility, accountability, and potential misuse. (No similar example of guidance specifically focused on A.I. Controls)
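
As a concrete illustration of one such automated technical control, the sketch below (in Python) applies a simple demographic parity check, in the spirit of the "four-fifths rule" used in disparate-impact analysis, to hypothetical decision data. The data and threshold are assumptions for illustration only, and the point stands: a flag from such a check should trigger human review, not replace it.

```python
def selection_rate(decisions):
    """Fraction of favourable (1) decisions within a group."""
    return sum(decisions) / len(decisions)

# Hypothetical binary decisions (1 = favourable outcome) produced by an
# A.I. system, split by a protected characteristic.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate = 0.750
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # selection rate = 0.375

# Disparate impact ratio: the disadvantaged group's selection rate
# relative to the advantaged group's.
ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")

# The four-fifths rule treats a ratio below 0.8 as a warning sign; the
# flag is a prompt for human investigation, not an automated verdict.
if ratio < 0.8:
    print("Potential bias flagged - escalate for human review.")
```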

Failures by A.I. Assurance Providers

There is currently no single, universally accepted framework or methodology for A.I. assurance. Different organizations and countries have varying approaches, leading to potential inconsistencies. The opaque nature and increasing complexity of A.I. can make it difficult to competently assess A.I. systems, creating gaps in assurance practices and hindering the provision of comprehensive assurance. The expertise required for effective A.I. assurance is often a scarce commodity and may be unevenly distributed, which in turn can create accessibility challenges for disadvantaged areas and groups. The lack of transparency, ethical concerns, and the lack of comprehensive A.I. assurance can lead to an erosion of public trust and confidence in A.I. technologies, which can hinder their adoption and create resistance to realizing their potential benefits. Given all of the above, the provision of A.I. assurance can be a potential minefield for assurance providers. (Example of A.I. Assurance framework: UK DSIT Introduction to AI Assurance)

4. A.I. Value Destruction and Collateral Damage

Should any assurance provider worth their salt benchmark these eight critical A.I. defense components against a simple 5-step maturity model (e.g. 1. Dispersed, 2. Centralized, 3. Global (Enterprise-wide), 4. Integrated, 5. Optimized), then each one of them individually (and collectively) would currently be rated as being only at step 1, Dispersed. This level of immaturity in itself represents a recipe for value destruction. - Sean Lyons


These eight critical A.I. defense components are interconnected, intertwined, and interdependent, as each individually impacts on, and is impacted by, each of the other components. They represent links in a chain, and the chain is only as strong as its weakest link (see the sketch below). Collectively they can provide an essential cross-referencing system of checks and balances which helps to preserve A.I. stakeholder value. Therefore, the existence of deficiencies and weaknesses in more than one of these critical components can collectively result in exponential collateral damage to stakeholder value.
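
A minimal sketch (in Python) of the benchmarking exercise described above: the eight defense components are scored against the 5-step maturity model, and the collective rating is taken as the weakest link in the chain. The scores simply encode the current step-1 assessment quoted above and are otherwise placeholders.

```python
MATURITY_STEPS = {1: "Dispersed", 2: "Centralized",
                  3: "Global (Enterprise-wide)", 4: "Integrated",
                  5: "Optimized"}

# Current ratings per the assessment above: each of the eight critical
# A.I. defense components sits at step 1 of the maturity model.
components = {
    "Governance": 1, "Risk Management": 1, "Compliance": 1,
    "Intelligence": 1, "Security": 1, "Resilience": 1,
    "Controls": 1, "Assurance": 1,
}

# The chain is only as strong as its weakest link, so the collective
# rating is the minimum maturity across all eight components.
weakest = min(components, key=components.get)
rating = components[weakest]
print(f"Weakest link: {weakest} -> step {rating} ({MATURITY_STEPS[rating]})")
```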

Examples of Potential Value Destruction

Misuse and Abuse: A.I. technologies can be misused and abused for all sorts of malicious purposes, with potentially catastrophic results. They can be used for deception, to shape perceptions, or to spread propaganda. A.I.-generated deepfake videos can be used to spread false or misleading information, or to damage reputations. Other sophisticated techniques could be used to spread misinformation and in targeted disinformation campaigns to manipulate public opinion, undermine democratic processes (elections and referendums), and destabilize social cohesion (polarization and radicalization).

Privacy, Criminality, and Discrimination: A.I. powered surveillance such as facial recognition can be intentionally used to invade people’s privacy. A.I. technologies can help in the exploitation of vulnerabilities in computer systems and can be applied for criminal purposes such as committing fraud or the theft of sensitive data (including intellectual property). They can be used for harmful purposes such as cyberattacks and to disrupt or damage critical infrastructure. In areas such as healthcare, employment, and the criminal justice system A.I. bias can lead to discrimination against certain groups of people based on their race, gender, or other protected characteristics. It could even create new forms of discrimination potentially undermining democratic freedoms and human rights.

Job Displacement and Societal Impact: As A.I. technologies (e.g. autonomous vehicles, drones, and robotics) become more sophisticated, they are increasingly capable of performing tasks that were once thought to require human workers. A.I.-powered automation of tasks raises concerns about mass job displacement (typically affecting the most vulnerable) and the potential for widespread unemployment, which could impact labour markets and social welfare, potentially leading to business upheaval, industry collapse, economic disruption, and social unrest. A.I. also has the potential to amplify and exacerbate existing power imbalances, economic disparities, and social inequalities.

Autonomous Weapons: A.I.-controlled weapons systems could make decisions about when and whom to target, or potentially make life-and-death decisions (and kill indiscriminately) without human intervention, raising concerns about ethical implications and potential unintended consequences. Indeed, the development and proliferation of autonomous weapons (including WMDs) and the competition among nations to deploy weapons with advanced A.I. capabilities raise fears of a new arms race and an increased risk of nuclear war. This potential for misuse and possible unintended catastrophic consequences could pose a threat to international security, global safety, and ultimately humanity itself.

The Singularity: The ultimate threat potentially posed by the A.I. singularity or superintelligence is a complex and uncertain issue which may (or may not) still be on the distant horizon. The potential for A.I. to surpass human control and pose existential threats to humanity cannot and should not be dismissed and it is imperative that the appropriate safeguards and controls are in place to address this existential risk. The very possibility that A.I. could play a role in human extinction should at a minimum raise philosophical questions about our ongoing relationship with A.I. technology and our required duty of care. Existential threats cannot be ignored and addressing them cannot be deferred or postponed.

5. A.I. Value Preservation Imperative

Under the prevailing circumstances, some or all of the above A.I.-related hazards carry both an unacceptably high probability and an unacceptably high impact, with potentially catastrophic outcomes for a large range of stakeholder groups. Serious stewardship, oversight, and regulation concerns have already been publicly expressed by A.I. experts, researchers, and backers. This is an urgent issue which requires urgent action. It is one matter where a proactive approach is demanded, as we simply cannot accept a reactive approach to this challenge. In such a situation "prevention is much better than cure", and it is certainly not a time to "shut the barn door after the horse has bolted". - Sean Lyons


Addressing this matter is by no means an easy task, but it is one which needs to be viewed as a mandatory obligation. Like many other challenges facing human beings on Planet Earth, this is one that will require global engagement and a global solidarity of purpose.

The challenges facing Planet Earth range from complicated, to complex, to wicked, and the solutions to these challenges will require improved international unity, collaboration, and cooperation among dispersed and wide-ranging stakeholder groups in order to help ensure that our collective action is strategically aligned, tactically integrated, and operating in unison toward a common global purpose. - Preservation of Planet Earth (P.O.P.E.)

A.I. value preservation requires a harmonization of global, international, and national frameworks, regulations, and practices to help ensure consistent implementation and the avoidance of fragmentation. This means greater coordination, knowledge sharing, and wider adoption in order to help ensure a robust and equitable global A.I. defense program.

6. A.I. Defense and Value Preservation Due Diligence

A.I. Defense is Still in its Early Stages

A.I. is a rapidly developing field with a complex and evolving landscape. As such, the concept of A.I. defense is still in its infancy. There is currently no single, unified, globally agreed-upon approach to collectively defending A.I. stakeholder value, as different countries and regions have varying frameworks, regulations, and priorities. As a result, the genuine efforts being made in all of the above areas appear to be organic in nature rather than systematic or by design.

All that being said, there has been a certain degree of progress in this area, with some promising developments in terms of frameworks, regulations, and methodologies in practically all of the above areas (see guidance examples noted above) and also of a more general nature. (Examples of General A.I. guidance: OECD Recommendation of the Council on Artificial Intelligence and ISO/IEC - Information technology - Artificial intelligence - Management System)

A.I. Value Preservation Due Diligence


Our global leaders have a duty of care to safeguard against the potential damage of this impending A.I. value destruction. This duty of care represents their social, ethical, and moral obligation to their stakeholders, while at the same time recognizing the enormity of this challenge. It will require a much higher, more robust, and more mature level of A.I. value preservation due diligence than is currently on display. This needs to begin with a much greater appreciation and understanding of the nature of A.I. value dynamics (creation, preservation, and destruction) in order to help foster responsible innovation. Sooner rather than later, due diligence needs to adopt a holistic, multi-dimensional, and systematic vision involving an integrated, inter-disciplinary, and cross-functional approach to A.I. value preservation. Such an approach can help contribute to a more peaceful and secure world by creating a more trustworthy, responsible, and beneficial A.I. ecosystem for all.

This pre-mortem simply cannot be allowed to develop into a post-mortem!


NOTE: A project to help futureproof the A.I. ecosystem by addressing these value preservation challenges is currently in progress. This project aims to develop an overarching program which is intended to help preserve, protect, and defend A.I. stakeholder value at a global, international, national, and organizational level. Interested parties can contact me directly for further details.

