Understanding the Potential Harms of AI Systems: Part II
Photo by D S from Pixabay


Earlier this month, I started covering the potential harms of AI systems. The previous newsletter explored the harm injection points more generally and specifically addressed harms to individuals. In this edition, I will consider the potential harms to organisations, institutions, society and ecosystems.

This article does not attempt to provide a comprehensive taxonomy of relevant harms and does not go into many technical details that are already very well covered by existing frameworks, such as those from ISO and NIST. Rather, it is a broader reflection on the topic, perhaps from a slightly different angle.

I. Harms to Organisations

AI systems may be extremely helpful in the context of organisations. At the same time, these systems’ usual characteristics — reliance on data, probabilistic nature, architectural opacity, and ability to operate autonomously — predetermine certain specific failure scenarios.

In particular, poor data quality — be it the training data or the data processed during subsequent deployment — will very likely result in the system providing sub-par or unintended results.

The probabilistic nature of the system means that there will always be a margin of error in the outputs, and the system will necessarily fail at certain edge cases not envisaged during model training. Without meaningful human control and oversight over the system, these harms will propagate unchecked.

The architectural opacity of AI systems means that the deploying organisation’s officers will not necessarily understand how these systems function or how to meaningfully address issues that arise.

And the ability to operate autonomously means that these systems may cause more harm in less time, sometimes harm that cannot be undone.

Some of the harms resulting from AI systems deployment may affect the organisation itself, and some will affect people and entities outside the organisation. In the case of a commercial enterprise, these may include, for example, clients, business partners, or third parties that accrue the negative externalities of the organisation’s poor AI governance.

When harms spread outside the organisation, the organisation itself will likely suffer material, economic and reputational harm in turn.

Material and economic harm may take the form of unmet business objectives, defective deliverables and administrative fines for regulatory violations, as well as lost business opportunities, penalties and claims for damages where contractual or statutory obligations are breached.

Reputational harm may manifest in negative experiences shared by affected people, particularly on social media and in the traditional press, and in a loss of confidence among clients, business partners and other stakeholders.

Cultural harm is another concern: inappropriate and unskilful adoption of AI can backfire on the organisation’s culture. Employees or collaborators may feel threatened by the automation of their tasks, or they may see that the automation of certain tasks is inappropriate and/or brings about unintended negative consequences, internally or externally. As a result, such adoption may lead to internal resistance and decreased morale.

Like any innovation, the adoption of AI systems by organisations is not intrinsically valuable, but is worthwhile only to the extent it can bring about positive internal and broader societal effects without significant negative effects and externalities.

II. Harms to Ecosystems

AI systems are, obviously, versatile products. As such, the specific harms they may cause will differ by sector and area of application. Yet, there are likely to be some commonalities.

In particular, the deployment of AI systems is usually predicated on the promise of optimising certain functions, leading to commercial and operational efficiency, including cost efficiency and economic gains.

However, when this optimisation is not mediated by wider sustainability concerns and these systems are deployed at scale, it may, especially in the long term, cause negative environmental outcomes, overproduction, overconsumption, increased waste and premature resource depletion.

Negative societal effects may also manifest to the extent AI-driven optimisation affects the supply chain: the impairment of traditional and possibly more sustainable production methods, race-to-the-bottom labour or contractor practices, labour displacement, and cascading disruptions in interconnected systems that grow dependent on a limited number of leading AI model providers enjoying monopoly or oligopoly status.

Other concerns stem from the inherent properties of AI systems, or from their design and development, such as the high energy and water consumption of the data centres used to train and deploy the top-of-the-market systems, which are increasingly predominant.

This is not even to mention more trivial issues like the effects of AI systems malfunctioning, that is, when they do not operate as intended. Needless to say, these concerns must nevertheless be considered in any real-world evaluation and risk assessment for a concrete AI-based product, just as for any non-AI-based one.

III. Harms to Democracy and Public Trust in Institutions

The proliferation of AI systems poses several risks to democracy and public trust in institutions.

1) Societal Polarisation. AI-driven social media have arguably already increased societal polarisation by creating filter bubbles and echo chambers. Algorithms may, by design or by negligence, recommend only content that reinforces users’ pre-existing beliefs and biases, and in the worst case can also lead to the consumption of increasingly extreme content. This fragmentation may undermine reasoned debate, the ability to compromise and social cohesion.

2) Mass Surveillance and Nudging the Population. AI systems, such as those used for real-time biometric identification, may enable mass surveillance. Supplemented by powerful predictive analytics, such systems may enable governments to identify and exploit citizens’ personal preferences, beliefs and cognitive biases when designing policies and public messaging, to further normalise surveillance, and to nudge people’s behaviour towards policymakers’ preferences. This could undermine people’s autonomy and democratic deliberation.

3) “Hypernudging” Policymakers. On the other hand, the ability of AI systems to uncover previously unnoticed correlations in large datasets raises concerns about how this could affect policy choices when policymakers use AI systems for decision-making support. By highlighting statistical relationships between data points that would normally not be observable, AI developers can carefully sculpt the informational context in which policy decisions occur. This allows them to channel attention and decision-making towards their preferred outcomes, effectively enabling sophisticated manipulation of policymakers. And even if individual policymakers retain autonomy, the informational context shaped by the AI system guides their focus in desired directions.

4) Totalitarianism On Demand. Extensive automation of state governance makes it more feasible to envisage a sudden transition to “totalitarianism on demand”. Traditional democratic governance mechanisms operate based on multi-institutional checks and balances so that we avoid concentrating too many competencies and extensive decision-making authority within any one particular institution. In a dystopian scenario where these traditional mechanisms are replaced by centralized and opaque automated decision-making, a malicious actor could potentially update the system overnight to enable authoritarian control.

5) “Democratising” Disinformation. Affordable deepfake technology and AI-enabled disinformation generation could be used to influence elections and undermine trust in democratic institutions. The ability to use generally available AI systems to generate convincing fake videos/audio of important events or public figures, combined with microtargeted disinformation, poses risks of manipulating voters. This could destabilise democracy.

IV. Measures to Consider

To address the harms that the proliferation of AI systems may bring about, policy leaders should consider a thoughtful, multi-stakeholder approach to policymaking that balances innovation with a commitment to democratic governance principles, realising the benefits of AI while mitigating these risks.

Sectoral and transversal regulation, empowered governmental oversight and market surveillance bodies, mandated impact assessments and independent audits for high-stakes AI uses and continued public engagement can help align the development and deployment of AI systems with democratic values.

At the same time, corporate leaders should consider the following measures:

1) Make AI safety, transparency, and ethics the central pillars of the organisation’s AI strategy,

2) Establish and continually improve organisational AI and data governance policies and processes,

3) Carry out AI system impact assessments, both before and after deployment,

4) Maintain full accountability and disciplinary responsibility of corporate officers for negative outcomes,

5) Maintain in-house capabilities, and attract third-party capabilities, for monitoring, auditing and overseeing deployed AI systems,

6) Foster a culture of ethical AI use through training and incentives,

7) Collaborate with peers, clients, business partners, civil society, regulators and other stakeholders to jointly perfect AI and data governance.
