Automating Ambiguity: Navigating the Challenges of AI Governance
A Synthesis of the Global Dialogue - November 12, 2024
Demystifying AI's Reality
The dialogue facilitated by Gerard ("Gerry") Salole brought together three distinct yet complementary perspectives on artificial intelligence and its governance: Abeba Birhane providing critical technical and empirical analysis, Abigail Gilbert examining workplace and social implications, and Jodi Starkman offering insights from organisational implementation and human resources. Together, they painted a picture of AI that stands in stark contrast to popular narratives, revealing both its limitations and its profound societal implications.
The Technical Reality: Pattern Recognition, Not Intelligence
Abeba Birhane opened with a crucial demystification of AI, describing it not as true intelligence but as "algorithmic systems that uncover patterns from massive amounts of data through optimisation processes."
She emphasised the limitations of these systems, noting that "generative systems have been dubbed as stochastic parrots, bullshit generators, great pretenders and glorified copy paste machines."
This technical reality, that these systems recognise patterns rather than understand, has profound practical implications.
A striking example came from recent Australian corporate regulatory research, in which AI-generated summaries scored 47% against 81% for human-written summaries, highlighting the gap between AI hype and reality.
The Hidden Costs of AI Infrastructure
The dialogue revealed the often-overlooked physical and environmental impacts of AI development. As Birhane pointed out, "The total amount of energy required to run GPUs and data centres is more than the total amount of energy required to sustain Irish households."
Military Applications and Ethical Concerns
Joseph Elborn shared recent firsthand conversations with defence contractors. His observation captured the ethical stakes: "If something happens like, I don't know, they're targeting a car, and then suddenly it turns out there's kids in it, the automation won't stop the kill decision." His account revealed how AI is already operating in combat zones, with systems that switch from human control to automation when encountering jamming.
Philosophical and Anthropological Critique
Arturo Escobar, Professor Emeritus of Anthropology and Political Ecology at UNC Chapel Hill, enriched the discussion by proposing an "ontological critique" of AI. As he explained, "AI extends and deepens through its invasion of most aspects of everyday life... the Western capitalist, patriarchal ontology that is anthropocentric."
This critique identified how AI reinforces problematic aspects of Western capitalist ontology through its anthropocentric foundations, the predominance of white male developers, and its implicit mind-body separation.
Embedded Biases and Societal Implications
A central theme was how AI systems encode and amplify existing societal biases. As Birhane noted, "A lot of the data tends to represent identities, concepts, geographies and so on, in a very negative or in a cliched, stereotypical way." Systems trained on such data then reproduce and amplify these embedded biases.
The Reality of Workplace Implementation
Jodi Starkman emphasised that despite the media narrative about rapid AI transformation, organisations are still in the early stages of adoption. As she noted, "There is a lot of noise about the speed of change. Which, generally, is true. But when it comes to generative AI, even leading tech companies are in the early stages of adoption and are doing a LOT of experimenting."
This early stage presents both opportunities and challenges.
The Duality of AI Impact
Jodi Starkman characterised the potential workplace impacts of AI as "A Tale of Two Cities".
She stressed that it is important to remember that "this is not either/or. It is both/and. And we have choices; decisions that policymakers and business leaders make around AI and its implementation will play a significant role in shaping its future impact on workers."
Learning from Past Technology Implementations
A crucial insight from Jodi Starkman's organisational experience is that many of today's AI implementation challenges mirror previous technological transitions. As a result, many organisations have, in fact, already developed valuable knowledge about building trust and involving workers as stakeholders in technology design, selection, and implementation.
However, as she pointed out, "We seem to be very resistant to applying what we learn. Or at least to making it stick." Organisations have historically struggled to retain lessons learned about worker involvement, trust-building, and inclusive implementation processes. This pattern of forgetting or failing to apply past lessons represents a significant risk in the current AI transformation.
The Transformation of Work
Abigail Gilbert challenged simplistic narratives about AI and employment, noting that "those who have some kind of say or status within any regime will protect their own interests when they feel under a certain type of threat."
She identified several key "automation archetypes" that suggest AI is fundamentally restructuring work relationships and power dynamics rather than simply eliminating jobs.
The Future of Work and Economic Justice
The dialogue engaged deeply with questions of automation and economic justice. Abigail Gilbert, drawing on her research with Max Casey, emphasised that "you won't get equality as a result of meritocracy", a concern that also framed her response to questions about Universal Basic Income.
Democracy and Power Dynamics
The dialogue revealed deep connections between AI deployment and democratic processes. As Abigail Gilbert observed, "Algorithms centralise control and power. This is happening at a societal level, but it's also happening within organisations at the individual level."
Pathways Forward: Governance and Accountability
The conversation identified several crucial areas for action. Abigail Gilbert explained how "the sandbox allows us to get under the bonnet of some of the data sharing agreements... and look at what's going on to some extent in the value chain," suggesting practical approaches to governance.
The Role of Human Choice and Agency
A central theme emerging was the critical role of human agency in shaping AI's impact. Jodi Starkman returned to her earlier point that even leading tech companies remain in the early, experimental stages of generative AI adoption, which leaves meaningful room for choice in how the technology is implemented.
The pandemic experience demonstrated our capacity for rapid, positive change when necessary. "If there were any silver linings from the tragedy of the pandemic, perhaps one was the digital pivot that so many companies adopted in just a matter of days and weeks after struggling to do so for years." (Jodi Starkman) However, the tendency to revert to old patterns, particularly visible in current "return to office" mandates, "seems to be more about outdated mindsets than informed decision making."
This moment of AI implementation presents both "a huge opportunity for us to get better at that. And a huge risk that we won't."
A Critical Moment for Action
The dialogue concluded with Abeba Birhane's stark observation that positive outcomes require "untangling AI from the current capitalist business model." This pointed to a fundamental challenge: how to harness AI's potential while addressing its structural implications for human dignity, environmental sustainability, and democratic governance.
Joe Elborn challenged the speakers to identify potential positive applications of AI for democracy and civic engagement, pushing the conversation beyond critique toward constructive possibilities. Jodi Starkman responded with examples of AI tools being developed to facilitate group deliberation and find common ground, suggesting potential pathways for technology to enhance rather than undermine democratic processes.
This interplay between critique and possibility, grounded in both theoretical understanding and practical experience, characterised the richness of the dialogue and pointed toward the complex work ahead in shaping AI's role in society.
Additional Resources & References
Links from the chat
Dr. Joy Buolamwini (MIT), author of Unmasking AI: My Mission to Protect What Is Human in a World of Machines; see also the documentary film Coded Bias. She is the founder of the Algorithmic Justice League, whose mission is a cultural movement towards equitable and accountable AI.
Two Charter research project “Playbooks” on AI in the workplace, downloadable from the IRC4HR website.
Catalyzing Safe and Equitable Use of Artificial Intelligence in Home Health Care Work: an in-progress IRC4HR research project on the use of AI with home health care workers, reflecting on accountability, governance, use cases, concerns, and multi-stakeholder perspectives (workers, agencies, unions, the medical community, patients, families). Report due mid-2025.