Narratives, Pathways, and Policy: The AI Mosaic

The emergence of Artificial Intelligence (AI) as a device in the discourse, and increasingly the practice, of societal transformation is undeniable. Progress in AI has been swift, taking many outside the field by surprise, driven by enablers such as vast amounts of training data, increasingly affordable and prevalent computing power and connectivity, and advances in deep learning systems.

As the rhetoric goes, there is an opportunity to profoundly influence sectors critical to human progress, such as healthcare, education, and agriculture. However, simultaneously, we face significant risks and challenges that call for robust, nuanced, and foresighted deliberation and governance. Balancing these dualities — capitalising on benefits while mitigating risks — is essential to ensuring that AI's evolution aligns with democratic values, human rights, and societal well-being.

The geopolitical arena is a testament to a spectrum of regulatory philosophies, with powerhouses like the US, the EU, and China forging distinct paths in AI oversight[i]. These variances are not merely administrative; they reflect deep-rooted ideological, cultural, and economic divides, challenging the feasibility of a unified global AI doctrine. The concentration of AI development in certain economies poses risks of global imbalances, necessitating international cooperation to democratise AI benefits and manage cross-border risks. It also risks a race to the bottom, where some actors will exploit regulatory gaps in some markets, offsetting the benefits of well-designed guardrails elsewhere. Indeed, the heavy emphasis on the "arms race" posture, and on ‘AI supremacy’ as the basis for economic and geopolitical power in the 21st century, provides support for framing regulation and innovation as opposing interests. Industry interests have cynically promoted these contests as a basis for advocating against further regulation and for efforts to continually monopolise and monetise data (exemplified by Mark Zuckerberg's claim, made in reference to enabling innovation, that consent requirements for facial recognition in the USA create the risk of “falling behind Chinese competitors”[ii]).

Despite the existence of multiple international forums attempting to standardise AI policies within institutions such as the OECD, the G20, the G7, the Council of Europe, and the United Nations, significant challenges remain due to divergent national interests and values. This landscape of divergence not only underscores the complexity of harmonising AI governance but also invites urgent introspection into the foundational principles that guide policy and ethics. It is imperative to dissect the core assumptions that inform these policies, assumptions that often operate silently within the frameworks yet hold monumental sway over how we integrate new capabilities into society.

AI's Dual(ling) Narratives

The discourse surrounding AI often oscillates between two extremes. The first is an overly optimistic, techno-centric narrative that celebrates AI as a universal solution to societal challenges. This viewpoint, however, tends to gloss over the growing inequalities of our times, ignoring how disparities in income, race, gender, and geography can significantly influence who benefits from AI and who is sidelined. By failing to address the unequal nature of society, this optimistic perspective risks perpetuating and even exacerbating existing injustices, creating a technology-led future that favours the privileged while leaving marginalised communities further behind.

On the other hand, there exists a dystopian narrative that portrays AI as a precursor to disaster, an entity poised to disrupt societal structures, devalue human dignity, and potentially trigger devastating consequences (from war to the science-fiction Terminator scenario). While this perspective correctly highlights the dire need for strong ethical standards and regulatory measures, it does little to mitigate the immediate, emerging, and structurally ingrained risks the technology poses, which can be subtly pervasive and violent. By focusing largely on catastrophic outcomes, it often fails to address the more subtle, insidious, and already evident[iii] ways AI can perpetuate and amplify existing societal and structural inequities, injustices, and violence, thereby hindering efforts to confront and rectify these critical issues effectively.

Recognising the limitations of these binary narratives is crucial. They encapsulate essential hopes and concerns but also simplify the complex implications of AI. The polarised nature of these prevailing narratives significantly hinders the depth and specificity necessary for a more informed, transparent dialogue around AI. While capturing the public imagination, they often obscure the subtleties and realities of AI, from its technical underpinnings and data biases to its real-world applications and fallibilities, thereby impeding constructive critique, accountability, and inclusive innovation.

Problematically, this muddied discourse allows an underlying current of techno-determinism to drift towards a fait accompli, where technologies are developed, and possibly widely implemented, before proper societal debate, ethical consideration, or regulatory frameworks can be established to govern them. This fatalistic view, however, undermines the critical necessity for proactive, informed, and stringent regulatory frameworks. It negates the potential for public policy to steer technological innovation onto ethical paths and for governments to safeguard against unbridled developments that could have irreversible impacts on society. It is crucial to challenge this techno-determinist narrative, advocating for the indispensable role of robust, agile governance that can adapt to and guide the evolution of AI, ensuring it aligns with the broader interests of humanity and the principles of equity and justice. Dismissing the power of regulation in the age of AI concedes the future to a handful of tech companies, potentially at the expense of global populations whose lives will be profoundly affected by these advancements.

Moving forward requires more transparency to “democratise AI”. I refer unambiguously to the democratisation of AI governance here, that is, careful consideration of how diverse stakeholder interests and values can be effectively elicited and incorporated into well-reasoned AI governance decisions. We must transcend these polarities, fostering a discourse that encourages critical assessment, demands transparency, and prioritises diverse perspectives. This approach will facilitate a more realistic understanding of AI's capabilities and limitations and promote a more democratic, inclusive, and responsible development trajectory for future technologies.

State Spaces and Future Pathways

Addressing the complexities and implications of these perspectives requires recognising that the fulcrum of this debate is not "artificial intelligence" as a delineated set of specific technological advances, but rather our current sociotechnical development path. We must view this technology as part of our development paradigm, and examine the expansive state space in which these technologies operate, and the potential future trajectories they create, only in conjunction with the other necessary components of our sociotechnical existence. This means mapping possible trajectories or state spaces: the myriad possible states our societal, ethical, and existential conditions might occupy as AI continues to advance, along the lines of a eudaimonic society (one without relations of oppression, domination and exploitation, where 'the free flourishing of each is the condition for the free flourishing of all'), if this is indeed the society towards which humanity is striving. For example, we should look at possible configurational outcomes correlated with intrinsic aspects of human experience: self-realisation, agency, societal capabilities, and societal cohesion. Below, I present opportunities across these dimensions, counterbalanced by risks that represent fundamentally different state spaces emerging from undesirable paths.

Alternative paths

Self-Realisation:

Opportunity: AI can catalyse human self-realisation, akin to how past technological advancements have liberated time for more intellectually and socially enriching pursuits.

Risks: The rapid advancement and integration of AI into everyday life and various employment sectors present a double-edged sword. On one side, it threatens to accelerate the obsolescence of specific skills and professions, leaving significant portions of the population behind in an ever-evolving job market. This rapid shift not only destabilises traditional employment markets but also challenges human identity and societal roles, potentially leading to societal unrest or increased levels of inequality and mental health issues. On the other side, a deepening reliance on AI systems heightens societal exposure to new vulnerabilities, such as large-scale system failures, cyber-attacks, or manipulation, the consequences of which could range from individual inconveniences to catastrophic systemic breakdowns.

Agency:

Opportunity: AI, viewed as a "smart agency," can significantly amplify human capabilities, enabling individuals to achieve more efficiently and effectively.

Risks: Entrusting AI with increasing levels of responsibility, especially when the decision-making processes are not transparent or explainable, raises grave concerns. This over-reliance risks creating a disconnect between human values and automated decisions, potentially leading to outcomes that are ethically questionable or in direct conflict with societal norms. It also threatens to undermine the essence of human accountability, as the rationale behind AI decisions can be complex and opaque, making it difficult to attribute responsibility or implement effective oversight. This scenario potentially paves the way for a loss of personal and collective agency, where humans become overly dependent on technologies they can no longer fully control or understand.

Societal Cohesion:

Opportunity: AI's data-driven solutions can significantly enhance global coordination and response to complex challenges like climate change, fostering greater societal unity.

Risks: AI's potential to shape societal norms and individual behaviours, while seemingly benign or even beneficial, carries the profound risk of eroding personal autonomy. By subtly influencing or predicting decisions through data analysis and predictive algorithms, AI systems could inadvertently compromise the fabric of societal cohesion. They may reinforce biases, diminish the diversity of thought, and undermine the organic, sometimes chaotic, nature of human evolution that's crucial for societal resilience and innovation.

These diverging paths point to the need for governance and policy that assumes a role not merely of regulation but of stewardship: safeguarding human dignity, autonomy, and societal solidarity amidst the whirlwinds of technological innovation. This imperative should at least seem clear. However, policy formulation of this nature will necessarily find itself at the crossroads of what is technologically feasible and what is ethically, morally, and socially desirable, determined by deep-rooted ideological, cultural, and economic divides. The next section argues that, as a result, a cornerstone of efficacious policy is an in-depth understanding and interrogation of the assumptions underpinning AI's role in society. These foundational beliefs and hypotheses, often unspoken, directly influence the architecture of policy, guiding decisions that will shape the lived experiences of billions and the global trajectory of technological evolution.

Unravelling Assumptions in AI Policy Formulation

The assumptions underlying AI policy and development are not just theoretical starting points; they are powerful catalysts that set specific path dependencies into motion, effectively narrowing our trajectory through future state spaces. These assumptions, often deeply embedded within techno-economic paradigms and political ideologies, implicitly dictate the allocation of resources, the direction of innovation, and the shape of governance frameworks. By favouring certain possibilities and discounting others, they shape the landscape of opportunities and risks, creating a self-reinforcing cycle of decisions and consequences that can be difficult to alter once set in motion. On the other hand, recognising the path-creating capacity of these assumptions offers an opportunity to consciously steer development towards more equitable, diverse, and inclusive futures.

Below, I delineate various assumptions underpinning AI's trajectory, each presenting distinct opportunities yet counterbalanced by risks, indicative of divergent future states contingent upon the paths we choose, desirable or otherwise.

The Intelligence Assumption: Misconceptions of AI Capabilities

While advanced, AI technology fundamentally diverges from human intelligence, notably lacking cognitive flexibility and reasoning. Overestimating AI's capabilities risks misinformed policy-making and impractical application expectations, especially when these policies forecast an unrealistic level of AI competence. Overconfidence manifests in initiatives expecting AI to seamlessly adapt to new information and handle tasks beyond its original programming. The term "artificial intelligence" often instils an overly optimistic view of the technology's capabilities, underscoring the need for more precise language that mirrors technical realities and supports accountability. Discussions around AI policies should be accompanied by clear delineations of the AI type, its developers, purpose, and deployment stage to maintain clarity and realism. Inflated expectations could lead to catastrophic outcomes, especially in safety-critical domains like healthcare and autonomous transportation, where practical inadequacies and unpredictable behaviours eclipse AI's theoretical advantages.

The Policy Miscalculation Risk:

Policy discussions tend to weigh AI's potential benefits against its associated risks without sufficient empirical grounding. When we focus predominantly on anticipatory benefits, we are essentially looking at a utopian (techno-deterministic) vision of what AI can achieve. While optimism isn't inherently detrimental, the danger lies in prioritising these anticipatory benefits over immediate, real-world risks.

The Ethical Assumption: AI's Fairness and Machine Objectivity

At its core, AI is predicated on data and mathematical models. Often, this perceived objectivity is confused with a plane of neutrality, devoid of human biases or subjectivity. This data, however, is not generated in a vacuum; it is culled from human interactions, inherently absorbing our biases and prejudices. By equating computational efficiency with objectivity, we risk enshrining and amplifying these biases rather than mitigating them. A machine's lack of conscious subjectivity doesn't automatically bestow it with pure objectivity. Instead, we are left with systems lacking the complex matrix of moral reasoning, intertwined with emotions, societal norms, empathy, and an intrinsic sense of justice, that grounds human judgement.
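The point is mechanical, not metaphorical. Below is a minimal sketch using entirely synthetic, hypothetical data: a model trained on historically skewed decisions reproduces that skew through a proxy feature, even when the protected attribute is withheld. The variable names, coefficients, and sample sizes are illustrative assumptions, not a description of any real system.

```python
# A minimal, illustrative sketch: all data below is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "hiring" history: a group attribute and a genuine skill score.
group = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)

# Past human decisions favoured group 0 regardless of skill.
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

# A proxy feature correlated with group (think postcode), not with skill.
proxy = group + rng.normal(0.0, 0.3, n)

# The model never sees `group` directly, only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates, differing only in the group-correlated proxy:
candidates = np.array([[0.0, 0.0],    # proxy value typical of group 0
                       [0.0, 1.0]])   # proxy value typical of group 1
print(model.predict_proba(candidates)[:, 1])
# The first candidate receives a markedly higher "hire" probability: the
# historical skew survives, laundered through an apparently neutral feature.
```

Nothing in the code is prejudiced in any conscious sense; the mathematics simply and faithfully encodes the bias already present in the data.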

The Policy Miscalculation Risk:

In the arena of public policy, the allure of AI as a tool for solving intricate ethical issues is tempting. The potential speed and scale of automated decisions promise efficiencies. However, it is also a pathway toward ethical and moral naivety. It likewise promises a slippery slope of eroding human accountability, offering the possibility of hiding behind the veneer of algorithmic complexity and abdicating responsibility.

The "Universality” Assumption:

While hype and technological innovation are the most inseparable of bedfellows, AI has been hailed as a modern marvel, often equated to electricity in its ability to reshape the contours of societal functioning. Electricity's universality, however, comes from its uniformly applicable core utility, which varies little across contexts. AI is not as uniform. Its applications vary vastly depending on the data it's trained on, the problem it's solving, and the context in which it's deployed. Treating AI as a one-size-fits-all solution, much like electricity, neglects its intrinsic complexity.

The Policy Miscalculation Risk:

The unchecked enthusiasm, stemming from the electricity analogy, may result in national strategies that disproportionately promote AI across sectors. This aggressive push can eclipse necessary evaluations of whether AI is the optimal solution. Governments worldwide are funnelling substantial resources into AI, enticed by promises of economic prosperity. However, the actual benefits realised by enterprises remain inconsistent, and many AI projects fail to reach deployment. This trend raises concerns about over-investment and the potential neglect of fundamental sectors crucial for sustainable development, especially in lower-income nations.

The "Data Dependency" and the "Data as the New Oil" Assumptions

The contemporary view of data as indispensable to AI is increasingly contested due to its intricate implications and an unfortunate desire for 'datafication,' or the conversion of life into quantifiable data. The usefulness of data varies significantly based on its context and relevance to a specific AI application. It's essential to shift from a mere extractive perspective to one that prioritises quality, ethics, and sustainability. A balanced approach would recognise the value of data while ensuring responsible and ethical use.

The Policy Miscalculation Risk:

The presumption that more data invariably leads to better AI is grounded in a fraught, extractivist, quantity-over-quality mindset that can misguide resource allocation. National strategies emphasising data collection and centralisation may overlook that data's utility for AI is context-dependent, and not all data sets contribute to AI efficacy. By overemphasising data collection, there's a risk of undermining the rights of individuals: the unchecked accumulation of data can erode privacy rights and overlook the importance of informed consent. Moreover, the fixation on data availability can heighten cybersecurity risks with far-reaching consequences, from identity theft to national security threats, and an over-reliance on centralised data storage increases vulnerabilities. Similarly, the accumulative and extractive mindset can foster a monopolistic environment where only a few entities amass and control vast data reservoirs, leading to a skewed distribution of benefits and power.
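The context-dependence is easy to make concrete. Here is a minimal sketch with synthetic data, in which the task, distributions, and sample sizes are all illustrative assumptions: a model trained only on a small in-context dataset outperforms one trained on ten times more data drawn from a mismatched context.

```python
# A minimal, illustrative sketch: synthetic data only; numbers are arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, flipped):
    """A toy binary task: the label depends on the first feature.
    `flipped=True` inverts the relationship, mimicking data collected
    in a different context where the same features mean something else."""
    X = rng.normal(0.0, 1.0, (n, 2))
    y = (X[:, 0] > 0) ^ flipped
    return X, y

X_local, y_local = make_data(500, flipped=False)    # scarce in-context data
X_other, y_other = make_data(5000, flipped=True)    # plentiful, mismatched data
X_test, y_test = make_data(1000, flipped=False)     # the context we care about

small = LogisticRegression().fit(X_local, y_local)
big = LogisticRegression().fit(np.vstack([X_local, X_other]),
                               np.concatenate([y_local, y_other]))

print("in-context data only:", small.score(X_test, y_test))   # high accuracy
print("10x more (mismatched):", big.score(X_test, y_test))     # far worse
```

Nothing here is specific to logistic regression; the point is simply that data volume is no substitute for contextual relevance.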

Considerations:

So, what do we have to consider?

Valuing progress. Yes, AI applications bring significant value; however, the groupies of the failed maxim of "move fast and break things", and of the well-meaning but overconfident "move fast and fix things", need to recognise that we are currently unprepared to handle the risks of automated fake news, automated cyber warfare, or, for that matter, existing risks like burgeoning inequality, which is not just a financial issue but a multifaceted social crisis that continues to threaten the fabric of our societal and political order. We don't know how to reliably control advanced AI systems, and we don't currently have mechanisms for preventing their misuse. Optimists and market believers should look at our failures to regulate decades-old social media platforms, the opioid crisis, or indeed the dramatic resourcing shifts necessary to address growing climate risks as proof of our inability to address issues and mitigate the harms of wrongly placed value drivers. Bender et al. effectively posed the question: rather than unquestioningly equating these technologies with progress, we should ask whether, instead of how, society should be constructing them at all[iv].

Give up on the deceit that innovation and governance must be at odds. The perceived tension between innovation and regulation is rooted in deep-seated misunderstandings of the necessary synergy between these two levers of progress and development across the realms of technology, society, and economics. At its core, innovation represents the birth of fresh technological solutions, novel ideas, or transformative processes, standing as a testament to human ingenuity. Regulation and governance, on the other hand, need to stretch beyond mere technological checks and balances, aspiring to craft holistic frameworks that safeguard economic inclusion, champion social fairness, nurture quality of life, and steward the environment for future generations. Governance, therefore, will be a core structure that enables innovation in directions that resonate with societal ethos, fostering equal distribution of technological dividends, constructing safeguards against potential tech-induced dislocations, and endorsing an overarching sustainability approach. Effective regulation and governance afford innovation a dual advantage: they enable organisations to harness socially acceptable and valuable opportunities while proactively avoiding or minimising costly errors, thereby fostering innovation that is both responsible and valuable.

Accept and be comfortable with critiquing the historical trajectory of technological innovation as having been disproportionately guided by a relatively homogeneous group: predominantly wealthy males from the Global North, often with backgrounds in tech-centric disciplines. This skew is not just a matter of representation; it fundamentally influences the state spaces that are envisioned, pursued, and ultimately realised, as these individuals and groups carry with them not only their specific cultural and socioeconomic perspectives but also their conscious and unconscious biases, interests, and aspirations. One significant consequence of this homogeneity is the manifestation of a 'saviour' complex among 'tech bros' and elites within technology circles, which can lead to an overemphasis on technological solutions to problems that also require socio-political, cultural, or economic interventions, neglecting the nuanced realities of issues like poverty, inequality, or systemic discrimination and discounting potential adverse effects or ethical dilemmas. Moreover, these vested interest groups often operate within echo chambers that reinforce their own beliefs and priorities and, even more so, reveal their real values in the profit-driven, market-oriented solutions that align with the existing power structures and economic systems from which these groups inherently benefit. It remains of great concern that ego-centric 'saviour' figures like Mark Zuckerberg and Sam Altman are so prominent in discussions where their expertise is limited: they are neither ethicists, sociologists, nor developmental economists, nor representative of the full spectrum of communities AI will affect. If recent events like the failure of crypto (aside from its very real success as a Keynesian casino[v]), the derisive, brain-numbing, anxiety-inducing feeds of social media[vi], and the long overdue backlash against the consulting industry[vii] have demonstrated anything, it is that collapsing complex, socially valuable outcomes into faux expertise and barely veiled value-flattening optimisers (profit-maximising individuals) is a recipe for harm. A crucial pivot is needed away from this undue dependence on and exaltation of tech magnates as infallible architects of societal change, recognising them instead as one part, and only one part, of a broader, more diverse dialogue necessary for holistic and equitable progress.

These issues continue to remind me of David Collingridge’s 1980 book[viii],

“Ask technologists to build gadgets which explode with enormous power or to get men to the moon, and success can be expected, given sufficient resources, enthusiasm and organisation. But ask them to get food for the poor; to develop transport systems for the journeys which people want; to provide machines which will work efficiently without alienating the men who work them; to provide security from war, liberation from mental stress, or anything else where the technological hardware can fulfil its function only through interaction with people and their societies, and success is far from guaranteed (pg. 15)”.

and the AI Now report of Whittaker et al. 2019[ix],

“the gap between those who develop and profit from AI—and those most likely to suffer the consequences of its negative effects—is growing larger, not smaller. There are several reasons for this, including a lack of government regulation, a highly concentrated AI sector, insufficient governance structures within technology companies, power asymmetries between companies and the people they serve, and stark cultural divide between the engineering cohort responsible for technical research, and the vastly diverse populations where AI systems are deployed. These gaps are producing growing concern about bias, discrimination, due process, liability, and overall responsibility for harm (pg. 7)”.

(My premature) Conclusions

The trajectory of AI's development and its impact on the world will ultimately mirror society and the values we hold, whether these align with our rhetoric or not. This is already clear in the recognition of the implicit (or otherwise) biases introduced through data sets and training, which lead to outputs where AI reflects and even amplifies pre-existing gender, racial, ethnic, and other biases, exacerbating inequalities and discrimination[x]. It is even more problematic, as George Estreich argues, that this is a self-reinforcing cycle: "[b]ecause . . . products need to answer to existing demand, they will reproduce the values of the society in which they are sold. To the extent that they are adopted, they will translate those values into human populations (pg. 5)."[xi] Similarly, recent research from Deusto University has suggested that people may not only adopt the errors of biased information provided by an artificial intelligence model but may continue to make these errors when no longer using the model.
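The self-reinforcing cycle can be illustrated with a toy simulation, in the spirit of the "runaway feedback loops" documented in research on predictive policing. Everything below, the rates, the allocation rule, the number of generations, is a hypothetical assumption, not a model of any real system.

```python
# A minimal, illustrative simulation: hypothetical numbers throughout.
import numpy as np

rng = np.random.default_rng(2)

true_rate = np.array([0.1, 0.1])   # two areas with identical real incident rates
belief = np.array([0.6, 0.4])      # the model starts slightly skewed towards A
observations = 1000                # total observation effort per generation

for generation in range(6):
    # Effort is allocated in proportion to the model's current beliefs.
    effort = np.round(belief / belief.sum() * observations).astype(int)
    # Incidents are only recorded where we look: equal rates, unequal looking.
    recorded = rng.binomial(effort, true_rate)
    # "Retraining" on recorded counts inherits the skew instead of correcting
    # towards the identical true rates; noise can even push it further.
    belief = recorded / recorded.sum()
    print(f"generation {generation}: share of attention on area A = {belief[0]:.2f}")
```

The loop never self-corrects towards the equal underlying rates, which is the mechanical heart of the argument above: the system translates an initial value judgement into an enduring, apparently empirical fact. The Deusto finding adds a human layer to the same loop, with the bias persisting even after the system is switched off.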

To realign the discourse on AI, we need to critically evaluate our guiding assumptions, anticipate potential pitfalls, and genuinely assess the worthiness of specific AI applications:

  1. Acknowledge Assumptions: We must actively discern when we're basing policies on assumptions rather than undeniable truths. It's crucial to provide frameworks that evaluate the ramifications of these assumptions and their potential deviations from reality.
  2. Reflect on Representativeness: Assumptions in policy-making are hardly neutral. Recognising the interests they cater to can shine a light on the ideological underpinnings of the AI narrative, prompting us to question the inclusivity of the resultant policies.
  3. Broaden Policy Foundations: Anticipatory governance should embrace a diversity of assumptions, ensuring they're rooted in reality and inclusive in nature.
  4. Anticipate Outcomes: While aspiring for AI's best potential, it's equally paramount to envision and safeguard against its worst manifestations.

However, recognising the imbalance in AI's direction isn't the endpoint; it's a clarion call for active engagement. The deep-seated 'saviour' mentality among tech magnates warrants dismantling. These industry interests aren't infallible navigators of societal evolution. Similarly, the rhetoric and lexicon forming our AI discourse also demand refinement. Words shape perceptions; hence, the terminology should be accurate, devoid of exaggeration, and truly representative of the technology's essence.

I look forward to delving into the structures, interests, assumptions, and rhetorical frames underlying AI policy and its continued sociotechnical development and deployment. My work will expand across a range of materials addressing if and how these path-dependent and path-creating features are implemented through concrete actions and policy instruments.


[i] Bradford, A. (2023). Digital empires: The global battle to regulate technology. Oxford University Press.

[ii] Lomas, N. (2018). Zuckerberg urges privacy carve outs to compete with China. TechCrunch.

[iii] Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.

[iv] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big?. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610-623).

[v] Panetta, F. (2023, June 23). Paradise lost? How crypto failed to deliver on its promises and what to do about it. Speech presented at the 22nd BIS Annual Conference panel on the future of crypto, Basel.

[vi] Stempel, J., Bartz, D., & Raymond, N. (2023, October 24). Meta's Instagram linked to depression, anxiety, insomnia in kids - US states' lawsuit. Reuters. https://www.reuters.com/legal/dozens-us-states-sue-meta-platforms-harming-mental-health-young-people-2023-10-24/

[vii] Mazzucato, M., & Collington, R. (2023). The big con: how the consulting industry weakens our businesses, infantilises our governments, and warps our economies. Penguin.

[viii] Collingridge, D. (1980). The social control of technology. St. Martin's Press.

[ix] Whittaker, M., Alper, M., Bennett, C. L., Hendren, S., Kaziunas, L., Mills, M., Morris, M. R., Rankin, J., Rogers, E., Salas, M., & West, S. M. (2019). Disability, bias, and AI. AI Now Institute. https://ainowinstitute.org/publication/disabilitybiasai-2019.

[x] Ulnicane, I., & Aden, A. (2023). Power and politics in framing bias in Artificial Intelligence policy. Review of Policy Research, 40(5), 665-687.

[xi] Estreich, G. (2019). Fables and futures: Biotechnology, disability, and the stories we tell ourselves. MIT Press.


