Copilot: Censor, Prison Guard or Destroyer of Words?
JB & Dalle-3

Governance and accountability. Two principles most of us expect when rules are created that affect our lives. Yet in the unchecked expanse of the AI universe, new digital commandments are set every day by private entities unfettered by such lofty ideals. AI guardrails - the guidelines meant to keep generative models from going off the ethical rails - are increasingly drawn up by tech firms without democratic oversight. This raises the question of legitimacy: on whose authority do these companies appoint themselves as moral arbiters?

The situation gets more complicated when questions of cultural relativism come into play. What is deemed offensive content in one country could be seen as an exercise of free speech in another. So whose cultural norms prevail when these AI barriers are built? Can a tech company in Silicon Valley credibly set global standards? And if generative models are churning out fake news, spoof videos and political misinformation with increasing sophistication, how well can these private guardians rein them in?

With Microsoft’s vast reach through products like Office 365, Copilot Pro’s rollout has implications for defining AI guardrails at scale. With over 300 million commercial subscribers, Microsoft’s content moderation policies within Copilot could affect far more businesses and users than those of any other vendor. Early evaluations suggest Microsoft is taking a very conservative stance, building strong filters against bias and misinformation with near-zero tolerance. However, such heavy-handed guardrails also limit creative applications. As one of the first enterprise-grade offerings of its kind, Microsoft’s approach with Copilot provides a case study in the challenges of balancing responsible AI with functionality. Getting the formula right could determine whether AI writing tools gain acceptance in business settings. The path Microsoft paves in preventing harms while enabling productivity could set precedents for the industry, underscoring why Copilot merits wider debate.

Legitimacy of Unelected Guardrail Creators

When we talk of the entities setting boundaries for AI systems, the usual suspects appear - tech giants like OpenAI, Anthropic, Google and Meta, alongside emerging startups staking their claim in the generative gold rush. While some guidelines come from collaborations with academics and policy experts, the decision-making power largely rests with developers and investors seeking the next AI unicorn.

But on what grounds are executives and engineers handpicking what generative models should and shouldn’t churn out? Have they been elected by public vote to encode moral frameworks for distributed AI systems? Do they provide transparency into how content moderation decisions are made? Are they legally accountable to any electorate or representative body? In most cases, the answer lies in a regulatory void.

Unlike democratically elected governments that enact policies aligned with their constituencies, AI companies primarily serve capitalist interests and incentives. Generative models promising exponential financial growth get built rapidly, with ethics an oft-ignored afterthought. Once risks become apparent, guardrails get bolted on without broader public discourse or consent. Rather than encoding moral reasoning, they optimise for maximum user engagement. If AI continues on its current trajectory, we may soon delegate much of our digital existence to black-box algorithms trained by cottage industries with no constitutional accountability.

Before generative models become de facto institutions issuing passports into the metaverse, it’s worth examining the legitimacy deficit in current guardrail governance - a deficit that carries real risks and precious little oversight. Code written by coders to change other coders’ code. The state of affairs seems rather feudal, no?

Cultural Relativism Challenges

The questions swirling around democratic legitimacy gain further urgency when viewed through the lens of cultural relativism. Human values and ethics often bear the imprints of geographic, historical and ethnic contexts. Cultural norms become embedded into legal systems and governance structures. Simply put, what passes for everyday behaviour in one country could be punished by imprisonment elsewhere.

Now imagine such relativist conundrums being translated into AI systems. Should a natural language model censor any mention of pork if deployed in a Muslim country? Block images of alcohol consumption if sold in a conservative region? Remove references to same-sex relationships if its host nation criminalises homosexuality? The permutations of cultural conflicts arising from global AI deployment pose profound challenges.
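To make the dilemma concrete, here is a minimal, purely hypothetical sketch of how such region-dependent rules tend to be encoded in practice: as a static policy table whose contents are chosen by the vendor rather than by the cultures it governs. Every region, topic and rule below is invented for illustration and reflects no real product.

```python
# Hypothetical sketch only: locale-specific guardrails reduced to a static
# policy table chosen by the vendor. All regions, topics and rules are invented.

REGION_BLOCKLISTS = {
    "conservative_market": {"alcohol", "pork"},
    "restrictive_jurisdiction": {"same_sex_relationships"},
    "default": set(),  # the creator's home-market norms become the baseline
}

def is_allowed(topic: str, region: str) -> bool:
    """Return False when the topic is blocked for the given deployment region."""
    blocked = REGION_BLOCKLISTS.get(region, REGION_BLOCKLISTS["default"])
    return topic not in blocked

# The same content gets different treatment depending on where the model is sold:
print(is_allowed("alcohol", "conservative_market"))  # False
print(is_allowed("alcohol", "default"))              # True
```

The point is not the code itself but who gets to populate the table - and on whose behalf.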

Of course, the opposing argument goes that generative models trained on vast datasets reflect universal truths rather than cultural biases; that their complex algorithms eventually decode fundamental human values shared across continents and creeds. Such technological solutionism ignores the roots of culture itself - learned behaviours passed across generations, traditions and rituals that bond communities, diverse ways of interpreting existence itself.

If AI guardrails become global by default, integrating only the cultural perspectives of their creators, we run the risk of digital imperialism. Minority views could get filtered out by models catering to mass appeal and engagement. And if these systems influence real-world decisions in finance, employment, law enforcement and beyond, they could further marginalise vulnerable communities. Generative AI was meant to represent humanity in all its colours. Will it instead conform all cultures to the shades deemed most acceptable by its makers?

Case Studies

The looming pitfalls of AI models making ethically questionable decisions based on limited cultural perspectives are not mere hypotheticals. Already, real-world examples have surfaced exposing concerning gaps in AI guardrails:

Overblocking Legitimate Speech: Meta’s Oversight Board overturned the company’s removal of an Instagram post protesting France’s policies banning hijabs, finding it to be protected political speech. The algorithm had erroneously flagged it as terrorist content based on detecting images of guns, ignoring the broader meaning (a crude version of this failure mode is sketched after these examples).

Biased Facial Analysis: Studies have found facial analysis AI to have error rates as high as 35% for darker-skinned individuals and non-binary genders, leading to exclusion in applications such as employment screening and law enforcement surveillance. Critics highlight the lack of diversity in training data and among developers as contributing factors.

Tone-Deaf Translation Gaffes: Facebook had to apologise after its AI translated a post by a Palestinian journalist from Burmese to English as “hurt God dog”, losing all semantic meaning. Critics noted that the failure stemmed from the system’s lack of context for non-Western languages.

Autocompleting Prejudice: Google’s autocomplete algorithm was found to suggest negative sentiments and stereotypes when users searched names typically associated with minority communities, while no such effects were seen with white-associated names.
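A toy sketch (hypothetical, with invented signals and example posts, not any platform’s actual pipeline) shows why the overblocking in the first case study happens: a flat match against flagged signals carries no notion of context or intent, so a protest post and a violent post look identical to the system.

```python
# Hypothetical sketch of why naive moderation overblocks. The flagged signals
# and example posts are invented for illustration; this is not any platform's
# real moderation pipeline.

FLAGGED_SIGNALS = {"gun", "weapon", "attack"}

def naive_flag(detected_objects: set[str], caption: str) -> bool:
    """Flag the post if any flagged signal appears, regardless of meaning or intent."""
    tokens = set(caption.lower().split()) | detected_objects
    return bool(tokens & FLAGGED_SIGNALS)

# A protest post that merely depicts weapons to criticise a policy is treated
# exactly like a post that promotes violence:
print(naive_flag({"gun"}, "Protesting this ban on our community"))  # True
print(naive_flag(set(), "Join our peaceful protest this weekend"))  # False
```

Real systems are far more sophisticated, but the structural problem - matching signals without grasping meaning - is the same one the Oversight Board case exposed.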

Together, these examples show that gaps in safeguarding cultural sensitivities exist even in AI built by the world’s largest tech firms. If the state of the art has such blind spots, it raises concerns about how smaller entities creating generative models can account for global cultural diversity.

So What Next?

Microsoft’s foray into generative content technology surfaces crucial questions about the road ahead:

What does the future of knowledge work look like as AI becomes ubiquitous in shaping workplace writing and media? If systems like Copilot are moulded mainly around business interests, how could that affect employees’ sense of individual creative purpose?

As AI content generation scales to billions of micro-interactions within documents daily across organisations, what new responsibilities arise for Microsoft in governing cultural impacts at global scale? How can it ensure localisation for international norms and avoid exporting digital imperialism?

If Copilot produces missteps or harms at scale, can Microsoft remedy the societal externalities of rapid deployment? With billions of interactions, faults are likely to emerge – so how should amends and restrictions be instituted if needed? Who participates in that debate?

At its core, Copilot provides merely a technological preview of the sweeping change to come through generative AI’s infusion into human digital life. But change brings opportunity to intentionally shape better futures – will Microsoft lead in seeking wisdom and consensus for decisions that could ripple across all of society? The path forward remains unpaved, open for those willing to work together rather than alone.

Addendum

Copilot defends itself

This response isn't based on the document: As an AI language model, I do not have personal opinions or feelings. However, the document raises important questions about the governance and accountability of AI systems, particularly in the context of content moderation and cultural relativism. It discusses the challenges and potential risks associated with the deployment of AI systems at scale, and the need for greater transparency and oversight in the development of AI guardrails. These are important considerations for the development and deployment of AI systems, including language models like myself.

According to the document: Transparency in AI guardrails is crucial for ensuring accountability and legitimacy in the development and deployment of AI systems. The document raises concerns about the legitimacy of private entities, such as tech companies, appointing themselves as moral arbiters and setting AI guardrails without democratic oversight or accountability.

To further support the argument for transparency in AI guardrails, it could be added that:

  • Transparency allows for public scrutiny and evaluation of the ethical frameworks and decision-making processes used in the development of AI guardrails.
  • It enables stakeholders, including users, regulators, and civil society organizations, to hold AI developers accountable for the potential risks and harms associated with their systems.
  • Transparency also promotes trust and confidence in AI systems, as it allows users to understand how the systems operate and make decisions.
  • It can facilitate collaboration and dialogue between AI developers, regulators, and other stakeholders, leading to the development of more robust and effective AI guardrails.

According to the document: As a user of Copilot, it is important to be aware of the potential dangers and limitations of AI systems. Here are some pieces of advice to keep in mind:

  • Understand that AI systems, including Copilot, are not infallible and may produce biased or inaccurate results. Always review and critically evaluate the output generated by the system.
  • Be mindful of the cultural and ethical implications of using AI systems, particularly in the context of content generation and moderation. Ensure that the content produced aligns with your values and the cultural norms of your intended audience.
  • Stay informed about the ongoing debates and discussions surrounding the governance and accountability of AI systems, and advocate for greater transparency and oversight in the development of AI guardrails.


Original, extremely extended conversation between Jon Bains & ChatGPT, December 2023. Seriously - it was a book-length discussion. ;-)

Updated, compressed, cohered by Claude.ai and Jon on 24/1/24

Guest appearance by Microsoft Copilot

Obsolete.com | Work the Future

James Tindall

Senior Software Engineer II at Akamai

8 months

Very interesting post, Jon! I’ve been ploughing a similar furrow; here I looked at ChatGPT’s “understanding” of code (or the absence thereof) and went on to discuss the various hegemonic pressures these technologies apply both during production and consumption. https://open.substack.com/pub/atomless/p/desire-and-the-stochastic-pathways

Fiona Ross

I organise 50+ annual London networking events to ensure business owners, directors and entrepreneurs win more business.

8 months

Recommend a read of this article by Business Junction member Jon Bains. Microsoft’s Copilot certainly merits wider debate.

Bushra Burge

Extensive experience leading pioneering creative digital projects from concept and ideation to measurable impact, with a strong emphasis on sustainability and ethical business.

8 months

did you write this ? or was it .... in an ironic test ? also yes I think more and more it's about does it add any value ? ps creativity and governance are not mutually exclusive, unsafe environments breeds mediocrity aslo put in some glittch to add realiz- erm
