AI and the Law of Unintended Consequences

One of my favourite chapters in Transition Point is Chapter 17, ‘AI Unleashed’. The chapter opens with a quote from Steve Jobs: ‘There are downsides to everything; there are unintended consequences to everything.’

If you were to read the chapter, you’d understand the pertinence of this quote. It highlights seven ways that AI could slip from our control: misaligned goals, poor programming, biased datasets, instructions taken too literally and without an understanding of second-order effects, malicious human programmers, or the simple realisation that its carbon-based creators are the only ones who could pull the plug, so to speak. It uses examples of AI already gone awry, from Microsoft’s disastrous launch of the chatbot Tay to warnings about Skynet-like developments in which autonomous weapon systems use reinforcement learning to create battle strategies we cannot hope to anticipate or react to.

I was reminded of this chapter as I watched last week’s disastrous launch of the latest version of Google’s next-gen multimodal LLM system, Gemini (an updated and renamed version of Bard). Touted as much more powerful than the GPT-4 system that powers ChatGPT, Gemini 1.0 was launched in December 2023 with a press release from Sundar Pichai, CEO of Google, and Demis Hassabis, CEO and co-founder of DeepMind.

The latest version, Gemini 1.5, was released on February 15. It did not go well.

The Gemini 1.0 press release states that Gemini was ‘Built with responsibility and safety at the core’. We also found out that Gemini 1.5 was built with something else in its core – ideological bias.

On Thursday, Google had to pause Gemini’s image generator after discovering that it really, really didn’t want to acknowledge that white people – especially white men – exist or have ever existed. X (formerly Twitter) was quickly filled with tweets showing Gemini’s output when asked for images of everything from medieval knights to Vikings, Popes and the US founding fathers, all featuring a rich and diverse cast that included every race but the one it should have – white Europeans.

Worse, Gemini happily complied with requests for information and images about people it deemed worthy (i.e. non-white, not politically right-leaning) while stubbornly refusing to participate in the generation of any text or images about people it considered ‘problematic’ (i.e. white Europeans, people with conservative political leanings, etc.).

This highly volatile, politically divided world has increasingly become a place devoid of nuance and context. We used to broadly agree on the goals but disagree about the methods of achieving them. Now we live in a world where anyone who is not in complete ideological lockstep with the prevailing opinion is ‘literally Hitler’. Don’t believe me? Spend five minutes in the comments section of any political post on X.

Because they come from fellow humans, we can recognise these childish, emotional responses for what they are. We can always log off and ignore anonymous trolls and small-minded, ideologically captured people. But what if this mindset is programmed into the AI systems that run our world? Does that sound far-fetched? What if I told you that, according to Gemini, Elon Musk’s tweets and ownership of X make him just as bad an influence on society as Adolf Hitler? You probably wouldn’t believe me, right?

Wrong.

If I were feeling generous, I would say this is the unintentional outcome of trying to do good. However, I would have to be prepared to forget the 2017 media storm surrounding Google’s firing of James Damore over his memo “Google’s Ideological Echo Chamber: How Bias Clouds Our Thinking About Diversity and Inclusion”, in which he raised concerns that real biases were being introduced at Google to counter perceived ones. I would also have to ignore the fact that Gemini’s Product Lead, Jack Krawczyk, has a long history of seriously offensive anti-white tweets. So, despite my generosity, to accept this as accidental I would have to suspend disbelief and ignore what is almost certainly the reality: that Google has a significant progressive bias in its leadership and development team, and that this bias has been programmed into Gemini.

As I discuss in Chapter 17 of Transition Point, many AI systems are trained on datasets that reflect the people who generate the data. Until recently, the primary generators of data have been countries in the Anglosphere, Japan, South Korea and Western Europe. The data collected is therefore not as diverse as the world around it, producing outcomes biased towards lighter-skinned people. I assume Google would say that the bias it has injected into Gemini is a well-intentioned attempt to redress that balance. Still, in doing so, it has exposed several real dangers of AI systems, many of which I covered in Chapter 17.
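To make that training-data skew concrete, here is a minimal Python sketch – a toy illustration, not Google’s actual pipeline, and the demographic numbers are invented for the example – of a ‘generator’ that simply samples from its training data and so reproduces whatever imbalance that data contains:

```python
# Toy illustration of training-data skew (invented numbers, not real statistics):
# a "generator" that samples from its training corpus reproduces its imbalance.
import random
from collections import Counter

# Hypothetical demographic mix of a web-scraped training set, dominated by
# the data-rich Anglosphere and Western Europe.
training_data = (
    ["European"] * 55 + ["East Asian"] * 25 +
    ["South Asian"] * 10 + ["African"] * 6 + ["Other"] * 4
)

def generate_image_subject() -> str:
    """Toy generator: returns subjects in proportion to the training data."""
    return random.choice(training_data)

samples = Counter(generate_image_subject() for _ in range(10_000))
print(samples.most_common())  # output mirrors the training skew, roughly 55/25/10/6/4
```

Nothing in the sampling step is malicious; the skew is inherited silently from whatever went in.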

Rewriting history

The first is the most pernicious one – historical revisionism. The first of the seven issues I highlight in Chapter 17 is that ‘AI lacks real-world experience and reflects the limitations of its programmers and the environment it was developed in’. Gemini 1.5’s launch provided a perfect example of this. The programmers at Google saw no issue with the system’s revisionism and its injection of different races into historical settings where they never existed, because it reflected their own beliefs, biases and political objectives. It didn’t present as a problem to them: they had programmed the system to be radically diverse and progressive in its responses and content creation, and that was precisely what Gemini was doing. So it never showed up as an issue until the system was released to a public who, unlike the Google team, did have a problem with their history being erased.

Gemini’s launch proved that what they called Artificial Intelligence was, in reality, Programmed Intelligence.

The danger is that Gemini presents users with a view of the world that is incorrect, one that projects the developers’ political leanings, and is thus an ideological tool of subversion. Extraordinary claims require extraordinary evidence, so let me explain. Google is the largest search engine in the world, handling 92.05% of internet searches, and Chromebooks are the most widely supplied laptops in schools.

Gemini is designed to enhance and power Google Search, the default search engine on all Chromebook laptops and Android phones. Do you see the problem yet? Though I’m sure the programmers would vehemently reject the comparison, this ‘progressive’ rewriting of history is simply a more technologically advanced form of the ideological control and propaganda practised in the past by Fascists and Communists.

Google may declare its intentions honourable, but we have enough historical evidence to know that good intentions pave the road to hell. While you may approve of Google’s ideological stance and feel that the ‘woke AI’ reaction is overblown, imagine if this same technology were in the hands of those you don’t agree with. Because it will be.

As Winston states in Orwell’s 1984: ‘Every record has been destroyed or falsified, every book has been rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And that process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.’

Now imagine such a Party armed with an AI system like Gemini, installed in the operating system of every laptop and every phone. It would become a tool of ideological subversion, one used by every child throughout their life. The truth wouldn’t stand a chance.

Unintended Consequences & Second Order Effects

Speaking of authoritarians... here lies the second issue with programming ideology into AI systems, one I highlight in Point 5 of Chapter 17: ‘AI may decide to interpret instructions literally, or in ways we didn’t intend’.

In that section, I explain that AI systems are likely to interpret instructions too literally, resulting in behaviour similar to the brooms and mops Mickey Mouse brings to life to do his work for him in the Sorcerer’s Apprentice segment of Disney’s Fantasia. While they initially do what you asked, their strict adherence to your instructions and lack of ‘common sense’ soon mean they are out of control, doing things you most definitely didn’t want them to. So don’t be surprised when the AI you programmed to replace white people with a more diverse cast also replaces them in historical situations you didn’t intend.
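Here’s a hypothetical sketch of that failure mode in Python (the function and modifier list are invented for illustration and don’t reflect Gemini’s real internals): a hidden rewrite step that appends a diversity modifier to every image prompt, with no awareness of historical context, ‘fixes’ generic requests and falsifies historical ones with equal obedience.

```python
# Hypothetical sketch of a hidden prompt-rewrite step (invented names; not
# Gemini's real internals). It obeys its instruction literally, on every
# prompt, with no check for historical context.
import random

DIVERSITY_MODIFIERS = ["South Asian", "Black", "East Asian", "Indigenous"]

def rewrite_prompt(user_prompt: str) -> str:
    """Blindly appends a demographic modifier to whatever the user asked for."""
    return f"{user_prompt}, depicted as a {random.choice(DIVERSITY_MODIFIERS)} person"

# The rule behaves as its authors hoped on a generic request...
print(rewrite_prompt("a friendly family doctor"))

# ...and just as literally on one where it rewrites history.
print(rewrite_prompt("a German soldier in 1943"))
```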

This is precisely what happened this week when Gemini produced images of multi-ethnic Nazis.

Now Google did have a problem with Gemini, and the image generator was instantly withdrawn for ‘modifications’.

Lesson: if you want your AI system to revise history for you, it might revise it exactly as you programmed but not as you wanted. Be careful what you wish for.

We might be laughing now…

We need to tread very carefully here. Last week, I showed how AI can create amazingly lifelike videos from nothing more than text prompts. It would therefore be very easy to create lifelike videos that represent not historical fact but ideological fiction. As time passes and new generations are born into the AI age, their ability to distinguish between fact and fiction will blur and finally disappear.

Final thought / fun fact: when I went to publish this article, at the bottom was the following prompt:

Tick tock.


