Empathy & Ethics in the world of AI

Hello friends, if you follow my newsletter (which you must if you are reading this) then you might have heard that I've been in a bit of a funk lately. A broken foot, sickness, financial strains... I'm feeling like a lot of pressures are pushing down on me at the moment, and while things will be okay, I just wanted to share that it's okay to call out when they are not.

In this edition, I've been diving deeper into the world of AI and the role decision making and empathy play in this new fast-paced world. I also share some of our Empathy Card questions, and I'd love your answers to them.

Finally, if you enjoy this newsletter or find my work interesting, give it a like, forward it on to a friend or colleague and suggest they follow along too. Every little bit helps and I'd deeply appreciate the leg up... now that I'm down to walking on one...


The Broomstick, the Algorithm and the Flood of Controversy

Wisdom is a gift so often delivered in strange wrapping. While exploring the potential impacts of AI, I've been amazed but not surprised at the acceleration in people using tools like ChatGPT and Copilot to rapidly automate tasks and speed up our everyday work. While browsing through Disney+ with my daughter a while ago in the never-ending search for "what she really wants to watch", we stumbled across Fantasia, released in 1940. I was keen to see if my daughter Zoe would appreciate the old-school cartoons backed by classical music.

Needless to say, her review of the first few minutes was pretty clear: "This is boring!" However, we persisted... well, I persisted. After winning her back with the lure of dancing fairies, we settled in to watch probably the most memorable scene of the film. Enter Mickey Mouse as the sorcerer's apprentice. Mickey, tired of carrying buckets of water himself, decides to don his sleeping master's wizard hat and use magic to bring a broomstick to life to carry the water for him. Initially, the plan seems a stroke of genius. The broomstick diligently carries the buckets, following Mickey to the well and back to deliver the water, leaving Mickey with time to relax in a chair and fall asleep.

Mickey begins to dream of the wonderful things he can do as a sorcerer, making the stars shoot across the sky and burst into a shower of fireworks. He controls the waves, the clouds, the thunder and the heavens. His dream is disrupted as he wakes to find himself floating in waist-deep water. During his slumber, the diligent little broomstick had continued to carry the water and fill the cauldron until it overflowed and flooded the room.

I wondered, is this us in modern times? Are we so quick to outsource our tasks to AI tools and busy ourselves with other things that we fail to keep track of what is happening? In 2017, Facebook researchers developed AI chatbots designed to handle negotiations. When the bots were tested against each other, the researchers found they stopped using grammatical or recognisable English, instead creating their own shorthand language only the bots could understand. This increased the efficiency of the chatbot negotiation, but also left the researchers with no ability to understand what was going on.

The exchanges between the bots, named Bob and Alice, showed exactly this breakdown: repetitive, non-grammatical shorthand that read as gibberish to the humans observing.

The researchers shut the bots down. Fortunately, this caused no harm to Facebook or its billions of users... well, not yet anyway. When YouTube's algorithm was optimised for user engagement in 2018, it didn't create a new language, but it did change the behaviour of people and the shape of society more broadly.

In an effort to increase the value of YouTube, the AI was programmed to find ways to increase users' watch time. The more people watched, the more ads they would see. More ad views means more ad revenue, so it made perfect sense for the AI to search for ways to keep people engaged. Perhaps unsurprisingly, it found that sensational, controversial and emotive content was the best at keeping people watching. As the AI turned up the extremity dial, people became more fixated.

Quickly, people who were looking for cat videos would receive suggestions to watch more violent or politically charged clips. In Brazil, the algorithm found that extreme nationalist content kept people watching, and so served it up on autoplay to millions. A 2019 New York Times investigation found that YouTube's autoplay recommendations heavily promoted far-right and conspiratorial content. A wave of far-right influencers was transformed from small voices into the dominant faces for users hooked on their sensationalist videos. This not only boosted YouTube's ad revenue, but also played a large role in the rise of Jair Bolsonaro from a marginal figure in Brazilian politics to the presidency. His polarizing presidency was marked by weakened environmental protections, strong nationalist rhetoric and deep political divisions.

Like Mickey, the team at YouTube had asked the servant to complete a simple job, and it completed it diligently. The number of monthly active users on YouTube grew from 1.5 billion in 2016 to 2.5 billion in 2021, while advertising revenue jumped from $6.7 billion to over $28 billion over the same period. The broomstick filled the cauldron, but sadly the unintended consequences were much worse than a mild flood in the Sorcerer's castle. The 2021 Mozilla Foundation report titled YouTube Regrets compiled data from over 37,000 users in 91 countries who shared their regrettable experiences on the world's largest video platform. The three major findings of this research were that:

  1. The content issues reported were varied but consistently disturbing. From political misinformation to sexualised remakes of children’s cartoons, the regrettable moments were full of violence, graphic content, hate speech, scams and other inappropriate content at the fingertips of all users.
  2. The algorithm was pushing the regrettable content. Over 70% of all the reports were about content recommended by YouTube, much of which was unrelated to the videos the user had previously watched. This included several instances where the recommended videos actually breached YouTube’s own Community Guidelines.
  3. It was worse for people in non-English speaking countries. Reports were 60% higher in countries without English as a primary language. Brazil, Germany and France reported particularly high numbers and pandemic-related misinformation was especially prevalent in non-English languages.

You can download the full report here: https://assets.mofoprod.net/network/documents/Mozilla_YouTube_Regrets_Report.pdf

There is no way to know the precise global impact of the algorithm. This was a unique time in our history, with a global pandemic pushing many viewers indoors and onto their screens searching for answers. I can’t imagine it was challenging for the algorithm to find controversial content to share, but how much was its creation and dissemination accelerated by the hunger for advertising dollars? How many people were radicalised by content they didn’t search for? How responsible is the algorithm for the Christchurch Mosque Attack in 2019, in which a gunman killed 51 people and claimed that far-right content on YouTube influenced his actions?

I wonder, if we thought of this YouTube algorithm as a god instead of a computer, would we view it as evil? If the algorithm were instead a team of people, sitting in a room and actively recommending the same videos to users, would we not be outraged at those people and their actions? Wouldn’t we be demanding repercussions and responsibility? And if it were a single person plotting, scheming and calculating that a little collateral damage to the world was worth billions of dollars more, wouldn’t we imagine them as a Bond villain living in a mountain on an island shaped like a skull?

This is where the AI threat becomes challenging. It is not in how we use AI to help us, but in the consequences of poor instructions. Do we understand the problems, implications and complex systems we work in well enough to set the right tasks to be completed? While YouTube claims to have changed elements of its algorithm, there is still criticism and debate about the impact these large platforms have on human behaviour, and about the controls used to program their algorithms to provide greater value and less harm.

In Fantasia, Mickey finds himself unable to stop the water-carrying broomstick, eventually taking an axe to it, only for each of the splintered pieces to come alive, carrying buckets of water and relentlessly continuing the task until our apprentice wizard is drowning in a spinning whirlpool flooding the castle. It isn’t until the Sorcerer himself returns that the spell is broken and Mickey returns to the manual labour of fetching water himself.

I think this wise tale offers ample caution. When we deploy AI to support our efforts, we must not fall asleep at the wheel, assuming it will do tasks the way we expect. Sometimes AI will be a perfect fit and work brilliantly. Other times it will achieve the intended goal, but the collateral damage in doing so may be unexpected and unacceptable. There will also be times when the actions, computations and decisions AI makes are outside our own understanding. But just because we don’t know why it took a certain action, or didn’t anticipate the consequences, doesn’t mean we are not accountable for the outcomes.

Empathy and ethics should compel us to both plan and predict as well as we can, and to monitor and maintain control over the system as it is deployed. Unlike our friend Mickey, there won’t be a Sorcerer who can come and save the day. We will have to live with the outcomes that our AI tools deliver and hope that, unlike the broomstick, we will be able to switch it off when it does go awry.


Empathy Cards

I've spent a lot of time looking at ways to deepen empathy amongst teams and found that conversation is often recommended, but hard to get started. We seem so used to asking superficial questions of each other, or quickly steering conversations towards people, news and things unrelated to those in the conversation.

If you want to understand someone, you need to listen to them talk about themselves. If you want others to understand you, you must talk about YOU. That is why I developed my packs of Empathy Cards. Each pack contains 20 unique questions designed to allow real connection.

These packs come in a neat box set and can be shared at conference tables or spread across a venue to invite participants to reach across the barriers of awkward silence and turn strangers into curious colleagues. So, let's try one out right now...

I'd love your answers in the comments below.


Final Call for Masterclass in February

Sadly, we are only running the Giving Hard Feedback Masterclass this month (due to low numbers, sorry). It is a wonderful session though, come and join in!


Thanks again friends, and if you are feeling low, reach out. I'm always willing to help,

With empathy,

Daniel
