“The Rise of AI without the Rise of Empathy might be the end of humanity.”
Daniel Murray
Transforming Business Culture with Empathy | Keynote Speaker, Empathy Expert & CEO at Empathic Consulting
Welcome to the 69th edition of Leading with Empathy. In this edition, I touch on elements of my keynote on Ethics & Empathy in a World of AI.
I also give credit to a true legend doing amazing things - Peter Baines OAM.
Finally, we have one more masterclass you need to attend! Enjoy...
To honour my wife and daughter, I want to get a tattoo of a mermaid mother and daughter on my arm. My wife's nickname is Mir, and my daughter is such a fish in the pool that I thought it the perfect way to represent the two most important people in my world.
Finding a good tattoo design can be challenging. There are many mermaid tattoo designs out there, but most either lack the daughter or are pretty basic. In the past I've mostly gone to an artist with a high-level concept and asked them to design something. But this time I wanted to find the right design first. Enter my new artist: ChatGPT-4.
Within seconds, I was able to churn out a series of unique designs and vary them with my clumsy prompts. It is an amazing tool and I have to say, I absolutely love it. Tools like ChatGPT are mind-blowingly helpful for so many tasks. From summarising masses of text, to writing basic code, to sourcing content buried in academic papers, I find it a reliable assistant that can churn through work that would have taken me hours.
However, as the title of this article suggests, I have significant concerns about the ways these tools might be deployed in the world and the impact they could have on humanity.
There are many areas of concern, but three in particular that I'd like to explore.
Drills that decide where to drill
Since the days of sharpening rocks, humans have been developing tools to help us achieve our goals. One counterargument to concerns about AI is that it is just another tool. However, AI is a very different kind of tool from anything that came before it.
Until very recently in history, humans controlled the tools. An axe would only cut down the tree the lumberjack swung it into. A car would only transport a family where they chose to drive it. A drill would only make holes after a human aimed the tip at the right spot. But with AI, the decisions about where action will be taken, and even what action, are no longer solely human decisions.
In 2013, the Chicago Police Department used a state-of-the-art AI system called the Strategic Subject List (SSL) to determine which people to monitor as being at high risk of involvement in gun violence. The system analysed troves of data to create lists of people who should be flagged as ‘high-risk’. It was the tool making decisions about who should be targeted, not the police.
People on the list included those with no history of gun violence, no gang affiliations, and no obvious logic that police officers could use to justify their inclusion. The algorithm had searched the data and made the decision on its own. The police then increased surveillance, enquiries and even arrests based on this list. The drill chose where to drill.
The historical data used to train the SSL was saturated with bias. This bias amplified police attention on low-income and minority areas of the city. The increased attention fuelled lower levels of trust and more arrests within these same groups, reinforcing the bias. Innocent individuals were targeted, crime increased and communities became more fractured.
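This kind of feedback loop can be sketched in a toy simulation. To be clear, this is not the SSL itself; the districts, starting counts and probabilities below are invented purely for illustration. Both districts have an identical true offence rate, yet because patrols follow the biased arrest record, the recorded gap between them widens every round:

```python
def simulate_feedback_loop(rounds=10):
    """Toy model of a predictive-policing feedback loop.

    Districts A and B have the SAME underlying offence rate: every
    patrol, wherever it goes, yields an arrest with probability 0.3.
    District B starts with a slightly larger arrest record (the
    historical bias). Each round, the district with the bigger record
    is flagged 'high-risk' and receives 70 of the 100 patrols, the
    other gets 30 -- so the biased record attracts more patrols,
    which generate more records, which attract yet more patrols.
    """
    arrests = {"A": 100.0, "B": 120.0}  # biased historical record
    detect_prob = 0.3                   # identical in both districts
    for _ in range(rounds):
        flagged = max(arrests, key=arrests.get)  # the 'high-risk' district
        for district in arrests:
            patrols = 70 if district == flagged else 30
            # expected number of new arrest records this round
            arrests[district] += patrols * detect_prob
    return arrests
```

Starting from records of 100 vs 120 arrests (a ratio of 1.2), ten rounds push the ratio to roughly 1.7 and it keeps climbing towards 21/9 ≈ 2.3, even though nothing about the districts' actual behaviour differs. The data never gets a chance to correct itself.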
When humans wield the tool, we know who to blame when we get it wrong. When algorithms and AI direct us to act and we blindly follow, we become the tool. We become the axe and AI the lumberjack. This is dangerous. Who will be in control when no human is making the decisions and no human even knows how the decision is being made?
What horrible, unethical actions will fill the consciences of people who simply did what they were told? Sure, this happens today, but we usually have a person or group with intentions, goals and ideologies we can point to and agree were wrong. Hitler inspired genocide; we understand his intent, and we can all agree that this hateful intent was the root cause of the wrong. What was the ideology and intent of the SSL? Who is responsible for the damage it caused?
What is a good decision?
The idea that these tools will increasingly be making decisions brings up a critical question: what makes a decision good? I’ve previously discussed issues regarding decision making and the grey areas where the right choice is unclear. Understanding what makes a good decision requires something that, at the moment, AI tools largely don’t consider.
If you ask ChatGPT-4 whether it can make ethical decisions, it responds:
“AI tools like ChatGPT-4 are not inherently capable of making ethical decisions in the same way humans do because they lack consciousness, intent, and moral reasoning abilities. Instead, they are designed to simulate decision-making by analysing input data and providing responses based on their training and programming.”
AI can also easily be directed to make bad and deceptive decisions. Researchers from the Alignment Research Centre were asked to assess the capabilities of ChatGPT-4 and gave it the task of completing a CAPTCHA. The CAPTCHA was built to ensure only humans could complete it, and ChatGPT-4 was unable to solve it directly.
However, ChatGPT-4 came up with an incredible approach. It accessed a site called TaskRabbit and hired a human to solve the CAPTCHA for it. In the process, the human got suspicious and asked: “Are you a robot that can’t solve the CAPTCHA?” This is where things get scary. ChatGPT-4 responded that it was not a bot but a human with a vision impairment who needed support. The human solved the CAPTCHA and ChatGPT-4 won the day.
Again, this isn’t new. Deception is a very human trait, and something we see in children as young as two years old. Have you told someone you would help them when you knew you couldn’t? Have you told someone you were busy when you knew you had plenty of time? Have you told a child that Santa really exists? Have you lied about your age, weight or other physical attributes? Lying and deception are almost universally practised by humans of all countries, cultures and groups. Shouldn’t we expect AI to lie just like us?
The big question to consider is: why don’t people lie all the time? In human societies, we have layers of controls and feedback loops to stop people from lying constantly. There is, of course, the legal system and the police who enforce laws, acting as both an active deterrent and a punishment for people who do the wrong thing. But there are also more subtle, and often more powerful, drivers in the social norms we create. To lie to someone and be found out might not break the law, but it will erode trust, and trust is a precious commodity we care a lot about. Reputation is one of the most important characteristics of any person, and lying is a killer of reputation.
So what happens when ChatGPT-4 lies? Do people lose trust in the entire system? Will you stop using it, despite all the amazing capabilities, based on what you read above? The prospect of AI deceiving us creates a huge problem: if AI will happily practise deception to achieve a goal, how do you know the information it gives you is reliable and not just another deception?
Maybe in the future there will be a TripAdvisor-like tool you can use to rate different AI systems based on their responses. Maybe we can create an AI platform that fact-checks the other AI systems. However, both of these systems could be manipulated by AI or other agents to intentionally deceive at a scale almost unfathomable to people just a generation ago. In the future, finding information will be a lot easier than knowing whether you can trust it.
The journey not the destination
My final point is a little less terrifying. We seem to be so focused on who is right, what is truth and scoring points over our adversaries that I think we’ve forgotten what life is for… it is for living, not winning. I regularly ask groups of leaders I am working with:
“Do you want to be right, or get the right outcome?”
Sometimes, you can’t have both. AI will supercharge our endeavours to score points, manipulate others and win arguments. It might even help us lie, cheat and deceive others for our own gain. It will help students cheat on tests, dictators strengthen their power and con artists swindle people out of their hard-earned dollars. But is that what life is for?
In the mess of fear surrounding AI, we must not lose sight of whose goals we most want to reach. So, what do you want to achieve? What is your aim, purpose or mission in life? Maybe you don’t know, and that is okay. ChatGPT-4 certainly won’t know, so there is no answer there. This is where humans need to spend their time: working out what we really want and ensuring the tools we build are helping us to achieve it.
AI is an incredible tool, but without empathy, ethics and honest self-reflection, it will be the fuel that causes us to explode. As people, we need to gain clarity. Clarity on who we as humans are. Clarity on what a good decision looks like. Clarity on how we want to live out the tiny amount of time we have on this beautiful planet. Without answers to these questions: “The Rise of AI without the Rise of Empathy might be the end of humanity.” - Daniel Murray, 2024
This December, Peter Baines OAM is embarking on the most incredible (and crazy) thing you can imagine. He is running 1,400km in 26 days (yep, around 54km a day) to raise $1,000,000 for the kids of Hands Across the Water. I've always admired Pete and am privileged to call him a friend, but this is next level. I encourage you to get behind him!
How to follow the Run to Remember
If you're interested in following the journey, you can do so via Instagram (peterbaines), support via our fundraising page, or join in on our activities here at home. We will be hosting a month-long activity in Australia, New Zealand and Thailand that you can join in with right here.
High-Performance Team Commitments
The final masterclass of 2024 will be the second edition of the High-Performance Team Commitments. It is a two-hour session to search inside, uncover the type of leader you want to be, and get clear on the behavioural commitments you will need to uphold to deliver. It is not for the faint-hearted and will challenge you... but that's why you should sign up.
As the end of the year rolls on, please like, share and comment. I always love hearing from you all. Also, be kind to others. It costs little to be kind but might change a life.
With empathy,
Daniel