AI Ethics: One FAQ, Two Concerns and a Hope

In the last two weeks, I was honoured to be invited to sit on two panels on the topic of AI ethics and responsibility, one as part of a training course within HSBC, and one as part of the Google Next 2019 conference.

I was asked a similar question in both panels, which went something like this: how does the combination of my academic background in ethics and my professional background in enterprise technology shape my perspective on AI ethics?

I’ve been asked this question often enough for it to count as an FAQ, so I thought it might be helpful to share my answer here.

Before I share it, though, I have to give one big caveat. Studying philosophy and attempting to keep up with the latest developments in technology have taught me humility and shown me the depths of my ignorance, so I am aware that any thoughts I share here are provisional, personal and have a strong likelihood of being wrong.

Putting that caveat to one side, my answer to the FAQ is that my combined background in enterprise technology and ethics gives me two causes for concern and one cause for hope.

The first concern arises from the stark difference between the pace of change in the fields of ethics and technology. In technology, whatever metric you choose to represent technological advance tends to assume the shape of an exponential curve: a shape which reminds us that, however fast things seem to be changing today, they will be changing faster tomorrow. In philosophy, by contrast, we are still having active debates with thinkers who have been dead for over two millennia.

Now, I don’t believe (although some philosophers would disagree) that we need to have fully figured out formal ethical theory before we can develop an ethical response to new technology. However, the difference in pace between the two fields illustrates that it takes time for us to figure out what we think about new technologies and, increasingly, by the time we have figured it out, technology will have moved on again. We are struggling to keep up.

(For a wonderful exploration of how we may need to change to address this concern, try reading Shannon Vallor’s book, Technology and the Virtues.)

My second concern is that even when we do try to respond to new ethical problems, we might not bring the right tools to bear.

In my professional life I am a technology architect, a business person and, at heart, a software engineer. When I address problems, I apply a set of mental tools and attitudes which I believe are common to most people who design, build and run the technology which now shapes the world. I apply a set of heuristics and patterns, rules and principles. I try things which have worked for me before. Most importantly, I try to optimise the achievement of a particular set of outcomes, and I approach problems with the view that it is possible for me to solve them once and for all, even if I don’t find the perfect solution on this occasion.

I don’t believe that this is the right way to address ethical problems. Many ethical problems, especially if they are new and complicated, are not amenable to the application of simple rules. We can go badly wrong if we approach ethical problems with the goal of optimising for particular outcomes. And we will find it frustrating if we believe that we can solve ethical problems once and for all.

Yet we seem to persist in treating ethical problems as if they were engineering problems.

We can illustrate this by considering a thought experiment known as the trolley problem. You may have come across this problem already, as it has been much discussed recently. In summary, in the thought experiment, you are asked to imagine that there is a runaway trolley car which, if left unchecked, will collide with and kill five people. Fortunately, you are standing by a switch which you can throw to divert the trolley car onto another track. Unfortunately, there is one person standing on that track who will definitely die if you throw the switch.

What do you do?

It is very tempting to approach this as an engineering problem, and to optimise along the obvious dimension: the number of human lives. Five is a greater number than one, so we should throw the switch. Problem solved, and we can now go on to solve all sorts of other, similar problems. Perhaps we even build these rules into autonomous cars.
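To make that mindset concrete, here is a minimal, deliberately naive sketch in Python of the optimisation just described. The function name, parameters and numbers are my own illustration, not anything from a real system: encode a single metric, minimise it, and declare the problem solved.

```python
# A deliberately naive, hypothetical sketch of the "engineering mindset"
# applied to the trolley problem: reduce the situation to one metric
# (deaths) and pick whichever action minimises it.

def choose_action(deaths_if_no_action: int, deaths_if_switch: int) -> str:
    """Return the action that minimises the single metric being optimised."""
    # Optimising only for the count of lives lost: 1 < 5, so throw the switch.
    if deaths_if_switch < deaths_if_no_action:
        return "throw the switch"
    return "do nothing"

print(choose_action(deaths_if_no_action=5, deaths_if_switch=1))
# -> throw the switch
```

What this sketch cannot represent is precisely what the next paragraph argues matters: the moral weight of actively choosing to kill, and every ethical dimension of the situation that a single optimisation target leaves out.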

But this is to think about the problem in the wrong way. The point of the trolley problem, as originally framed by Philippa Foot in 1967, is not to find the right answer: it is not to find a way to solve the problem. It is to show that, if all we think about is optimisation, we miss some of the most important ethical dimensions of a situation. We may conclude (as many people do) that the right thing for us to do is to throw the switch. But we are unlikely to conclude that this is so categorically right and simple that it is not ethically difficult, or that the choice to kill another human being does not carry an awful ethical weight. The point of the trolley problem is not to find a solution but to remind us, as is so often the case in philosophy, that things are more complicated than that.

So, we have two concerns: that technology changes too fast for us to figure out an ethical response; and that the mental tools which those of us who design, build and run new technology use to solve problems are not good tools for solving ethical problems.

What’s the cause for hope?

Hope lies in the very prompt for this blog post. The question I have attempted to address counts as an FAQ precisely because it is asked frequently, and it is only one of many questions being asked frequently about AI ethics. Both of the panels I sat on in the last two weeks had full, active, engaged audiences. Technology professionals and business people evidently recognise that this is a field of enquiry with which they need to engage, and they are engaging enthusiastically.

My final analogy is with the concept of unconscious bias: the idea that, no matter how hard we try to avoid bias, we have unconscious biases which will influence our decision making if we let them. There are plenty of training courses on unconscious bias available, but they do not attempt to eliminate unconscious bias: that may turn out to be impossible. Instead, they teach us to recognise when we are operating in a context where unconscious bias may influence us, to be aware of it, and to act to counter it. The current debate around AI ethics is similar. It is unlikely that we will solve all the problems of AI ethics in the near future, and many of the problems of AI ethics may not be the type of problem to which there is a solution. But the act of engaging with the topic reminds us that we are operating in an ethical context. Even if we are using engineering tools to solve engineering problems, if we are building technology which affects people's lives, we are operating in an ethical context, and we owe it to ourselves and others to be aware of this context, and to ask and reflect on difficult questions.

Andy Clarke

Helping companies flourish | Business acceleration through alignment | Growth | Transformation | Import | Export | Go to market

5 years

I attended a similar talk last week entitled "The only way is Ethics" hosted by Edit, with a great talk given by Steve Fuller FRSA of The House, both based in Bath. And others I hasten to add, notably @Ed Grattan of @Triodos Bank. It's good to see the ethics movement gathering pace. #Data #Ethics #Creative #Tech #Agencies #Independent #Network #Purpose

Andy Hawkins

C-3PO (Chief People, Planet, and Purpose Officer) at Business On Purpose and B Leader for B Corp - Improving Social & Environmental Impact through Better Business

5 years

Thoughtful insights - it's good to engage - there may not be a quick end point answer or solution but we will all hopefully learn as we journey.

Santhosh Nair

Vice President & Global Head - FS Consulting, Cloud Services and Innovation & IP

5 years

Insightful post, David

Merlyn Mathew

Chief Operating Officer

5 years

Fascinating take on the trolley problem, David Knott.

Brilla Q.

Lead Architect at HSBC Software Development Limited.

5 years

Couldn't agree more. In my personal experience, AI works more like this: I have enough practice with BAU tasks to interpret them into simple guidance for automation. Then I have time to learn new things and to raise questions to others whose BAU scope they fall into. I gain knowledge and best practices through learning and asking questions, as well as connecting with external people and expanding my career. When I find something really interesting I may go deeper and become an SME.
