How will we translate shared values into operational guidelines for Ethical AI?
Image: DALL-E 2. Prompt: Positive image of abundance / Sci-Fi / AI / Garden / Drawing

In an interview with Kevin Macnish, we deep-dive into the challenging issue of operationalising our values, to create safe and positive applications of AI technologies.

Kevin Macnish is an international speaker on Digital Ethics. His work focuses on a human-centred approach to understanding ethical concerns and mitigating them at a cultural and societal level, while seeking to develop ethical opportunities through technology. Kevin leads Sopra Steria's Ethics and Sustainability practice.

Formerly an Analyst and Manager at GCHQ and the US Department of Defense, Kevin was a Professor of Ethics in the UK and the Netherlands, where he reviewed over 100 projects for regulatory and ethical compliance on behalf of the European Commission. He is a major contributor to the EU's international SHERPA project on the ethics of AI, and has published extensively on digital ethics, including three books and a multitude of academic and public articles on the subject.

Kevin, thanks so much for taking the time to speak to me today. Can you tell me, what is top-of-mind for you in relation to the broad and rather exciting field of Ethics and AI?

Lately, I've been interested in how we might 'operationalise' ethical AI.

Ethics and AI have been discussed to some degree over the past 15 to 20 years; however, in the last eight years or so the technology has really taken off. Now we're seeing an intensification of discussion and debate, as well as a multitude of new PhD and Master's programmes.

I think from our early work we have a good understanding (at least from a liberal democratic perspective) of our common values. We can agree, for example, that privacy, impact on society, transparency and human agency are all very important issues.

The interesting challenge lies in getting to the next stage: operationalising our values. When we want to make sure AI will operate for good rather than causing harm, what guidelines and regulatory constraints will we need? This is not an easy question to answer.

We can expect at least two important pieces of legislation to come into play over the coming months. The AI Act is likely to be signed later this year or early in 2024, along with the AI Liability Directive. Both pieces of legislation will take us some way towards operationalising our concerns and wishes around ethical AI.

The AI Act is going to make us think much more carefully about the technologies we make and use because it will force us to carry out impact assessments and risk analyses before putting machine-learning or other algorithmic systems to use.

Separately, the AI Liability Directive focuses on legal responsibility in the event that harm is caused by AI. Under the Directive, an individual or group who feel they have been harmed can make a claim, and the onus will be on the maker of the AI tool to prove that no harm has been done. This is very significant because it would otherwise be much more difficult (and costly) for an individual to prove they had actually been harmed by the AI in the first place.

Both pieces of legislation represent a tremendous leap forward and will significantly shape the nature of innovation and experimentation in the AI space. They will also contribute to a common understanding of what responsible AI looks like in a wider societal context.

Can you go a little deeper into how the AI Liability Directive will work? If an AI tool has been used to harm me in some way, can I then make a case against the producer?

If a person feels they have been harmed by an AI tool, they will be able to make a claim, and the onus will be on the maker to prove that harm has not been caused. AI harm in general will be managed in a similar way to GDPR, where each country will have a dedicated office whose responsibility it will be to handle such claims.

In terms of makers' liability, they will need to prove they have not caused any harm. Alternatively, they may prove that their tool has been misused by a third party, in which case the third party (and not the maker) will have to prove that no harm was caused.

The maker may still be required to prove they had carried out a suitable risk assessment, and they would have to show they had taken reasonable measures to safeguard against harmful misuse. If they have not done so, the maker may still be found negligent under the AI Liability Directive.

Take, for example, an AI surveillance tool that follows individuals around a city by tapping into a multitude of data sets, sensors and cameras. The tool might have been developed for use by law enforcement or the intelligence services. However, the same tool in the wrong hands could be used for malicious stalking. What is interesting in such a case is that there may be ethical grounds to develop a tool, while the unintended outcomes can be extremely harmful or unethical. The risk of misuse is definitely something that should be picked up by the maker during the design phase. The onus is then on the maker to disclose the risk from the outset, and to create reasonable barriers to prevent use by persons outside of law enforcement or the intelligence services.

As AI begins to ramp up within advertising, what are your views on ethical use?

Interesting question. If we're talking about creating a tool to make people buy things, we might start by asking whether it is ethical to sell that product in the first place, because even without AI we've had advertising for a long time.

So I wouldn't say the AI tool is necessarily 'unethical' unless the end to which it is applied is unethical. In many countries, it is no longer acceptable to advertise tobacco or alcohol in certain contexts. In such situations, it would obviously be wrong to use AI to sell them, because advertising these products has been deemed unethical. But remember, it took us a long time to agree on these advertising bans.

Do you think we can agree on the concept of ethical business?

Well, firstly, I think we agree on a lot more than we disagree on, if that makes sense. And I think the most interesting areas of ethics and debate are precisely where we disagree. We no longer debate whether murder, theft, incest or rape are right or wrong. But issues such as gender fluidity and identity, the climate crisis, drilling for new oil, and the mass consumption of disposable goods are still hot topics, because historically speaking they are rather new to us (even though they may have existed longer than we first imagine). I think it is just an inevitable aspect of society that we need time to talk, reflect and debate before reaching common ground, and the same applies to the ethics of business.

Unfortunately, at a time when we need healthy debate on difficult themes, and when our technology could facilitate such conversations, we are seeing an alarming shift towards polarisation and extremism. And when we can't have debate, it becomes increasingly difficult to reach consensus. I think we need to resist polarisation and to be more comfortable with disagreement and uncertainty for the sake of healthy and explorative discourse.

Recently, I've been wondering about the consequences of AI being used to win over human attention, which is perhaps a similar question to the one on advertising. But what worries me is the possibility that machines will do it far better than we might have imagined possible. Is that something that worries you?

Well, yes. When we apply learning machines to 'solve' human attention, we are likely to boost our success rates, and in ways we hadn't thought of. That reminds me of a cautionary tale proposed by the philosopher Nick Bostrom, about a machine programmed to produce paperclips that ends up doing it disastrously better than we could have imagined, turning pretty much everything into a paperclip (and eventually killing the humans who try to stop it from making paperclips). When we allow machines to 'solve' human attention, we ought to consider the risk of them doing far too good a job of it.

Politically, the application of machine learning and algorithms also comes with serious risks, and it is something I have spent quite a bit of time thinking about. Within the political space, we need to define a tipping point at which the use of such tools becomes inappropriate, unethical or illegal, and this is not easy. In fact, this challenge inspired me to publish 'Big Data and Democracy', precisely because we wanted to figure out what was wrong with using big data in political campaigns, and exactly when it becomes unacceptable.

I had noticed during the Cambridge Analytica scandal in 2018 that we were rarely discussing the Obama campaign six years earlier, which had used huge quantities of data to inform its political messaging. I remembered that during the Obama campaign people were saying how wonderful it was that we could use big data to good ends. So here was an element of apparent double standards I wanted to explore. What made it okay for Obama's team to use data in one way, while it seemed wrong for Cambridge Analytica to assist the Trump campaign in the way they did?

There is too much detail to go into here and now, but we found that the main difference between the two campaigns was that the Obama campaign used big data to learn about important issues, which informed its wider public message, whereas the Trump campaign used algorithms to directly target individuals and minority groups with messages that would never be visible to the wider public. For example, they sent images of a big wall and a bible to convince undecided Christians that an anti-immigration policy was a Christian stance. The sinister nature of this approach was that only some targeted Christians were seeing such imagery.

But where can we draw a line for all future campaigns? This requires consensus and a clear definition of our ethical standards.

To bring this conversation back to the practical day-to-day, can you give me a little peek into your ongoing work on Ethical Impact Analysis? What does due diligence look like in the world of AI?

That's a good question, and I think it's something we're probably still working out. To meet the new legislation, we have been developing a standardised methodology to analyse potential impacts and risks to the best of our ability. There's something called the Collingridge Dilemma, which points to a challenge with some technologies: their impacts cannot easily be predicted until the technology is released into society, and once it is released, it might be too late to mitigate the risks.

I think that's a very interesting challenge, and so there's a lot of ongoing work into things like anticipatory technology ethics, responsible innovation and value-centred design.

Lately our team has been working with the Scottish Government, who are looking at establishing a new Scottish National Digital Guardian to protect people's rights and interests in relation to technology. As consultants, we're seeing some companies ask us to help them prepare for future compliance, while in other cases they are driven by their own sense of morality and responsibility to their customers.

When the regulations come into play, I imagine there will be those of us who never quite understand what they mean for us on a practical level. We all notice, for example, cookie warnings on websites, but not all of us know exactly what we're saying 'yes' to when we click the OK button. What do you think that will look like when it comes to AI?

There will definitely be quite a high threshold for consumer understanding of what it all means. On some levels it can be easy enough: for example, companies might have to declare on their website that they are using algorithms to pre-qualify job applicants. This gives the job applicant an informed choice to either accept the algorithm or to seek employment elsewhere.

It then becomes problematic when our choices are eroded by mass adoption. While only a few are using such tools, we can read a warning and make a choice; but if all job applications are being pre-vetted by algorithms, our choice is eroded and the warning becomes nearly meaningless.

Having our CVs vetted by an algorithm is relatively easy for most of us to understand. But what do you think of our general ability to understand future disclosures within the AI space?

In terms of our actual understanding of the disclosures, and what they mean: well, that's going to be complicated, because the breadth and variety of algorithms and machine-learning applications will be both staggering and complex.

It's worth considering the analogy of the car industry and its regulation. Currently, I wouldn't understand even half of the regulatory constraints in play when designing and producing my car. But I do place my trust in other experts to design for compliance, so that the cars I drive are safe. In a similar way, many of us will never understand the risks or regulations that lie behind the digital technology we use, and yet it need not be a problem.

I like the analogy of the car, and yet cars have remained quite similar over the past 50 years, whereas the technologies we use are in a constant state of flux.

Yes, indeed, the technology is moving fast, and because we often do not fully understand the risks before the technology is released, we may unfortunately find ourselves in reactive mode, answering problems we hadn't foreseen.

Precisely because technology is being developed at such speed, we need to lift the public conversation to the level of ethical principles.

And I want to be clear on something, because with the analogy of the car I said that we (the public) may not need to understand all of the technology. I do think it is extremely important that people understand the wider ethical issues and principles being debated at the level of government and legislation. To that end, I would hope that our governments are held responsible for communicating clearly about developments within this field. A healthy public debate is to be encouraged; these issues will shape our world in the years to come.

---

Thanks for reading. Please 'like' and share with others! I also welcome your comments, questions and ideas for other interviews. This article is also available on Medium.

To reach out directly, you can contact Kevin Macnish, PhD, CIPP/E.


Arran McLean
Technology Leader, Digital Transformation

Very interesting, and what kinds of ethical dilemmas the use of new technology brings.

Trond Eldristad
Agile Coach / Project Manager / Process Leader / Product Area Leader at Sopra Steria

Very interesting, Gordon. There are some ethical dilemmas we need to consider moving forward.

Tor Gunnar Berland
Manager/Architect, Technology Management at Sopra Steria

Look at Autonomous Weapon Systems: weapon systems that, once activated, can select and engage targets without further intervention by a human operator. If we reflect a bit on the AI Liability Directive and compare it to GDPR, who would be liable if someone got harmed by AI? AI is built on a massive number of different technologies and innovations - are they all to blame? Or is the responsible party the one packaging it all together as a final product? If you break the General Data Protection Regulation (GDPR), you will get a heavy fine for breaking the rules (if you get caught). For Autonomous Weapon Systems, this might be too late. Also, I think in reality we have all already been harmed by AI via social media. Will we see a class action against these companies soon, or alternatively against all the companies behind the technology they are using? A bigger factor here is: if human intervention is required to ensure no one is harmed, will this not stop progress (and will people make better decisions than machines)? Alternatively, with a free-for-all and no liability - look at the movie The Terminator from 1984. Are we doomed regardless? Or might we see AI making better decisions than humans?

I am also fascinated by the idea that technologists will need to define the playbooks for navigating ethical dilemmas like the trolley problem when creating the systems for autonomous cars and the like. Not what I expected when signing up for an education in IT! Do you prioritize the safety of the passengers or the pedestrians? How do you weigh the costs?

Frank Langva
Senior Manager, Sopra Steria

Very interesting perspective on Ethical Impact Analysis, and on scenarios where there may be ethical grounds to develop a tool while unintended outcomes can be extremely harmful or unethical. The risk of unintentional misuse of AI can be overlooked by the maker during the design phase, but how big is the hazard? Any thoughts on this matter, Inga Strümke and Anders H. Lier?
