Ethical AI leadership: Incorporating ethical principles and human values into AI technologies

With AI continuing to expand at an astonishing pace and to reshape almost every aspect of our lives, we all need to consider the long-term implications of these rapid developments for humanity. But while governments, regulators and ethicists continue to grapple with the known and unknown implications of AI, many increasingly believe that business leaders should also play their part in designing AI systems that embed ethical principles and human values.

On 14 October 2024, the World Economic Forum released an article entitled 'Why corporate integrity is key to shaping the use of AI', which asserted that

"businesses have a key role to play by applying a strong code of ethics to ensure there is the necessary level of responsibility and accountability."

Of course, from a commercial perspective, this approach makes sense. Without a strong set of ethical AI principles and effective guardrails, concerns about AI erode consumer trust. On the flip side, organisations that can demonstrate ethical AI are more likely to be commercially successful.

Gastón Tourn, Chief Growth Officer at Oddbox, highlighted the link between corporate principles, trust and commercial success at our recent Enabling Ethical AI Forum. He said,

“Every business relationship is a trust-based relationship. Consumers buy your product or service because they trust you. The question you need to ask yourself as a Chief Marketing Officer or CEO is: do you trust AI to handle those relationships? If you don’t, then don’t do it, because you are going to break the trust of your consumers, and once you break the trust of your consumers, the next thing, your business is gone.”

So what are the most common concerns about AI ethics? Feedback from Chief Disruptor members at our events points to four main areas that organisations need to factor into their AI strategies to engender trust: transparency as to how AI is built and operated; the potential for bias in data leading to discriminatory outcomes; accountability and redress for mistakes; and privacy concerns about how data is captured and stored.

As we head towards 2025, this important conversation is gaining real momentum. We wanted to address this critical issue in our latest ‘Insights with Impact’ poll, and so in October we asked members, “What is the most important principle for ethical AI?”

The leading response to our poll was ‘transparency’. Transparency about how AI is built, where the data comes from and how the AI makes decisions is vital for fostering accountability and trust. As AI algorithms become increasingly sophisticated and autonomous, their decision-making can become opaque, making it difficult for individuals to understand how these systems are shaping their lives. Transparency helps to ensure that unbiased data is used to build the system and that the results will be fair. Ange Johnson De Wet, Director and Head of Engineering at NatWest Group and another of the panellists at our recent Enabling Ethical AI Forum, raised this issue in conversation with Chief Disruptor Founder Emma Taylor. She said,

“The problem with bringing magic into your firm is that it isn’t magic. It’s usually a SaaS service solution, and those solutions are often not necessarily known. Suddenly you have a SaaS estate in your company that you are unaware of… The reason why this is relevant to AI is that most of these SaaS instances have AI and/or ML within them, and it’s all hidden from the organisation and from the person who purchased it, so it’s very difficult to see if it’s transparent or fair or accommodating for bias.”

This takes us neatly to the second-highest result in our poll, ‘removing bias’.

AI systems are trained on massive amounts of data, but as we know only too well, they are only ever as good as the data they are trained on. That means AI can reflect, or even amplify, the biases in its training data, leading to discriminatory outcomes that disproportionately affect already marginalised groups. Leaders need to prioritise tackling bias, not only for reasons of fairness but also to generate better-quality results. But just as in the real world, eliminating bias in AI is incredibly difficult.

Gastón Tourn, Chief Growth Officer at Oddbox, shared a fascinating example of the challenge of data bias from his time at the dating app Bumble. He told us,

“We know from data that people are more likely to match with someone from their own ethnicity. If you leave the AI and ML algorithms to do the optimisation of what kind of profiles to present to users, they will reinforce that bias. Humans are already biased; it’s not that AI is bringing a new bias. What the AI is doing is just amplifying and giving more scale to that bias, and it definitely reinforces that natural bias of just selecting people from your own ethnicity, so those were very ethical decisions for our company… Should we let the data run that algorithm and reinforce that selection bias, or impose our own algorithm?”
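The amplification Gastón describes can be made concrete with a toy simulation. This is purely illustrative (the preference rate, round counts and re-weighting rule are all hypothetical, not Bumble’s actual algorithm): a recommender that naively re-weights exposure towards whichever profile category produced more matches will push a modest 60% same-group preference towards near-total same-group exposure.

```python
import random

random.seed(42)

# Hypothetical numbers for illustration only.
SAME_GROUP_PREF = 0.6   # users match same-group profiles 60% of the time
ROUNDS = 20

# The recommender's exposure policy: the share of same-group profiles
# it shows, starting from an unbiased 50/50 split.
show_same = 0.5

for _ in range(ROUNDS):
    same_matches = cross_matches = 0
    for _ in range(10_000):
        if random.random() < show_same:
            # Showed a same-group profile; it matches with probability 0.6.
            if random.random() < SAME_GROUP_PREF:
                same_matches += 1
        else:
            # Showed a cross-group profile; it matches with probability 0.4.
            if random.random() < 1 - SAME_GROUP_PREF:
                cross_matches += 1
    # Naive optimisation: re-weight exposure towards whichever category
    # produced more matches this round. This is the feedback loop.
    show_same = same_matches / (same_matches + cross_matches)

print(f"Final share of same-group recommendations: {show_same:.2f}")
```

After twenty rounds of this feedback loop, the exposure share climbs well above the users’ underlying 60% preference towards almost exclusively same-group recommendations: the system has not invented the bias, but it has scaled it, which is exactly the ethical decision point the quote describes.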

The third most popular response to our poll was ‘accountability’, which goes hand in hand with transparency and explainability. As AI makes more decisions that impact our lives, organisations will need clear mechanisms for assigning responsibility and providing redress when things go wrong.

Moving on to ‘maintaining data privacy’, which received the fewest responses in our poll but is undoubtedly a cornerstone of customer trust. As AI usage expands, concerns arise about how data is collected, stored and used, and organisations will need robust safeguards against data breaches and unauthorised access to sensitive information. And as AI becomes more sophisticated and far-reaching in its data capture and analysis, the lines between security and surveillance are becoming increasingly blurred. From facial recognition to smart home devices, the potential for invasion of privacy is a growing concern for many.

Conclusion

Most people would agree that, when used responsibly, AI can be a force for good, with the potential to benefit society in many ways: improving patient outcomes, creating exciting new opportunities in the workplace and accelerating progress on sustainability goals. Although the challenges outlined in this article are significant, organisations must tackle them proactively, establishing clear principles, guidelines and processes. If you are leading an AI initiative in your organisation, the four principles in our poll are a great place to start, forming the basis of a framework that sets out your organisation’s moral compass and encourages continuous monitoring and scrutiny of those principles. Human intervention is key to the practice of ethical AI. Professor Alan Brown of the University of Exeter and the Defence Data Research Centre raised this issue when talking about the importance of ‘human in the loop’ in scenarios such as recruitment at our recent Chief Disruptor Forum. He said,

“We have got ourselves into really tricky waters right now as we decide where the automation fits, where it supports human decision making and even how we present that information so that human decision making isn’t overly influenced by what we do.”

So, to finish this month’s Insights with Impact article, I wanted to share some food for thought from Ange Johnson De Wet, Director and Head of Engineering at NatWest Group, from our recent Enabling Ethical AI Forum, that really resonated.

“In my role, I have always been thinking, ‘How can I make this more effective, more efficient, and influence everyone to my way of thinking so I can get what I want done?’ AI undoubtedly helps us to do that, but I think we need to pause and think, ‘Actually, are we all better off if everything is faster, more effective and I’m able to influence large groups of people?’ Is that best ultimately for me, is it best for our country and is it best for our society?”


