How Do We Use Artificial Intelligence Ethically?

I’m hugely passionate about artificial intelligence (AI), and I’m proud to say that I help companies use AI to do amazing things in the world.

But we must make sure we use AI responsibly, so we can make the world a better place. In this post, I’m going to give you some tips for making sure you apply AI ethically within your organization.

1. Start with education and awareness about AI.

Communicate clearly with people (externally and internally) about what AI can do and its challenges. It is possible to use AI for the wrong reasons, so organizations need to figure out the right purposes for using AI and how to stay within predefined ethical boundaries. Everyone across the organization needs to understand what AI is, how it can be used, and what its ethical challenges are.

2. Be transparent.

This is one of the biggest things I stress with every organization I work with. Every organization needs to be open and honest (both internally and externally) about how they’re using AI.

One of my clients, the Royal Bank of Scotland, wanted to use AI to improve some of the services they provide to their clients. When they began their initiative, they were (and continue to be) transparent and clear with their customers about what data they were collecting, how that data was being used, and what benefits the customers were getting from it.

When I look at the recent Cambridge Analytica scandal, I feel like a big part of the problem was Facebook’s lack of transparency about how they were using AI and how they were collecting and using their customers’ data. A clear AI communication policy could have solved a lot of problems before they even happened.

Customers need to trust the companies they work with – and that requires full transparency about how AI fits into the company’s overall strategy and how it affects customers.

3. Control for bias.

As much as possible, organizations need to make sure the data they're using is not biased.

For instance, several large facial-image datasets used to train AI systems have included far more white faces than non-white faces, so the systems trained on them worked better on white faces than non-white ones.

Creating better data sets and better algorithms is not just an opportunity to use AI ethically – it’s also a way to try to address some racial and gender biases in the world on a larger scale.
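To make this concrete, here is a minimal, illustrative sketch of how a team might audit a dataset’s demographic balance before training. It is only a starting point, and it assumes the pandas library and a hypothetical annotation column such as "skin_tone"; your own data will be labelled differently.

```python
# Illustrative sketch: audit a dataset manifest for demographic balance
# before training. The column name "skin_tone" is a hypothetical placeholder.
import pandas as pd

def audit_representation(metadata: pd.DataFrame, group_column: str) -> pd.Series:
    """Return the share of samples belonging to each demographic group."""
    return metadata[group_column].value_counts(normalize=True)

if __name__ == "__main__":
    # Toy metadata standing in for a real dataset manifest.
    metadata = pd.DataFrame({
        "image_id": range(6),
        "skin_tone": ["light", "light", "light", "light", "light", "dark"],
    })
    shares = audit_representation(metadata, "skin_tone")
    print(shares)  # light ~0.83, dark ~0.17 -> a clear imbalance
    if shares.min() < 0.2:
        print("Warning: at least one group is under-represented; consider rebalancing.")
```

Even a simple audit like this can flag an imbalance early, before it gets baked into a trained model and surfaces as biased behaviour in production.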

4. Make it explainable.

Can your artificial intelligence algorithms be explained?

When we use modern AI tools like deep learning, they can be “black boxes” where humans don’t really understand the decision-making processes within their algorithms. Companies feed them data, the AIs learn from that data, and then they make a decision.

But if you use deep learning algorithms to determine who should receive healthcare treatment and who shouldn’t, or who should be granted parole and who shouldn’t, these are enormous decisions with huge implications for individual lives.

It is increasingly important for organizations to understand exactly how the AI makes decisions and be able to explain those systems. A lot of work has recently gone into the development of explainable AIs. We now have ways to better explain even the most complicated deep learning systems, so there’s no excuse for having a continued air of confusion or mystery around your algorithms.
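As a simple illustration of what “explainable” can look like in practice (a sketch, not a prescribed method), the snippet below uses scikit-learn’s permutation importance to show which input features a trained model actually relies on; the dataset and model here are stand-ins for your own.

```python
# Illustrative sketch: explain which features drive a model's decisions
# using permutation feature importance from scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data and model; replace with your own pipeline.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops:
# features with large drops are the ones the model genuinely depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

For genuinely “black box” deep learning models, dedicated explainability tools such as SHAP or LIME follow the same basic idea: quantify how much each input influences a prediction so the decision can be explained to the people it affects.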

5. Make it inclusive.

At the moment, the people working on AI are overwhelmingly white and male. We need to make sure the people building the AI systems of the future are as diverse as our world. There has been some progress in bringing in more women and people of color (POC) to make sure the AI we build truly represents our society as a whole, but that progress has to go much further.

6. Follow the rules.

Of course, when it comes to the use of AI, we must adhere to regulation.

We are seeing increasing regulation of AI in Europe and in parts of the US. However, there are still many unregulated areas that rely on self-regulation by organizations. Companies like Google and Microsoft are focusing on using AI for good, and Google has its own self-defined AI principles.

When I work with organizations, we often put together an ethics council for AI that acts as the North Star for AI ethics concerns for that company. Whenever an organization identifies a use case for AI, the ethics council evaluates it for ethical concerns.

The Organization for Economic Co-operation and Development (OECD) was founded in 1961 to stimulate economic progress, and it includes 37 member countries. The organization created the OECD AI Principles, which are a great starting point for thinking about how your organization can use AI in ways that benefit people and the planet.

Under these principles, AI should be designed in a way that respects laws, human rights, democratic values, and diversity. It must function in a robust, secure, and safe way, with risks continuously assessed and managed. And the organizations developing AI should be held accountable for the proper functioning of their systems in line with these principles.

The 17 Sustainable Development Goals of the United Nations can also be a great resource for you as you’re establishing your AI use cases.

If the way you're using AI aligns with OECD principles and the UN Sustainable Development Goals, you're probably well on your way to ensuring that you're using AI ethically.


For more information on the ethical and responsible use of AI, check out my YouTube channel and my website, where you can find hundreds of videos and articles on all of these topics.


For more on the topic of artificial intelligence, have a look at my book ‘The Intelligence Revolution: Transforming Your Business With AI’.

Thank you for reading my post. Here at LinkedIn and at Forbes I regularly write about management and technology trends. To read my future posts simply join my network here or click 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Slideshare or YouTube.

About Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the field of business and technology. He is the author of 18 best-selling books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No. 1 influencer in the UK.
