Developing AI with Trust: The AI Ethics Codex

The evolution of AI is often met with worry and skepticism, but also with hope for a better future. The media has often added to the theatrics, making the AI safety debate seem more controversial than it actually is.

Amid the public furor, a number of innovators are seizing the opportunity to advance the cause of AI. AI is, after all, touted as the engine of tomorrow's ideas, taking humanity into exciting new frontiers that, quite simply, would not be reachable without advanced data analytics and machine learning.

Yet AI companies and startups alike should prepare for the worst and must not leave AI unchecked. We'd be among the first to admit that AI, for all the good that it can do, can just as easily wreak havoc without a fail-safe mechanism in place. This is why Bosch has signed up to the High-Level Expert Group on AI (AI HLEG), an expert group appointed by the European Commission to promote ethical AI practices.

This article is authored by Ronald van Loon, sponsored by Bosch, and aims to explore the challenges and possibilities that AI will bring in the future. 

Breaking Through the Barriers of AI Misconception 

One of the most common urban legends surrounding AI is its role in eliminating human jobs, so it is important to separate fiction from reality. One Forrester report predicts that AI will replace 7% of human jobs by 2025: “AI impacts all facets of customer service operations – it relieves agents from repetitive predictable tasks or takes over those tasks completely.”

It is worth noting that Forrester makes the important observation that AI ‘relieves’ agents of repetitive work. In other words, AI-backed systems empower human operators to make informed decisions by saving them the time otherwise spent combing through literature reviews and filtering variants.

In many cases, AI algorithms operate like ‘black boxes’. This means that some deep learning models are so complex that it is unclear how they arrived at a decision. For example, if a predictive algorithm concludes from previous records that a patient will develop a genetic disorder, it is only natural that the patient and doctor would want to know what that prediction is based on.
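To make the black-box problem concrete, the sketch below (my own illustration in Python, not any specific medtech system) trains an opaque model on synthetic data and then probes a single prediction by nudging each input feature and watching how the predicted risk moves. The data, feature indices, and the `sensitivity` helper are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical setup: an opaque risk model trained on synthetic records.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # four anonymous features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # hidden rule to be learned

model = RandomForestClassifier(random_state=0).fit(X, y)

def sensitivity(model, x, eps=0.5):
    """Crude per-prediction probe: nudge each feature by eps and
    measure how far the predicted risk shifts from the baseline."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    shifts = []
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        shifts.append(model.predict_proba(x_pert.reshape(1, -1))[0, 1] - base)
    return base, shifts

risk, shifts = sensitivity(model, X[0])
print(f"predicted risk: {risk:.2f}")
for i, shift in enumerate(shifts):
    print(f"feature {i}: prediction shift {shift:+.2f}")
```

A real clinical explanation would need far richer tools (SHAP values, counterfactuals, domain review), but even this crude probe hints at which inputs a verdict hinges on.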

The opacity concern is especially acute for medtech products. The data itself is generated by humans and often carries various forms of bias, which can create ethical problems. The ideas of ethics, trustworthiness, transparency, and fairness become crucial in sensitive areas like healthcare, transportation, finance, and even law enforcement.

Bosch, in this regard, wants to create AI products that combine the quest for innovation with a sense of responsibility. The company has decided to establish a set of guiding principles and to take a stance in the public debate about AI. “AI will change every aspect of our lives,” says Bosch CEO, Volkmar Denner. “For this reason, such a debate is vital.”

By defining its AI code of ethics, Bosch acts well in advance of binding EU standards. The code is based on the following maxim: humans should retain control over any decision the technology makes.

Bosch wants companies and people alike to be able to trust its AI products, and works to make them safe, robust, and explainable. All products are guided by Bosch’s ‘Invented for Life’ ethos, which aims to improve quality of life, spark innovation, and conserve natural resources. “It was important for the company to set up rules and guidance for how AI should fit in our engineering process and in our paradigm of ‘Invented for Life.’ Not to endanger it, but to support it, and come up with even more interesting products which have adaptability, learnability, and AI embedded in that,” Christoph Peylo, Global Head of the Bosch Center of Artificial Intelligence, explains.

Bosch’s code of ethics is based on three approaches to decision-making in AI-based products (a rough code sketch follows the list):

Human-in-command (HIC): The AI product is used as a tool, and humans decide when and how to use it.

Human-in-the-loop (HITL): Humans can directly change and influence the decisions made by an AI product. 

Human-on-the-loop (HOTL): Humans define certain parameters for the AI during the design process and can review any decision the product has carried out.
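As a thought experiment, the three modes might be encoded roughly as follows. This is a minimal Python sketch of my own, assuming hypothetical `run_ai`, `human_review`, `within_bounds`, and `use_ai` callbacks; it is not Bosch's implementation.

```python
from enum import Enum, auto

class DecisionMode(Enum):
    HIC = auto()    # human-in-command: humans decide when and how to use the AI
    HITL = auto()   # human-in-the-loop: humans can change each AI decision
    HOTL = auto()   # human-on-the-loop: AI acts within design-time parameters

def decide(mode, run_ai, human_review, within_bounds, use_ai):
    """Route a decision according to the chosen oversight mode.
    All four callbacks are hypothetical and supplied by the system."""
    if mode is DecisionMode.HIC:
        # The AI is a tool: a human first decides whether to invoke it at all.
        return human_review(run_ai() if use_ai() else None)
    if mode is DecisionMode.HITL:
        # The AI always runs, but a human may directly change its output.
        return human_review(run_ai())
    if mode is DecisionMode.HOTL:
        # The AI acts on its own inside parameters fixed at design time;
        # out-of-bounds recommendations are escalated for human review.
        recommendation = run_ai()
        if within_bounds(recommendation):
            return recommendation
        return human_review(recommendation)
    raise ValueError(f"unknown mode: {mode}")
```

The point of the sketch is only that the degree of human control becomes an explicit design parameter rather than an afterthought.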

My personal view is that all organizations should establish a similar codex when developing AI solutions that claim to help humans. 

Establishing Trust in AI with Ethical Principles

Unfortunately, when most people think of the words ‘artificial intelligence,’ one of the first things that springs to mind is a sentient machine that can outwit and outgun any human or military on Earth. AI is perceived as a frightening force to be reckoned with. There is an AI fear pendulum: at one extreme, the fear that it will strip us of our livelihoods and jobs; at the other, that it will be the undoing of the human race.

Contrary to what most works of fiction would have you believe, today's AI can only perform narrow, well-defined tasks. It does not have the ability to mimic the human mind, at least not yet. What AI is extremely good at is identifying patterns in enormous data sets and processing far more data than a human being reasonably can.

Peylo succinctly describes AI as an intelligent agent that follows its code according to principles of rationality to arrive at its goals. “This is perhaps what AI is: being able to interact with an environment by means of perception, planning, and acting. This is the basic architecture of an AI system. But there is no moral consideration in that. It’s purely rational, but not ethical, because it’s not trained on that. So values could be at stake. And for society, when people act, there’s a responsibility attached to that. And so we have to find ways for AI, as well.”
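Peylo's perceive-plan-act description can be captured in a few lines. The toy thermostat below is my own hypothetical example; note that nothing in the loop encodes values or ethics, which is exactly his point.

```python
class Thermostat:
    """Toy environment for a rational agent that drives the room
    temperature toward a goal. Purely illustrative."""
    def __init__(self, temp):
        self.temp = temp
    def observe(self):            # perception
        return self.temp
    def apply(self, delta):       # acting
        self.temp += delta

def plan(observation, target):
    """Purely rational planning: step toward the target.
    There is no moral consideration anywhere in this loop."""
    return 1.0 if observation < target else -1.0

def run_agent(env, target, max_steps=100):
    for _ in range(max_steps):
        obs = env.observe()
        if abs(obs - target) < 0.5:   # goal reached
            return obs
        env.apply(plan(obs, target))
    return env.observe()

print(run_agent(Thermostat(temp=15.0), target=21.0))  # -> 21.0
```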

AI is not a human competitor, but a ‘workhorse’ responsible for doing the grunt work while humans put their decision-making skills to good use. Of course, AI companies will have to comply with regulations, such as the EU's GDPR, to stay ethical. There are also ongoing initiatives to establish the EU's own ethical standards for AI, to help regulate risks, build trust, and ground AI in values that protect fundamental human rights. With guidelines like these in mind, all companies should have ‘red lines’ that they must never cross, including never allowing AI to invade the privacy of human lives.

AI and Humans Working Together for a Better World

Humans and AI can work together to make the world a better place. The idea is to augment and restructure machines and business processes to support this partnership. Humans must be able to train machines to perform certain tasks, explain the outcomes of those tasks, and tell whether the results of AI processes are beneficial or counterintuitive. Humans will always have a crucial role in teaching machine learning algorithms how to perform the work they're designed to do: the systems are given huge amounts of training data, for example, to detect diseases, support financial decision-making, and give tailored recommendations on engineering tasks.
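As a minimal, hypothetical illustration of that training step, the sketch below fits a simple classifier on synthetic labeled data; a real disease-detection model would of course demand far more data, validation, and domain oversight.

```python
# Humans supply labeled examples; the system learns a mapping and is
# evaluated on held-out data before anyone relies on it.
# Synthetic data only: a stand-in for real clinical or financial records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```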

Difficulties arise with processes that are more opaque (the ‘black box’ problem mentioned earlier). These require human ‘explainers’ who can account for the behavior of the AI to non-expert users. Explainers will play an especially important role in industries that depend on evidence, such as law or medicine, where practitioners need to understand how the AI arrived at a certain conclusion.

Ethical AI already plays an important role in modern communities and institutions, from online retail to streaming services, and it makes people's work easier so they can focus on more important and complex tasks. In law enforcement, for instance, explainers will be needed to help investigators understand why an autonomous car made the decisions that led to a collision.

AI companies will have to continuously refine and improve AI to ensure they are functioning properly, responsibly, and safely. As Peylo asserts, “If we have intelligent agents which can behave freely in the field, they have to obey some rules to fit in our societal framework.” 


Michiel Croon

Leadership in data-driven work | Inspire | Advise | Implement | Build foundations |

4y

‘This means that some deep learning models are so complex that it is unclear how they arrived at a decision.’ —> Just like most humans make decisions.

Karsten vom Bruch

Together instead of alone: #ZukunftsSchwärmer

4y

Dear Ronald van Loon and Christoph Peylo, I am very pleased to hear that the development of this codex was not a one-man show. We should develop such a codex with the active participation of all stakeholders in society. Where can one find the complete codex to be inspired by it?

Dr. Rainer Matiasek

Renewable Energy: Leading Towards Sustainability | Burgenland Energie, Ex-McKinsey, Ex-Benteler Group, Partner for >40 C-Level Transformations

4y

Ronald, absolutely! Proactively addressing this controversial and highly innovative topic is the key. Apparently Bosch is a very responsible company leader in this domain - great example for others to join!
