The five reasons for explainable AI
Silvie Spreeuwenberg
Strategy developer, Entrepreneur & Software designer – MSc AI, MBA
Artificial Intelligence (AI) is increasingly popular in business initiatives in the healthcare and financial services industries, amongst many others, as well as in corporate business functions such as finance and sales. Did you know that AI startups raise more money, and that within the next decade every individual is expected to interact with AI-based technology on a daily basis?
AI is more than machine learning
AI is a technology trend covering any development that automates or optimizes tasks that traditionally required human intelligence. Experts in the industry prefer to nuance this broad definition of AI by distinguishing machine learning, statistics, IT and rule-based systems. The availability of huge amounts of data and increased processing power, not major technological innovations, has made machine learning for better predictions the most popular technique today. However, I will argue that other AI techniques are equally important.
AI contributes to human decision making
The consequences of AI innovations for humanity have been huge and were, at the time, difficult to foresee. Pioneers, visionaries, investments and failures were needed to get us where we are today. I am grateful for the result. Every day I use a computer, a smartphone and other technology that provides me with travel advice, ways to socialize and recommendations on what to do or buy, and that helps me memorize and acquire new knowledge. Many of these innovations are related to technology developed by researchers in Artificial Intelligence, and the full potential has not yet been exploited.
But there are also concerns.
Artificial intelligence solutions are often accepted as a black box: they provide answers without an explanation, like an oracle. You may already have seen the results in our society: AI is said to be biased, governments raise concerns about the ethical consequences of AI, and regulators require more transparency. You should embrace the improvements that AI can bring to human decision-making in companies, but instead people have become skeptical about AI technology. Not only because they fear losing their jobs, but also because, as an expert, you are aware of all the uncertainties surrounding your work: how can an AI algorithm deal with those aspects?
Examples of AI bias
AI systems have been demonstrated to be biased by gender (promoting males for job offers) and by ethnicity (classifying pictures of black people as gorillas). These biases are a result of the data used to train the algorithms, which contained fewer female job seekers and more pictures of white people. Let's not forget that this data is created and selected by humans who are biased themselves.
Perhaps you need to make choices and guide your company to compete using AI.
What approach could you follow without losing the trust of your own employees or customers?
Now that AI technology is at the peak of the hype cycle for emerging technologies, more conservative businesses want to reap the benefits of AI-based solutions in their operations. However, they require an answer to some or all of the concerns mentioned above.
To benefit from the potential of AI the resulting decisions must be explainable.
For me this is a no-brainer, since I have been promoting transparency in decision making using rule-based technology for years. In my view, a decision support system needs to be integrated into the value cycle of an organization. Business stakeholders should feel responsible for the knowledge and behavior of the system and confident in its outcome. This may sound logical and easy, but everyone with experience in the corporate world knows it is not. The gap between business and IT is filled with misunderstandings and differences in presentation and expectations.
It takes two to tango.
The business, represented by subject matter experts, policy makers, managers, executives and sometimes external stakeholders or operations, should take responsibility using knowledge representations they understand, and IT should create integrated systems directly related to the policies, values and KPIs of a business. Generating explanations for decisions plays a crucial role. We should do the same for AI-based decisions: choose AI technology when needed and use explanations to make it a success. That is explainable AI, known by the acronym 'XAI'.
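To make the idea of a decision that explains itself concrete, here is a minimal sketch in the spirit of rule-based decision support. It is illustrative only (the rules, thresholds and function names are hypothetical, not taken from the book or any real policy): the system returns not just an outcome but the reasons that produced it.

```python
def assess_loan(applicant):
    """Return (decision, reasons) so the outcome is never a black box.

    Each policy rule that fires contributes a human-readable reason,
    which can be shown to customers, auditors and subject matter experts.
    """
    reasons = []
    approved = True

    # Hypothetical policy rules; real thresholds would come from the business.
    if applicant["income"] < 20000:
        approved = False
        reasons.append("income below the 20,000 threshold")
    if applicant["debt_ratio"] > 0.4:
        approved = False
        reasons.append("debt ratio above 40%")

    if not reasons:
        reasons.append("all policy rules satisfied")

    return ("approved" if approved else "rejected"), reasons


decision, why = assess_loan({"income": 15000, "debt_ratio": 0.5})
print(decision, why)
```

Because each rule maps directly to a policy statement, subject matter experts can review and own the logic, and the generated reasons close the feedback loop between operations and strategy described above.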
Five reasons to ask for explainability
The five reasons why XAI solutions are more successful than an "oracle" based on AI, or any other black-box IT system, are as follows:
- Decision support systems that explain themselves have a better return on investment, because explanations close the feedback loop between strategy and operations, resulting in timely adaptation to changes, a longer system lifetime and better integration with business values.
- Offering explanations enhances stakeholder trust, because the decisions become credible to your customers, and it also makes your business accountable towards regulators.
- Decisions with explanations become better decisions, because the explanations reveal (unwanted) biases and help to include missing common-sense knowledge.
- It is feasible to implement AI solutions that generate explanations without a huge drop in performance, thanks to the six-step method that I developed and to the technology expected from increased research activity.
- It prepares you for the increasing demand for transparency, driven by concerns about the ethics of AI and its effect on the foundations of a democratic society.
Upcoming book: XAIX
In my upcoming book, XAIX: Explainable Artificial Intelligence Explained, I will detail each reason and provide examples and practical guidance. After reading it you will have a good understanding of what it takes to explain solutions that support or automate a decision task, and of the value explanations add for your organization.
Connect with me and send me a message if you want to be notified when the book is available.
Note: by liking and sharing this post you will receive a discount code for the upcoming book.
I would like to thank Patricia Henao for being my editor for both the book and articles series.