The five reasons for explainable AI
Silvie Spreeuwenberg by Ben Kleyn


Artificial Intelligence (AI) is increasingly popular in business initiatives in the healthcare and financial services industries, amongst many others, as well as in corporate business functions such as finance and sales. Did you know that AI startups raise more money, and that within the next decade every individual is expected to interact with AI-based technology on a daily basis?

AI is more than machine learning

AI is a technology trend covering any development that automates or optimizes tasks that traditionally required human intelligence. Experts in the industry prefer to nuance this broad definition of AI by distinguishing between machine learning, statistics, IT and rule-based systems. The availability of huge amounts of data and more processing power – not major technological innovations – makes machine learning for better predictions the most popular technique today. However, I will argue that other AI techniques are equally important.

AI contributes to human decision making

The consequences of AI innovations for humanity have been huge and were, at the time, difficult to foresee. Pioneers, visionaries, investments and failures were needed to get us where we are today. I am grateful for the result. Every day I use a computer, a smartphone and other technology that provides me with travel advice, ways to socialize and recommendations on what to do or buy, and that helps me memorize and acquire new knowledge. Many of these innovations are related to technology developed by researchers in Artificial Intelligence, and their full potential has not yet been exploited.

But there are also concerns.

Artificial intelligence solutions are accepted to be a black box: they provide answers without an explanation, like an oracle. You may already have seen the results in our society: AI is said to be biased, governments raise concerns about the ethical consequences of AI, and regulators require more transparency. You should embrace the improvements that AI can bring to human decision-making in companies, but instead people have become skeptical about AI technology. Not only because they fear losing their job, but also because, as an expert, you are aware of all the uncertainties surrounding your work and wonder how an AI algorithm could deal with those aspects.

Examples of AI biases.

AI systems have been demonstrated to be biased by gender – promoting males for job offers – and by ethnicity – classifying pictures of black people as gorillas. These biases are a result of the data used to train the algorithms, which contained fewer female job seekers and more pictures of white people. Let's not forget that this data is created and selected by humans, who are biased themselves.
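
To make this mechanism concrete, here is a minimal sketch in Python (using scikit-learn) with invented data and feature names: a classifier trained on skewed historical hiring decisions simply reproduces that skew.

```python
# Minimal, hypothetical illustration: a model trained on skewed
# historical data reproduces the skew. All data and names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, size=n)   # 0 = female, 1 = male
skill = rng.normal(size=n)            # the signal that *should* matter

# The historical label leaks gender: past decisions favoured male applicants.
hired = ((skill + 1.5 * gender) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two applicants with identical skill but different gender:
applicants = np.array([[0, 1.0], [1, 1.0]])
print(model.predict_proba(applicants)[:, 1])  # the male applicant scores higher
```

The model is not malicious; it is faithful to a biased history, which is exactly why its decisions need to be explainable.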

Perhaps you need to make choices and guide your company to compete using AI. What approach could you follow without losing the trust of your own employees or customers?

Now that AI technology is at the peak of the hype cycle for emerging technologies, more conservative businesses also want to reap the benefits of AI-based solutions in their operations. However, they require an answer to some or all of the concerns mentioned above.

To benefit from the potential of AI, the resulting decisions must be explainable.

For me this is a no-brainer, since I have been promoting transparency in decision making using rule-based technology for years. In my vision, a decision support system needs to be integrated into the value cycle of an organization. Business stakeholders should feel responsible for the knowledge and behavior of the system and confident in its outcomes. This may sound logical and easy, but everyone with experience in the corporate world knows it is not. The gap between business and IT is filled with misunderstandings and differences in presentation and expectations.

It takes two to tango.

The business, represented by subject matter experts, policy makers, managers, executives and sometimes external stakeholders or operations, should take responsibility using knowledge representations they understand, and IT should create integrated systems directly related to the policies, values and KPIs of the business. Generating explanations for decisions plays a crucial role in this. We should do the same for AI-based decisions: choose AI technology when needed and use explanations to make it a success. That is explainable AI, known by the acronym 'XAI'.
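
As an illustration of what such an explanation could look like in practice, here is a hedged sketch (in Python) of a rule-based decision that records which rules fired and returns that trace together with the outcome; the rules, names and thresholds are invented for the example and not taken from any particular system.

```python
# Sketch of a self-explaining rule-based decision: every rule that fires
# is recorded, and the trace is returned alongside the outcome.
# Rules and thresholds are invented for illustration only.

def decide_loan(applicant: dict) -> tuple[str, list[str]]:
    trace = []
    decision = "approve"

    if applicant["income"] < 20_000:
        decision = "reject"
        trace.append("Rule R1: income below 20,000 -> reject")
    if applicant["existing_debt"] > 0.5 * applicant["income"]:
        decision = "reject"
        trace.append("Rule R2: debt exceeds 50% of income -> reject")
    if not trace:
        trace.append("No rejection rule fired -> approve")

    return decision, trace


decision, explanation = decide_loan({"income": 18_000, "existing_debt": 12_000})
print(decision)              # reject
for line in explanation:     # the 'why' that a business stakeholder can check
    print(" -", line)
```

Because the explanation refers to rules that subject matter experts can read and challenge, it keeps the business side responsible for the knowledge in the system.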

Five reasons to ask for explainability

The five reasons why XAI solutions are more successful than an “oracle” based on AI, or any black box IT system, are as follows:

  1. Decision support systems that explain themselves have a better return on investment, because explanations close the feedback loop between strategy and operations, resulting in timely adaptation to changes, a longer system lifetime and better integration with business values.
  2. Offering explanations enhances stakeholder trust, because the decisions become credible for your customers, and it also makes your business accountable towards regulators.
  3. Decisions with explanations become better decisions, because the explanations expose (unwanted) biases and help to include missing, common-sense knowledge (a sketch follows after this list).
  4. It is feasible to implement AI solutions that generate explanations without a huge drop in performance, using the six-step method that I developed and the technology expected from increased research activity.
  5. It prepares you for the increased demand for transparency, driven by concerns about the ethics of AI and its effect on the foundations of a democratic society.
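
To illustrate the third reason, here is a small sketch that builds on the invented hiring data from the earlier example: a simple model-agnostic explanation (permutation feature importance, as available in scikit-learn) makes the unwanted reliance on gender visible.

```python
# Hedged sketch: a global explanation exposing an unwanted bias.
# Data and feature names are invented, as in the earlier example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)
hired = ((skill + 1.5 * gender) > 1.0).astype(int)

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)

# How strongly does each feature drive the model's decisions?
result = permutation_importance(model, X, hired, n_repeats=10, random_state=0)
for name, importance in zip(["gender", "skill"], result.importances_mean):
    print(f"{name}: {importance:.3f}")  # a large value for 'gender' is a red flag
```

Once the bias is visible it can be questioned, corrected in the data or constrained in the model, which is what turns an explanation into a better decision.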

Upcoming book: XAIX

In my upcoming book, XAIX: Explainable Artificial Intelligence Explained, I will detail each reason and provide examples or practical guidance. After reading it, you will have a good understanding of what it takes to explain solutions that support or automate a decision task, and of the value explanations add for your organization.

Connect with me and send me a message if you want to be notified when the book is available.

Note: by liking and sharing this post you will receive a discount code for the upcoming book.

I would like to thank Patricia Henao for being my editor for both the book and this article series.


Fred Simkin

Developing and delivering knowledge based automated decisioning solutions for the Industrial and Agricultural spaces.

5y

David we are going to have to talk about your use of the term "knowledge based". Rules are a knowledge representation schema so by definition Rule based applications ARE knowledge based systems ??

David Geddes

Senior Consultant @CogSoft || Top100 Expert @ISTA || Multiple Boards

5y

Silvie Spreeuwenberg The secret sauce will contain a network of engineered representations, a multi graph explicit knowledge approach. Some structures will be rule based, others knowledge based. i.e. procedural vs more complex logic. Great article. Thanks.

Fred Simkin

Developing and delivering knowledge based automated decisioning solutions for the Industrial and Agricultural spaces.

5y

Can't wait for the book (or at least the US version, my Dutch is non existent, just being "transparent" <g>)
