Trust is hard; knowing what to trust is harder.

eXplainable AI

Gartner’s 2021 Magic Quadrant report on data science and machine learning (DSML) is out. And no surprise: responsible AI governance, transparency, and addressing model-based bias are the most valuable differentiators for every vendor listed on the quadrant. AI is facing a techlash as researchers continue to find undesirable racial and gender bias in publicly available datasets. A case in point: using the largest publicly available medical records dataset, the Medical Information Mart for Intensive Care IV (MIMIC-IV), researchers trained a model to recommend one of five categories of mechanical ventilation. Alarmingly, the model’s suggestions varied across ethnic groups and were influenced by insurance status.

  • Black and Hispanic cohorts were less likely to receive ventilation treatments on average, while also receiving a shorter treatment duration.
  • Privately insured patients tended to receive longer and more ventilation treatments compared with Medicare and Medicaid patients.

Similar studies have been published across sensitive domains like finance, healthcare, and public safety. Trust in, and adoption of, AI technologies are under threat. The top priority for CxOs and senior management teams working with AI is understanding bias mitigation and how to control for bias on a per-model basis.

In the early days of Artificial Intelligence (AI), the predominant reasoning methods were logical and symbolic. These early systems reasoned by performing some form of logical inference on human-readable symbols, and a trace of their inference steps could be generated, which then became the basis for explanation. The accuracy and applicability of these systems were limited, however, leading to a wave of ‘black box’ models with opaque internal representations that are more effective but less explainable. These newer techniques include probabilistic graphical models, reinforcement learning, and deep neural networks.

To effectively manage the emerging generation of AI systems, the US federal agency DARPA proposed Explainable Artificial Intelligence (XAI): a suite of new or modified machine learning techniques that produce explainable models which, when combined with effective explanation techniques, enable end users to understand and appropriately trust the AI. The target for XAI is the end user who depends on the decisions, recommendations, or actions of the system. The framework proposes both a micro and a macro view of a system’s explainability, giving end users an explanation of individual decisions as well as the system’s overall strengths and weaknesses. Questions the framework seeks to answer are:

Why did you do that?
Why not something else?
When do you succeed?
When do you fail?
When can I trust you?
How can I correct an error?

In AI, the terms interpretability and explainability are closely related and often used interchangeably. Interpretability is the degree to which a human can understand the cause of a decision or an ML model's result: the higher the interpretability of a model, the easier it is to comprehend why certain decisions or predictions were made. Explainability builds on interpretability; it is the ability to explain, in human terms, why the model produced the result it did.

Not all machine learning models require the same level of interpretability; how much is needed depends on the downstream application. Three broad XAI strategies are:

  1. Use an inherently interpretable model such as linear regression, logistic regression, or a decision tree, optionally with monotonicity constraints, which ensure that the relationship between a feature and the target outcome is monotonic (always increasing or always decreasing).
  2. Model-specific interpretability, which involves examining the structure of the algorithm or its intermediate representations.
  3. Model-agnostic interpretability, which adds a layer of interpretability on top of a complex model, typically by probing how its predictions change as the inputs change.
[Figure: interpretability techniques mind map]

The choice of interpretability method depends on scope, purpose, and context.
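Of the three strategies, model-agnostic interpretability is the easiest to sketch in code. Below is a minimal illustration using permutation importance on synthetic data; the features, coefficients, and the `permutation_importance` helper are illustrative assumptions, not drawn from any specific tool. The idea: shuffle one feature at a time and measure how much the model's error grows. Features that matter most degrade the model most when scrambled, regardless of what kind of model sits underneath.

```python
# Sketch: model-agnostic interpretability via permutation importance,
# written from scratch so it works with any model exposing predict().
# All data and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Any "black box" with a predict function works; a closed-form
# least-squares fit stands in for an opaque model here.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ coef

def permutation_importance(predict, X, y, n_repeats=10):
    """Mean increase in MSE when each feature column is shuffled."""
    base_mse = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            deltas.append(np.mean((predict(Xp) - y) ** 2) - base_mse)
        importances[j] = np.mean(deltas)
    return importances

imp = permutation_importance(predict, X, y)
print(imp)  # x0 dominates, x1 is small, x2 is near zero
```

Because the technique only needs predictions, the same helper would explain a gradient-boosted ensemble or a neural network without any change, which is exactly what makes the model-agnostic layer attractive.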

As the top DSML vendors race to add XAI capabilities to their tools, we can expect that, in the future, algorithms will explain themselves. And while business leaders are hesitant to adopt, no one doubts that AI is going to become more prevalent and powerful. Until explainability becomes an integral part of the DSML suite, transparency in model design and feature engineering is key. Design choices that affect machine learning, such as the selection of training data, initial conditions, architectural layers, loss functions, regularization, optimization techniques, and training sequences, need to be recorded, evaluated, and presented to end users.

As Ronald Reagan once said,
“Trust, but verify”.
