The mystery of AI. XAI in medical imaging

AI has become remarkably good at solving problems in recent years, but the algorithms behind it are growing more complex and harder to understand. Under what circumstances can we trust an AI model's output? How did it arrive at a specific prediction, and why did it choose that outcome over others? This lack of transparency is one of the factors behind slow AI adoption and relatively low public trust in AI-powered systems.

This uncertainty is a problem, especially when it comes to using AI for high-stakes tasks such as diagnosing and treating patients. That's where Explainable AI (XAI) comes in: it aims to make AI more transparent so that people can trust it and use it more widely.

XAI methods in AI-powered medical image analysis

In a paper published nearly a year ago in the European Journal of Radiology [1], researchers investigated the XAI approaches adopted in medical imaging. We strongly encourage you to read the entire article. In the meantime, we would like to draw your attention to the types of methods used to better understand how AI works in medical image analysis.

Based on a PubMed analysis, the researchers distinguished:

  • visual (saliency-based) methods,
  • non-visual (textual, auxiliary, and case-based) methods.

Textual explanations focus on providing clear, written explanations for the AI's predictions, aiming to capture the meaning behind its decisions. Auxiliary measures use tables, graphs, or other visual formats to present additional information, such as the importance of specific image features or statistical indicators that influenced the AI's output. Case-based explanations help identify key concepts or influential data points within the specific task the AI is performing. This can clarify what kind of information the AI relies on most heavily for its predictions.
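To make the case-based idea concrete, here is a minimal sketch of one common realization: justifying a prediction by retrieving the training cases most similar to the query in the model's feature space. This is an illustrative assumption on our part, not an implementation from the paper, and every name in it (nearest_cases, train_embeddings, and so on) is hypothetical.

```python
# Minimal sketch of a case-based explanation: justify a prediction by
# retrieving the most similar training cases in the model's feature space.
# All names here are hypothetical; the surveyed paper does not prescribe
# any specific implementation.
import numpy as np

def nearest_cases(query_embedding: np.ndarray,
                  train_embeddings: np.ndarray,
                  train_labels: np.ndarray,
                  k: int = 3):
    """Return the k training cases closest to the query embedding."""
    # Cosine similarity between the query and every training embedding.
    q = query_embedding / np.linalg.norm(query_embedding)
    t = train_embeddings / np.linalg.norm(train_embeddings, axis=1, keepdims=True)
    similarities = t @ q
    top_k = np.argsort(similarities)[::-1][:k]
    return [(int(i), train_labels[i], float(similarities[i])) for i in top_k]
```

The point of such an explanation is that a clinician can be shown something like: "the model calls this scan malignant, and here are the three most similar training scans, all of which were labelled malignant."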

However, when it comes to medical imaging, the field in which we analyze image data, visual XAI methods are unsurprisingly the most popular.

Figure: Annual development of visual and non-visual XAI methods applied in medical imaging (based on the cumulative number of citations). Source: [1].

Saliency-based (visual) methods

Visual explanations in XAI allow us to:

  • Compare AI decisions to radiologists' expertise: we can see if the AI's decision-making process aligns with how a radiologist would approach the same image.
  • Identify potential errors and biases: if there are significant differences between the AI's and a radiologist's approach, visual explanations can help pinpoint where errors or biases might be creeping in.
  • Promote transparency and confirm diagnoses: beyond just error detection, visual XAI methods can make the AI's reasoning more transparent, building trust in its results and aiding doctors in confirming diagnoses.

It should be noted that this is a diverse group encompassing a wide range of techniques. These techniques, along with their practical impact, shortcomings, and limitations, are presented in detail in the publication, to which we cordially refer you once again. A simple sketch of the most basic of these techniques follows below.
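As a taste of how the simplest member of this family works, here is a minimal sketch of a vanilla gradient saliency map in PyTorch. The model and input are placeholders of our own choosing (a stock ResNet-18 and a random tensor), not anything prescribed by the paper; production pipelines usually prefer more robust variants such as Grad-CAM or SmoothGrad.

```python
# Minimal sketch of a vanilla gradient saliency map (hypothetical setup:
# a stock ResNet-18 and a random "scan" stand in for a real model/image).
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy input scan

logits = model(image)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class score with respect to the input pixels:
# large magnitudes mark pixels that most influence the prediction.
logits[0, top_class].backward()
saliency = image.grad.abs().max(dim=1)[0]  # collapse the color channels

# `saliency` is now a (1, 224, 224) heatmap ready to overlay on the scan.
```

Overlaid on the original image, such a heatmap lets a radiologist check at a glance whether the model attended to the lesion or to an irrelevant artifact, which is exactly the comparison and error-spotting role described above.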

What we would like to particularly emphasize (as do the authors of the publication) is the need to consider diverse stakeholder groups. A full understanding of an algorithm's operation is impossible if its logic is described only in complex technical documentation: that will be useful for engineers and developers, but it may not tell a clinician much. Considering all stakeholder groups (scientists, developers, doctors, patients, ethics committees, and compliance experts) allows us to build the certainty and trust that come from knowing and understanding why the AI model made a given decision. Why does the AI analysis of a medical scan or dataset suggest this particular problem and not another one?

References:

[1] Katarzyna Borys, Yasmin Alyssa Schmitt, Meike Nauta, Christin Seifert, Nicole Krämer, Christoph M. Friedrich, Felix Nensa, Explainable AI in medical imaging: An overview for clinical practitioners – Saliency-based XAI approaches, European Journal of Radiology, Volume 162, 2023, 110787, ISSN 0720-048X, https://doi.org/10.1016/j.ejrad.2023.110787.
