AI versus human factor in medicine

Recently, Nature published a news article titled 'The testing of AI in medicine is a mess. Here’s how it should be done' [1]. As part of our series, we would like to invite you to read this article and take a closer look at several points raised in it.

Despite the complexity of testing AI systems for medical use, many developers are not sharing their results. A recent review found only 65 randomized controlled trials of AI interventions published between 2020 and 2022, while regulators such as the FDA have approved hundreds of AI medical devices. We covered this subject in one of our recent blog posts. Alarming as it is, this gap is possible because clearance often doesn't require rigorous testing. The FDA's guidance falls short of providing clear distinctions between the different types of clinical validation studies expected of AI medical device manufacturers. Moreover, the system for reporting adverse events involving medical devices remains suboptimal, which raises further questions, especially in a culture of nonreporting. The debate on this matter is ongoing and concerns several very interesting areas, which Nature's article also touches on.

AI versus patient consent

One of the most important questions is how to inform patients about the results of AI-based medical devices. This is especially crucial for software used for patient triage or for those that operate in the background, searching for incidental findings. How should healthcare professionals talk to a patient diagnosed with lung cancer about the fact that an algorithm analyzing their chest CT scan has also revealed an atherosclerotic plaque? What should the script for such a conversation look like, especially when the doctor doesn't fully understand the logic behind the algorithm? Furthermore, how can a doctor be sure that the algorithm's result is correct if they don't know the validation mechanism used for acceptance and whether it underwent clinical trials during certification? Finally, how should a patient be informed about the algorithm's results if they were not previously asked for consent to use this technology?

AI algorithms are already present in screening, diagnosis, and treatment planning. Patients may be unaware that AI technologies are being used or tested in their care, as there is no universal requirement for disclosure.

Ongoing discussions center on whether and how to inform patients about these technologies. In the case of software used for patient triage, doctors are essentially excluded from the process, so the question is whether patients themselves become the end users. If errors in the algorithm lead to incorrect medical decisions, it is not the patients who should bear the responsibility. And then there is the financial issue: if the algorithm's suggestions lead to tests and examinations that ultimately prove unnecessary, who will be burdened with the cost?

Even though the ideal design for clinical trials of AI-based interventions is well understood, practical challenges hinder testing. Implementation success hinges on how healthcare professionals interact with the algorithms: even the best tools can fail if they are ignored. Additionally, informing patients and obtaining their consent for data use remains a complex issue.

References:

[1] https://www.nature.com/articles/d41586-024-02675-0

