AI, Patient Safety, and the Risk of Distraction
It is often said that technology in healthcare is 80% people, 15% process, and only 5% technology. As we apply technology to healthcare, it is important to evaluate and govern AI the same way we do other clinical tools and processes. AI is not perfect, and neither is modern medicine.
That said, AI holds enormous potential for improving clinical outcomes and making healthcare more efficient. As AI moves beyond back-office administrative tasks to play a larger role in clinical care, a number of initiatives are underway to regulate its development and create standards for its medical use. Notable approaches include the Coalition for Health AI, the White House memorandum “Delivering on the Promise of AI to Improve Health Outcomes,” and the Office of the National Coordinator for Health IT’s rules for healthcare AI transparency and risk management.
While these initiatives are aligned with the goal of keeping AI safe in a clinical environment, we are far from a world in which autonomous diagnosis and action by AI routinely cause patient harm. In medicine, AI is most commonly used as a supplemental indicator or a nudge toward human clinical action. At CommonSpirit Health, our 100+ AI applications are all governed by the principle that a clinician will always stand between AI and a patient. While we use AI to reduce the time to identify a stroke by up to 80 minutes, and our AI early-diagnosis tool for infection has saved the lives of more than 1,600 patients over the past six years, AI is only one among many tools our clinicians use when deciding the best course of action for patient care.
There are, indeed, risks from the use of AI in clinical and research settings. They include unequal care caused by algorithmic bias, poor outcomes from misunderstanding AI’s black-box behavior, and inappropriate or insecure use of patient data. And while governance and oversight of AI are appropriate and welcome, our collective focus on the risk of potential patient harm can distract from AI’s ability to reduce the actual risks that already exist in clinical settings.
This focus bias can hinder technological progress. A good analogy is the way we think about the safety of autonomous vehicles. The 2018 death of a pedestrian struck by a self-driving car in the Phoenix area rightly garnered national headlines, and coverage of subsequent deaths and accidents involving autonomous vehicles reinforced the perceived risk of the technology without questioning the baseline from which we operate. In 2022, the National Highway Traffic Safety Administration (NHTSA) reported more than 42,000 vehicle accident deaths, yet fewer than 20 of those fatalities were associated with what it calls Advanced Driver Assistance Systems. While every death is a tragedy, we cannot accept those 42,000 deaths as an unexamined baseline while applying far greater scrutiny and a higher standard to a new perceived threat that has the potential to deliver far safer driving outcomes.
The baseline risk is even greater in medicine. In 1999, the Institute of Medicine estimated that as many as 98,000 patients die annually from preventable medical errors in the United States, and a 2013 Journal of Patient Safety study increased the estimate to more than 210,000 annually. Every day, we in the US medical community make preventable errors that lead to as many deaths as the crash of a fully loaded airliner, and yet these real and present errors and their human impact are largely eclipsed, and go unexamined, relative to concern about anticipated future AI risk to patients.
Shane Parrish has written about the 98/2 Rule: “people spend 98% of their time talking about flashy things that contribute only 2% to the results, while overlooking the fact that 98% of the results come from consistently doing the boring basics that few notice.” Focusing on flashy AI healthcare innovation alone risks ignoring the foundational work and achievable basic improvements that have real potential to improve patient safety and clinical outcomes.
Our approach to AI at CommonSpirit is one of enhancement. We are not trying to replace human care, diagnosis, or treatment; rather, we are implementing AI to improve the care we deliver. Long before the advent of AI, safety and quality guided our every decision at every step. Every new tool, procedure, drug, and protocol carries some measure of risk that we must balance, and when advanced solutions (AI or otherwise) offer better outcomes, withholding such care conveys even greater risk.
This approach takes discipline and patience. Isabelle Bousquette’s recent Wall Street Journal article “It’s Time for AI to Start Making Money for Businesses. Can It?” highlighted the work underway at businesses to use AI to improve margins. This pressure may grow externally as large technology firms seek business customer revenue to offset huge investments in AI development. Any framing of AI as purely monetary, or as a solution unto itself in clinical care, needs to be met with thoughtful skepticism.
Our current standard of clinical care is not without error, harm, or revised scientific discovery. Ignoring the known risks in our current systems not only prevents honest self-evaluation but also exposes our patients to the risks of opting not to use AI. We can and should adopt governance and frameworks for the safe development and use of AI, alongside our investment in enhanced training, tools, and processes, to improve clinical outcomes, reduce preventable errors, and make healthcare safer for us all, while remembering that healthcare begins and ends with humanity.
CEO at Aidoc
14 hours ago — Hey Daniel, what you shared was powerful. AI’s greatest strength is its ability to enhance, not replace, clinical expertise. By reducing preventable errors and improving workflow efficiency, AI empowers clinicians to deliver faster and safer care. We must stop treating AI as a hypothetical risk and recognize it as a powerful solution already changing lives. The real risk is waiting, because every delay puts more lives at stake. With strong governance and trust in clinicians, AI can—and will—reshape healthcare for the better.
Founder and CEO, Egen
1 month ago — Daniel Barchi, finding the right balance between AI's potential and its risks is essential for paving the way to a future where healthcare innovations can genuinely improve patient care. It's an exciting journey we're on together!
Lead Physician Advisor, SW Div
2 months ago — Thank you @Daniel Barchi for this thoughtful and balanced article on the role of AI in healthcare delivery. Just like budding healthcare providers, AI in healthcare would be required to be trained and to conform to the rigorous standards of medical practice. Enhancement that improves the safety, quality, and standardization of care delivery will be the key to the symbiotic relationship we have with AI. Nevertheless, it is in the ancillary processes, where logistics are involved, that I see the early benefits of AI improving patient satisfaction by removing the irritating barriers between the payor, the provider, and the patient. It is a privilege to be part of this movement as we explore new frontiers. Thank you @Sunil Kakade for illuminating this article.
Digital Health Growth Strategist | 3x Founder | 6x Entrepreneur | Strategy | Innovation Acceleration | Durable Market Leadership | Strategic Growth Framework | GTM | Market Intelligence as an Asset | Partnerships
2 months ago — Really thoughtful article. Also a good reminder that there is still a lot of trepidation about the use of AI and ML in healthcare. With AI becoming more prominent in care settings, I believe all health systems should have a statement, supported by commensurate controls, in a policy, charter, or even on their website, which reassures the public, physicians, and staff that AI will be governed responsibly. Here is a sample I've posted in the past: "AI systems deployed by [Organization Name] are designed to augment and empower staff in decision-making and performing their work. [Organization Name] will implement measures so that humans retain control over AI system inputs and outputs." That being said, the outcomes from your models show once again the potential of AI and ML to make healthcare more efficient, more effective, and better for patient care.
Chief Digital Officer, Chief Information Officer, Chief Analytics Officer, Board Member, Board Advisor
2 months ago — Well stated and insightful, Daniel.