Protect Patients from AI-Driven Healthcare Misinformation

A recent Supreme Court case involving the Biden administration's efforts to combat false COVID-19 vaccine claims on social media underscored how complex and formidable the problem of health misinformation has become. As a healthcare information technology and public health expert, I am deeply alarmed by the dangers of medical misinformation, particularly as artificial intelligence (AI) becomes increasingly integrated into patient care and exacerbates the problem.

A recent New York Times article by Dani Blum offers valuable insights into the evolving nature of health misinformation and how to recognize it. Blum points out that unsubstantiated health hacks, cures, and quick fixes have spread widely on social media, while conspiracy theories that fueled vaccine hesitancy during the COVID-19 pandemic are now undermining trust in vaccines against other diseases. Recent outbreaks of measles, a disease previously declared eliminated in the U.S., are evidence of the impact of the decline in childhood vaccinations fostered by misinformation. Rapid developments in AI have made it even harder for people to distinguish between true and false information online.

Test AI-Generated Content

As AI is integrated into patient care, it is imperative that organizations rigorously test AI output for accuracy and monitor it regularly to prevent the dissemination of potentially harmful misinformation. Equally crucial is educating doctors, nurses, other clinicians, and patients about the risks of healthcare AI misinformation and how to identify it. The primary vector for patient exposure to misinformation is the abundance of unverified and untrustworthy healthcare websites that mimic reputable institutions and can quickly disseminate AI-generated misinformation.
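One way organizations might operationalize this kind of testing is a simple gate that quarantines AI-generated patient-facing text unless every claim it contains matches a clinician-maintained registry of verified statements. The sketch below is purely illustrative; the registry contents, the naive sentence-level "claim extraction," and all function names are assumptions, not a real product's API.

```python
# Hypothetical sketch: AI-generated patient messages are approved only when
# every claim appears in a clinician-verified registry; anything else is
# routed to human review. All names and the registry are illustrative.

VERIFIED_CLAIMS = {
    "measles vaccine is safe and effective",
    "handwashing reduces infection risk",
}

def extract_claims(text: str) -> list[str]:
    # Placeholder claim extraction: treat each sentence as one claim.
    return [s.strip().lower() for s in text.split(".") if s.strip()]

def review_status(ai_output: str) -> str:
    """Return 'approved' only if every claim is verified; otherwise
    flag the message for clinician review before release."""
    claims = extract_claims(ai_output)
    if claims and all(c in VERIFIED_CLAIMS for c in claims):
        return "approved"
    return "needs_human_review"
```

In practice, claim extraction and verification would require far more sophisticated tooling, but the design point stands: unverified AI output should default to human review, never to publication.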

Identify Misinformation

Blum's article provides valuable tips for recognizing misinformation, such as looking out for unsubstantiated claims, emotional appeals, and "fake experts" who lack relevant medical credentials or expertise. It also recommends validating claims with multiple trusted sources, such as health agency websites, and tracking down the original source of information to check for omitted or altered details.

I fear that AI-generated misinformation will be used to support political agendas, such as those advanced by anti-vaccination advocates who reject the proven science behind vaccines. Additionally, unscrupulous drug or supplement manufacturers may offer unsubstantiated information about their products, prioritizing profit over patient health and safety.

As Blum's article rightly points out, addressing misinformation within personal circles necessitates empathy and patience. Using phrases like "I understand" and "it's challenging to discern who to trust" can help maintain relationships while guiding individuals toward reliable resources. Local public health sites and university websites may prove more effective for those who distrust national agencies.

Duty to Call Out Misinformation

As healthcare professionals and informed citizens, we must remain vigilant in identifying and addressing health misinformation, particularly as AI advances and complicates the information landscape. By educating ourselves and others about the risks of misinformation, validating claims with trusted sources, and engaging in empathetic dialogue, we can work together to protect patient health and safety in the face of this growing threat.

Source: Health Misinformation is Evolving. Here's How to Spot It, NY Times, March 16, 2024


Miseducation of Google's AI

Google’s recent kerfuffle over the release of Gemini, its new AI application, only emphasizes the importance of data quality when training large language models. Google got into trouble trying to correct the biases, prejudices, and stereotypes in the internet data used to train Gemini. Internet data represents both the good and bad aspects of human existence. For those creating AI tools, what is their responsibility to limit harm, and should they try at all?

Source: The Miseducation of Google's A.I., NY Times, March 7, 2024


Automation Bias: A Short Cut to Medical Errors

To navigate the world, we generalize situations to reduce the burden of thinking through every decision point. While AI offers an assistive tool to help physicians with decision-making, it becomes a threat to patient care when automation bias takes hold. Clinical workflows that include AI must incorporate "human stops," so that clinicians are required to review AI recommendations rather than automatically approve them.
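A "human stop" can be enforced in workflow software by making explicit clinician confirmation a precondition for acting on any AI suggestion, with no auto-approve path. This minimal sketch is an assumption about how such a gate might look, not a description of any actual clinical system; the `Recommendation` type and function names are invented for illustration.

```python
# Hypothetical sketch of a "human stop": an AI recommendation is never
# applied automatically; it proceeds only after explicit clinician sign-off.
from dataclasses import dataclass

@dataclass
class Recommendation:
    summary: str      # e.g., a suggested order or diagnosis
    confidence: float # the model's own confidence score

def apply_recommendation(rec: Recommendation, clinician_confirmed: bool) -> str:
    """Hold the recommendation unless a clinician has explicitly confirmed it.
    Note: even a high model confidence does not bypass the human stop."""
    if not clinician_confirmed:
        return "held_for_review"
    return f"ordered: {rec.summary}"
```

The key design choice is that confidence plays no role in the gate: a high-confidence suggestion is exactly the kind most likely to be rubber-stamped, which is where automation bias does its damage.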

Source: Blind Spots, Shortcuts, and Automation Bias - Researchers Are Aiming to Improve AI Clinical Models, JAMA, February 28, 2024


Inspirational Resources - Thank You

Austin Awes
Randy Iskowitz
David Gute
Jeremy Racine
Ted James, MD, MHCM
Jeanine "Nini" Martin, FACHE/FHIMSS
Jeff Huckaby
Joe Bormel, MD, MPH
Kristin Covi


Ted James, MD, MHCM

Medical Director | Speaker | Advisor | Passionate about Transforming Healthcare

7 months ago

Thanks, Barry Chaiken. Yes, we need to address security, privacy, misinformation, and bias to prevent them from undermining the advantages AI could bring to healthcare.

Jeff Huckaby

CEO and Co-Founder | Passionate about helping people have better analytics outcomes using consulting, talent acquisition, and analytics solutions as a service.

8 months ago

Great post. I have this concern not just for medicine but for almost all of our institutions.

Jim Pittman

Chief Communications Officer, STEM NOLA | STEM Global Action • Board Chair, Alzheimer's Association Louisiana Chapter [21.8K+ micro-influencers]

8 months ago

Thank you for your insight, Barry. This is most helpful and important. Great job!
