April 2024 recap: 8 stories worth a fresh look
Credit: metamorworks

Click here to receive AI in Healthcare newsletters in your email and stay informed about healthcare's biggest stories.


From the editor

Most may still be in diapers, but today’s infants and toddlers will be the first generation of Americans never to have known a world without generative AI. That’s counting from ChatGPT’s burst onto the scene in late 2022. Already it and other large language models have established a new normal across many spheres of human endeavor. Much of what’s happened since will feel familiar, at least in outline, to those who recall life before the internet but can’t imagine surviving without a connection today. In healthcare, of course, GenAI is only one flavor of AI shaking up the status quo. And healthcare is affected not only by the technology’s advances within its own walls but also by its reach into other industries and sectors of the economy. April’s stories for AIin.Healthcare drive these points home. Here are some highlights.


The time to build AI literacy in young people is ASAP. Teachers of schoolkids from kindergarten through high school believe so. At the same time, these educators approach the technology with a mix of enthusiasm and anxiety. A new survey helps explain the mixed feelings: Most K-12 teachers have little to no firsthand experience with the technology, yet the media buzz over AI pressures them to bring it to their students. Meanwhile, more than a few seem to doubt the value of generative AI in education. One survey respondent expressed concern that AI could make students so reliant on technology that “they no longer think for themselves. They won’t see a need to learn and therefore won’t.” Read our coverage: Ready and willing but not yet able: America’s schools staring down GenAI

Perception: There’s enough data out there to train GenAI for generations to come. Reality: AI-quality data is a finite resource—and it’s already endangered by over-mining. OpenAI, Google and Meta all know the score. They’ve been consuming the data much faster than it can be replenished. Which is why, to feed their respective AI beasts, they’ve been cutting some ethical corners, skirting their own policies and mulling the pros and cons of bending the law. “Their situation is urgent,” a New York Times investigative team reports. “Tech companies could run through the high-quality data on the internet as soon as 2026.” AIin.Healthcare’s summary: The well for AI training data is running dry. Big Tech heavyweights are taking extraordinary measures to deal with the drought.

Providers and payers agree on one thing about generative AI in healthcare: It will make a real difference in improving clinical outcomes as well as the patient experience. The two camps diverge, though, over just how much transformation GenAI will actually deliver. “Payers appear to be convinced that GenAI is a game changer, particularly for administrative functions,” market researchers write. “Providers are more muted about the impact of GenAI” beyond improving care delivery. GenAI-focused market researchers: ‘The payer–provider divide is wide and has dangerous implications’

The National Academy of Medicine wants everyone involved with healthcare AI to come together over certain principles of responsibility. An AI steering committee at NAM lays out its reasonable-sounding guideposts as a way to make sure the technology is always and everywhere safe, effective, equitable, efficient, accessible, transparent, accountable, secure, adaptive and attuned. “Engagement of all key stakeholders in the co-creation of this Code of Conduct framework is essential to ensure the intentional design of the future of AI-enabled health, healthcare and biomedical science that advances the vision of health and well-being for all,” NAM explains. Our synopsis: Submitted for consideration by all healthcare AI stakeholders: 10 principles, 6 commitments, 1 direction

Now that’s a disconnect. Only 20% of U.S. physicians believe patients would be concerned about the use of GenAI in a diagnosis. But most Americans—80%—say they are concerned. The Wolters Kluwer Health surveyors who report the results further found 40% of U.S. physicians ready to use “point-of-care GenAI” as long as they’re confident in the specific tool they have in hand for the purpose. “Physicians are open to using generative AI in a clinical setting provided that applications are useful and trustworthy,” comments Peter Bonis, MD, Wolters Kluwer Health’s chief medical officer. “The source of content and transparency are key considerations.” Physicians are embracing clinical GenAI—in theory, at least

Highly knowledgeable medical AI is here for the tapping, AI is advancing the practice of medicine, and the FDA is approving more and more AI-equipped medical devices. That’s the good news. On the other side of the ledger, the public is pessimistic about AI’s overall economic impact, complex vulnerabilities have emerged in large language models, and the number of AI “incidents” continues to rise. The observations are from analysts at Stanford University’s Institute for Human-Centered AI, aka “HAI,” which presents its latest findings in Artificial Intelligence Index Report 2024. AIin.Healthcare’s summary and link: 10 things you may have suspected about AI but didn’t know for sure till now

“There is nothing inevitable about AI’s advancement into healthcare. No patient should be a guinea pig and no nurse should be replaced by a robot.” The quote is from a nurse who helped lead the anti-AI protest at Kaiser Permanente in April. Kaiser officials countered the marchers’ sentiments: “We believe that AI may be able to help our physicians and employees, and enhance our members’ experience.” More quotes from the kerfuffle: Overheard around the Kaiser nurses’ protest over AI in healthcare

Clinicians who rely on AI to guide care risk running afoul of courts weighing malpractice suits. That’s nothing new. But a new analysis offers ways to think about the benefits of clinical AI in light of the litigious landscape. “Until the use of AI/machine learning for treatment recommendations by clinicians gets recognized as the standard of care, the best option for clinicians to minimize the risk of medical malpractice liability is to use it as a confirmatory tool to assist with decision-making.” Emphasis on confirmatory. As opposed to authoritative. Our coverage of the analysis: Against malpractice for using clinical AI, the best defense is a good offense



Helmut Dahl

Expert for Sales & Business Development Strategy (SaaS, Analytic, Big Data, Cloud & AI & Healthcare)

10 months ago

Thank you for sharing!

Sandy Wilson

Healthcare IT Solutions Sales Leader. Passionate Networker & Relationships Investor, Expert Partner & Field Sales Enabler, Levers optimism & humor, Huge HIMSS & RSNA Network, Multi Cloud, AI & LLM

11 months ago

Very informative and a great start!

