Graham Walker's 2024 Predictions for Healthcare and AI

Thursday we asked ChatGPT, Claude, Bard, and others what they thought we'd see in 2024. Today, I'll give my own predictions as an ER doctor, healthcare tech leader, and daily user of GenAI.

But first, our LLM Awards for the most accurate 2024 predictions:

  1. Claude: while it appears to be mostly trained on data up to 2022, Claude’s responses were the most in line with my predictions.
  2. Gemini Pro surprised me as a runner-up, as I’ve been quite underwhelmed with Google’s Bard chatbot.
  3. Mixtral-8x7B gets an honorable mention for being the only one to mention cybersecurity, as 2023 saw a growing number of hospital ransomware attacks.

I’ve grouped my predictions into larger topics. Here we go.


The Healthcare Industry

  • Startup Failures: Healthcare AI startups without unique value propositions or moats will struggle as Big Tech dominates with scalable, affordable solutions. Startups will need to build unique features or find partnerships/integrations that make them special.
  • Data Accessibility: I wouldn’t be surprised if we see some large anonymized or synthetic data sets (paid or even open-sourced) released to help support the industry and support…
  • General AI Standards: I’m beating a dead horse on this one, but without a gold standard or evaluation/validation criteria, no one is going to stick their neck out in the powerful but risky GenAI space.
  • “AI Can’t Fix It All”: We’ll recognize that while AI is going to be amazing, it won’t solve all of healthcare’s problems and can’t solve many of them today. We’ll need better patient and doctor UX and better digital workflows in the meantime. Video demo of today-fixable problems!


Health System AI Adoption

  • “Backend” AI Help: We’ll see AI work its way into back-office tasks and MD-to-MD communication (referrals, summaries, etc.). These by definition keep the human in the loop, and a doctor is more likely to detect an error or hallucination, or at least to know not to trust the output without verifying it.
  • RAG Chatbots: We’ll see HCP-facing chatbots that help clinicians access their system’s guidelines, formularies, and other institutional resources, especially given that Microsoft is announcing these as future Azure tools. (A minimal sketch of the pattern follows this list.)
  • AI Partnerships: Health systems will want partnerships with AI companies, and AI companies will need health systems’ data and access to healthcare delivery.
  • Informal “What If We’d Used AI” Outcomes Analysis: HCPs will informally “run this case by AI,” especially if there’s a bad outcome. “What would the AI have said about this EKG or this CT scan?” This will be a critical step toward comfort, education, and trust.
  • Predictive Analytics: Predictive analytics tools will gain more attention, though their business applications may still be in nascent stages.
  • Diagnostic Success Stories: You’ll hear more news stories where “ChatGPT diagnosed a rare disease,” which will drive the public perception that doctors need to use AI. (Conveniently, the media will leave out the far-more-frequent times when ChatGPT was wrong or led the patient astray.)
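
Curious what those RAG chatbots look like under the hood? Here's a minimal sketch of the retrieve-then-generate pattern in Python, assuming a hypothetical in-house guideline corpus. TF-IDF stands in for a real embedding model, the guideline snippets and question are invented, and the final LLM call is left as a placeholder; a production system would use a vector store and a vetted corpus.

```python
# Minimal RAG sketch: retrieve the snippet most similar to the clinician's
# question, then ground the LLM prompt in it. TF-IDF stands in for a real
# embedding model; the guideline text below is invented, not clinical advice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

guidelines = [  # hypothetical institutional snippets
    "Sepsis bundle: draw blood cultures and lactate, start broad-spectrum "
    "antibiotics within 1 hour, give 30 mL/kg crystalloid for hypotension.",
    "Community-acquired pneumonia: use CURB-65 to guide disposition; "
    "outpatient therapy is reasonable for scores of 0-1.",
    "Formulary note: ondansetron is the preferred first-line antiemetic.",
]
question = "What fluid bolus does our sepsis bundle call for?"

# 1. Retrieve: rank guideline snippets by similarity to the question.
vectors = TfidfVectorizer().fit_transform(guidelines + [question])
scores = cosine_similarity(vectors[-1], vectors[:-1])[0]
best = guidelines[scores.argmax()]

# 2. Augment: force the model to answer only from the retrieved text.
prompt = (
    "Answer ONLY from the guideline excerpt below. If it does not answer "
    f"the question, say so.\n\nExcerpt: {best}\n\nQuestion: {question}"
)

# 3. Generate: `prompt` would now be sent to whichever LLM you trust.
print(prompt)
```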

Patient-Focused AI

  • Patient-Facing AI Trials: Some health systems will experiment with patient-facing GenAI in what I’m calling “one-way” interactions, where the AI may ask questions and collect information but won’t offer anything back to the patient.
  • GenAI Potential in Physical Therapy?: I could see some niche areas where GenAI could probably be used safely, like physical therapy. If we know you’ve got an ACL tear, could a PT chatbot help answer your questions safely, grounded in the fact that the diagnosis is already confirmed? Perhaps. (See the prompt sketch after this list.)
  • Risks and Consequences: I think there will be setbacks and challenges. Maybe not in 2024, but soon. With GenAI, and even with more deterministic ML models, we’ll unfortunately see some harms and even deaths. Who will ultimately be responsible, and how will this be decided by society and the courts?
  • Patients Running Their Symptoms by ChatGPT: People already Google their symptoms, and honestly, if they’re going to do it, I’d much prefer they use an LLM over Google, so this is definitely welcome. The question is whether OpenAI, Anthropic, and Google will continue to allow these questions or will try to block them for liability reasons.
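
A rough sketch of what that grounded PT chatbot might look like: the system prompt pins the conversation to the already-confirmed diagnosis and forces escalation for anything outside that lane. The diagnosis text, guardrail wording, and model name are illustrative assumptions, not a vetted clinical protocol; any chat-style LLM API would work the same way.

```python
# Diagnosis-grounded chatbot sketch. Everything here (diagnosis, rules,
# model name) is an illustrative assumption, not a clinical protocol.
from openai import OpenAI  # any chat-completions-style API would do

CONFIRMED_DIAGNOSIS = "complete ACL tear, left knee, confirmed by MRI"

SYSTEM_PROMPT = f"""You are a physical-therapy education assistant.
The patient's diagnosis is already confirmed: {CONFIRMED_DIAGNOSIS}.
Rules:
- Answer only questions about rehab exercises, timelines, and expectations
  for this specific, confirmed diagnosis.
- Never offer a new diagnosis or interpret new symptoms.
- If the patient mentions red flags (calf swelling, fever, severe new
  pain), reply only: "Please contact your care team or seek urgent care."
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whatever model you actually evaluate
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "When can I start jogging again?"},
    ],
)
print(response.choices[0].message.content)
```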


Ethics and Regulations

  • Bias Exploration: We’ll see more papers discussing biases in today’s medical care (examples like CKD-EPI and the new PREVENT equations), as well as biases discovered in GenAI tools and ML models (since they’re all trained on data that carries its own biases).
  • FDA Regulatory Shifts: I wonder if the FDA may have to provide clarifications or changes to its SaMD policies as the technology moves far faster than the agency can, or as the FDA becomes overwhelmed with SaMD applications.
  • More Pressure for Regulatory Change: Regulatory agencies will face growing pressure to let AI models satisfy policies that currently require humans, for example language translation services that ensure compliance with Title VI of the Civil Rights Act.
  • Ethical Dilemmas: Society and medicine will grapple with some ethical questions as AI becomes cheaper, more readily available, and more accurate, while gaps in healthcare access and inequality grow: What’s better, an imperfect but very good AI tool available right now, or a patient having to wait 3 months to see their primary care doctor (or never seeing one, for those without a PCP)?

AI/Tech Industry

  • Open Source Emphasis: Open source GenAI models like Llama 2 and Mistral are right at the heels of OpenAI and Google (“We have no moat!”). The speed at which the open source community is training and developing GenAI tools is simply incredible. Every week there’s a new open source project making it even easier to build and create, even for people without any programming experience, thanks to no-code and low-code tools. Ollama and Cheshire Cat are two open source faves; LM Studio is a great tool, as is Mind Studio from YouAI. (A quick Ollama example follows this list.)
  • Task-Specific Models: If 2023 was all about size and scale, I think 2024 may introduce smaller, task-specific models that could run on your phone or laptop and help with specific user needs. They won’t write a sonnet about scrambled eggs AND generate a pumpkin pie recipe, but a model might help clean up your calendar or categorize photos for you. (Or in medicine, it couldn’t diagnose disease but could answer common post-operative questions.)
  • Reduced Hallucinations: All the trends suggest reductions in LLM hallucinations, and I think we’ll continue to see more of this in 2024.
  • Efficiency Gains: We're already seeing gains in efficiency and speed from software and hardware improvements in 2023. We'll see way more in 2024.
  • Environmental Costs: As Spencer Dorn mentioned recently, we’ll have to come to terms with the energy consumption of these models. Efficiency will improve, but we cannot keep treating energy use and environmental impact as an externality that no one actually has to deal with. Because of this, we’ll likely need regulation in the space.
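
To show how low the barrier already is, here's a tiny example of querying a small open model through Ollama's local REST API. It assumes "ollama serve" is running and a model has already been pulled; the "mistral" model name and the prompt are just examples.

```python
# Query a small local model via Ollama's REST API (default port 11434).
# Assumes `ollama serve` is running and `ollama pull mistral` has been run;
# the model name and prompt are examples, not recommendations.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",
        "prompt": "In one sentence: when can a routine knee-arthroscopy "
                  "dressing usually come off?",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```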


Safe, Fair, and Accurate AI in Medicine

We’ll see version 2 of our Physicians Charter for Responsible AI in Medicine, and hopefully other organizations will prioritize more public discussions of this critical topic!

2024 Winners

  • Those with Data and a Trusted Brand (Mission-Driven Organizations with Vision)
  • Building inter-disciplinary teams where healthcare experts can work with AI and data science experts to solve big, real problems
  • Guardrails to keep AI safe while healthcare and patients learn more about it
  • (Unfortunately) Legacy regulations: Congress and other regulators won’t change nearly as much or as fast as anyone wants them to, and we’ll sadly build AI tools into crappy workflows, because it’s easier to do that than fix the regulations
  • Transparency: At least in 2024, physicians won’t adopt something that diagnoses or predicts or treats that they don’t actually understand
  • The patient-physician relationship, as patients will look to their physician to help interpret and understand the AI landscape


2024 Losers

  • Startups with no moat; ‘nuff said.
  • HealthAI companies that don’t respect and understand healthcare workflows. Relatedly, any companies that underestimate the complexity of medicine and think diagnosis or treatment “aren’t that hard,” or that doctors are just lazy or inefficient. (Most companies will want to partner with health systems that manage populations of patients, which is 100x more challenging than managing the health of young, healthy people.)
  • The term “AI” itself, which will be further diluted into marketing buzz. We’re already seeing “GenAI” for generative LLM and image-creation tools, and we’ll probably see more specific buzzwords for machine-learning tools as well.
  • The AI Hype Train will run out of track as we realize an LLM can’t draft a message to a complex patient with 4 different problems who’s leaving on a trip tomorrow for 9 months.
  • Forms that need filling out will hopefully be automated away. Good riddance.
  • Healthcare workers and the aging population as the US still has absolutely no plan to handle this massive wave of aging baby boomers and all of their medical and psychosocial needs.


Comments

Scott J. Campbell MD, MPH

AI whisperer for healthcare decision-makers / Founder, Zero Hour Medical / Clinical AI Advisor, Turing Biosystems / Advisor, American Board of AI in Medicine / Emergentologist

9 months ago

OK, Graham… As a longtime friend, colleague, and medical group partner, here is what I think: the three most important concepts in AI/machine learning for healthcare in 2024 are: 1. federated learning, 2. graph neural networks, and 3. synthetic data. Will LLMs facilitate querying each of those technologies and make the UX more physician-friendly? Sure. But if you are a payer/provider, delivery system, AI startup, medical device company, or pharma player in the drug discovery or drug repurposing space, and you don't manage those three key technological opportunities, you're going to have a problem in 2025. Skate to where the puck is going to be, not where it is…

Faisal Cheema

Physician @ The Permanente Medical Group | Hematology, Oncology | Lead, NCAL malignant hematology clinical trials | Interested in Health Care Tech and AI

9 months ago

Right on! I believe that 2024 will be the year of applications. Along those lines, in your impression, what regulatory hurdles would an aspiring clinical startup have to tackle prior to deployment in a health care setting? Granted, it has HIPAA, GDPR, and other built-in compliance measures covered.

Stephanie Owen

Microsoft Healthcare Consulting Lead | Certified Health Informatician | Fellow AIDH | GAICD | MSP | MBA | BEc Computer Science

9 months ago

Insightful and succinct. The point about interdisciplinary teams is what I am passionate about. It's the only response to ambiguity and complexity.

