How and when to use LLMs in healthcare


This post was written by Benjamin McCulloch, Content and Conversation Designer at VUX World.


Should you use LLMs in healthcare? You may be tempted to say no. There are two obvious reasons why: LLMs hallucinate (which could mean nonsensical or simply incorrect health advice gets generated), and they can be compromised by injection attacks (which could mean private medical data gets extracted).

Is that the end of the story though? Can’t we try a different strategy for how we use LLMs in healthcare?

Delfina Huergo (Senior Market Research Analyst, Frost & Sullivan) and Chip Steiner (Product Manager, Healthcare Practice, Kore.ai) recently appeared on the VUX World podcast to present a safer alternative that speeds up patient diagnosis.

LLMs aren’t only for generating responses

A great deal of the discussion around LLMs focuses on their ability to generate responses on the fly. That can work well in low-risk settings, or when there is an expert on hand to evaluate the output before it goes to a customer. In a healthcare setting, though, it’s risky.

In healthcare, you don’t need an LLM’s ability to pepper language with variety. You want the opposite. You need to give consistently crystal-clear advice, and it absolutely has to be correct. Otherwise, what’s the point? When you talk about your health, you expect an accurate diagnosis and accurate advice.

While there’s still good reason to be careful about the advice given to patients by an LLM, it turns out that it can be a lifesaver when it comes to diagnosis.

Please, describe where it hurts

Diagnosis is understandably a vital component of administering proper healthcare. However, it’s not possible to have an expert on every possible ailment available at the front desk for every patient who approaches their healthcare provider.

That is where LLMs can help. As Chip says, “So we put in front of patients, along with some of our partners, Symptom Analysis Technology that allows a patient to describe, in their own words, the clinical condition that they’re facing. That is enough of a key for the healthcare organisation then to marshal the right resources to be able to support that patient.”

You may be wondering how you would describe your own symptoms. Rather than describing a stomach problem in minute detail, most of us would likely say something along the lines of ‘I dunno why. My belly just hurts.’ It’s then down to the skill of the person doing the diagnosis to glean the right information.

As it turns out, industry-specific LLMs (large language models trained on specific company or industry data) help. As Chip says, “Not every patient is able to describe what’s going on in their own words. And so that’s where the advent of some healthcare-specific Large Language Models are really helping, because they’re the ones that have been trained on understanding the common language around particular medical conditions. We can’t expect patients to be able to cite journal articles when they’re describing their clinical condition.”

And just like a human practitioner, the assistant will ask follow-up questions to try and pinpoint the issue.
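To picture how that might work under the hood, here’s a minimal sketch in Python. It assumes (and this is an assumption, not a description of Kore.ai’s product) that the healthcare-specific LLM is used purely to pull structured details out of the patient’s free-text message, while the follow-up questions themselves are pre-written and clinician-reviewed. The call_llm function, prompt and field names are hypothetical placeholders.

```python
import json

# Placeholder (an assumption, not a real API) for a healthcare-specific LLM
# trained on clinical vocabulary. A real system would call its provider here.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your healthcare-specific LLM here")

# The model is asked only to EXTRACT structure, never to give advice.
EXTRACTION_PROMPT = (
    "Extract these fields from the patient's message as JSON: "
    "body_part, symptom, duration, severity (1-10 or null).\n"
    "Patient: {message}"
)

REQUIRED_FIELDS = ["body_part", "symptom", "duration", "severity"]

# Scripted, clinician-reviewed follow-up questions, one per missing detail,
# so the assistant never improvises a medical question on the fly.
FOLLOW_UPS = {
    "body_part": "Where exactly does it hurt?",
    "symptom": "Can you describe the feeling? For example a pain, an ache, or nausea.",
    "duration": "How long have you been feeling this way?",
    "severity": "On a scale of 1 to 10, how bad is it right now?",
}

def next_step(patient_message: str) -> str:
    """Use the LLM only to understand the message, then decide what to ask next."""
    raw = call_llm(EXTRACTION_PROMPT.format(message=patient_message))
    fields = json.loads(raw)  # assumes the model returned the requested JSON
    for field in REQUIRED_FIELDS:
        if not fields.get(field):
            return FOLLOW_UPS[field]  # ask a pre-written follow-up question
    # All details captured: hand the structured record to the triage layer.
    return "Thank you. I'm passing these details to the care team: " + json.dumps(fields)
```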

Half of the story

While the potential of an LLM-based assistant to pinpoint a patient’s ailment is useful, what should it say in return? It can’t just open with ‘where does it hurt’ and then repeat ‘mm-hmm’ after everything the patient says (despite that being rather common among human docs).

A patient expects to receive insightful advice. That’s what they came for. As you may expect, this has to be carefully administered.

As Chip says, “you’re going to err on the side of conservatism. That’s how these healthcare specific LLMs are defined. They’re going to be conservative. [In most cases] they’re going to guide a patient to seek immediate care… What the virtual assistant is doing is enhancing the conversation by eliciting information about the status of the patient. It’s not making clinical decisions based upon that but rather guiding a patient to more fully describe their symptoms.”

These assistants reply with evidence-based medical guidance. The LLM helps to understand what the user is saying; when the assistant replies, it draws on advice that has already been verified to be correct.

As Chip says, “that can either be clinical documentation, or it can be legal documentation, or it can be material that has already gone through the vetting process. And so that’s how we believe healthcare specific LLMs will have the most impact; having a better way of understanding [the patient’s needs], but then being able to deliver a relevant, concise and correct response.”
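To make that concrete, here’s a similarly hedged sketch of the reply side: the model’s understanding is reduced to a condition label plus a confidence score, and the assistant’s answer is looked up from pre-vetted content, with a conservative fallback whenever the model is unsure. The labels, wording and threshold below are illustrative only, not Kore.ai’s actual implementation; the design point is simply that everything the patient reads has already been through a vetting process.

```python
# Vetted, clinician-approved responses keyed by a condition label. The
# generative model never composes this text; it only helps select an entry.
VETTED_RESPONSES = {
    "abdominal_pain": (
        "Based on what you've described, we recommend booking an appointment "
        "with a clinician within the next 24 hours."
    ),
    "chest_pain": (
        "Chest pain can be serious. Please seek immediate medical care or "
        "call emergency services."
    ),
}

# Conservative fallback: when in doubt, guide the patient towards care.
DEFAULT_RESPONSE = (
    "We can't assess this automatically. Please contact your care provider, "
    "or seek immediate care if your symptoms worsen."
)

CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off, tuned per deployment

def reply(condition_label: str, confidence: float) -> str:
    """Return only pre-approved content, erring on the side of seeking care."""
    if confidence < CONFIDENCE_THRESHOLD or condition_label not in VETTED_RESPONSES:
        return DEFAULT_RESPONSE
    return VETTED_RESPONSES[condition_label]
```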

Helping experts make the correct choice

So how useful is this? If the strategic use of LLMs for diagnosis rather than response generation minimises risks, what are the benefits?

According to Chip, “This is not a replacement for a direct interaction between a licensed clinician and their patient, but what it does do is improve the quality of the time spent between a patient and their physician, thus enhancing the level of patient care. That’s fundamentally why this is valuable.”

That’s where we’re seeing the biggest gains with Conversational AI in general. It doesn’t, and shouldn’t, replace human experts. What it does very well is help those experts focus on what’s important.

In this case, we’re talking about AI speeding up the gathering of information about patients’ conditions, so that medical experts can focus on those who have the most urgent need.

Thanks to Delfina Huergo, Chip Steiner, and Kore.ai for these insights. You can watch the entire webinar here.
