Dr Perplexity?
If you’ve been reading my posts, you probably know that I focus on healthcare-specific AI models and, in particular, how they are trained. However, that doesn’t mean that ‘generic’ Large Language Models (LLMs) are clueless about healthcare. Nearly a year ago, in Dr AI?, one of my first healthcare AI posts, I discussed a patient with Microsoft Copilot, Google Bard (now Gemini) and Pi. I think things have advanced quite a bit since then. Let’s see if that’s true.
To explore this, I used Perplexity, which is currently my favorite LLM. The company was founded in 2022 by Aravind Srinivas, Denis Yarats, Johnny Ho and Andy Konwinski, engineers with backgrounds in back-end systems, AI and machine learning.
The company has raised $165 million at a valuation of over $1 billion. Investors include Jeff Bezos, and the model is hosted on Amazon Web Services. The company has been accused by Forbes of using Forbes’s material without attribution (Perplexity has said this might have happened due to “rough edges” in the tool). It has also been criticized by Wired for its crawling practices: the company reportedly does not abide by robots.txt, the Robots Exclusion Protocol that websites use to tell visiting web crawlers and other robots which portions of a site they may visit. The company denies this allegation.
First, how is Perplexity different? This question appears on its search home page, and clicking it provides several answers. Note the link from the question: Perplexity gives each search result a URL that you can use to access or share it, so you can read the entire answer. For our purposes, I feel these are the most important:
1) It combines advanced AI language models with real-time web searching capabilities so it can gather up-to-date information.
2) It distils information from multiple authoritative sources and summarizes it, including references. In fact, it says that this is its primary function.
In addition, and importantly, users can upload documents relevant to their query, and this was a key capability for my first test.
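If you would like to experiment with this capability outside the web interface, Perplexity also offers a developer API. Here is a minimal sketch, not the workflow I used for this post, of how a document might be sent along with a query. It assumes Perplexity’s OpenAI-compatible chat-completions endpoint; the model name ("sonar") and the PPLX_API_KEY environment variable are placeholders you should check against the current API documentation.

```python
# A minimal sketch (not the workflow used in this post) of sending a document
# to Perplexity programmatically via its OpenAI-compatible API. The model
# name and environment variable are assumptions; verify them in the API docs.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"
API_KEY = os.environ["PPLX_API_KEY"]  # placeholder environment variable

def summarize_document(doc_text: str, question: str) -> str:
    payload = {
        "model": "sonar",  # assumed model name; substitute a current one
        "messages": [
            # The web UI lets you attach a file; over the API the simplest
            # equivalent is to paste the document text into the prompt itself.
            {"role": "user", "content": f"{question}\n\n{doc_text}"},
        ],
    }
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    with open("fhir_bundle.json") as f:  # hypothetical file name
        print(summarize_document(f.read(),
                                 "Please summarize this FHIR-based Patient Record"))
```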
I copied an HL7 Fast Healthcare Interoperability Resources (FHIR) bundle from the HL7 website. A bundle is a group of typically related FHIR resources, the entities in which a related group of clinical or other information is stored in FHIR. FHIR resources are typically represented in JavaScript Object Notation (JSON), a text format commonly used on the internet (although XML is an option). I put the bundle into a text file and attached it to this Perplexity query: “Please summarize this FHIR-based Patient Record”. Note the link; you can examine the results for yourself.
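For readers who have never looked inside FHIR, a Bundle is simply a JSON object whose "entry" array wraps the individual resources. Here is a small, illustrative sketch of how you might peek inside one with a few lines of Python; the file name is made up, but any Bundle saved from the HL7 site will work.

```python
# Illustrative only: open a FHIR Bundle saved as JSON and count the resource
# types it contains (Patient, Condition, Observation, and so on).
import json
from collections import Counter

with open("fhir_bundle.json") as f:  # hypothetical file name
    bundle = json.load(f)

# Each item in "entry" wraps one resource; "resourceType" identifies it.
resource_types = Counter(
    entry["resource"]["resourceType"] for entry in bundle.get("entry", [])
)

for resource_type, count in resource_types.most_common():
    print(f"{resource_type}: {count}")
```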
The make-believe patient is Pieter van de Heuvel, a male born November 17, 1944.
Here’s a part of the summary that indicates Pieter has severe non-small cell lung cancer in his thorax.
To produce this summary, Perplexity clearly must understand the FHIR standard. As we will now see, that requires knowledge not only of FHIR itself but also of the other standards and code systems it refers to.
Here is the relevant FHIR Condition resource visualized by the clinfhir.com FHIR Bundle visualization tool. clinFHIR, built by New Zealand’s FHIR expert David Hay, is one of my favorite tools for exploring FHIR.
One of the standards FHIR refers to is SNOMED CT, accurately described by its developers as “the most comprehensive, multilingual clinical healthcare terminology in the world”. Understanding it is a challenge because it contains over 350,000 concepts and the 1.3 million relationships among them.
The code 254637007 is for non-small cell lung cancer and 51185008 is for the thorax. The links from these codes are to the international version of a free, public browser for exploring the standard.
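To make the connection between FHIR and SNOMED CT concrete, here is a hand-built, simplified Condition resource using the two codes above. It illustrates the general shape of such a resource; it is not a copy of the actual entry in Pieter’s bundle, and the display strings and severity element are my own additions.

```python
# A simplified, hand-written Condition resource (not copied from the actual
# bundle) showing how SNOMED CT codes are carried inside FHIR.
import json

condition = {
    "resourceType": "Condition",
    "subject": {"display": "Pieter van de Heuvel"},
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "254637007",
            "display": "Non-small cell lung cancer",
        }]
    },
    "bodySite": [{
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "51185008",
            "display": "Thorax",
        }]
    }],
    "severity": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "24484000",  # SNOMED CT qualifier value for "Severe"
            "display": "Severe",
        }]
    },
}

print(json.dumps(condition, indent=2))
```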
As it promises, I feel Perplexity did a good (but perhaps not perfect) job of summarizing the information and formatting the results in a very human-friendly way. Let me know in the comments what you think.
However, it could be argued that this task was easier because, as you can see above, the FHIR information was already structured and organized. To test this, I asked Perplexity to summarize free-text information, such as might be found in a patient chart (but, considering the source, probably better written).
Case conferences are a learning tool in which a complex patient is presented to physician attendees who are asked to name the diagnosis. A few days before the New England Journal of Medicine (NEJM) posted the diagnosis, I asked Perplexity to summarize Case 31-2024: A 37-Year-Old Man with Fever, Myalgia, Jaundice, and Respiratory Failure from the journal’s October 9th issue. If you have access to the journal, you can read the case, but I can’t post it or the results, because the case is copyrighted.
My query was “Can you summarize this patient” followed by the case text.
I didn’t tell Perplexity that this is a NEJM case, nor did I ask it to propose a diagnosis, but it did that on its own, complete with references:
In the NEJM readers’ survey, 11% guessed the same diagnosis Perplexity proposed but, alas, the patient actually had Leptospirosis.
According to the National Library of Medicine: “Leptospirosis is an infectious disorder of animals and humans and is the most common zoonotic infection in the world. This infection is easily transmitted from infected animals through their urine, either directly or through infected soil or water. Leptospirosis can cause a self-limiting influenza-like illness or a much more serious disease. This condition is known as Weil disease, and it can progress to multiorgan failure with the potential for death.”
However, again before the diagnosis was posted, I asked Perplexity “What are the possible causes of this patient's symptoms?” (in effect, a request for a differential diagnosis). As you can see here, Leptospirosis was at the top of the list.
Note that it added: “This bacterial infection can cause fever, myalgia, jaundice, and respiratory symptoms. The patient's exposure to wooded areas and insect bites increases this possibility”.
Finally, I suppose, the obvious question is:
I was impressed! Are you?
Postscript 1: The wording of queries matters (the discipline is called Prompt Engineering). I directly asked Perplexity "What's wrong with this patient?" followed by the same NEJM case text I used previously. It immediately provided Leptospirosis, the correct diagnosis.
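To make that comparison repeatable, the two phrasings could be run side by side. This is purely illustrative; it reuses the hypothetical summarize_document() helper sketched earlier, and you would have to supply your own copy of the copyrighted case text.

```python
# Purely illustrative: run two prompt phrasings against the same case text,
# reusing the hypothetical summarize_document() helper sketched earlier.
with open("nejm_case_31_2024.txt") as f:  # copyrighted; supply your own copy
    case_text = f.read()

for prompt in ("Can you summarize this patient",
               "What's wrong with this patient?"):
    print(f"--- {prompt} ---")
    print(summarize_document(case_text, prompt))
```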
Postscript 2: I asked Perplexity to summarize the record (a very large FHIR bundle) for Lakendra Roberts, a synthetic, SYNTHEA-generated diabetes patient from the SMART R4 Sandbox. It really struggled. You can see the conversation here: https://www.perplexity.ai/search/please-summarize-this-fhir-bas-1_e2qpspTGalPRzuwFFKsg I assume the size of the record caused problems, since LLMs typically have a limited "context window" of information they can consider at once. This "patient" has 16 Conditions, 113 Encounters and 657 Laboratory Tests!
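A common workaround for this limit, though not something I tested for this post, is to split a very large bundle into smaller sub-bundles, summarize each one separately, and then ask the model to combine the partial summaries. A rough sketch of the splitting step, assuming the Bundle structure described earlier:

```python
# A rough sketch (untested against Perplexity) of splitting a huge FHIR
# Bundle into smaller sub-bundles that fit within a model's context window.
import json

MAX_ENTRIES_PER_CHUNK = 50  # arbitrary; tune to the model's context window

def split_bundle(bundle: dict, max_entries: int = MAX_ENTRIES_PER_CHUNK):
    """Yield smaller Bundle-shaped dicts, each with at most max_entries entries."""
    entries = bundle.get("entry", [])
    for start in range(0, len(entries), max_entries):
        yield {
            "resourceType": "Bundle",
            "type": "collection",
            "entry": entries[start:start + max_entries],
        }

if __name__ == "__main__":
    with open("lakendra_roberts_bundle.json") as f:  # hypothetical file name
        big_bundle = json.load(f)
    chunks = list(split_bundle(big_bundle))
    print(f"Split into {len(chunks)} chunks")
    # Each chunk could then be passed to the summarize_document() sketch above,
    # and the partial summaries summarized once more.
```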