Healthcare and ChatGPT: How Does Prompt Engineering Help?

In late 2022, Clifford Stermer posted a video on TikTok. The short clip showed the rheumatologist typing a prompt into OpenAI’s ChatGPT. The program then wrote a fully formatted letter to a medical insurance company, complete with treatment explanations, references, and a request for approval for a specific procedure on a specific patient.

The video went viral almost overnight. “Use this in your daily practice,” Stermer says at one point. “It will save time (and effort).”

Impressive as this is, it’s important to note that generative AI models such as ChatGPT aren’t modern-day miracle workers – at least not yet. To return useful and accurate results, and to lessen the possibility of inappropriate or downright false output, these models require the right prompting and supervision.

We’ll get into the importance of prompting and prompt engineering shortly. But in the meantime, we have to ask…

What are ChatGPT and Generative AI?

Stable Diffusion. Midjourney. Synthesia. Murf. DALL-E. BLOOM. GPT-3. GPT-4. ChatGPT.

You’ve probably heard of one or more of the above generative models. Generative models come in all shapes and sizes – from generative adversarial networks (GANs) to diffusion models to large language models. What they all have in common is the ability to generate original content, such as text, video, or illustrations.

Indeed, one non-filmmaker recently made headlines after creating Salt, a series of short films produced entirely with a few of the AI tools mentioned above.

Specifically, large language models and other generative pre-trained transformer (GPT) models, such as GPT-J-6B by EleutherAI and OpenAI’s GPT-3, have shown an impressive ability to generate text based on commands (or prompts) from a human user.

These models are typically deep neural networks with large numbers of parameters (elements of the model that change as it learns) trained on massive amounts of data from the internet. The models function by predicting “the next token in a series of tokens,” according to Towards Data Science. While not trained on specific tasks out of the box, they are flexible and well-trained enough to react appropriately to most prompts.
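
To make the next-token idea concrete, here is a minimal sketch using the open-source Hugging Face transformers library, with the small GPT-2 model standing in for larger GPT-family models; the prompt text is purely illustrative.

```python
# Minimal next-token generation sketch (assumes: pip install transformers torch).
# GPT-2 is used here only as a small, freely available stand-in for larger models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The patient presented with joint pain and"

# The model repeatedly predicts a likely next token and appends it to the
# running text until max_new_tokens is reached.
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```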

Large language models with the right prompting can handle many downstream natural language processing (NLP) tasks, such as the following (a brief example appears after the list):

  • Named entity extraction
  • Text corrections or editing
  • Text classification
  • Topic modeling
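
As an illustration of the first task, here is a hedged sketch of prompting a GPT-style model for named entity extraction from a clinical note. It assumes the OpenAI Python client (v1 style) and the gpt-3.5-turbo model; the note, prompt wording, and output format are illustrative choices, not a prescribed recipe.

```python
# Hedged sketch: named entity extraction via prompting.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

note = (
    "Patient was prescribed 20 mg of methotrexate weekly "
    "for rheumatoid arthritis."
)

prompt = (
    "Extract all medications, dosages, and diagnoses from the clinical note "
    "below. Return a JSON object with the keys 'medications', 'dosages', "
    "and 'diagnoses'.\n\n"
    f"Note: {note}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # a low temperature keeps the extraction more deterministic
)

print(response.choices[0].message.content)
```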

ChatGPT, in particular (based on OpenAI’s GPT-3.5 series of models), uses its learned weights to make real-time predictions and was further fine-tuned with reinforcement learning (RL) from human feedback, which rewards the model for producing appropriate responses.
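
The reward idea can be illustrated with a toy sketch: candidate responses are scored by a reward function, and higher-scoring behaviour is reinforced. The reward function below is entirely hypothetical; real RLHF pipelines learn a reward model from human preference data and update the language model with policy-optimization methods such as PPO. Only the scoring step is shown here.

```python
# Toy illustration of a reward signal over candidate responses.
# The reward function and candidate texts are hypothetical examples.
from typing import Callable, List


def pick_best_response(candidates: List[str], reward_fn: Callable[[str], float]) -> str:
    """Return the candidate with the highest reward score."""
    return max(candidates, key=reward_fn)


def toy_reward(text: str) -> float:
    """Hypothetical reward: prefer responses that cite a source and avoid hedging."""
    score = 0.0
    if "source:" in text.lower():
        score += 1.0
    if "maybe" in text.lower():
        score -= 0.5
    return score


candidates = [
    "Maybe try an NSAID.",
    "NSAIDs are a common first-line option here. Source: ACR guidelines.",
]

print(pick_best_response(candidates, toy_reward))
```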

What is Prompt Engineering?

Prompt engineering is the practice of designing and refining the instructions given to a generative model so that it returns reliable, relevant output. It’s necessary because models such as ChatGPT don’t consistently deliver optimal answers on their own: left to themselves, they can become sarcastic or provide inappropriate, incorrect, or downright false responses – an especially egregious result in a healthcare scenario where lives are often on the line.
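
One common prompt-engineering pattern is to constrain the model with an explicit system message before the user request. The sketch below assumes the same OpenAI client as earlier; the system instructions and user request are illustrative only, not the method described in the original video or article.

```python
# Hedged sketch: constraining a chat model with a system message.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": (
            "You are a clinical documentation assistant. Use only the "
            "information provided by the user. If the information is "
            "insufficient, reply 'Insufficient information' rather than guessing."
        ),
    },
    {
        "role": "user",
        "content": (
            "Draft a prior-authorization letter for infliximab for a patient "
            "with rheumatoid arthritis unresponsive to methotrexate."
        ),
    },
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=0.2,  # keep the letter consistent across runs
)

print(response.choices[0].message.content)
```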

To read the full article, please visit: https://www.capestart.com/resources/blog/healthcare-and-chatgpt-how-does-prompt-engineering-help/
