The AI-empowered patient is coming. Are doctors ready?

Artificial intelligence (AI) has long been heralded as an emerging force in medicine. Since the early 2000s, promises of a technological transformation in healthcare have echoed through the halls of hospitals and at medical meetings.

But despite 20-plus years of hype, AI’s impact on medical practice and America’s health remains negligible (with minor exceptions in areas like radiological imaging and predictive analytics). As such, it’s understandable that physicians and healthcare administrators are skeptical about the benefits that generative AI tools like ChatGPT will provide.

They shouldn’t be. This next generation of AI is unlike any technology that has come before.

The launch of ChatGPT in late 2022 marked the dawn of a new era. This “large language model,” developed by OpenAI, first gained attention by helping users write better emails and term papers. Within months, a host of generative AI products sprang up from Google, Microsoft, Amazon and others. These tools are quickly becoming more than mere writing assistants.

In time, they will radically change healthcare, empower patients and redefine the doctor-patient relationship. To make sense of this bold vision for the future, this two-part article explores:

  1. The massive differences between generative AI and earlier generations of artificial intelligence.
  2. How, for the first time in history, a technological innovation will democratize not just knowledge, but also clinical expertise, making medical prowess no longer the sole domain of healthcare professionals.

To understand why this time is different, it’s helpful to compare the limited power of the two earliest generations of AI against the near-limitless potential of the latest version.

Generation 1: Rules-Based Systems And The Dawn Of AI In Healthcare

The latter half of the 20th century ushered in the first generation of artificial intelligence, known as rule-based AI.

Programmed by computer engineers, this type of AI relies on a series of human-generated instructions (rules), enabling the technology to solve basic problems.

In many ways, the rule-based approach resembles traditional medical-school pedagogy, in which students are taught hundreds of “algorithms” that help them translate a patient’s symptoms into a diagnosis.

These decision-making algorithms resemble a tree, beginning with a trunk (the patient’s chief complaint) and branching out from there. For example, if a patient complains of a severe cough, the doctor first assesses whether fever is present. If yes, the doctor moves to one set of questions; if not, to a different set. Assuming the patient has been febrile (running a fever), the next question is whether the patient’s sputum is normal or discolored, which again leads to the next subdivision. Ultimately, each end branch contains only a single diagnosis, which can range from bacterial, fungal or viral pneumonia to cancer, heart failure or a dozen other pulmonary diseases.
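For readers who like to see the mechanics, here is a minimal sketch of such a rule tree in Python. The branch points and diagnostic labels are invented for illustration only; real clinical algorithms are far larger and are curated by clinicians, not hard-coded like this.

```python
# A minimal sketch of a first-generation, rule-based diagnostic "tree" for the
# cough example above. Every branch point and label here is illustrative only.

def diagnose_cough(has_fever: bool, sputum_discolored: bool, xray_infiltrate: bool) -> str:
    """Walk one branch of a hand-written rule tree from chief complaint to a label."""
    if has_fever:
        if sputum_discolored:
            # Discolored sputum plus fever pushes toward an infectious cause.
            return "suspected bacterial pneumonia -- confirm with chest X-ray and culture"
        # Fever without discolored sputum: consider viral or atypical infection.
        return "suspected viral or atypical infection -- supportive care, re-evaluate"
    if xray_infiltrate:
        # Afebrile patient with an infiltrate: broaden the differential.
        return "consider malignancy, heart failure or other pulmonary disease -- further workup"
    return "likely non-infectious cough (e.g., asthma, reflux, medication side effect)"

# Example walk down the tree for a febrile patient with discolored sputum.
print(diagnose_cough(has_fever=True, sputum_discolored=True, xray_infiltrate=False))
```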

This first generation of AI could rapidly process data, sorting quickly through the entire branching tree. And in circumstances where the algorithm could accurately account for all possible outcomes, rule-based AI proved more efficient than doctors.

But patient problems are rarely so easy to analyze and categorize. Often, it’s difficult to separate one set of diseases from another at each branch point. As a result, this earliest form of AI wasn’t as accurate as doctors who combined medical science with their own intuition and experience. And because of its limitations, rule-based AI was rarely used in clinical practice.

Generation 2: Narrow AI And The Rise Of Specialized Systems

As the 21st century dawned, the second era of AI began. The introduction of neural networks, mimicking the human brain’s structure, paved the way for deep learning.

Narrow AI functioned very differently from its predecessor. Rather than relying on rules written by researchers, these second-generation systems feasted on massive data sets, using them to discern patterns that the human mind alone could not.

In one example, researchers gave a narrow AI system thousands of mammograms, half showing malignant cancer and half benign. The model quickly identified dozens of differences in the shape, density and shade of the radiological images, assigning an impact factor to each that reflected the probability of malignancy. Importantly, this kind of AI wasn’t relying on heuristics (a few rules of thumb) the way humans do, but on subtle variations between the malignant and normal exams that neither the radiologists nor the software designers knew existed.
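As an illustration of that workflow, the toy sketch below learns a malignant-versus-benign classifier from labeled examples rather than hand-written rules. The data is synthetic, and logistic regression stands in for the deep convolutional networks real mammography systems use; the learned weights play the role of the “impact factors” described above.

```python
# Toy second-generation ("narrow AI") workflow: learn malignant vs. benign
# from labeled examples. Synthetic feature vectors stand in for real imaging data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each exam has been reduced to numeric features (shape, density, shade, ...).
n = 2000
X_benign = rng.normal(loc=0.0, scale=1.0, size=(n // 2, 8))
X_malig = rng.normal(loc=0.7, scale=1.0, size=(n // 2, 8))   # subtly shifted distribution
X = np.vstack([X_benign, X_malig])
y = np.array([0] * (n // 2) + [1] * (n // 2))                 # 0 = benign, 1 = malignant

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The learned coefficients act like the "impact factors" described above:
# each feature's weight reflects how strongly it shifts the probability of malignancy.
print("per-feature weights:", model.coef_.round(2))
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
print("P(malignant) for one exam:", round(float(model.predict_proba(X_test[:1])[0, 1]), 3))
```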

In contrast to rule-based AI, these narrow AI tools proved superior to the doctor’s intuition in terms of diagnostic accuracy. Still, narrow AI showed serious limitations. For one, each application is task-specific: a system trained to read mammograms can’t interpret brain scans or chest X-rays.

But the biggest limitation of narrow AI is that the system is only as good as the data it’s trained on. A glaring example of that weakness emerged when United Healthcare relied on narrow AI to identify its sickest patients and give them additional healthcare services.

Researchers later discovered that, in filtering through the data, the AI had made a fatal assumption: patients who received less medical care were categorized as healthier than patients who received more. The model failed to recognize that less treatment is not always the result of better health; it can also be the result of implicit human bias.

Indeed, when researchers went back and reviewed the outcomes, they found Black patients were being significantly undertreated and were, therefore, underrepresented in the group selected for additional medical services.

Media headlines proclaimed, “Healthcare algorithm has racial bias,” but it wasn’t the algorithm that had discriminated against Black patients. It was the result of physicians providing Black patients with insufficient and inequitable treatment. In other words, the problem was the humans, not narrow AI.
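To make the mechanism concrete, the sketch below simulates that proxy-label failure: a system trained to flag high healthcare spending as a stand-in for health need will systematically under-select members of a group that historically received less care for the same illness burden. All numbers are invented for illustration; this is not the actual algorithm or data from the case above.

```python
# Simplified simulation of the proxy-label failure: spending is used as a
# stand-in for need, so a historically undertreated group is under-selected.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

true_need = rng.gamma(shape=2.0, scale=1.0, size=n)      # actual illness burden
group_b = rng.random(n) < 0.3                             # historically undertreated group

# Observed spending tracks need, but is suppressed for the undertreated group.
spending = true_need * np.where(group_b, 0.6, 1.0) + rng.normal(0, 0.1, n)

# The "model" selects the top 10% of patients by spending for extra services.
selected = spending >= np.quantile(spending, 0.90)

# Compare selection rates among the genuinely sickest patients in each group.
sickest = true_need >= np.quantile(true_need, 0.90)
for name, mask in [("group A", ~group_b), ("group B", group_b)]:
    rate = selected[sickest & mask].mean()
    print(f"{name}: {rate:.1%} of its sickest patients selected for extra care")
```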

Generation 3: The Future Is Generative

Throughout history, humankind has produced a few innovations (printing press, internet, iPhone) that transformed society by democratizing knowledge—making information easier to access for everyone, not just the wealthy elite.

Now, generative AI is poised to go one step further, giving every individual access to not only knowledge but, more importantly, expertise as well.

Already, the latest AI tools allow users to create a stunning work of art in the style of Rembrandt without ever having taken a painting class. With generative music tools, people can record a hit song, even if they’ve never played a musical instrument. Individuals can write computer code, producing sophisticated websites and apps, despite never having enrolled in an IT course.

Future generations of generative AI will do the same in medicine, allowing people who never attended medical school to diagnose diseases and create a treatment plan as well as any clinician.

Already, one generative AI tool (Google’s Med-PaLM 2) passed the physician licensing exam with an expert-level score. Another responded to patient questions with advice that bested doctors in both accuracy and empathy. These tools can now write medical notes that are indistinguishable from the entries physicians create and match residents’ ability to make complex diagnoses in difficult cases.

Granted, current versions require physician oversight and are nowhere close to replacing doctors. But at their present rate of exponential growth, these applications are expected to become at least 30 times more powerful in the next five years. As a result, they will soon empower patients in ways that were unimaginable even a year ago.

Unlike their predecessors, these models are pre-trained on datasets that encompass the near-totality of publicly available information, pulling from medical textbooks, journal articles, open-source platforms and the internet. In the not-too-distant future, these tools will be securely connected to electronic health records in hospitals, as well as to patient monitoring devices in the home. As generative AI feeds on this wealth of data, its clinical acumen will skyrocket.
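One plausible shape for that integration, sketched below, is a thin layer that summarizes structured EHR and home-monitoring data into a prompt for a generative model, with a clinician reviewing the output before anything changes. The patient record is fabricated, and LLMClient is a placeholder, not any vendor’s actual API.

```python
# Hedged sketch: flatten structured patient data into a prompt for a generative
# model. LLMClient is a stand-in for whichever model/API is eventually used.
from dataclasses import dataclass

@dataclass
class PatientSnapshot:
    age: int
    conditions: list[str]
    medications: list[str]
    home_readings: dict[str, float]   # e.g., {"systolic_bp": 162, "glucose_mg_dl": 210}

def build_prompt(p: PatientSnapshot, question: str) -> str:
    """Turn the structured record into plain text a language model can read."""
    return (
        f"Patient, age {p.age}. Conditions: {', '.join(p.conditions)}. "
        f"Medications: {', '.join(p.medications)}. "
        f"Latest home readings: {p.home_readings}. "
        f"Question: {question} "
        "Draft an assessment and next steps for clinician review."
    )

class LLMClient:                      # placeholder, not a real vendor API
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

snapshot = PatientSnapshot(
    age=67,
    conditions=["type 2 diabetes", "hypertension"],
    medications=["metformin", "lisinopril"],
    home_readings={"systolic_bp": 162.0, "glucose_mg_dl": 210.0},
)
prompt = build_prompt(snapshot, "Readings have been elevated for three days; what should change?")
# response = LLMClient().complete(prompt)   # reviewed by the physician before acting
```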

Within the next five to 10 years, medical expertise will no longer be the sole domain of trained clinicians. Future generations of ChatGPT and its peers will put medical expertise in the hands of all Americans, radically altering the relationship between doctors and patients.

Whether physicians embrace this development or resist it is uncertain. What is clear is the opportunity for improvement in American medicine. Today, an estimated 400,000 people die annually from misdiagnoses, 250,000 from medical errors, and 1.7 million from mostly preventable chronic diseases and their complications.

In the next article, I’ll offer a blueprint for Americans as they grapple with redefining the doctor-patient relationship in the context of generative AI. To reverse the healthcare failures of today, the future of medicine will have to belong to the empowered patient and the tech-savvy physician. The combination will prove vastly superior to either alone.


Kate Himmelberg

Senior Marketing Major, Spanish Minor, Professional Selling Certificate

11 months

These are great points and things to think about going forward. It is so important that we learn how to use AI as a tool and that physicians learn to use this advancement, especially with EHRs. This reminds me of when my professor, Erin Whitehurst, introduced this topic during our course this semester. It was one of my favorite sections in class! Thank you for sharing this article about the importance and impact AI can have in healthcare in the near future!

Temitayo Bewaji MD, MBA

Fractional CIO | Healthcare Innovation and Technology | Artificial Intelligence

1 year

The skepticism about AI's impact, as highlighted, is grounded in its historical trajectory in healthcare, where its transformative promises have often not been fully realized. Nevertheless, the emergence of generative AI, epitomized by innovations like ChatGPT, signals a potentially transformative phase in medical practice. For physicians, the integration of AI should commence with its application in managing the administrative aspects of their practices. This approach allows a gradual acclimatization to AI's functionalities in a less critical context. Utilizing AI for scheduling, patient communications, and record management can provide valuable insights into its operational mechanics and limitations. Such an initial engagement is essential for building a foundational understanding and trust in AI’s capabilities, laying the groundwork for its eventual application in clinical settings. This exposure is crucial for understanding AI's strengths, weaknesses, and potential biases. This incremental adoption strategy is not merely about integrating a new technology but is fundamentally about preparing the medical fraternity for an evolving healthcare landscape.

Alysa Taylor

Chief Marketing Officer, Commercial Cloud & AI at Microsoft | Marketing Innovation | Product, Technology + Marketing | Business Advisor | Enterprise Product Marketing

1 year

Thanks for writing about this! AI is poised to transform patient care and I get excited every time we find a new way to empower patients.

Micheal Sarkodie Dankwah

Student at Kumasi Technical University

1 year

This is a great

David Harris

Strategic Product Development

1 year

All good observations, but let’s not forget the bane of any data-driven approach: GIGO.
