Physician critical thinking skills in the age of AI
Image generated by Co-Pilot, P. Harms, 24-02-08


I first became interested in AI when a colleague showed me that he could ask ChatGPT how to treat his patient’s pancreatitis. I was stunned that he had outsourced his thinking, reasoning that a computer could not possibly think with the exquisite sensitivity of a doctor. After some reading and ongoing continuing education, I have to conclude that yes, in fact, it can. When properly prompted, AI can now outperform a human physician on multiple measures (1). Bruised ego aside, however, there are very valid reasons for us to avoid tossing the diagnostic thinking process onto the trash heap of old medical teachings.

The healthcare scene is a hotbed of competing and up-and-coming AI-powered technologies. Many offer to guide diagnostic and treatment recommendations for patients, with the doctor on board to perform the physical examination, supply the problem list and clinical signs, facilitate the process and provide a human touch. As these tools become integrated into medical care, the diagnostic thinking skills of the physician are slated to be replaced by the superior diagnostic thinking of the AI program. You might think this far-fetched, but it has been a frequent comment among the healthcare industry professionals with whom I have had the pleasure of learning about AI. Talk about a paradigm change! We are approaching a reality where diagnostic thinking could be delegated to computers, much as math and navigation have been relegated to calculators and GPS programs. Just as students are no longer taught cursive in school, the diagnostic thinking process (signs – problem list – differential diagnoses – diagnostics – results – diagnosis – treatment plan) may no longer need to be taught in medical school. Computers can already perform it more efficiently and accurately than humans, and teaching an obsolete process is a waste of teaching resources better applied elsewhere.

And yet.

I would argue that the diagnostic thinking process remains relevant, and should continue to be nurtured within our physician population. Maintaining this cornerstone of practice is essential to the resilience, accountability, unity and creativity of our medical industry.

Unlike a calculator, an AI program generally depends on an internet connection and a power source. Unlike a GPS program, its loss of access to a central database risks life-threatening consequences. This makes the complete outsourcing of diagnostic thinking to AI a unique weak point in the foundations of medical practice. Having experienced the chaos that occurs when a power outage cuts off access to our EMR (Electronic Medical Record) systems, I can well imagine the paralysis that would follow if a loss of connection were to interrupt access to an AI diagnostic program. When EMR systems go down, physicians turn to paper-and-pencil documentation until the connection is restored. If AI diagnostic programs go down (through conflict, natural disaster, breakdown or malware attack), physicians must be able to tap their own capability to move their medical cases forward without the technology. If physicians outsource diagnostic thinking to the point where it is no longer a core competency, a loss of connection to an AI program would paralyze the care process, with potentially life-altering consequences for the patient.

Maintaining diagnostic thinking as a core skill also allows physicians to independently verify the information they receive from an AI program. No medical AI program is likely to be 100% accurate and precise; the potential for error in diagnostics or treatments will continue to exist, however small. Bias will also persist and needs to be acknowledged. Responsibility (ethical and legal) for patient outcomes rests with the human care provider, whether or not the AI program generated the treatment suggestions. As with self-driving vehicles, where the driver is liable for accidents that occur on the road, professional liability does not care who had control of the steering wheel. The program may make decisions, but it is up to the operator of the vehicle (or of the medical case) to ensure those decisions are sound, responsible and safe. With this in mind, developers of AI systems should support the continued use and teaching of medical diagnostic thinking as a core competency in medical schools. Without it, they lose the human physician as a liability buffer in the event of medical errors made by their programs.

Outsourcing diagnostic and treatment recommendations to AI programs, with the corresponding loss of these skills in physicians, also has the potential to widen the disconnect between medical systems. As AI-assisted professionals move away from classical diagnostic thinking patterns, they may find a communication gap opening between themselves and less technologically dependent physicians. When one physician is discussing lists of differential diagnoses and the other has never been trained in this kind of thinking, common ground is eroded. This could make it harder for physicians to collaborate across levels of privilege on a worldwide scale.

Finally, while medicine has made huge strides in innovation and discovery, it still has a long way to go. Progress in medicine depends, yes, on the scientific process, but also on the creativity that drives questions and innovation. Outsourcing medical thinking stifles the thought processes that generate new and creative ideas. Technology has been shown both to create and to destroy value when applied to various tasks (2). The physician’s diagnostic thinking process therefore needs to be nurtured, so we do not close off this avenue for novel ideas. A partnership between the two creative processes (human and AI) provides more fertile ground for discovery than allowing one to supplant the other.

New AI tools are being implemented that will change how much physicians think in certain aspects of their practice. They will almost certainly change what is taught in medical school over the coming generation. The potential benefit of AI in healthcare is vast. The art will be in ensuring that implementation does not come at a cost to physicians’ medical resilience, creativity, unity or critical thinking ability.

Petra Harms

CEO, doctor, educator, and speaker with 15 years in practice. Recognized SME in AI adoption in vet med, driving responsible, responsive and resilient AI adoption through education, consultancy and diverse networks.


References

1. McDuff, D., Schaekermann, M., Tu, T., Palepu, A., Wang, A., Garrison, J., Singhal, K., Sharma, Y., Azizi, S., Kulkarni, K., Hou, L., Cheng, Y., Liu, Y., Mahdavi, S., Prakash, S., Pathak, A., Semturs, C., Patel, S., Webster, D. and Dominowska, E. (2023). Towards Accurate Differential Diagnosis with Large Language Models. arXiv. Available at: https://arxiv.org/pdf/2312.00164.pdf.

2. Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F. and Lakhani, K. R. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013. Available at SSRN: https://ssrn.com/abstract=4573321 or https://dx.doi.org/10.2139/ssrn.4573321.
