Quest for Efficiency: How Artificial Intelligence Would Drive Image Interpretation in Private Practice Radiology

I am lucky to be happily employed as a diagnostic radiologist in a busy hospital-based private practice. What I love is that radiology is always changing. In the last seven years we've dropped transcription services in favor of voice recognition, expanded to a second hospital, brought night coverage back in-house, and watched call volume double, while expectations for efficiency and accuracy are set higher each year.

Out of necessity and personal interest, I've become an efficiency junkie. I've fine-tuned my hanging protocols and PowerScribe templates. I've got anti-glare computer glasses and have mastered my 12-button programmable mouse. Even my office layout and ergonomics are optimized, with all my phone numbers and lookup tables within eyeshot. I derive satisfaction from plowing through the worklist with the least amount of resistance.

With recent headlines extolling Artificial Intelligence (AI) that will revolutionize healthcare while forever changing the way physicians practice medicine, could AI be the Holy Grail in this quest for peak efficiency? Especially in private practice, we radiologists are continually tracked and evaluated by metrics such as productivity and turnaround time. How readily we adopt this disruptive technology will depend on its ability to improve our ratings. Everyone would agree that an accurate report and a correct diagnosis benefit the patient. However, there are excellent radiologists who struggle in private practice simply because they are too slow and cannot handle the volume. Even with measurable gains in accuracy, I believe AI will not gain traction in real-world private practices if it comes with unfavorable trade-offs in speed. This is a major obstacle to adoption.

When I push through my workday, I don't need a lengthy report full of measurements and long differentials. I don't want to sift through an AI report that takes me longer to prelim than to read the case on my own. I don't want to fumble through extra sets of images to cross-check or validate AI findings when I know the x-ray is normal -- multiplied by one hundred.

In its current form, AI is a long way from taking my job. Granted, if the main function of AI is to produce a better radiology report, then it does become an existential threat and therefore my competition. I still must be the final read. Checking the AI report might yield a better-quality report, but at what cost to my productivity numbers? That trade-off creates a huge conflict of interest against trusting and utilizing the AI. Ironically, this feeds back into a defensive mindset, creating an unhealthy environment for improving accuracy and setting the technology up to fail. Which begs the question: how often is the difference clinically significant, and is it worth my time to use AI at all?

So for me and AI to play happily together, we must come alongside each other to complement our abilities, not compete to deliver the same product. And since machines are arguably more trainable than stubborn humans, AI should defer to the radiologist's way of doing things -- by that I mean be subordinate to the radiologist's workflow.

The Eyes Have It

The process of reading a case is entirely visually driven. I open a study and my eyes automatically begin my search pattern. After thousands of repetitions, my eyes know where to go. Consistency, detection ability, and accuracy are maximized. I run down the checklist in my mind, note the findings, and contemporaneously dictate my report. When I am in "The Zone" everything just flows. I get to the end, then I pause, collect my thoughts, and form a clear and concise impression.

When I am in "The Zone", I am not interested in knowing what AI has for me until the end. The last thing I want is to interrupt my search pattern and concentration. But if I get bogged down analyzing a bowel obstruction, difficult anatomy, or comparing lung nodules and bone mets, of course I'd like the option to summon the AI genie to do the dirty work. An even smarter AI would learn when to jump in and when to lie low.

What is the natural solution? An AI that tracks my eye movement! Amazingly, this is already mainstream technology (e.g., the attention detection built into Apple's iPhone Face ID), but I am sure it can be improved to detect not only where I am looking, but also where I am focusing my attention. A fast AI would keep up with my visual lead. Only pertinent information would be presented in real time. An even faster AI might use Natural Language Processing to listen for discrepancies as I dictate and alert me to take a closer look.
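To make the idea concrete, here is a rough Python sketch of the dwell-detection piece, written the way I imagine it: the AI stays quiet until my gaze has settled on one spot long enough to count as a fixation. Everything here (the GazeSample structure, the thresholds) is invented for illustration; a real system would pull samples from an eye-tracker SDK.

```python
import math
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float  # screen position in pixels
    y: float
    t: float  # timestamp in seconds

DWELL_RADIUS_PX = 60  # gaze must stay within this radius...
DWELL_SECONDS = 0.8   # ...for this long before the AI speaks up

def detect_dwell(samples: list[GazeSample]) -> tuple[float, float] | None:
    """Return the fixation point (x, y), or None while the eyes are still moving."""
    if not samples:
        return None
    anchor = samples[-1]
    start_t = anchor.t
    # Walk backward through the contiguous run of samples near the newest one.
    for s in reversed(samples):
        if math.hypot(s.x - anchor.x, s.y - anchor.y) > DWELL_RADIUS_PX:
            break
        start_t = s.t
    if anchor.t - start_t >= DWELL_SECONDS:
        return (anchor.x, anchor.y)
    return None

# Demo: one second of steady gaze at (500, 400), sampled at 60 Hz.
steady = [GazeSample(500.0, 400.0, i / 60.0) for i in range(60)]
print(detect_dwell(steady))  # (500.0, 400.0) -> time to surface findings here
```

The point of the design is restraint: no fixation, no feedback, so my search pattern is never interrupted.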

Embedded Heads Up Display is Critical

If workflow is visually driven, it follows that taking eyes off the image leads to inefficiency. Switching between multiple displays, correcting dictation typos, checking the Electronic Health Record, and tweaking hanging protocols are distractions that break concentration, not to mention waste time. A poorly designed AI interface is no different.

The best solution is some type of embedded Heads-Up Display. In an automobile, this is a transparent digital image projected onto the windshield, reducing distraction and promoting safety by "keeping the driver's eyes on the road." In similar fashion, diagnostic information feeds and alerts would be streamed to the radiologist in real time. To avoid information fatigue, the AI feedback could be toggled on and off, perhaps with a foot pedal or voice command, and adjusted for peripheral or central vision.
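A minimal sketch of that toggle logic, assuming nothing about the actual HUD hardware (all names here are illustrative): one switch, bound to a foot pedal or voice command, flips the feedback on and off, and a placement setting shifts it between central detail and a terse peripheral summary.

```python
from enum import Enum

class Placement(Enum):
    CENTRAL = "central"
    PERIPHERAL = "peripheral"

class HudFeedback:
    """Tracks whether the AI feed is visible and where it sits in the field of view."""

    def __init__(self) -> None:
        self.enabled = False
        self.placement = Placement.PERIPHERAL

    def toggle(self) -> None:
        # Bind this to a foot pedal press or a voice command like "AI toggle".
        self.enabled = not self.enabled

    def visible_findings(self, findings: list[str]) -> list[str]:
        """The lines the HUD should actually draw this frame."""
        if not self.enabled or not findings:
            return []
        # Peripheral placement shows only a terse count; central shows detail.
        if self.placement is Placement.PERIPHERAL:
            return [f"{len(findings)} AI findings"]
        return findings

hud = HudFeedback()
hud.toggle()  # pedal press: feedback on
print(hud.visible_findings(["RLL nodule 6 mm", "trace right effusion"]))  # ['2 AI findings']
```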

Wearables such as Google Glass could find practical application here. Virtual Reality (VR) and Augmented Reality (AR) headsets and goggles could provide a more immersive experience and add capability for auditory and tactile feedback as well. Nanotechnology has the potential to offer a lightweight solution in the form of contact lenses and implantables.

Diagnostic information could manifest as highlighting of anatomic images, color-coded heat map overlays, or floating text and numbers. Of course, all of this would be presented in real time while looking through images.
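As a rough illustration of the heat map idea, here is a short Python sketch that blends a probability map onto a grayscale slice with adjustable transparency. The data is synthetic; in practice the probability map would come from the AI model, and low-probability pixels are masked so normal anatomy stays uncluttered.

```python
import numpy as np
import matplotlib.pyplot as plt

def overlay_heatmap(slice_gray: np.ndarray, prob_map: np.ndarray, alpha: float = 0.4) -> None:
    """Display a grayscale image with a semi-transparent heat map on top."""
    fig, ax = plt.subplots()
    ax.imshow(slice_gray, cmap="gray")
    # Hide low-probability pixels so the overlay marks only suspicious regions.
    masked = np.ma.masked_where(prob_map < 0.2, prob_map)
    ax.imshow(masked, cmap="hot", alpha=alpha, vmin=0.0, vmax=1.0)
    ax.set_axis_off()
    plt.show()

# Synthetic demo: a noisy 256x256 "slice" with one suspicious hot spot.
rng = np.random.default_rng(0)
slice_gray = rng.normal(0.5, 0.1, (256, 256))
yy, xx = np.mgrid[0:256, 0:256]
prob_map = np.exp(-((xx - 170) ** 2 + (yy - 90) ** 2) / (2 * 15.0 ** 2))
overlay_heatmap(slice_gray, prob_map)
```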

Image Manipulation Beyond the Point-and-Click

Here is where it gets exciting. With AI capability for medical image segmentation and registration, we are no longer limited to the 2-D realm. Imagine viewing the liver in full 3-D Augmented Reality: rotating the organ at will, adjusting the transparency to reveal cancers that have already been characterized and measured, manipulating the tissues to isolate the blood supply, generating a heat map of perfusion and viable tumor, and highlighting anatomic regions of local invasion to assess tumor resectability. Motion artifact is automatically corrected. 4-D dynamic motion images are automatically generated.
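One small piece of that workflow is already easy to sketch: once a lesion is segmented in 3-D, measurements fall out of the mask essentially for free. Here is a toy Python example computing lesion volume from a binary mask and the scanner's voxel spacing (synthetic data stands in for a real segmentation output).

```python
import numpy as np

def lesion_volume_ml(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary 3-D mask in milliliters (1 mL = 1000 mm^3)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_mm3 / 1000.0

# Demo: a 10 x 10 x 10-voxel lesion on a 1 x 1 x 2 mm grid -> 2.0 mL.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 20:30, 20:30] = True
print(f"{lesion_volume_ml(mask, (1.0, 1.0, 2.0)):.1f} mL")
```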

Naturally, smooth manipulation of 3-D images lends itself to novel input devices that sense position and acceleration in 3-D space. With AR capability and haptic feedback, our input devices may simply be our own hand gestures and fingertip motion. This technology has already landed, as millions of video gamers can attest.

Future radiologists will look back at how we used to expend so much effort scrolling through thousands of images in 2-D with keyboard and mouse (oh so 1990s), mentally sifting through extraneous data to find the needle in a haystack, while pushing our productivity to the limit with PACS, voice recognition, and teleradiology (i.e., "Imaging 2.0").

Search patterns will change. Attention will gravitate towards new targets and engage more of our senses. The ability to make findings will be overshadowed by skill in appraising AI data and negotiating better patient outcomes. Our role as physician consultant will expand. Are we ready for this?

As with any disruptive new technology, the main question pertaining to AI is "Will this help me do my job better/faster/cheaper?" I can attest that for the busy day-to-day radiologist, a driving concern is efficiency.

In later articles I hope to address practical topics such as accuracy, when AI would take the lead, quality improvement, human judgement and bias, non-clinical radiology work, and whatever else comes to mind as I stay busy in my own private practice.
