A PRIMER ON CONVENTIONAL AND GENERATIVE AI IN RADIOLOGY – WHAT ARE THE DISTINCTIONS AND WHERE IS IMAGING HEADING?
Edward Steiner
Medical Director and Chief of the WellSpan York Prostate Care Center; Chairman of Imaging and Radiation Oncology, 2016-2022, WellSpan/York Hospital, PA
Machine learning (ML) and deep learning (DL) have revolutionized radiologic image interpretation. Machine learning involves algorithms that learn patterns from data, while deep learning is a subset of ML that uses neural networks with multiple layers (deep neural networks).
In radiology, traditional machine learning techniques involve feature extraction and selection, where specific image features are identified manually and a model is trained to recognize them. Deep learning, by contrast, particularly convolutional neural networks (CNNs), has shown significant success by automatically learning hierarchical features directly from images.
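To make that distinction concrete, here is a minimal, illustrative sketch of the two approaches, not drawn from any clinical product: it assumes grayscale images arrive as fixed-size NumPy arrays and that scikit-learn and PyTorch are available, and the feature definitions, network sizes, and class labels are all hypothetical.

```python
# Minimal, illustrative sketch (not a clinical product): hand-crafted
# features + classical ML versus an end-to-end CNN that learns its own features.
# Assumes 64x64 grayscale images as NumPy arrays; all names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import torch.nn as nn

# --- Traditional ML: features are engineered by hand, then classified ---
def handcrafted_features(img: np.ndarray) -> np.ndarray:
    """Toy feature vector: mean intensity, variance, and average edge strength."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), img.var(), np.abs(gx).mean() + np.abs(gy).mean()])

def train_traditional(images, labels):
    X = np.stack([handcrafted_features(im) for im in images])
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X, labels)
    return clf

# --- Deep learning: a small CNN learns hierarchical features directly ---
class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # e.g., normal vs. abnormal

    def forward(self, x):
        x = self.features(x)              # learned hierarchical features
        return self.classifier(x.flatten(1))
```

The key difference the sketch shows: in the first path a human decides which image statistics matter; in the second, the convolutional layers discover the relevant features during training.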
Deep learning models can learn complex representations and patterns in medical images, allowing for more accurate and automated detection of abnormalities. These models are trained on large datasets, and deep learning's ability to automatically extract relevant features has contributed to its success in medical imaging tasks such as detecting tumors, abnormalities, or other diagnostic findings in radiologic images. These algorithms need to be FDA approved and are essentially "locked" after approval: they do not "learn" from clinical interaction, and updates require periodic FDA re-submission, which guards against drift.
Gen AI, short for generative artificial intelligence, involves the use of algorithms and models to create new data or content. Unlike traditional AI systems that follow pre-programmed rules, generative AI learns from vast amounts of existing data, on the order of millions to billions of data points, and then generates novel outputs.
At the core of generative AI is the use of neural networks, particularly a subset called generative models. One common type is the Generative Adversarial Network (GAN). In a GAN, two neural networks, a generator and a discriminator, are pitted against each other during training.
1. Generator: This part of the model creates new data. For example, in image generation, the generator might create pictures of faces.
2. Discriminator: This part of the model evaluates the generated data and compares it to real data. It tries to distinguish between real and generated examples.
During training, the generator aims to produce content that is indistinguishable from real data, while the discriminator improves its ability to differentiate between real and generated data. This dynamic creates a continuous feedback loop, with the generator getting better at creating realistic content and the discriminator improving its ability to spot the difference.
Once trained, the generator can be used independently to produce entirely new, unseen data.
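As a minimal sketch of this adversarial loop (a toy one-dimensional data distribution rather than medical images; PyTorch assumed, with arbitrary network sizes and hyperparameters):

```python
# Minimal, illustrative GAN sketch on toy 1-D data, not medical images.
# Assumes PyTorch; all sizes and learning rates are arbitrary choices.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" samples from a toy distribution
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator: learn to label real data 1 and generated data 0
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call its output "real"
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, the generator alone produces entirely new samples
new_samples = generator(torch.randn(10, latent_dim))
```

Once the loop converges, only the generator is kept; in imaging research the same idea underlies synthetic-data generation, although any clinical use would require separate validation.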
Generative AI has diverse applications, ranging from creating realistic images and videos to generating human-like text. It has potential uses in fields like art, content creation, and even drug discovery, where generating new molecular structures is a complex but promising application.
However, it's important to note that generative AI also raises ethical considerations, especially in terms of deepfakes (fake but convincing media) and issues related to misuse of generated content.
The future of medical applications, as highlighted by multiple vendors at this week's RSNA, will involve the integrated use of imaging-based TRADITIONAL AI for pixel data together with generative AI, producing a comprehensive clinical patient view that includes:
· Radiology reporting automation that prevents common errors and automatically generates reports (with radiologist input, but with more than 80% reduction in dictated syntax).
· Auto-generation of IMPRESSIONS that link to the dictation and avoid common errors such as left/right inversions.
· Immediate reference to the radiographic abnormality, measurements, and follow-up plan, with links to proven criteria such as the Fleischner criteria for pulmonary nodule management.
· Links to HL7 data and follow-up workflows so that every incidental abnormality is tracked automatically and no patient "falls off the grid." This is especially important in screening studies such as lung cancer screening, as well as for incidental findings (a minimal sketch of this HL7 linkage follows this list).
· Detection and prioritization of critical findings based on traditional pixel-based imaging AI, so that these patients come to the top of our reading list with a significantly decreased error rate.
· Improved accuracy that reduces false negatives and false positives in areas such as mammography and prostate cancer, achieving what I call a "first time right diagnosis."
· Speed and resolution gains such as MR SmartSpeed on our Philips platform, allowing a 40% reduction in MRI time and resulting in better studies, improved patient comfort, and higher throughput. Most vendors offer similar products.
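As a minimal sketch of the HL7 follow-up linkage mentioned in the list above: the message content, field positions, and "follow-up registry" here are hypothetical, and plain pipe-delimited parsing stands in for a real interface engine.

```python
# Minimal, illustrative sketch of flagging an incidental finding from an
# HL7 v2 ORU result message for follow-up tracking. The sample message,
# field positions, and downstream registry are hypothetical.
SAMPLE_ORU = (
    "MSH|^~\\&|RIS|HOSPITAL|EHR|HOSPITAL|202312010830||ORU^R01|12345|P|2.5\r"
    "PID|1||MRN001||DOE^JANE\r"
    "OBX|1|TX|IMP^Impression||Incidental 7 mm pulmonary nodule; follow-up CT in 6-12 months||||||F\r"
)

def flag_incidental_findings(hl7_message: str):
    """Return (patient_id, finding_text) pairs whose OBX text mentions follow-up."""
    patient_id, findings = None, []
    for segment in hl7_message.strip().split("\r"):
        fields = segment.split("|")
        if fields[0] == "PID":
            patient_id = fields[3]                    # PID-3: patient identifier
        elif fields[0] == "OBX" and "follow-up" in fields[5].lower():
            findings.append((patient_id, fields[5]))  # OBX-5: observation value
    return findings

for mrn, text in flag_incidental_findings(SAMPLE_ORU):
    print(f"Add to follow-up registry: {mrn}: {text}")
```

In practice this kind of logic lives inside the reporting or workflow platform rather than in standalone scripts, but the principle is the same: structured result messages drive the follow-up list so no finding is lost.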
Generative AI tools are diagnostic aids that do not require FDA approval; their learning algorithms are more dynamic and can incorporate on-the-go learning and internal modulation. Their training datasets are not in the thousands but in the hundreds of thousands to billions of data points. This is both an incredible positive and a potential pitfall, in that we have to pick the right vendor with a track record of responsiveness and reliability.
This article is a synopsis and oversimplification of a complex topic, but it is also based on the author's experience rolling out AI applications such as Aidoc (for PE, large vessel occlusion, stroke/intracranial bleeds, rib fractures, cervical spine fractures, free air, and aneurysms); RAPID for stroke and brain perfusion abnormalities; Nuance PowerScribe/Microsoft for voice recognition and large language models; Quantib/DeepHealth for prostate MRI AI diagnosis; the Dragon Ambient eXperience (DAX) Copilot solution, a conversational ambient generative AI that lets physicians have natural patient interactions while the conversation is fully processed by AI, so that patients truly feel the physician's full attention rather than impersonal note-taking during a visit; ScreenPoint AI for breast cancer detection on mammography, which may detect cancer up to two years before conventional mammography; and more.
This article was assisted by ChatGPT, and the Scrooge image at the top (Bracing for the Holidays!) is an AI-generated personal creation that used my own face as a "fake."
AI is here to stay, but we need to use it with discrimination: partner with experienced vendors that have an AI platform or operating system that can roll out across your health system, and select the AI tools that give the greatest return on investment.