A Year In AI: The Most Important 2023 Milestones For Medicine
Bertalan Meskó, MD, PhD
Director of The Medical Futurist Institute (Keynote Speaker, Researcher, Author & Futurist)
2023 was the year of artificial intelligence in our medical futurist world. Although we've been predicting for years that AI will transform healthcare, this revolution, which felt like a vague promise for so long, suddenly became a reality. And it was not brought about by a groundbreaking new technology.
To understand why generative AI became such a big hit in the past year, we need to look at public access. After all, AI (and AI in medicine) isn't exactly new. For more than a decade, hardly anyone cared, and now, suddenly, everyone is interested.
ChatGPT has arrived
In November 2022, OpenAI publicly launched its large language model ChatGPT. It reached over 100 million users in only two months, becoming the fastest-growing consumer application ever. Less than a year later, it has over 100 million weekly users, and millions of developers are building on its API.
ChatGPT was the first widely available solution for interacting with AI, and it has changed everything. Artificial intelligence, the mystic tool available only to a select few, became an everyday tool for anyone, overnight. Soon, everyone started speculating about potential medical use cases and how these tools could assist us in running a medical practice.
We, of course, also offered our two cents on what's coming next, and medical journals started to publish the first studies analysing potential clinical applications.
By the time we got used to it, GPT-4 had arrived
By the time we started to feel accustomed to this new "wow, we have large language models at hand" reality, OpenAI released GPT-4, its most advanced model yet, which was mind-blowingly capable.
That opened up new areas for potential future use cases, and we analysed what more we can expect from it compared to the previous GPT iterations.
As new features were added to the publicly available GPT-4 model, we started learning how to work with plugins, and it became evident that prompt engineering would soon be a crucial skill. And of course, such mainstream hype around AI made us think about what more we could expect for the rest of the year.
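To make "prompt engineering" a bit more tangible, here is a minimal sketch using the OpenAI Python client: the system message fixes the assistant's role, tone, and safety framing before the actual question is asked. The model name, prompts, and parameters are illustrative assumptions for this example, not recommendations from the article.

```python
# pip install openai  (Python client, v1.x)
# A minimal prompt-engineering sketch: the system message constrains the assistant's
# role, tone and output expectations before the user's question is sent.
# Model name, prompts and parameters are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You draft patient-friendly explanations. Use plain language, "
                "state uncertainties explicitly, and always remind the reader "
                "to discuss the topic with their own clinician."
            ),
        },
        {
            "role": "user",
            "content": "Explain in general terms what an elevated HbA1c value can indicate.",
        },
    ],
    temperature=0.2,  # lower temperature for more conservative, repeatable wording
)

print(response.choices[0].message.content)
```

The point is not the specific wording but the pattern: carefully structured instructions change the behaviour of the very same model, which is exactly the skill the paragraph above refers to.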
Enter medical alternatives with their specific challenges
As healthcare is an industry where decisions can be a matter of life and death, we can't just start using random large language models for diagnosing patients or doing the paperwork. Many of these algorithms are quite reliable, but even the best models hallucinate now and then. That is why we need healthcare-specific models.
Only a few weeks after the release of ChatGPT, Google/DeepMind announced Med-PaLM, a large language model specifically designed to answer healthcare-related questions, based on their 540-billion-parameter PaLM model. Med-PaLM can't be tested by the general public, but you can read the researchers' paper here.
A few months later, a study published in Nature revealed that the more advanced Med-PaLM 2 answered medical questions with 92.6% accuracy. Although the detailed data show that the model still has room for improvement, the results are impressive. Medical LLMs are not yet widely available, but Med-PaLM 2 is reportedly used by a limited (but growing) number of healthcare partners.
Besides Google's algorithm, dozens of others could assist medical professionals in various tasks. Or rather, eventually there will be dozens, as most of these tools are not yet reliable enough. But they already offer a glimpse into the use cases we can expect in a few years.
Regulatory issues became the hottest topic
As generative AI algorithms became extremely popular and we started to see real-life integrations of the technology in healthcare, regulatory issues became very prominent. It soon became evident that the capabilities of LLMs extend beyond text-based interactions to analysing images, documents, handwritten notes, sound, and video. Thus, we need to regulate for the future, not just the present.
The Medical Futurist and Dr. Eric Topol published a paper, "The Imperative for Regulatory Oversight of Large Language Models (or Generative AI) in Healthcare", in Nature's npj Digital Medicine. They analysed the challenges and possibilities of LLM regulation to make these models accessible for healthcare use.
The question is indeed tricky: we need a solid framework for constantly changing algorithms to make them safe, but this framework needs to be flexible enough not to stifle innovation.
And this is just one facet. Privacy issues are also a major concern when it comes to AI models, with multiple layers of problems. We discussed these when we analysed what AI companies do with your data.
Aiming for a high-level understanding
With so many things happening at once, a steadily growing part of humanity started feeling lost or left out. But throwing our hands in the air and declaring that "AI is not for me" will definitely not be a winning attitude for our future careers.
The good news is that everyday healthcare professionals don't need to become computer scientists or designers of deep learning neural networks. Most of us only need a "scientific" understanding, like the one we have of our cars or of an fMRI machine: chances are we couldn't build one from scratch, but we understand when and how to use it safely and what it is good for.
For that, we need a general grasp of what generative AI is, and it is also important to understand that multimodality is the inevitable next step. Multimodality is a major concept: such algorithms can process and interpret multiple types of data simultaneously, just like we humans do. This will enable comprehensive analysis in medicine, facilitate communication between healthcare providers and patients speaking different languages, and serve as a central hub for various unimodal AI applications in hospitals.
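To show what "multiple types of data in one request" can look like in practice, here is a minimal sketch that sends an image and a text question to a vision-capable model in a single API call. The model name, image URL, and prompt are placeholder assumptions for illustration; this is not a clinical application.

```python
# pip install openai  (Python client, v1.x)
# Illustrative multimodal request: one message carries two modalities (text + image).
# Model name, URL and prompt are placeholders, not endorsements.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder name for a vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe, in general terms, what this image shows."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample-image.jpg"},
                },
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```

The interesting part is the structure: text and image arrive as one prompt, and the same pattern extends to documents, audio, or video as models add further modalities.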
While the year was not short of doomsday scenarios and media hysteria around how AI will destroy humanity, we at The Medical Futurist remained optimistic and listed seven reasons for this positive attitude. Also, we don't believe that generative AI is genuinely creative just yet.
The predicted future is already happening
With so many predictions and guesses made during the year, it was exciting to see how the field developed. Several things have already arrived; multimodality, for example, is here. OpenAI integrated all the different "modes" of ChatGPT into a unified user interface, which allows you to upload visual prompts, for example.
And most recently, Google announced Gemini, its natively multimodal algorithm, which, according to the company, is "a brand new breed" of AI. Sadly, as it hasn't arrived for EU users, we haven't been able to start testing it yet.
What a year it has been! I'm very happy to witness this whole AI revolution, and I have learned a LOT in the past 12 months. Sure, artificial intelligence can't immediately solve all our problems, and we have to settle a huge heap of regulatory, ethical, and moral questions alongside the technological advances, but I remain hopeful.
Can't wait to see what 2024 brings! We are just witnessing the birth of multimodality with the "natively multimodal" Gemini announcement from Google. Of course, we will first experiment with it for "personal" use cases, and eventually this will make its way into healthcare.