What If Generative AI Turned To Be A Flop In Healthcare?

The excitement surrounding generative AI is reaching a fever pitch. From tech giants to healthcare leaders, investment in this seemingly game-changing technology is exploding. We're embracing the trend: we've written dozens of articles, created multiple videos, published an ebook, and recently launched a new short course.

However, amidst the enthusiasm, AI expert Gary Marcus raised an important question a few months ago: What if, for all its promise, generative AI fails to deliver long-term? While he outlined the pessimistic scenario in general, I wanted to dissect what genAI being a flop would mean in healthcare and medicine.

How to define generative AI failure in healthcare?

Since the public launch of ChatGPT, we've explored the potential of generative AI in medicine. This technology offers promising applications, from enhancing administrative efficiency to functioning as a virtual medical scribe, potentially reshaping how medical facilities operate and interact with patients.

Let's now consider the other side of the coin. If generative AI fails to live up to expectations, the consequences for healthcare could be significant. Let's break down what failure could look like.

We don’t find evidence that it works

A fundamental failure of generative AI would be its inability to be incorporated into evidence-based medicine. Without robust empirical support from well-conducted research and clinical trials, AI technologies can't be applied in medical practice.

No clinical trials prove its safety

Similarly, a major red flag would be if years went by without solid clinical trials evaluating the potential of generative AI.

And/or find proof that it's useless

Another clear indicator would be if trials, pilots, and studies proved that using generative AI in healthcare is inefficient and/or unsafe. This could mean AI systems making inaccurate predictions, leading to incorrect treatments, or compromising patient privacy and safety, ultimately causing more harm than good.

Deep fakes rule the information highways

The use of AI to create deepfakes could provoke significant ethical concerns and public outrage. This could include falsified medical records, misleading patient data, deepfaked medical authorities advocating bogus treatments or arguing against clinically proven ones, or fabricated medical advice. These are potentially life-threatening scenarios, and right now we are not sure how to keep such content from reaching the general population.

People recognize AI text and don’t find it credible enough

As the novelty of generative AI wanes, AI-generated materials - brochures, summaries, and the like - may become easily distinguishable and be seen as less credible and reliable. The value of generative AI as a tool for creating legitimate medical content could then diminish.

What happens next?

Continuing from the potential pitfalls of generative AI in healthcare, let's explore the broader implications should these technologies fail to fulfill their promises. The consequences would ripple across the healthcare industry, affecting public trust, regulatory landscapes, and investment dynamics.

Erosion of trust

Generative AI's failure to deliver on its promises could erode public trust in AI applications in general, casting doubt on its reliability and effectiveness. This could also slow the adoption of other AI-powered tools in healthcare and beyond.

Leading to bans

If generative AI is unsafe or ineffective in healthcare, regulatory bodies might impose restrictions or bans on its use in sensitive environments such as medical schools and hospitals. Such prohibitions would be a protective measure to prevent harm and preserve the integrity of medical education and patient care, but they would also hinder development and limit the technology's potential benefits.

Overly stringent regulations

In response to potential risks, policymakers might introduce overly stringent regulations, stifling innovation and halting the development of generative AI in healthcare. This could create a bureaucratic quagmire, slowing progress and discouraging investment.

Investors turn away from the field

If generative AI fails to demonstrate clear value, investors might lose confidence and pull back their funding. This could lead to a decline in research and development, further delaying or even halting progress in this field.

However, I don’t think it is a flop

Having said all that, I still don't believe generative AI will be a flop in medicine (or elsewhere). It differs from previous technologies in a crucial way: we don't have to take a handful of experts in specialised labs at their word about how it works. Quite the contrary: we can directly interact with it, test it, experiment, and discover its potential ourselves. Still, thinking through "what if" questions is rarely a waste of time, as it helps us prepare for the future.

I think the very nature of generative AI - its accessibility and the ability for daily hands-on use - suggests that it is unlikely to fail outright. Instead, my greater concern lies with its potentially rapid, unregulated development, which could lead to unforeseen consequences and challenges.

Silvia Veronese

Mathematician, Tech Entrepreneur

6 months ago

While the excitement around generative AI in healthcare is understandable, are we perhaps too eager to trade human judgment for algorithmic efficiency? The potential benefits are massive, but so are the risks—especially without solid empirical evidence. We must ask: Are we overlooking potential ethical pitfalls and inequalities in our rush to innovate? As we embrace AI, let's ensure that our enthusiasm doesn't outpace our commitment to safety and equity. Are we moving responsibly, or are we blinded by the allure of technological advancement?

Birgitte Rasine

Storyteller and muse for visionary leaders, organizations, and changemakers. Startup advisor. All words 100% human crafted.

6 months ago

In general it's wise to assess the power and potential of gen AI in any given industry or real-world application in the specific context of that industry or real-world application. The health/medical sector presents a very different use case/context than, say, video games. Rather than brute-forcing use cases, we should determine what features/aspects of the technology would be valuable to a specific sector, and which ones should simply be avoided. Not as easy as it sounds of course, but that would be my strategic approach. (thanks Raul I Lopez for the heads up on this article)

Eva Añón

PhD. Trainer and speaker. I help professionals and scientific societies use social media professionally to achieve their goals. Bringing AI closer to healthcare professionals.

6 months ago

Agree: "I think the very nature of generative AI - its accessibility and the capacity for daily hands-on use - suggests that it is unlikely to fail outright." What worries me more is that we may not know how to manage it...

Othmane El Mouden

Productologist | Product Operations & Marketing Manager | Intelligence | Consulting | Product Owner | PaaS & SaaS | B2B | AI & ML Enthusiast | Healthcare expert | AWS Cloud Practitioner Certified

6 months ago

Very informative

Sandeep Ozarde

Founder Director at Leaf Design; PhD Student at University of Hertfordshire

6 months ago

It won't be a flop. GenAI will continue to evolve as we progress.
