Generative AI in Healthcare: Where is the Line?
IntelliSOFT Consulting Ltd
Health IT Specialist | Management Systems | Research Systems
The modern concept of artificial intelligence has existed in different forms for over 70 years, though the idea itself spans much longer. Only in the recent past has its use become widespread.
You may have interacted with tools built on AI technology, such as Siri, Alexa, or the customer-service chatbots that follow you around websites, long before AI consumed virtually every part of our lives. However, investment and interest in artificial intelligence only boomed in the 2020s. Rewind to November 2022, when OpenAI released ChatGPT for testing by the general public; over 1 million people signed up to use it in just five days. AI has come a long way since, with its use doubling over the past five years.
AI: The Swiss Army Knife
So, what exactly is artificial intelligence? Simply put, it is the practice of getting machines to mimic human intelligence. With AI, machines can perform tasks that typically require human intelligence, such as creativity, learning, and reasoning.
AI use is widespread and cuts across industries. In the healthcare sector, it is revolutionizing how we deliver and receive care. From chatbots in health apps to analyzing patient records, appointment scheduling, assisting in diagnosis, and processing medical images, AI is enhancing every aspect of healthcare. Generative AI (gen AI), in particular, has been beneficial due to its potential to improve clinical decision-making, healthcare delivery and patient outcomes.
Gen AI is a type of AI trained on vast amounts of data to generate new output, such as audio, text, images, or videos. You may recognize some specific implementations of gen AI, including ChatGPT, Copilot, and Gemini.
It's important to note that, used responsibly, gen AI can ease healthcare strain, provide medical guidance, and streamline administrative tasks, offering a promising future for healthcare.
But where is the line? As much progress as we feel AI has made, we still have a long way to go. While we have been able to mitigate some (known) risks of AI, we have yet to see its long-term effects. Perks aside, what does this mean for gen AI in healthcare? For starters, we need to examine two of AI's most significant concerns and how they extend to the healthcare setting.
Hallucination
AI algorithms require vast amounts of data to learn patterns and make accurate predictions. However, errors may occur due to insufficient training data, bias encoded in the training data (racism, gender bias, etc.), and incorrect assumptions made by the model, leading to inaccurate output. Hallucination mainly occurs in generative AI, and while additional training has reduced how often it occurs, it remains a problem. Think back to about two months ago, when Google faced a scandal over inaccurate search results after integrating Gemini into its search engine.
Hallucination could be especially dangerous in a healthcare setting, where even the slightest error could result in the loss of lives. There is also the question of who would be liable if things go wrong for a patient. The developers? The machine learning engineers? The organization developing or using the AI model? We certainly can't take AI to court.
So far, the best way to deal with AI hallucination has been to keep generative AI out of areas such as diagnostics and prescriptions and instead utilize it for less sensitive tasks such as chatbots, patient record analysis, summarization, and predictive analytics.
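One way to enforce this separation in practice is a simple triage gate in front of the chatbot that escalates clinically sensitive requests to a human instead of the model. The sketch below is purely illustrative: the keyword list and function names (`SENSITIVE_KEYWORDS`, `triage_query`) are hypothetical, and a production system would need far more robust intent classification than keyword matching.

```python
# Minimal, hypothetical sketch of a safety gate that keeps a generative
# model out of sensitive clinical tasks such as diagnosis and prescribing.

SENSITIVE_KEYWORDS = {
    "diagnose", "diagnosis", "prescribe", "prescription", "dosage",
}

def triage_query(query: str) -> str:
    """Route a user query: clinically sensitive requests go to a human
    clinician; everything else may be handled by the chatbot."""
    words = query.lower().replace("?", "").split()
    if any(word in SENSITIVE_KEYWORDS for word in words):
        return "escalate_to_clinician"
    return "handle_with_chatbot"

# Example routing decisions:
print(triage_query("Can you diagnose my chest pain?"))  # escalate_to_clinician
print(triage_query("How do I book an appointment?"))    # handle_with_chatbot
```

The design choice here is deliberate conservatism: when in doubt, the system defers to a human rather than letting the model answer, which mirrors the "keep gen AI out of diagnostics" principle described above.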
Privacy Concerns
There has been rising concern over the use of private data to train AI tools. Some of this data, scraped from the internet, may include personal and copyrighted material, often collected without explicit consent. That raises the (well-founded) fear that these AI models might mistakenly reveal sensitive personal information they were trained on. While some parts of the world have enacted AI-related regulations, we still need legislation specific to the use of AI in healthcare.
So, is gen AI the big bad? Certainly not, though it does lack the sort of nuance one could only gain from "living" and being sentient, as humans are.
A study conducted by the National University of Singapore suggested that hallucination is an inevitable outcome of large language models. Even so, we can reduce risk by keeping generative AI out of sensitive areas of healthcare such as diagnosis. As for privacy concerns, our best bet is healthcare-specific AI legislation that protects users and their private data without simultaneously stifling innovation. Since AI is here to stay, we must find a way to coexist and grow alongside this technology while prioritizing our best interests.
Project Profile
At IntelliSOFT, we see the potential of gen AI to change the digital health landscape, and we are actively exploring innovative ways to do this. IntelliSOFT, funded by the Bill and Melinda Gates Foundation through the Global Grand Challenges, is conducting a proof-of-concept study that focuses on harnessing the power of artificial intelligence (AI) to combat non-communicable diseases (NCDs) among Kenyan youth. IntelliSOFT has also developed an AI-powered tool that leverages ChatGPT and Gemini to empower and engage Kenyan youth in NCD risk prevention.
We are working with researchers from Stanford University, Children's Hospital, USA, and the National Cancer Institute of Kenya to implement this project, which aims to bridge the knowledge gap surrounding NCDs and their major risk factors, which often take root during childhood and adolescence.