Why Generative AI (LLM) Is Ready for Mental Healthcare
Jose Hamilton Vargas, MD
CEO @ Youper | Mental Health Innovation | Psychiatrist | Advisor | Consultant
A discussion about the steps to make ChatGPT and generative AI safe and effective for mental healthcare.
I’m among those amazed by the power of OpenAI’s ChatGPT and new technologies using large language models (LLMs). There is no doubt that these innovations represent a new chapter in applying Artificial Intelligence (AI) to mental healthcare. However, I’ll start this article by focusing on the human patient, not the technology.
About three years ago, a spate of panic attacks hit Eric Vaughan out of the blue.
The 42-year-old marketing professional from Allen, Texas, initially thought he was suffering from some inexplicable medical condition, which is understandable: a panic attack presents with a sudden onset of symptoms such as an accelerated heart rate, sweating, trembling, shortness of breath, chest pain, nausea, and a fear of dying, losing control, or going crazy.
After doctors determined that there was no underlying physiological cause for the attacks, Vaughan stumbled upon Youper and started using the AI-powered chatbot to cope with his condition.
For almost a year, he religiously logged his ups and downs using the app. Eventually, Vaughan’s symptoms receded, and he stopped using Youper regularly. While he can’t give the app all the credit for stopping his panic attacks, Vaughan said it gave him a better understanding of the relationship between his moods and mental health.
“Therapy isn’t really my jam, so I managed it with medication, sleep, and Youper,” Vaughan told the reporter Christina Campodonico at The SF Standard.
Compared to most people, Vaughan was lucky to receive a diagnosis early on and start treatment. Close to half of Americans who needed mental health care in the past 12 months did not receive it, and 67% said finding mental health care is harder than connecting with a physician, according to a survey.
When reporters repeatedly ask me whether chatbots are here to replace therapists, my first thought is, “What therapists?”
The United States is experiencing a severe shortage of mental health professionals, contributing to a growing mental health crisis. According to the National Institute of Mental Health, approximately one in five adults in the US experiences a mental illness each year. Yet, there are only 30 psychologists and 16.6 psychiatrists per 100,000 people—and those numbers are even lower in rural communities. The COVID-19 pandemic has exacerbated the problem, as demand for mental health services has surged. Addressing this shortage will require all the help we can get, and that includes Artificial Intelligence (AI).
While the use of AI in healthcare is not a new concept, there are moments in a technology’s maturation where a critical boundary is crossed and a breakthrough product is released. We are witnessing this with OpenAI’s ChatGPT, generative AI, and large language models.
Generative AI is ready for mental healthcare
Generative AI refers to a subset of artificial intelligence techniques that enable machines to create or generate new content, such as images, videos, music, or text, that resembles or is indistinguishable from content created by humans. ChatGPT is a generative AI that uses large language models to create text and other content.
Generative AI is ready to revolutionize the field of mental healthcare and become the foundation for a more specialized layer that will provide effective and accessible care to patients and also augment mental health providers.
Some people disagree and even feel threatened by this new technology. One of the critics of generative AI is, surprisingly, a mental health chatbot developer who believes we should stick to older, more predictable technologies known as rule-based chatbots. They argue that rule-based systems are better equipped to replicate evidence-based therapy interventions reliably, and they point out that the tendency of generative LLMs to “hallucinate” and produce nonsensical or inaccurate responses can be detrimental to people’s mental health.
A rule-based chatbot is a type of chatbot that relies on pre-set rules and responses to interact with users. These rules are created by the developers and are based on a specific set of questions and responses that the chatbot can handle. The chatbot can only provide responses within the scope of its pre-set rules and cannot generate new responses, which means it’s incapable of hallucinating. On the other hand, a chatbot using generative AI, like ChatGPT, is powered by machine learning algorithms that enable it to generate new responses based on the data it has been trained on. This type of chatbot can understand natural language input and can generate more personalized and nuanced responses.
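To make the distinction concrete, here is a minimal Python sketch contrasting the two approaches. It is illustrative only: the lookup rules, the system prompt, and the model name are placeholders I chose for this example, and the generative branch assumes an OpenAI API key is configured in the environment.

```python
from openai import OpenAI

# Rule-based: replies come from a fixed lookup table, so the bot can never
# hallucinate, but it also cannot handle anything outside its pre-set rules.
RULES = {
    "hello": "Hi! How are you feeling today?",
    "i feel anxious": "Let's try a slow breathing exercise together.",
}

def rule_based_reply(message: str) -> str:
    return RULES.get(message.lower().strip(), "Sorry, I don't understand. Could you rephrase?")

# Generative: the reply is produced by a large language model, so it is flexible
# and personalized, but its output must be constrained and moderated.
def generative_reply(message: str) -> str:
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a supportive mental health coach."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content
```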
Generative AI has several benefits that outweigh its tendency to generate nonsensical responses, such as personalization, efficiency, scalability, and continuous learning and improvement. Most importantly, what many critics fail to understand is how generative AI creates value.
Similar to the mobile technology breakthrough in 2008, when Apple launched the App Store and catalyzed a new era for app developers to create on top of the platform, the same thing is happening to ChatGPT, and mental healthcare won’t be an exception.
Understanding this new technology and the mindset to apply it is key to overcoming resistance and fear.
Companies that use the base model, tune it, and add access will create a lot of enduring value, as Sam Altman, CEO of OpenAI, has said.
Steps to use generative AI to provide safe and effective mental healthcare
At Youper, we created a platform that combines AI and decades of research in psychology to develop digital mental health solutions that engage patients and deliver actual clinical results. Our first app, built on this platform, has served more than 2 million users and hosted more than 10 million conversations. Researchers at Stanford University showed that Youper is effective at reducing symptoms of depression and anxiety.
Here are the principles we follow at Youper to create the specialized layer built on top of OpenAI GPT to provide safe and effective mental healthcare:
Approach AI as a tool to improve clinical outcomes
Keeping the patient at the center of everything, focusing on improving clinical outcomes, and addressing real-world problems rather than simply adopting technology for its sake are the prerequisites of a successful approach. This requires an ongoing dialogue with patients, data collection, and analysis to develop a deep understanding of patient needs.
Define the intended use
Determining the intended use and indication for use is important for AI-powered digital health products because it helps ensure that the product is used safely and effectively for its intended purpose. This information informs the design and development of the product, as well as regulatory decisions about its approval and marketing. Our first product, the Youper app, for example, is intended for general mental health improvement and learning behavioral coping skills. At the same time, we have a pipeline of prescription digital therapeutics intended for use in the diagnosis, treatment, and prevention of mental health conditions.
Fine-tune GPT on relevant proprietary data
Training general models on data specific to the target mental health concerns is critical to ensure that GPT is able to provide accurate and relevant responses. This can be achieved by fine-tuning the model on a dataset of mental health-related conversations, or by using transfer learning to adapt pre-trained models.
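As a rough illustration of what fine-tuning looks like in practice, here is a minimal sketch using the OpenAI Python SDK (v1.x). The file path and base model name are placeholders; an actual training set would consist of de-identified, clinician-reviewed conversations prepared in the required JSONL chat format.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Upload a JSONL file of de-identified, clinician-reviewed example conversations
# (the file name is illustrative).
training_file = client.files.create(
    file=open("mental_health_conversations.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a fine-tuning job against a base chat model (model name is illustrative).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```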
Integrate evidence-based expert systems
Combining a structured expert system, natural language processing, and generative AI is the core of the process. The expert system is a complex decision tree that guides how and when the generative AI interacts with the user to ensure the application of evidence-based interventions. At Youper, interventions are based on clinically proven behavioral therapies, including Cognitive Behavioral Therapy (CBT), Acceptance and Commitment Therapy (ACT), Dialectical Behavioral Therapy (DBT), Interpersonal Psychotherapy (IPT), and Mindfulness-Based Cognitive Behavioral Therapy.
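A highly simplified sketch of how such a gate might look is below. The function names, risk keywords, and mood threshold are hypothetical and chosen only for illustration; a production system would rely on validated risk classifiers and clinician-designed protocols rather than a keyword list.

```python
from typing import Callable

CRISIS_MESSAGE = (
    "It sounds like you might be going through a crisis. "
    "Please reach out to a crisis line such as 988 (US) or your local emergency services."
)

def looks_high_risk(message: str) -> bool:
    # Simplified keyword screen; real systems use trained classifiers and human escalation.
    return any(term in message.lower() for term in ("suicide", "kill myself", "self-harm"))

def route_message(message: str, mood_score: int,
                  generate_reply: Callable[[str, str], str]) -> str:
    """Decision tree deciding whether, and under which intervention, the LLM may respond."""
    if looks_high_risk(message):
        return CRISIS_MESSAGE  # rule-based branch: crises are never delegated to the LLM
    # Choose an evidence-based intervention for the generative branch (threshold is illustrative).
    intervention = "cbt_cognitive_restructuring" if mood_score <= 3 else "act_values_checkin"
    return generate_reply(message, intervention)  # LLM call constrained to the chosen intervention
```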
Implement moderation and safeguards
Implementing moderation and safeguards around GPT is crucial for preventing harmful content and ensuring the accuracy of responses. Automatic input and output flagging can identify potentially harmful or sensitive content, such as messages indicating suicidal ideation or self-harm.
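One way to implement such flagging is to screen every inbound and outbound message with a moderation endpoint before it reaches the user. The sketch below uses OpenAI’s moderation API; the escalation logic is a placeholder for a real clinical safety protocol, not a description of any specific product.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_safe(text: str) -> bool:
    """Return False if the text is flagged as potentially harmful (e.g., self-harm content)."""
    result = client.moderations.create(input=text).results[0]
    return not result.flagged

user_message = "I've been feeling really low lately."
if not is_safe(user_message):
    # Placeholder escalation: route to crisis resources instead of the generative model.
    reply = "I'm concerned about you. Please consider reaching out to a crisis line right now."
else:
    # In production, this branch would call the (moderated) generative model.
    reply = "Thanks for sharing that. Would you like to explore what's been weighing on you?"
```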
Protect user privacy
Implementing strategies to ensure that personal and sensitive information is not shared with unauthorized individuals or entities, including OpenAI, is an essential part of any application dealing with mental healthcare. Among these strategies are hashing usernames, email addresses, and other identifying information when communicating with OpenAI, and using encryption when storing user information in the cloud.
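For example, identifiers can be replaced with salted one-way hashes before any request leaves our own infrastructure. The minimal sketch below simplifies salt management for illustration; it is not a complete privacy architecture.

```python
import hashlib

def pseudonymize(identifier: str, salt: str) -> str:
    """Replace a username or email with a salted SHA-256 hash so the raw value never leaves our systems."""
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()

# The hash, not the email address, is what appears in requests to external APIs and in logs.
user_key = pseudonymize("alex@example.com", salt="per-deployment-secret")
```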
Develop a user-friendly interface
A user-friendly design makes the conversational interface accessible, easy, and fun to use. We chose a mobile app over a website or text messaging so we can take full advantage of the smartphone’s capabilities.
Monitor and evaluate performance
Continuous monitoring and evaluation of performance are important to ensure that the system provides accurate and helpful responses. This can be achieved by gathering user feedback, analyzing anonymized logs, and constantly iterating for improvement.
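In practice, this can start with something as simple as logging anonymized, structured feedback events and reviewing the aggregates regularly. A minimal sketch, with illustrative field names and rating scale:

```python
import json
import time

def log_feedback(user_key: str, conversation_id: str, rating: int,
                 path: str = "feedback.jsonl") -> None:
    """Append an anonymized feedback record for later analysis."""
    record = {
        "user": user_key,              # already pseudonymized, never a raw identifier
        "conversation": conversation_id,
        "rating": rating,              # e.g., a 1-5 "was this helpful?" score
        "timestamp": time.time(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```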
Evaluate safety and clinical effectiveness
Rigorously evaluating AI-powered digital therapeutics is the only way to ensure they are safe and effective. This includes initial studies of acceptability and effectiveness, followed by randomized clinical trials and other forms of validation. At Youper, we believe this is true even for general wellness products, for which the FDA does not intend to enforce oversight.
Pursue a regulatory pathway
Investing in regulatory compliance and establishing a communication channel with regulatory bodies such as the FDA are the first crucial steps to ensure a rigorous and systematic evaluation of the potential benefits and harms of an AI-powered digital therapeutic intended for use in the diagnosis, treatment, or prevention of mental health conditions. This will enable an independent team of experts to evaluate and mitigate new risks promptly, especially when unanticipated issues arise.
Conclusion
This is the best time to build solutions to help millions of people suffering in silence, unable to access the mental health care they deserve. Generative AI and LLMs, like ChatGPT, are the breakthroughs we have been waiting for. At Youper, we understand that this is just the beginning. Still, we couldn’t be more motivated to keep doing what we do best: taking AI and combining it with rigorous research and care for our patients to make this technology safe, effective, and accessible to people.
Comments

Behavioral Health Scientist and Technologist specializing in AI and mental health | Cybertherapy pioneer | Entrepreneur | Keynote Speaker | Professional Training | Clinical Content Development
Such an important discourse. I believe it's time we start looking at AI as a potential game-changer for mental healthcare. With the devastating crisis we're facing, we can't afford to write off promising innovations before fully exploring how they might expand access and improve care. Of course, I'm not suggesting we charge ahead recklessly with deployment. Thoughtful regulation and testing will be crucial every step of the way. But done right, I truly believe AI could be transformative - providing quality, personalized mental healthcare to underserved communities, catching signs of mental illness early, freeing up therapists to focus on patients. We have to be willing to engage with that immense potential. By thoughtfully integrating AI to augment dedicated professionals, not replace them, we can build on human strengths while benefiting from data-based insights. I wrote on this in two articles recently: https://www.dhirubhai.net/pulse/emergence-ai-mental-healthcare-navigating-risks-scott and https://www.dhirubhai.net/pulse/judicious-innovation-navigating-tensions-around-ai-scott/
Global M&A, OutSourced B,D&L Leader for SME (Small and Medium Enterprises) & Emerging Technology Healthcare Companies
Jose Hamilton, MD - once again, your insights are spot-on for what's needed in #mentalhealth care. When done properly, the use of generative #AI is safe and effective - as Youper continues to demonstrate for its users.
Helping Medical Device & Healthcare Companies Achieve Maximum Value When Selling Their Business
Great contribution to the discussion, Jose Hamilton, MD. The challenges associated with accessing mental health care need an innovative solution like Youper to fill the gaps in care.
MD Pediatrician, MBA, MsC, PhD
Outstanding article, Jose Hamilton, MD! Mental health issues have been a challenge across the world. Therefore, it is essential to understand how a consolidated organization in the mental health field, such as Youper, can use generative AI models, such as ChatGPT, to face these challenges. Congratulations, and thanks for sharing these insights.
A giver and proven Tech Entrepreneur, NED, Polymath, AI, GPT, ML, Digital Healthcare, Circular Economy, community wealth building and vertical food & energy hubs.
Insightful article. We are looking to collaborate with people and organisations like you at the cutting edge. Our alertR is a behavioural intelligence-based alerting system providing support and assistance to vulnerable individuals. Video: https://vimeo.com/779333634