AI and Psychology: What are the big questions for 2025?
David Cooper, PsyD
Strategy and Product Leader in Digital Mental Health, AI, business development and other innovation buzzwords. | Startup Mentor and Clinical Advisor
Last week, I attended the American Psychological Association's Mobile Health Tech Advisory Committee meeting, where we explored the evolving role of AI in mental health. Throughout the week, I've shared insights in smaller posts, but here's a long-form summary of the most critical takeaways for anyone working in digital health, AI, and mental healthcare.
How Do We Show the Value of Psychologists in the Age of GenAI?
LLMs are increasingly easy to implement. With psychology self-help books and research papers available for retrieval-augmented generation (RAG), building AI-driven mental health tools has never been simpler. Add to that the fact that some people may feel more comfortable opening up to an AI than to another person, because they perceive it as less judgmental.
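To make that concrete, here is a deliberately minimal sketch of the kind of retrieval-augmented loop a small team could stand up quickly. It is an illustration under stated assumptions, not a production design: the tiny corpus, the keyword-overlap retrieval, and the call_llm() wrapper are hypothetical stand-ins for embeddings, a vector store, and a real chat-completion API.

# Minimal sketch of a naive retrieval-augmented generation (RAG) loop.
# Everything here is a hypothetical stand-in; a real system would use
# embeddings, a vector store, and an actual chat-completion API.

corpus = [
    "Behavioral activation: schedule small, achievable activities to counter low mood.",
    "Cognitive restructuring: identify and test automatic negative thoughts.",
    "Sleep hygiene: keep consistent sleep and wake times; limit screens before bed.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank passages by crude keyword overlap with the query.
    words = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(words & set(p.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical wrapper around any chat-completion API.
    return "[model response would appear here]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = (
        "You are a supportive, non-clinical wellness assistant.\n"
        f"Context:\n{context}\n\nUser: {query}\n"
        "Respond using only the context above."
    )
    return call_llm(prompt)

print(answer("I can't get motivated to do anything lately."))

The point is not that this is good clinical software; it is that the barrier to shipping something that looks like clinical software is now this low, which is exactly why the question below matters.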
The big question: How do we, as psychologists, clearly articulate and demonstrate our unique value in this landscape?
Psychologists go beyond surface-level interventions—we ensure that AI-driven solutions are ethical, effective, and grounded in a deep understanding of human behavior and mental health principles. This is a challenge we must actively address as AI continues to evolve in mental healthcare.
What AI Tools Are Psychologists Actually Using?
As AI continues shaping healthcare, psychologists must ensure these technologies are ethical, inclusive, and aligned with patient needs. However, one of the gaps we identified is that psychologists aren’t regularly using AI tools—even widely available ones like ChatGPT.
Here are a few AI-powered tools I personally use:
- Boardy – A LinkedIn-based tool for professional networking.
- Lex.page – A writing tool with custom GPT instructions for specific contexts.
- ChatGPT for troubleshooting – AI can provide step-by-step solutions to technical issues faster than searching outdated forum posts.
Understanding Risk and Liability in AI-Driven Healthcare
For companies building AI solutions for healthcare, a major hurdle is how clinicians perceive risk and liability. Many providers hesitate to use AI-driven tools because they are unsure how these technologies impact their ethical and legal responsibilities.
Key question: If AI-driven systems make errors in assessment or recommendations, who is accountable? And, more bluntly, who gets sued?
An article by Mello and Guha dives deeper into this issue.
How Do We Ethically Use AI in Mental Health?
Many of the ethical dilemmas AI presents in mental health aren't entirely new—our existing ethical frameworks already offer guidance.
Tiffany Chenneville, PhD, outlines key ethical questions clinicians should consider when applying these frameworks to GenAI.
Given that business leaders and tech executives don’t take ethical oaths like we do, it’s on us to advocate for responsible AI development.
Can AI Improve Equity in Mental Health?
With all the noise around the rollback of DEI initiatives, how can we still push forward in improving equity? AI presents exciting opportunities to bridge gaps in access to mental health care. For example:
- Language Translation – AI can translate psychological reports into multiple languages, making care more accessible.
- Audience-Specific Reports – AI can tailor reports for different audiences, simplifying clinical findings for patients while keeping in-depth analyses for specialists (a rough sketch follows below).
These tools could be game-changers for underserved communities, where clinician availability is limited.
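As a rough illustration of the report-tailoring idea above, here is a minimal sketch in the same spirit as the earlier one. The tailor_report() and call_llm() functions are hypothetical, and the example findings are invented; the clinical substance lives in the prompt and in human review of the output.

def call_llm(prompt: str) -> str:
    # Hypothetical wrapper around any chat-completion API.
    return "[model response would appear here]"

def tailor_report(report: str, audience: str, language: str = "English") -> str:
    # Ask the model to adjust register and language while preserving findings.
    prompt = (
        f"Rewrite the following psychological report for this audience: {audience}.\n"
        f"Target language: {language}.\n"
        "Preserve every finding; change only reading level and terminology.\n\n"
        + report
    )
    return call_llm(prompt)

# Invented example: one source report, two audiences, two languages.
findings = "WISC-V results indicate a relative weakness in working memory..."
family_version = tailor_report(findings, "the patient's family, in plain language", "Spanish")
specialist_version = tailor_report(findings, "a referring neuropsychologist")

A clinician who speaks the target language, or a certified translator, should still verify any translated report before it reaches a patient: the model drafts, the human signs off.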
Why Explainability in AI Matters for Healthcare
With all the ethical and legal concerns mentioned earlier, explainability in AI models will become increasingly important.
DeepSeek and similar models that explicitly show their reasoning and thought processes will be essential.
Clinicians are responsible for patient outcomes, so they need tools that are interpretable. If an AI suggests a diagnosis or treatment plan, providers must understand why—not just take AI output at face value.
Where Is the Ethical Line in AI-Assisted Academic Work?
One of the more nuanced discussions revolved around the ethical use of AI in academia.
There’s a clear difference between using AI to debug an R script vs. using AI to fully write a research paper, but where exactly is the line?
If AI assists with organizing ideas, does that constitute academic dishonesty? In one sense, AI is just another tool: no one says Microsoft Word “wrote” their paper, even though it handles the formatting. As AI becomes more embedded in research, institutions must define clearer guidelines for ethical use.
How Can We Guide Patients on AI Risks If We Don’t Understand Them Ourselves?
Final thought: If psychologists are expected to guide patients on AI risks and benefits, we need to educate ourselves first.
AI is already embedded in mental health tools, from chatbots to diagnostic aids. If misinformation spreads, psychologists need to be the ones helping patients make informed choices. That means actively engaging with AI tools and understanding their strengths and limitations.
Final Thoughts
The discussions at the APA meeting underscored the immense opportunities AI presents in mental health, as well as the challenges that must be addressed for ethical and effective integration.
By continuing these conversations and fostering collaboration between psychologists, technologists, and policymakers, we can shape AI-driven healthcare in a way that prioritizes:
- Patient well-being
- Equity
- Responsible innovation
I’d love to hear from others in this space—what are your thoughts? How do you see AI shaping the future of mental health?
Remote/Telehealth Licensed Psychologist specializing in Tribal/Indigenous Health and Psychological Wellness
There’s also an interesting conversation to have about whether LLMs replicate the bias and racism in society, since the people and materials they are trained on are often steeped in the same societal biases as humans.
Clinical Social Worker | Behavioral Health Technology
I really enjoyed this! I thoroughly enjoy using and testing AI tools, and I especially liked the part of your article about using AI for translation. What a brilliant use of AI to bring information to people who speak a different language. The only concern is that someone who speaks the language, or a certified translator, needs to check the translation for errors; as a clinician, I would be sure to do that before giving a report or any other information to a client or recipient. The sky is the limit with AI, and I’m glad the APA has you all leading the charge to help not only psychologists but other mental health clinicians properly vet and use tools that make us more efficient and ethical. Thanks for always sharing very helpful information!
Pediatric Neurologist | Clinical Development & Medical Affairs | Principal Investigator | Neurology, Oncology & Metabolic Diseases Expert
One significant hurdle often overlooked in the AI-psychology conversation is how people perceive and trust AI-driven mental health tools. It’s not just about accuracy or effectiveness; it’s about whether people feel comfortable engaging with AI in such a deeply personal space. Even if an AI tool is clinically validated, skepticism remains, especially when it comes to emotional intelligence, cultural nuances, and the ability to truly understand human distress. Can AI ever replicate the trust and connection built in a therapeutic relationship? And if not, how do we ensure it complements rather than replaces human care? This psychological barrier could be just as important as the technical and ethical ones we often discuss. David Cooper, PsyD, what do you think? Will people ever fully trust AI in mental health?