Mental Health Awareness Week: it's good to... type?
Of all the events to have taken place during Mental Health Awareness Week, will OpenAI's launch of the emotionally-aware GPT-4o prove the most significant?
This accident of alignment raises [intriguing/exciting/terrifying] questions about the current state and future potential of mental health support in the era of AI for the masses.
Partly driven by the digitisation and dehumanisation of daily life, we're already deeply immersed in an age of anxiety, rage and other doom-laden descriptors.
As millions know to their personal cost, mental health support needs are crushingly high and the available bandwidth of traditional services is often low-to-non-existent.
The charity Mind estimates that over two million people are waiting for NHS mental health services. Since 2017, the number of young people struggling with their mental health has nearly doubled.
Far more than just our sanity is straining at the seams. Something needs to change, but how?
Psychological pretext
From a young age I've tried never to take good mental health for granted. Being unknowingly autistic throughout my formative years frequently manifested as anxiety and depression, amid a carnival of trial and error and a riot of unintended consequences.
Friendships, relationships, career progression and personal finances all felt the impact of regular resilience tests: some that come with youth and others self-engineered, like the time I accidentally collected £8,000 in parking fines in six months without realising. Amusing stories in hindsight and character-building experiences, for the most part, but very bruising for the brain.
The kind of coping mechanisms we all develop to some extent served me pretty well for most of my thirties, before imploding with a vengeance during the pandemic.
It was then, at the bottom of a six-month descent into burnout, that I discovered text-based talking therapy. With the UK locked down, demand for video-based consultations outstripped supply to a far greater degree than in ‘peacetime’.
Text-based help was more readily available, and the lack of feasible alternatives made it a no-brainer. But what felt at first like a technical trade-off soon turned out to be, for me at least, a significant value-add.
Text appeal
In an ideal world, talking therapy wouldn’t be needed. In a better version of the world we’re fated to live in, almost anyone could gain something from routine access to mental health support, like seeing the dentist or getting a haircut.
It wasn’t my first therapy rodeo, but this particular process somehow set in motion a train of thought which ultimately led to a psychiatrist’s couch a few months later and the discovery of my neurodivergence aged 37.
Why did this approach alight on a line of exploration which a long list of other conversations – with family, friends, GPs, HR, occupational health, counsellors and psychotherapists – had failed to surface over the decades?
There's no simple answer, but I’m convinced the text-based process was a fundamental part of making progress. Helpful though it was, the benefit lay not solely in the advice dispensed directly to my home by the remote keyboard therapist, but in the breathing space it gave buried thoughts to come up for air.
It won’t suit everyone or every circumstance. But for someone with autistic and/or ADHD traits, the potential benefits of using tech to create a safe space and a controlled setting go far beyond simply widening access to mental health support at a scale that people-led services are proving unable to deliver.
Robot rapport
Over the years, I've found that one-to-one conversations in a counselling or coaching context have almost always resulted in gains.
But the popularity of journaling shows the written word is also a crucial means of expression for many, allowing for greater reflection and precision. In the pandemic context, the novel text-based therapy format played to exactly those strengths.
What does all this have to do with AI?
The BBC, the Guardian and many more have frequently highlighted the growing body of work and commercial ventures harnessing emerging tech in psychotherapy.
Therapists are among the many professions AI is busy attempting to mimic at scale. Like it or not, robot therapy is a ‘thing’, and very few ‘things’ can be put back in their box once they're out.
Various demographics report finding AI-assisted therapy beneficial compared with the traditional alternative of in-person (now video) conversations, whose years-long waiting lists in many cases make them no realistic alternative at all. More specific use cases are sure to emerge as the field develops.
Is the technology good enough to be used in this way?
Naturally there are AI advocates and alarmists. Given that the industry is already in growth mode, we’d better hope someone somewhere is checking it’s fit for purpose, including weighing up the risks of unintended consequences.
On the surface, AI seems antithetical to providing humans with psychological and emotional relief. But so did ketamine, which hasn't stopped its reinvention from horse tranquiliser and post-club digestif to beneficial therapeutic aid.
Will technology get better?
It just did, with OpenAI’s new emotionally-savvy release. And if past performance is any guide to future returns, it will keep getting better next week, and the week after, and the week after that, until we all arrive in sunlit AI uplands or a dystopian AI wilderness.
Would I have been surprised if my text therapist had turned out to be an AI chatbot? [spoiler alert: they didn't]
Yes, because everyone knows OpenAI only invented AI in 2022.
Yes, because nothing in the NHS information suggested AI was being used to deliver public healthcare in this way, unless it was buried deep in the terms and conditions (which I naturally skimmed at best).
No, because re-reading the 2021 transcripts in 2024, the therapist's contributions feel very much like the kind of things an AI chatbot might or could say, if not quite today then at some point in the very near future.
Would I have gone ahead with an AI therapist if I’d been offered one?
In all likelihood, yes.
Partly due to the aforementioned lack of alternatives.
Partly due to natural curiosity.
Partly due to the belief that trying something different is a necessary precondition for achieving different outcomes.
I might take some convincing to accept the advice of an AI therapist at face value.
But I've little doubt that a well-trained and carefully safeguarded AI could add value to the exercise: by asking probing questions, challenging assumptions and putting forward interpretations which differed enough from the narrative I’d constructed for myself to move the discovery process forwards.
Value and where to find it
Data privacy, security, bias and ethics are all huge considerations in this space: far, far bigger than my grasp of the subject.
So too are the potential effects of knowing you're talking to an AI bot, rather than a trained and experienced human therapist.
Someone might answer an AI therapist differently than they would a human, while an AI might interpret their answers differently to its human counterparts.
That's part of the tension, but maybe also part of the opportunity to augment and improve on human-led therapy beyond simply enabling wider access. If understanding neurodivergence has taught me anything it's that 'different' doesn't automatically mean 'bad' or 'wrong'.
Most early adopters and experimenters with generative AI are aware that its value relies heavily on the quality of the prompts we give it.
A far more [intriguing/exciting/terrifying] question is whether the prompts AI can give us in return might help lighten, rather than add to, our existential load.