Artificial Intelligence In Mental Health Care

Could the advancement of machine learning and deep learning algorithms be harnessed meaningfully in mental health? Could depression, bipolar disorder, schizophrenia, or any other mental disorder be quantified so that technology could add positively to their diagnosis or treatment? We explored the uses of artificial intelligence in mental health care and found smart algorithms that support clinicians with the early detection and diagnosis of mental health issues and the flagging of suicide risk, as well as others that help patients manage their condition through counselling and constant availability.

Could mental health become quantifiable?

While preparing our article about the future of psychiatry, we realized that one of the most exciting and controversial intersections of mental health and technology is the attempt to bring artificial intelligence into psychology and psychiatry. Machine learning and deep learning algorithms are typically trained on ‘hard’, physical datasets: lesions spotted on X-ray images, CT scans, or malignant tissue samples. How could the achievements of artificial narrow intelligence and its two most advanced fields, computer vision and natural language processing, be applied to such a fluidly classifiable field as mental health disorders?

Let’s approach this difficult question by looking at the areas where smart algorithms excel: classifying and clustering data from images and from written or spoken language, and then finding patterns in huge piles of data. As treatment for mental health issues is, in the majority of cases, based on language interaction, analyzing speech data might be a viable way forward – no matter whether the data comes from social media posts, therapy sessions, or survey responses. On the other hand, the ‘physical’ manifestations of mental disorders could also be studied more closely by A.I. algorithms – meaning changes in the brain might be analyzed in greater detail.
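
To make the language-based route more tangible, here is a minimal sketch in Python of what text-based screening could look like: a TF-IDF representation of short texts feeding a simple classifier. Everything in it – the texts, the labels, the tiny dataset – is invented purely for illustration; a real system would require large, ethically sourced, clinically labeled corpora and rigorous validation.

```python
# Minimal sketch of language-based screening: classify short texts as
# "flag for clinician review" vs. "no flag". The tiny dataset below is
# invented purely for illustration, not clinical ground truth.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't sleep and nothing feels worth doing anymore",
    "everything is pointless lately, I just stay in bed",
    "had a great hike with friends this weekend",
    "excited about the new project starting on Monday",
]
labels = [1, 1, 0, 0]  # 1 = flag for review, 0 = no flag

# TF-IDF turns each text into a weighted word-frequency vector;
# logistic regression then learns a linear decision boundary over it.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict_proba(["lately I feel like nothing matters"])[:, 1])
```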

Pearl Chiu at the Human Neuroimaging Laboratory believes that machine learning algorithms fed with both types of data will be able to diagnose mental illness. The lab hopes it can soon add saliva and blood samples to the equation as well. Perhaps, she says, this method can identify patterns that clinicians don’t notice or can’t access through the brain alone. By making mental health disorders more physical, Chiu hopes to help destigmatize them. If they could be diagnosed as objectively and corporeally as heart disease, would depression, bipolar disorder, or schizophrenia carry the same shame?

It seems that the emergence of A.I. could mean a profound change not only in the diagnosis – and, as we will soon see, the treatment – of mental health issues, but also in how we look at the disorders themselves. Perhaps A.I. could bring an end to the era of shame attached to depression, suicidal thoughts, or bipolar disorder. Many researchers are working on smart algorithms for diagnosing or managing mental health disorders. Let’s look at some promising examples.

Uses of artificial intelligence in mental health care:

Early detection, flagging risks, and prediction

In the future, patients might go to the hospital with a broken arm and leave the facility with a cast and a referral to a compulsory psychiatry session due to a flagged suicide risk. That’s what some scientists aim for with an A.I. system developed to catch depressive behavior early on and help reduce the emergence of severe mental illness. The machine learning algorithm created at Vanderbilt University Medical Center in Nashville uses hospital admissions data, including age, gender, zip code, medication, and diagnostic history, to predict the likelihood of any given individual taking their own life. In trials using data gathered from more than 5,000 patients who had been admitted to the hospital for self-harm or suicide attempts, the algorithm was 84 percent accurate at predicting whether someone would attempt suicide in the following week, and 80 percent accurate at predicting attempts within the following two years.
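
The article does not detail the internals of the Vanderbilt model, so the following is only a hedged sketch of the general recipe: train a classifier on admissions-style tabular features and evaluate how well it discriminates. The feature set and the synthetic labels are assumptions made for demonstration, not the actual data or algorithm.

```python
# Hedged sketch of risk prediction from admissions-style tabular data.
# This is NOT the Vanderbilt model; features and labels are synthetic,
# chosen only to mirror the kinds of fields the article mentions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.integers(0, 2, n),     # gender (encoded)
    rng.integers(0, 15, n),    # number of current medications
    rng.integers(0, 5, n),     # prior self-harm diagnoses on record
])
# Synthetic label loosely tied to prior self-harm history, for demo only.
y = (X[:, 3] + rng.normal(0, 1, n) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```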

In another experiment, researchers showed that a smartphone, coupled with an algorithm monitoring user behavior over time, could arrive at a similar diagnosis. According to preliminary studies, changes in typing speed, voice tone, word choice, and how often users stay home could signal trouble. There might be as many as 1,000 smartphone-based ‘biomarkers’ for depression, said Dr. Thomas Insel, former head of the National Institute of Mental Health and now a leader in the smartphone psychiatry movement. Researchers are currently testing experimental apps that use artificial intelligence to predict depressive episodes or potential self-harm. Another tool, EARS (Effortless Assessment of Risk States), also uses smartphone data to identify people in psychological distress and may someday help flag individuals at risk of suicide.
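
As a hedged illustration of what a single smartphone ‘biomarker’ pipeline might look like, the sketch below tracks one behavioral signal – median inter-keystroke interval per day – and flags days that drift far from the user’s own rolling baseline. The signal, the window, and the threshold are illustrative assumptions, not validated clinical markers.

```python
# Hedged sketch: flag days whose typing speed deviates sharply from a
# personal rolling baseline. Signal and thresholds are illustrative.
import statistics

def flag_anomalous_days(daily_intervals, window=14, z_threshold=2.0):
    """Return indices of days whose value deviates from the rolling baseline."""
    flagged = []
    for i in range(window, len(daily_intervals)):
        baseline = daily_intervals[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(daily_intervals[i] - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged

# 20 ordinary days of median inter-keystroke intervals (ms),
# then a sharp slowdown on the final day.
days = [210, 205, 215, 208, 212, 207, 211, 209, 214, 206,
        210, 213, 208, 211, 209, 212, 207, 210, 214, 208, 320]
print(flag_anomalous_days(days))  # -> [20]
```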

Facebook does something similar on its platforms. For years, the company has allowed users to report suicidal content, but it ramped up these efforts after several people live-streamed their suicides on Facebook Live in early 2017. About a year ago, Facebook added A.I.-based technology that automatically flags posts containing expressions of suicidal thoughts for the company’s human reviewers to analyze. The company now leverages both algorithms and user reports to flag possible suicide threats.

Researchers in a study published in World Psychiatry used a machine-learning model to classify speech patterns in individuals at risk of schizophrenia; the model was 83 percent accurate in predicting whether psychosis would occur.
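
One way speech patterns can be quantified is by measuring how semantically connected consecutive sentences are; published work in this area has used latent semantic analysis for that purpose. The toy sketch below substitutes simple bag-of-words vectors and cosine similarity to show the shape of such a coherence score – it is not a reproduction of the study’s method or its 83 percent result.

```python
# Toy coherence score: average similarity between consecutive sentences.
# Bag-of-words vectors stand in for richer semantic embeddings here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_coherence(sentences):
    """Average cosine similarity between each sentence and the next one."""
    vectors = CountVectorizer().fit_transform(sentences)
    sims = [cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
            for i in range(vectors.shape[0] - 1)]
    return sum(sims) / len(sims)

coherent = ["I went to the store.", "The store was closed.",
            "So I went to another store nearby."]
tangential = ["I went to the store.", "Birds migrate in winter.",
              "My cousin collects stamps."]
# A derailed narrative scores lower than a connected one.
print(mean_coherence(coherent), mean_coherence(tangential))
```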

Digital interviewers by the side of human doctors

Another area where algorithmic analysis could help is the automation of tasks that are deliberately repetitive. For example, structured clinical interviews could in the future be conducted by virtual humans: they would reliably ask the same, predetermined questions, and interviewees might feel less burdened sharing their secrets with an anonymous virtual entity than with another, possibly judgmental, human.

In one study, for example, a virtual human conducted interviews with real people in emotional distress. Distinct speech patterns, such as slurred vowel sounds, and patterns in body language, such as the direction someone is looking, were analyzed. If a machine learns that people who are depressed do not open their mouths as wide as people who are not, it can use speech and facial analysis to identify people who are more likely to be depressed. Such technology has the power to dramatically improve research and treatment. Smart algorithms could find patterns and behaviors that human interviewers might miss or overlook – because we all have cognitive biases.
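
To illustrate how a single facial feature like mouth opening could be turned into a number, here is a hedged sketch that computes a normalized lip-gap ratio from (x, y) landmark coordinates. The landmark layout is a hypothetical placeholder; a real pipeline would obtain these points frame by frame from a face-landmark detector and compare feature distributions across groups.

```python
# Hedged sketch of one facial feature: mouth opening while speaking.
# The landmark keys below are hypothetical placeholders for this demo.
import numpy as np

def mouth_opening_ratio(landmarks):
    """Vertical lip gap normalized by mouth width, from (x, y) landmarks."""
    gap = np.linalg.norm(np.subtract(landmarks["lower_lip"],
                                     landmarks["upper_lip"]))
    width = np.linalg.norm(np.subtract(landmarks["mouth_right"],
                                       landmarks["mouth_left"]))
    return gap / width

# One video frame's worth of (x, y) points, invented for illustration.
frame = {"upper_lip": (50, 60), "lower_lip": (50, 72),
         "mouth_left": (38, 66), "mouth_right": (62, 66)}
print(mouth_opening_ratio(frame))  # 12 / 24 = 0.5
```

Averaged over the frames of a speaking clip, a feature like this becomes one column in the kind of pattern analysis the study describes.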

Source: sparkforautism.org

Another study, published in Studies in Health Technology and Informatics, found that more symptoms of post-traumatic stress disorder were identified in service members who spoke to a virtual human using facial expression analysis than in service members receiving a standard post-deployment health assessment.

Artificial intelligence-based chatbots to help patients 24/7

Artificial intelligence could not only help with the diagnosis and early detection of mental health issues; it could also participate meaningfully in the management of disorders. Compared to a human psychiatrist or psychologist, the most advantageous features of smart algorithms may be their anonymity and accessibility. In recent years, many smartphone-based applications have been developed that proactively check on patients, are ready to listen and chat anytime, anywhere, and recommend activities that improve users’ wellbeing. No matter whether it’s 3 a.m., the chatbot is ready to listen, and no one has to wait until the next appointment with their therapist. Moreover, these applications are usually more affordable than therapy itself, so people who could otherwise not get any counselling at all can still get some help.
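
None of these apps publish their exact logic, so the following is only a minimal sketch of the check-in pattern they share: ask how the user feels, match the reply against simple cues, and suggest an activity, while escalating anything that hints at crisis. The keywords and replies are invented for illustration; real products layer natural language processing, CBT protocols, and human escalation on top.

```python
# Minimal, invented sketch of a chatbot check-in loop. Real products
# replace the keyword matching below with proper NLP and clinical logic.
SUGGESTIONS = {
    "low":  "Would a 2-minute breathing exercise help right now?",
    "ok":   "Nice. Want to jot down one thing that went well today?",
    "high": "Great! Logging good days helps spot what works for you.",
}
CRISIS_WORDS = {"suicide", "hurt myself", "end it"}

def check_in(message: str) -> str:
    text = message.lower()
    if any(w in text for w in CRISIS_WORDS):
        # Any hint of crisis should hand off to humans immediately.
        return "I'm worried about you. Please contact a crisis line now."
    if any(w in text for w in ("sad", "down", "anxious", "tired")):
        return SUGGESTIONS["low"]
    if any(w in text for w in ("fine", "okay", "alright")):
        return SUGGESTIONS["ok"]
    return SUGGESTIONS["high"]

print(check_in("feeling pretty anxious tonight"))
```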

For instance, Woebot, a little algorithmic assistant, aims to improve mood. It promises to meaningfully connect with the user, showing bits and pieces of empathy while giving you a chance to talk about your troubles to a virtual robot and receive some counseling in return. Pacifica is a similar tool that boosts users’ mood through cognitive behavioral therapy; its activities include meditation, relaxation, and mood and health tracking.

Source: aithority.com

Moodkit, developed by the mobile application development company Thriveport, is a system of applications that helps users alleviate symptoms of mental illness; it also bases its guided activities on cognitive behavioral therapy in order to identify and change negative thought patterns over time. The A.I.-based, ‘emotionally intelligent’ chatbot Wysa combines techniques of cognitive behavioral therapy and dialectical behavioral therapy with guided meditation, breathing, and yoga. Developed in collaboration with researchers from Columbia and Cambridge universities, it aims to help users manage their emotions and thoughts.

Facing mental health issues is one of the most difficult tasks in life – and with worsening global statistics on mental health disorders, we appreciate the hard work that each and every mental health professional does, and we truly welcome every innovation and technology that aims to bring down the prevalence of depression, suicide risk, or any other mental trouble. Statistics say that one person dies from suicide every 40 seconds, and for every adult who dies from suicide, more than 20 others have attempted to end their life. If we can turn even one person away from taking their life with the help of technology, we have already won. Let’s keep up the fight together.

Dr. Bertalan Mesko, PhD is The Medical Futurist and Director of The Medical Futurist Institute analyzing how science fiction technologies can become reality in medicine and healthcare. As a geek physician with a PhD in genomics, he is a keynote speaker and an Amazon Top 100 author.

Subscribe here for The Medical Futurist newsletter to get exclusive details about digital health!

Joseph Katimbo. S.

Aerospace undergraduate | Founder Serene Mind | Data nerd

2y

I am looking forward to seeing companies like Ataraxis_ Co succeed with the power of A.I.

Meagan Wyzenbeek

Business Relationship Manager for Global IT at Saab

5y

The VA has been using VR to assist in the treatment of PTSD. There are many input parameters at the start of each session, and then there is the analysis of the session itself. A session is dynamic, and adjustments need to be made based on the patient's responses. Those are all areas where AI could assist the clinician.

Such an interesting area, and a tempting subject for many, including me. Yet some basic questions remain untouched or unclarified when it comes to joining AI with mental states and putting it in the shoes of a human therapist. The strong conception of AI is still debatable, with human intelligence as the opposing scale. That said, strong AI would need to justify whether it could algorithmically semantize the deep meaning of human mental life. However, classifying disorders via neurological, biophysical, or textual data is feasible to a good extent. This is the weak (I call it functional) idea of AI, which may employ cognitive computing techniques including deep learning, and which can undeniably act as a healing coach. Monitoring and tracking patterns of behavior related to mental states is another functionality, but we must stay vigilant: making factual psychological meaning out of all that structured and unstructured data is another metalevel and a problematic debate. That is the question.

Alice F.

EMBA | MAICD | GIA (Affiliated) | Risk and Clinical Governance | Diversity and Inclusion

5y

We need to develop more expertise in natural language processing together with experts from mental health backgrounds, including psychiatrists and allied health professionals who can provide that language context. Very exciting times ahead.
