Can we use Voice to detect illness?
VoiceSignals #2 - Musings on Voice tech news
You walk into your office, and your colleague at the next desk greets you... but something is not quite right. You ask what's wrong, and they tell you they're suffering from a terrible migraine induced by a horrible family fight. Aha, you proclaim! You knew it from the moment you walked in. But did you know, or did their greeting... signal a problem? Can we use Voice to detect illness? We're not talking about capturing words like 'I feel sick' or 'I have a fever', but the actual vocal qualities of our voice. When we produce sounds and form words, a lot of energy and complexity goes into it. Our lungs and vocal cords, our overall energy levels, our mood, our mental and emotional state, all contribute to how our voice will sound, and, obviously, all these factors are affected by illness. We speak differently depending on our mood, so imagine what happens when experiencing mental health issues.
Health professionals have been using vocal analysis for diagnosis for years. The Diagnostic and Statistical Manual (DSM) of mental health has been using speech and language to diagnose mental illness for at least 50 years (e.g., the second DSM, published in 1968, lists “talkativeness” and “accelerated speech” as two common symptoms of what was then called manic depression, now termed bipolar disorder). But what about diagnosing something like heart disease, migraines, or Parkinson’s disease using only the sound of someone’s voice? Amazon is taking personalised medicine to a whole different level... in 2017 they filed for a patent that would allow Alexa to determine whether you're sick and... sell you targeted products. The Canary Speech app has successfully completed FDA clinical trials and is ready to detect Alzheimer's, PTS, and depression for suicide prevention, while researchers from the Polytechnic of Porto, School of Engineering, in Portugal have submitted a paper suggesting a methodology for early detection of Parkinson's using signal and speech processing techniques integrated with machine learning algorithms. Meanwhile, the pharmaceutical giant Boehringer Ingelheim is working on an app that uses speech recognition to detect warning signals for schizophrenia or Alzheimer's dementia.
The technology still leaves a lot to be desired, but it is looking promising. The way we diagnose health issues is bound to change, and voice is going to play a significant role. At Behavioral Signals, through Joana Correia's research work on speech analysis for clinical applications, the machine learning team has been able to build deeper models for detecting depression from speech. Depression and anxiety have a significant economic impact; the estimated cost to the global economy is US$1 trillion per year in lost productivity, according to the World Health Organization.
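To make the idea a little more concrete: systems like the ones above don't listen to *what* you say, but to low-level acoustic properties of *how* you say it. Here's a minimal, purely illustrative sketch (not the method of Canary Speech, Behavioral Signals, or anyone mentioned above) of three classic features such classifiers often start from: short-time energy, zero-crossing rate, and fundamental frequency (pitch) estimated via autocorrelation. All names and frequency bounds are my own assumptions.

```python
import numpy as np

def voice_features(signal, sample_rate):
    """Compute a few basic acoustic features of the kind often fed
    into speech-based health/emotion classifiers (illustrative only)."""
    # Short-time energy: overall loudness; reduced energy and "flat"
    # delivery are among the cues studied in depression research.
    energy = float(np.mean(signal ** 2))
    # Zero-crossing rate: a rough proxy for noisiness/breathiness.
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
    # Fundamental frequency via autocorrelation: pitch and its
    # variability are markers examined in Parkinson's studies.
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    min_lag = int(sample_rate / 400)   # search no higher than 400 Hz
    max_lag = int(sample_rate / 50)    # search no lower than 50 Hz
    peak_lag = min_lag + int(np.argmax(ac[min_lag:max_lag]))
    f0 = sample_rate / peak_lag
    return {"energy": energy, "zcr": zcr, "f0_hz": f0}

# Example on a synthetic 200 Hz "voiced" tone, 1 second at 16 kHz
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200 * t)
feats = voice_features(tone, sr)
```

A real system would compute dozens of such features frame by frame (jitter, shimmer, spectral shape, speaking rate) and hand them to a trained model; the point here is only that the raw material is measurable signal properties, not words.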
What else we read...
Voice Is the Safest and Most Accurate for Emotion AI Analysis
The conversation is on: Face Recognition vs. Voice. Which is more accurate, and which violates people's privacy less? A lot of ink will be spilled over this topic in the future, as the public raises concerns regarding surveillance and personal privacy. Facial recognition has been proven not to be fail-safe, and people are concerned about wrong identifications and their consequences. So how does voice recognition play into the mix? Is it less invasive? Can identity be obscured? How anonymized is the data? Read more on Hackernoon >
The Emotion Machine [Podcast]
- Why is it important that machines can read, interpret, replicate and experience emotions? It’s an essential element of intelligence, and users will demand and require increasingly greater intelligence from the machines they interact with.
- How does emotion analysis improve human computer conversations? It helps to establish conversational context and intent in voice interaction.
Discover the answers to these questions and many others on this week's Voice Tech Podcast with Carl Robinson and Rana Gujral.
Voice Software Pioneer Is Suing Amazon for Infringing Its Patents
VoiceBox, founded in 2001, was a leading voice technology company that was acquired last year by Nuance Communications Inc., a Massachusetts-based voice technology company. VoiceBox had built software designed to help computers understand speech, and in the mid-2000s demonstrated a device that could do things like recite the weather, play music or search for recipes on command. The company provided speech services to companies including Toyota and Samsung. Its patents didn’t follow VoiceBox’s portfolio to Nuance and were instead transferred to a new entity, VB Assets. The latter, which is owned by a trust created for the benefit of employees and investors in the original VoiceBox company, is accusing (and suing) Amazon for patent infringement in six counts, covering conversational voice interfaces, commerce and advertisements. The complaint also argues that Amazon poached VoiceBox’s chief scientist and held a recruiting event to encourage other employees to make the jump.
Marvin Minsky, the father of AI...
I guess you've all read in the news about the alleged connection between Marvin Minsky, the father of Artificial Intelligence and co-founder of MIT's AI Lab, and Jeffrey Epstein. One more disappointing addition to a long list of sexual abuses by powerful men. There's not much to say. Both are deceased now, but still... it never ceases to amaze me how people in power use every advantage to serve themselves :(
Let me know your thoughts below, or share other great relevant articles you've read...