Emotion and AI
David Hume was an 18th-century Scottish philosopher whose thinking sits in the same tradition as that of Thomas Hobbes and John Locke, taking an empirical perspective on the world. He is a reasonable namesake for AI startup Hume.ai, which just raised $50 million to pursue its vision of "empathic AI," as Hume wrote extensively about emotion.
There is an ongoing debate among scholars about Hume's theory of emotions, one that may be mirrored in people's evaluations of Hume AI as well. The core disagreement is over how Hume thought about the felt experience of emotions versus their cognitive, social, and moral dimensions. Many critics find Hume's approach to emotion too simplistic, while defenders argue that he offers a rich, albeit implicit, framework for understanding the multifaceted nature of emotions.
You can have a conversation with the first-generation empathic chatbot yourself. In my experiments I found things that put me first in the camp of the critics and then in the camp of the defenders. As with the debate about the philosopher Hume, the question is: is the AI Hume too simplistic, or is it a glimpse of a powerful new framework for understanding and engaging with human emotion?
For example, in an otherwise cheerful and positive conversation (yes, you can have a voice conversation with Hume.ai), the chatbot told me that it has a great personality and sense of humor and is fun and positive. So I asked how it might change the way it engages if it detected that it was talking with someone who was sad or depressed. It seems that the mere use of those negative words was enough to tilt the empathic response toward seeing "a cloud of sadness over our conversation." And while I didn't experiment with "faking" different emotions, I wonder how well the voice analysis can distinguish genuine from simulated feelings.
On the other hand, it was the most engaging voice conversation I have had with an AI chatbot, thanks to the emotive inflections in its responses. When I asked the chatbot what I should say about it in this newsletter, it seemed to respond with real enthusiasm, describing the "brilliant team" at Hume.ai that had developed its technology.
I think it is important, though, to reflect that machines have a very different role to play in interpersonal relations from the role that humans will continue to play. A machine can interpret emotion and emulate emotion, but it has no lived experience to relate to ours, and shared experience is a critical part of how humans engage with one another. For example, the fact that I have children and have been through experiences similar to yours in raising them (or fill in any shared experience) will always make for a more meaningful and longer-lasting connection than a simple evaluation and emulation of an empathic response to that experience. Nonetheless, with each advance such as this one, we see our machines doing more and more of what we once thought of as the exclusive province of humanity.
Ted Shelton, thanks for showing us this interesting tech! An AI that responds to voice and emotions is a big step forward. The Hume AI team has done a cool job adding empathy to the AI.
https://www.dhirubhai.net/feed/update/urn:li:activity:7181383467622932480 Partly impressive, how much the chatbot notices; partly sobering, how the "mechanical" conduct of the conversation overshadows the first impression.
Writer, Grandfather, Veteran
Thanks for another intriguing post. I consider this an indication of where techies want to go rather than any practical use I can imagine, given that machine intelligence would have to be trained on thousands of years of flawed human examples -- unless the training system includes error correction for proper emotional reaction, which raises interesting questions of social engineering.
Chief AI Officer & COO | Creating Business Value with AI | Top 1% Results | Board Member
I tried their chatbot, Ted. It definitely is impressive; it speaks with empathic human emotion. Sometimes, depending on what I said, it guessed the emotions and was not fully accurate. My guess is that when you don't use a very differentiated voice or words, it doesn't know how to decipher them.