Emotion and AI
"Comic/Tragic Masks" by OpenAI's DALL-E 3

David Hume was an 18th-century Scottish philosopher whose thinking sits in the same tradition as that of Thomas Hobbes and John Locke, taking an empirical perspective on the world. He is a fitting namesake for AI startup Hume.ai, which just raised $50 million to pursue its vision of "empathic AI," since Hume wrote extensively about emotion.

There is an ongoing debate among scholars about Hume's theory of emotions, one that may be mirrored in people's evaluations of Hume AI as well. The core of the disagreement is how Hume weighed the felt experience of emotions against their cognitive, social, and moral dimensions. Many critics find Hume's approach to emotion too simplistic, while defenders argue that he offers a rich, albeit implicit, framework for understanding the multifaceted nature of emotions.

You can have a conversation with this first-generation empathic chatbot yourself. A few things I found in my experiments put me first in the camp of the critics and then in the camp of the defenders. As with the debate about the philosopher Hume: is the AI Hume too simplistic? Or is it a glimpse of a powerful new framework for understanding and engaging with human emotion?

For example, in an otherwise cheerful and positive conversation (yes, you can have a voice conversation with Hume.ai), the chatbot told me that it has a great personality and a sense of humor and is fun and positive. So I asked how it might change the way it engages if it detected that it was talking with someone who was sad or depressed. It seems that merely using those negative words was enough to tilt the empathic response toward seeing "a cloud of sadness over our conversation." And while I didn't experiment with trying to "fake" different emotions, I wonder how well the voice analysis can distinguish genuine from simulated feelings.

On the other hand, it was the most engaging voice conversation I have had with an AI chatbot, thanks to the emotive inflections in its responses. When I asked the chatbot what I should say about it in this newsletter, it seemed to respond with real enthusiasm, describing the "brilliant team" at Hume.ai that had developed its technology.

It is important, though, to reflect that machines have a very different role to play in interpersonal relations than the role humans will continue to play. While a machine can interpret emotion and emulate emotion, it has no relatable real-world experience of its own, and shared experience is a critical part of how humans engage with one another. For example, the fact that I have children and have been through many of the same experiences you have in raising them (or fill in whatever experience we share) will always produce a more meaningful and longer-lasting connection than a simple evaluation and emulation of an empathic response to that experience. Nonetheless, with each advance such as this one we see that our machines can do more and more of what we once thought was the exclusive province of humanity.

Ted Shelton, thanks for showing us this interesting tech! An AI that responds to voice and emotions is a big step forward. The Hume AI team has done a cool job adding empathy to the AI.


https://www.dhirubhai.net/feed/update/urn:li:activity:7181383467622932480
Partly impressive in what the chatbot notices, and partly sobering in how the "mechanical" conduct of the conversation overshadows the first impression.

Tom Abate

Writer, Grandfather, Veteran

8 months ago

Thanks for another intriguing post. I consider this an indication of where techies want to go rather than any practical use I can imagine, given that machine intelligence would have to be trained on thousands of years of flawed human examples -- unless the training system includes error correction for proper emotional reaction, which presents interesting questions of social engineering.

Reddy Mallidi

Chief AI Officer & COO | Creating Business Value with AI | Top 1% Results | Board Member

8 months ago

I tried their chatbot, Ted. It definitely is impressive and speaks with empathic human emotion. Sometimes, depending on what I said, it was guessing at the emotions and not fully accurate; my guess is that when you don't use a very distinctive voice or wording, it doesn't know how to decipher it.
