Drifting Into The Valley

At the I/O 2018 conference, Google shared a demo of Duplex, their artificial intelligence service (bot) "for accomplishing real-world tasks over the phone". The demonstration shows the bot making reservations with a hair salon and a restaurant. The unsuspecting recipients converse with the bot as they negotiate times and details. This is an impressive advancement in conversational bots and clearly demonstrates the power of AI as a tool for task augmentation. It's also a clear indicator that we're entering a new relationship phase with this technology, one where the line between humans and AI becomes blurred.

While we're still technically in the Narrow/Weak AI (context-constrained) phase of artificial intelligence, the nuance and sophistication of natural language synthesis is now almost indistinguishable from human speech. An AI's ability to track multiple topics throughout a conversation also contributes to a more human-like impression. While chatbots have been around for more than 50 years, often creating momentary illusions of sentience, we're now entering a new era of conversational bots that can pass as human over the phone.

After listening to Google's demo, I was immediately struck by the use of "um" as a conversational filler. I'm guessing the presence of a filler tested well, providing a more human-like feeling, but "um" is annoying to me. It's taken me years to eliminate "um" from my vocabulary, and now it seems to be part of Google's bot lexicon. A few years ago, when I was researching "bot etiquette," I hadn't anticipated we'd be perpetuating speech disfluency, but here we are. I'm sure this technology will quickly adapt its diction to suit each "owner".

I was also struck by the fact that the human recipient of the call was unaware they were conversing with a bot. While I was working on AI-related projects at Microsoft, we looked at the effects of perceived deception from AI: the result of your brain realizing it's been fooled. This is a primary principle of the Uncanny Valley hypothesis, the "feeling" that something perceived as real actually isn't, which leads to a dip in the human observer's affinity for the humanoid or experience. In our research, the only way to reliably avoid a deception event was to be transparent, fully disclosing the hidden truth. Some may feel that requiring a bot to self-disclose at this point is overcautious, but the negative impact of a perceived deception should not be underestimated.

Extrapolating the use case into slightly more sensitive territory can also give us a sense of near-term moral and privacy dilemmas. For example, if the human were to tell the bot that a credit card is needed to hold the reservation, the bot would need to assess the trustworthiness of the business and the user's intent regarding privacy. The user's home address, family details, Social Security number, work details, etc. could all be readily available to the bot, but deciding when and why to share these details requires a nuanced evaluation of each situation. When your bot answers a call from another bot, will it have the training to prevent a malicious phishing attack? There's also a good chance a bot would have better tools than most humans for recognizing malicious behavior. Will we soon see an AI bot arms race, where we're damned if we adopt them and more so if we don't?

There's an exciting side to all of this, and it's where Google's Duplex product is surely headed: the creation of personal digital proxies that will help us navigate the brave new AI-driven world. There's even a chance these "AI proxy agents" will empower us to take back control of our data and leverage its true value. A well-trained digital agent could negotiate an exchange of personal data (data not even Google has access to) for deep discounts at a retailer, instantly negotiate great insurance rates, perhaps even negotiate a new job salary. How much will we entrust to these intelligent agents? And who will act in our best interests? Personally, I don't have faith that Google will build my AI proxy agent with my best interests in mind; they have clearly shown their propensity for monetizing user data.

While we still live in the Wild West of AI, there’s a growing interest in avoiding major missteps that could result in widespread mistrust in AI and related fields. Last year, a group of leading AI and Robotics researchers got together at the Asilomar Conference Center in California and wrote an initial set of principles to help guide new developments—it’s a good start, but in my opinion misses some thinking around disclosure and transparency. For the next couple of years we’ll likely stumble our way along as we figure out how to cohabit with our new digital friends.  

Heather Hughes

Consultant in Learning, Writing and Research

6y

I am so happy to hear the AI landscape is growing up. We are moving beyond enchantment that it exists to the impacts--subtle and blunt-- on our lives and opportunities. It gives me hope. I will digest the principles. Re: Proxies, I do wonder how to balance business interests. Don't think Google or any for profit can keep people safe. Might need different motivations. Non-profit? Govt.? Heard about this organization on NPR recently. https://ainowinstitute.org/ They document impact of AI decisions on vulnerable populations. Anyway, thanks for sharing your thoughts. I'll pass it around home.
