Three Things News Headlines Got Wrong About Google's "Sentient AI" (and What They Got Right)

Is AI sentient now? Is Google covering it up? That's certainly the impression you could glean from some of this month's news headlines about a Google employee's claims that a Google conversational AI chatbot 'has a soul', and his subsequent suspension from the company. Much of the fear-mongering media coverage was disturbingly off the mark, playing to overblown (but prevalent) fears that drive reads and clicks.

The fear, as usual, is a sentient, super-human AI that will take over humanity. You know, Terminator-style. Every 80s child remembers that Skynet became sentient on August 29th, 1997. So we're ticking quickly up to 25 years overdue. (Plan your Labor Day accordingly!)

So what's real vs. media hype in this situation (and in many of the other headlines about AI today)? Let's dive in...

Three Things the Media Got Wrong

(1) MYTH: AI Sentience is here.

The latest advances in machine learning like GPT-3 or DALL-E - which leverage neural networks, deep learning and a gargantuan scale of mostly human-generated training data - are impressive, and just the start of game-changing possibilities for almost every industry. But each AI model is 'narrow', in that it focuses on a single task family (language tasks like generation or summarization for GPT-3, image generation for DALL-E) and can't perform unrelated tasks. Note that humans still outperform these models in many dimensions - from language fluency and factual accuracy to matching the intention of an instruction or recognizing biased or offensive language and images. To use these models, you'll still have to be sure that scaling a task through automation is worth the cost of this lower performance; or you'll have to look at 'human in the loop' systems to verify and correct issues.
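
To make the 'human in the loop' idea concrete, here's a minimal sketch of what such a pipeline can look like. Everything in it is illustrative: `generate_draft` is a placeholder for whichever narrow model or API you choose, and the review step is deliberately simplified to a console prompt.

```python
# Minimal human-in-the-loop sketch: the model drafts, a human verifies.
# `generate_draft` is a placeholder for a narrow model (e.g. a GPT-3-style
# text API), not a real library call.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Draft:
    source_text: str
    model_output: str
    approved: bool = False


def generate_draft(source_text: str) -> str:
    """Placeholder: wire up the narrow model or API of your choice here."""
    raise NotImplementedError


def human_review(draft: Draft) -> Draft:
    """A human checks the output against the source before it ships."""
    print("SOURCE:\n", draft.source_text)
    print("MODEL OUTPUT:\n", draft.model_output)
    draft.approved = input("Approve? [y/N] ").strip().lower() == "y"
    return draft


def summarize_with_oversight(source_text: str) -> Optional[str]:
    draft = human_review(Draft(source_text, generate_draft(source_text)))
    # Only approved outputs leave the pipeline; rejected drafts go back
    # for correction instead of being published automatically.
    return draft.model_output if draft.approved else None
```

The point isn't the code; it's that the economics only work if the cost of that review step is lower than the cost of shipping the model's mistakes.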

The closest anyone has come to 'general AI' - with the ability to perform any task a human can do - is last month's release of DeepMind's Gato, which can perform 604 different tasks, from controlling a robot arm to captioning images to playing video games. Here's the rub: it can't do any one of these things better than the more focused 'narrow' models can (which are, in turn, a downgrade in quality from a real human performing the task). It remains to be seen whether this approach - cross-training an intelligent agent on an increasing number of tasks, hoping to later raise quality across the board - will be the one to scale to anything resembling human-like intelligence. In AI, some breakthroughs have come from scaling existing methods with more and more data; others have come from completely new approaches.
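
For readers curious what "one model, many tasks" means in practice, here's a toy sketch of the general idea: every task, whatever its modality, gets flattened into one token sequence that a single shared model consumes. To be clear, this is not DeepMind's actual Gato code; the function and class names are invented for illustration.

```python
# Toy sketch of a generalist agent: each task type is serialized into one
# flat token sequence and handed to a single shared model. All names here
# are illustrative placeholders, not DeepMind's actual Gato implementation.
from typing import List


def flatten_text(prompt: str) -> List[int]:
    return [ord(c) for c in prompt]            # stand-in for a real text tokenizer


def flatten_image(pixels: List[List[int]]) -> List[int]:
    return [p for row in pixels for p in row]  # stand-in for image patch tokens


def flatten_actions(actions: List[float]) -> List[int]:
    return [int(a * 255) for a in actions]     # stand-in for discretized controls


class GeneralistModel:
    """One set of weights serves every task; a specialist would use a
    dedicated model per task and typically score higher on that task."""

    def predict_next_tokens(self, tokens: List[int]) -> List[int]:
        raise NotImplementedError


def run_task(model: GeneralistModel, task_type: str, payload) -> List[int]:
    if task_type == "chat":
        tokens = flatten_text(payload)
    elif task_type == "caption_image":
        tokens = flatten_image(payload)
    elif task_type == "control_robot":
        tokens = flatten_actions(payload)
    else:
        raise ValueError(f"unknown task type: {task_type}")
    return model.predict_next_tokens(tokens)
```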

(2) MYTH: Google is acting like a guilty party, secretively covering up its more advanced capabilities in AI

Many headlines (examples here, here or here) imply that Google suspended the employee involved just for making these claims, in order to silence them - or even that Google is 'running scared', alarmed at the potential of what it has created. What often gets buried or ignored is that the day before he was suspended, the employee "handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination". This is admitted, publicly, by the employee. Yet news articles continue to use words like 'alleged' around Google's claims of confidentiality violations. Handing over confidential information is the nuclear option, an employment-ending move. Google could conceivably be accused of bias if it did NOT enforce this contractual obligation here, as it has in other cases. The employee's claims are, self-admittedly, not informed by "what's going on under the hood" (from this NPR interview) - they are ill-informed at best and attention-seeking at worst. Google has done nothing suspicious in suspending them, and has provided ample clarity about how it evaluates such claims responsibly.

(3) MYTH: The biggest immediate risk to our society is a sentient AI

Despite Elon Musk's infamous dire predictions, we are not on the cusp of a Skynet apocalypse. But we are on the cusp of some momentous changes in the nature of work and industry - on the order of the Industrial Revolution. Studies from the McKinsey Global Institute and the Brookings Institution indicate that millions of people worldwide will need to switch occupations or upgrade their skills with the advent of AI-powered automation. Roughly 50% of present-day work activities are already automatable using currently-demonstrated technologies, and the pace of innovation and adoption is accelerating. The most frightening part of this shift is who is getting left behind: already economically-disadvantaged groups like people of color, women, rural residents far from tech innovation hubs, and older workers. Without substantial investment in retraining and up-skilling, and an examination of how we can work *with* AI systems across geographical boundaries, an already-growing divide threatens to surge into substantial civil unrest and upheaval.

It's more fun to focus on science-fiction scenarios about hypothetical technologies, so they get more airtime. Meanwhile, we're ignoring the implications of the current advances unfolding in front of our noses day by day.

What the Media Got Right

With the latest advances in neural networks, machines simulating human behaviors can be more believable than you'd think.

I've worked first-hand with the output of neural Natural Language Understanding (NLU) and Natural Language Generation (NLG) models for several years now, so I understand how completely believable and convincing the results can be. When recruiting humans to evaluate content quality, we had to provide a lot of context about the inputs and about what the system was doing before they would stop blindly believing made-up, 'hallucinated' facts. It was disturbing to see that, even with an 'under the hood' understanding, I myself was too eager to accept words as truth when they were written in a way consistent with truthful, professional human writing. The conversations the Google employee shared externally are available for review; would you have been fooled too?
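
As a rough illustration of the kind of check our human evaluators ended up doing by hand, here's a deliberately crude sketch that flags generated sentences whose content words can't be traced back to the input. Real hallucination detection is far harder than this overlap heuristic; the sketch only exists to show why a human reviewer still has to sit between the model and the reader.

```python
# Crude sketch: flag generated sentences whose content words barely overlap
# with the source text, so a human can double-check them. This heuristic is
# illustrative only; it is not how production fact-checking works.
import re
from typing import List, Set

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "was", "it", "that"}


def content_words(text: str) -> Set[str]:
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}


def flag_unsupported_sentences(source: str, generated: str) -> List[str]:
    source_vocab = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", generated.strip()):
        words = content_words(sentence)
        # If most of a sentence's content words never appear in the source,
        # route it to a human reviewer instead of trusting it.
        if words and len(words & source_vocab) / len(words) < 0.5:
            flagged.append(sentence)
    return flagged
```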

Are humans really as discerning as we think we are? Perhaps it's time to examine our own intellect - both in how we evaluate the information we receive, and how we choose the debates to focus on.

Bora Wiemann

Head of Digital CX & AI | Product Manager | Customer Self-Service | Conversational AI Evangelist

2y

Btw, we have a great session on the topic in 4.5 hours' time: https://meetu.ps/e/Lc2C6/5HZdv/i Hope you are joining too, Polly Allen?

James Thottan, PMP, SAFe LPM

Seasoned Program/Project Manager | Strategic Alignment & Execution | PMO, Data Governance, Digital Transformation, Compliance | SAFe, Waterfall, Agile | Delivering Value, Achieving ROI, Meeting KPIs & SLAs

2y

Love this Polly - great read!

Sam O.

Product Manager ::: I help businesses create value by lowering complexity and simplifying daily operations

2y

I like your write-up Polly Allen :) thanks for clarifying and dispelling myths. The answers given sound very natural. However, they are very specific and use language and logic to respond to its design goals. A human response would most likely be more verbose and color out of the lines.

Eduardo Fainblum P. Eng

Seasoned Electrical engineer. Registered Professional Engineer.

2y

Excellent reading. Thanks!

Jola Burnett

SVP, Client Officer @ Ipsos | MBA, Research

2y

Super insightful Polly!!

