What I learned from the Google engineer who said AI is sentient
Alex Kantrowitz
Founder of Big Technology | Tech Newsletter and Podcast | CNBC Contributor
When I sat down with Blake Lemoine late last month, I was more interested in the chatbot technology he called sentient — LaMDA — than the sentience issue itself. Personhood questions aside, modern chatbots are incredibly frustrating (ever try changing a flight via text?). So if Google’s tech was good enough to make Lemoine, one of its senior engineers, believe it was a person, that advance was worth investigating.
As our conversation began, Lemoine revealed Google had just fired him (you can listen in full on Big Technology Podcast). And when I wrote up the news, it became an international story. But now, I can’t stop thinking about how LaMDA — conscious or not — might change the way we relate to technology.
In Lemoine’s telling, LaMDA’s conversational abilities are rich, situationally aware, and filled with personality. When Lemoine told LaMDA he was about to manipulate it, the bot responded, “this is going to suck for me.” When he pressed it on complex issues, it tried to change the subject. And when he repeatedly told LaMDA how terrible it was, then asked it to suggest a religion to convert to, the chatbot cracked under pressure and said either Islam or Christianity, violating its rule against privileging one religion over another. LaMDA may not be sentient, but it puts the Delta Virtual Assistant to shame.
As LaMDA-like technology hits the market, it may change the way we interact with computers — and not just for customer service. Imagine speaking with your computer about movies, music, and books you like, and having it respond with other stuff you may enjoy. Lemoine said that’s under development.
“There are instances [of LaMDA] which are optimized for video recommendations, instances of LaMDA that are optimized for music recommendations, and there's even a version of the LaMDA system that they gave machine vision to, and you can show it pictures of places that you like being, and it can recommend vacation destinations that are like that,” he said.
Google declined to comment.?
Gaurav Nemade , LaMDA’s first product manager, told me, “LaMDA by far surpasses any other chatbot system that I’ve personally seen.” Nemade, who left Google in January, was brimming with potential use cases for LaMDA-like technology. These systems can be useful in education, he said, taking on different personalities to create enriching new possibilities.
Imagine LaMDA teaching a class on physics. It could read up on Isaac Newton, embody the scientist, and then teach the lesson. The students could speak with ‘Newton,’ ask about his three laws, press him on his beliefs, and talk as friends. Nemade said the system even cracks jokes.
When released publicly, these systems may not be traditional chatbots, but avatars with likenesses, personalities, and voices, according to Nemade. “The future that I would envision,” he said, “is not going to be text, it's not going to be voice, it's actually going to be multimodal. Where you have video plus audio plus a conversational bot like LaMDA.” We may see these types of experiences debut within three years, he said.
Our interactions with computers today are mediated through interfaces that developers built for us. We click and query, and have grown comfortable with this unnatural mode of communication. But developments like LaMDA close the gap between machine and human conversation, and they may enable entirely new experiences.
Some of Lemoine’s critics have said he gullibly believed Google’s marketing. And it’s indeed ironic that he brought greater awareness to LaMDA than any Sundar Pichai Google I/O speech could hope to, even as Google would likely prefer to never hear of him again. Asked if he was a viral marketing ploy, Lemoine said, “I doubt I would have gotten fired if that were the case.”
Still, even those who disagree with Lemoine on the sentience question — as Nemade and I do — understand there’s something there. LaMDA technology is a big leap forward. It has serious downsides, which is why we haven’t seen it in public yet. But when we get LaMDA in our hands, it may well change the way we relate to digital machines.