AI decisions when you're Google
Credit: DALL-E

With the buzz around OpenAI followed by the Gemini fiasco, Google CEO Sundar Pichai has been under pressure to turn things around quickly - some are even calling for his resignation.

"Google has lost its way with AI" seems like a common refrain now.

This is, however, unfair. After all, Google parent Alphabet is way ahead of the competition when it comes to self-driving cars - arguably one of the most difficult nuts to crack in the AI space. And the company has been using AI in many of its services for years: spam detection in Gmail, voice recognition on Android, and many types of automatic detection on YouTube (copyright, adult content, etc.), among others.

But the thing is: AI has many, many fields of application, each of which may require significant resources to develop. No company can thus expect to successfully master them all. Worse, it's impossible to know ahead of time which fields will lead to something useful and which will be dead ends. Ten years ago, full self-driving cars were all the rage; now they seem like they're always a few years away. On the other hand, many people back then would probably have scoffed at the idea of a service turning a simple text description into a realistic AI-generated image.

So companies need to prioritize their AI strategy based on their business model - and gut feeling. If you're Google, focusing on spam detection or voice recognition makes plenty of sense. The company acquired robotics firm Boston Dynamics but, having never found a use for it, sold it a few years later. The self-driving car initiative was born more out of opportunity: it piggybacked on Google Street View's army of vehicles roaming the streets around the world to gather data. And I wouldn't be surprised if the company has been using AI to better understand what users mean when they enter text in its search engine. I don't know whether there were talks about a ChatGPT-like service inside Google before OpenAI burst onto the scene, but even if there were, it's not clear it would have been the most promising candidate given the company's business model.

Unfortunately for Google, perception matters a lot. Generative AI services such as ChatGPT or DALL-E may not be used as often as, say, Google's voice assistant (many people got burned using ChatGPT for their work), but they provide fantastic demos. And unlike the very cool things the Boston Dynamics robots can do, these are demos regular people can try themselves. By grabbing so much mindshare, OpenAI created the perception that they alone rule AI.

As for the Gemini debacle, the folks at Google forgot the number one rule of such a service: give users what they expect. This is complicated by the fact that an image generator such as DALL-E or Gemini is ruled by stereotypes. People expect a hacker to wear a hoodie. They expect a college graduate to be a young adult wearing an (American) graduation cap. They expect Vikings to wear horned helmets (even though, historically, they never did).

Which means that, no matter what you do, some results are going to offend some people. Now, it's possible Google tried to over-compensate for some really embarrassing mistakes its AI made in the past. But in trying to avoid controversy, Google chose to consciously tweak results in less-than-subtle ways. That derailed any hope of explaining controversial results away as an honest mistake.
