Will OpenAI’s ChatGPT disrupt Google search?

OpenAI’s ChatGPT is the latest and biggest news in A.I. tech right now. Rather than describing it to you myself, let’s have it describe itself.

Figure 1: What is ChatGPT, in the words of the A.I. itself. Try it out here

Overall, the system is astounding. Although ChatGPT was only just released, its underlying base model is about two years old (it is based on GPT-3); that base model is further tuned with Reinforcement Learning from Human Feedback (RLHF) using Microsoft’s Azure supercomputing infrastructure.

Figure 2: How ChatGPT is developed. Source
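
To make the RLHF idea concrete, here is a minimal toy sketch in Python. It is not OpenAI’s implementation: the “features”, the preference data, and the update rule below are made-up stand-ins, chosen only to show how pairwise human preferences can be turned into a reward signal (Bradley–Terry style) that a policy then optimizes against.

```python
# Toy RLHF sketch (illustrative only, not OpenAI's pipeline):
# 1) collect human preferences between pairs of responses,
# 2) fit a reward model to those preferences,
# 3) have the "policy" prefer responses the reward model scores highly.
import math

# Toy "features" of a response: does it cite a source, and how long is it?
def features(response):
    return [1.0 if "source:" in response else 0.0, len(response.split()) / 20.0]

# Human preference data: (preferred_response, rejected_response) pairs.
preferences = [
    ("paris is the capital of france. source: atlas", "probably paris or maybe lyon"),
    ("water boils at 100 c at sea level. source: textbook", "water boils at some temperature"),
]

# Reward model: a linear score over features, trained with a pairwise (Bradley-Terry) loss.
weights = [0.0, 0.0]

def reward(response):
    return sum(w * f for w, f in zip(weights, features(response)))

for _ in range(200):  # simple gradient ascent on log-likelihood of the preferences
    for better, worse in preferences:
        p = 1.0 / (1.0 + math.exp(-(reward(better) - reward(worse))))
        grad = [(fb - fw) * (1.0 - p) for fb, fw in zip(features(better), features(worse))]
        weights = [w + 0.1 * g for w, g in zip(weights, grad)]

# "Policy improvement": among candidate responses, pick the one the reward model prefers.
candidates = ["the answer is 42, trust me", "the answer is 42. source: adams"]
print(max(candidates, key=reward))
```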

While it is a breakthrough in text generation and knowledge synthesis, it tends to give answers that are plausible-sounding but incorrect, and it can be verbose. Still, the potential is alluring, leading people to speculate that this AI chatbot will disrupt Google’s dominance.

Figure 3: Is Google in trouble? Source

In the article above, the author compared the results of her past Google queries with those of ChatGPT and found that ChatGPT’s results were more useful in 13 out of 18 examples. She is not alone. Here’s another thread summarizing how ChatGPT is superior to Google’s search.

Is it true then that Google is in trouble? The answer might be ‘yes’ if (1) serving answers via ChatGPT is a better business than Google’s current offering, and (2) Google cannot create its own.

Serving answers via ChatGPT is really expensive & slow

The biggest problem in using a large language model such as ChatGPT to serve search queries is that it’s too slow and expensive and, therefore, not scalable (at least not yet).

The speed difference is due to the different ways they work. Google indexes the web, building a massive database of websites and their content. When you search, Google looks into its database and then provides you with what it thinks is the most relevant information. Meanwhile, when you run a query with ChatGPT, it runs probabilistic inference over a large AI model (on Nvidia GPU clusters in the cloud) and serves you its highest-likelihood response. For example, for the query “What is ChatGPT?”, Google returns results in 0.42 seconds, while ChatGPT takes almost 11 seconds to deliver its answer.
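
The sketch below illustrates the architectural contrast. It is not how either system is actually built; the documents, the query, and the per-token delay are arbitrary stand-ins. The point is that the index pays its cost once, ahead of query time, while generation pays a model-inference cost on every query.

```python
# Toy contrast: precomputed inverted index vs. per-query generation.
# All data and timings below are made up for illustration.
import time

# --- Search-engine style: build the index once, before any query arrives ---
documents = {
    "doc1": "chatgpt is a conversational ai model from openai",
    "doc2": "google indexes the web and ranks pages for search queries",
}
inverted_index = {}
for doc_id, text in documents.items():
    for word in text.split():
        inverted_index.setdefault(word, set()).add(doc_id)

def search(query):
    # Query time is just a few dictionary lookups over precomputed data.
    hits = [inverted_index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*hits) if hits else set()

# --- LLM style: every query pays for inference, token by token ---
def generate(query, answer_tokens=40, seconds_per_token=0.05):
    # Stand-in for running a forward pass of a large model for each output token.
    for _ in range(answer_tokens):
        time.sleep(seconds_per_token)
    return "a plausible-sounding generated answer"

start = time.perf_counter()
print(search("chatgpt openai"), f"index lookup: {time.perf_counter() - start:.4f}s")

start = time.perf_counter()
generate("what is chatgpt")
print(f"simulated generation: {time.perf_counter() - start:.2f}s")
```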

The cost difference is even starker. It is estimated that Google processed 3.1 trillion search queries throughout 2021. While we don’t know Google’s cost of delivering this on a per-query basis, we do know the rough cost for ChatGPT: single-digit cents per chat. Assuming $0.05 per query, the cost to serve the same search volume using a large language model such as ChatGPT is roughly $155 billion. That is more than the $149 billion Google Search generated in revenue in 2021. Keep in mind that this is only the variable cost; it does not include the capex needed to build out such infrastructure.
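
Here is the back-of-the-envelope arithmetic behind that comparison. The inputs are the estimates quoted above, not measured costs.

```python
# Back-of-the-envelope check of the figures in the paragraph above.
queries_per_year = 3.1e12     # estimated Google searches in 2021
cost_per_query = 0.05         # assumed $0.05 per ChatGPT-style query ("single-digit cents")
search_revenue_2021 = 149e9   # Google Search revenue in 2021, USD

llm_serving_cost = queries_per_year * cost_per_query
print(f"Annual LLM serving cost: ${llm_serving_cost / 1e9:.0f}B")            # ~$155B
print(f"Gap vs. 2021 Search revenue: ${(llm_serving_cost - search_revenue_2021) / 1e9:.0f}B")
```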

Therefore, using ChatGPT in its current iteration would turn one of the best and most profitable businesses in the world into a bad one (not to mention that the world does not have enough GPUs to serve that level of search volume).

Google probably has its own ChatGPT that has not been made public

Remember that ChatGPT is trained using public data available on the Web. Google has this data too. Google also has the training infrastructure and A.I. scientists to work on the problem. In fact, in April 2022, Google announced its own large language model (called PaLM) capable of language understanding and generation, reasoning, and code-related tasks (we discussed this in an earlier article).

It is very likely that Google has a similar tool that it has not made public.

As you can see, proclamations that Google’s search is being disrupted are premature. However, this does not mean that ChatGPT cannot be used for search. Because of the high variable cost of delivering each query, this type of service is probably best used for niche, high-value search, or as a co-pilot with a human in the loop to correct mistakes. This is what Microsoft is doing with its GitHub Copilot (also powered by OpenAI).

At a glance, ChatGPT is surprisingly useful, but it often gives the wrong answer (so much so that Stack Overflow has banned responses derived from ChatGPT because moderators cannot correct them all). But that’s OK. Right now, it’s still in preview mode. When we look back at 2022, it will feel as though this was the year of inflection for AI entering mainstream consciousness. While it is unclear who will emerge as the big winner, progress is moving along at a fast pace.
