Search Engines' Dystopian Race to LLMs
Simone De Palma
Technical SEO Specialist | Data Analyst Practitioner | Founder of SEO Depths
The Shady Race from Search Engines to LLMs
In response to the success of ChatGPT, OpenAI's chatbot backed by Microsoft, Google recently unveiled its own conversational AI called Bard. The move followed a "code red" inside Alphabet to regroup and devise the next business strategy to tackle the extraordinary advent of ChatGPT.
Let's quickly recap what happened in the last few hours.
The Search Paradigm
The sudden advent of large language models (LLMs) has inevitably shaken the search sphere. The search experience has arguably reached a turning point, and the old paradigm is wearing out. In other words, the coexistence of LLMs and search engines has started to pose a structural problem.
Considering that one of the major challenges with large language models is that they can't say "I don't know", the rate of inaccurate responses is bound to keep rising.
Because no single question in this world has one universal truth as its answer, search as we know it can do nothing but remain at the core of the search experience.
Let's be fair: large language models are good at generating prescriptive responses by sampling from complex probability distributions learned from vast amounts of structured and unstructured data.
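To make that concrete, here is a minimal, hypothetical sketch of the idea: the model assigns a probability to each candidate next token and then samples from that distribution, token by token. The toy vocabulary and the probabilities below are invented purely for illustration; a real LLM learns these weights from its training data.

```python
import random

# Toy next-token distribution for the prompt "Search engines are ..."
# (hypothetical numbers; a real LLM learns these from its training data)
next_token_probs = {
    "useful": 0.40,
    "dead": 0.25,
    "evolving": 0.20,
    "perfect": 0.10,
    "I don't know": 0.05,  # little probability mass: the model rarely "admits" uncertainty
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("Search engines are", sample_next_token(next_token_probs))
```

The point of the sketch is that the answer you get is whichever continuation is statistically likely, not whichever has been verified as true.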
Yeah. And where do all these data come from?
Whether it's a search engine or a language model, data always originate from the same old source: our websites.
As people flood the Web with content every single day, search results certainly remain far from perfect. The only thing we can do is refine our queries and manually assess the results to find the response that best matches our biases, beliefs and emotions.
On the other hand, a language model will cut the noise out of the search journey but will fail to deliver accurate, tailored results.
I have come to the conclusion that LLMs should not be incorporated into any search engine, as their purpose is fundamentally different.
This bizarre blend is among the reasons Alphabet lost around $100 billion in market value after announcing Bard, leaving stakeholders high and dry.
The False Move of Vanity
Microsoft's integration of ChatGPT-like technology into Bing was a first-mover operation to chip away crumbs of market share from Google.
When Alphabet issued its code red, it became prey to a confirmation bias that led the company to boast to the world something like:
"Hey there, I've always had the key to top-notch AI-generative systems. We're better than anyone else in that, now eat your BARD "
Hilariously, Alphabet got scared of the newly revealed power of a technology it actually masters. An "inferiority complex" towards the competition played out as a gritty, pre-emptive counterattack meant to exorcise the anguish of watching its monopoly crumble.
Google is playing defense on its margins by going with the skinnier, lightweight Bard, while refusing to deploy the full-size LaMDA model or the far larger and more capable PaLM model.
This is totally reasonable.
Google cannot deploy these massive models into search, as this would erode their gross margins too much.
The Innovator’s Dilemma
This is the real problem. It's not a matter of looking back in anger; rather, it's the potential readjustment of the business model that casts a shadow.
Introducing an LLM into the search results is costly and requires rethinking the current business model. To curb LLM hallucinations, Google would need to expand the pool of data used to feed the model.
The Bill for ChatGPT in Search
It has been estimated that serving up an answer to a ChatGPT query costs roughly 2 cents, about 7 times more than a Google search, due to the extra computing power required.
If ChatGPT-like LLMs were deployed into search, that would represent a direct transfer of roughly $30 billion of Google's profit into the hands of the picks-and-shovels suppliers of the computing industry.
And if such a chunk were chipped away from Google, Alphabet's operating income would be doomed to a severe contraction.
If the ChatGPT model were shoehorned into Google's existing search business, the impact would be devastating: the annual net income of the Google Services business unit would drop from $55.5 billion in 2022 to an estimated $19.5 billion.
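As a rough sanity check on the order of magnitude, here is a back-of-the-envelope sketch. The per-query costs come from the estimate above; the daily query volume and the share of queries answered conversationally are assumptions chosen purely for illustration, not figures from any official source.

```python
# Back-of-the-envelope: extra serving cost of LLM answers in search.
# Assumptions (illustrative only): ~8.5 billion searches per day,
# with an LLM answer attached to 60% of them.
searches_per_day = 8.5e9          # assumed query volume
llm_share = 0.60                  # assumed share of queries with an LLM answer

cost_llm_query = 0.02                     # ~2 cents per ChatGPT-style answer (estimate above)
cost_search_query = cost_llm_query / 7    # a classic search is ~7x cheaper

extra_cost_per_query = cost_llm_query - cost_search_query
extra_cost_per_year = searches_per_day * 365 * llm_share * extra_cost_per_query

print(f"Extra serving cost: ~${extra_cost_per_year / 1e9:.0f} billion per year")
# With these assumptions the result lands around $30 billion per year,
# i.e. the same order of magnitude as the figure quoted above.
```

Different volume or adoption assumptions move the number up or down, but not the conclusion: the extra serving bill is counted in tens of billions of dollars per year.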
In the current state of search, costly conversational queries would generate less ads revenue, unless Google abruptly switched its business model to charge advertisers more, which would translate into displaying fewer ads.
To be fair, Alphabet and its market followers could afford this.
For example, Microsoft could adapt its existing business model to conversational search by running it at a loss, pressuring Google to react to the competitive shake-up.
The question is whether this is a sustainable business shift.
To wrap up
I expected Google to use its invaluable expertise to brush off threats from the competition, instead of apparently suffering from a weird version of impostor syndrome.
I expected Google to channel the mounting frustration (if any) into new innovation by doubling down on its great work in semantic search to cement its brand positioning.
We should not forget that adopting an outside-in view in business strategy does not always pay off. This is especially true for high-reputation brands that can even afford to become verbs ("let me Google it").
I am not saying Google should completely ignore what happens in the competitive arena, but rather that it should review what Search really stands for, carefully marking out the underlying differences in purpose and scope with respect to large language models (LLMs).