How To Think About Generative AI? Part 3
Eight Long-Term Trends Amidst the Short-Term Hype Cycle — Part 3
This is part three of a four-part series, with new parts releasing every week here on LinkedIn and Twitter. Follow us to stay updated or share any feedback!
Last week, we covered how fine-tuned vertical LLMs will see massive adoption by businesses across the spectrum, and how this presents a unique opportunity for a long list of AI services startups to capitalise on. We also touched upon SaaS startups that incorporate AI into the apps they are building, and how they can create a moat in the industry. In case you missed any of this, read last week’s blog here. Now onto the next two trends!
Trend 5: Marginal Cost of Intelligence will approach Zero
Remember how, in part 1, we compared today’s AI revolution to the 18th century’s Industrial Revolution — the crux being that what the Industrial Revolution did for manufacturing, AI will do for services?
Well, to recap: for the first time ever, AI will allow services to scale disproportionately. A call centre with the right virtual assistants trained on the right data can handle 10x the calls without needing 10x the staff. A copywriting agency can write 10x the copy without scaling its staff 10x. The examples sound like a dream, but their second-order effect is even more unthinkable: the marginal cost of intelligence will rapidly approach zero.
To understand this properly, we need a deeper understanding of why the marginal cost of intelligence will approach zero, and what chain of effects it will cause.
To Know Why, Understand This
People often call LLMs a stochastic parrot. While this may be technically true, given the randomness of their outputs and the way LLMs act as per their training data, the term is mostly used in a critical context.
However, we feel the opposite: an LLM being a stochastic parrot is not a negative but rather a positive. Let’s redefine the way we look at them.
Probabilistic Mirrors, not Stochastic Parrots
Mirrors reflect back exactly what’s in front of them, right? Now imagine standing in front of The Mirror of Erised from Harry Potter — the mirror that reflects what you truly desire. AI is kind of like that — a probabilistic mirror that guesses what you want to know and throws back at you what it thinks you are looking for.
Picture this: you’re standing in front of this special mirror, and instead of showing just one reflection, it shows multiple reflections, each slightly different but related to you. That’s where the “probabilistic” part comes in. In the context of AI, it means that the model reflects back patterns based on the vast amounts of data it has been trained on, but with an element of randomness. The AI can offer multiple possible responses to the same prompt, hence mirroring a variety of outcomes.

At a philosophical level, this idea pushes us to think about the essence of AI — a creation that doesn’t have its own thoughts or feelings but rather mirrors back the linguistic world it has learned, with a dash of unpredictability. It’s a reflection of our human language and interactions, yet colored by the randomness that the vast expanse of data introduces. It’s not just mimicry — it’s a probabilistic echo of our human complexity, an intriguing dance of determinism and chance.
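To make the “probabilistic” part concrete, here is a minimal sketch of how a language model picks its next token. The logit scores below are made up for illustration and are not any real model’s API; the point is that the same prompt can yield a different “reflection” on each call, and a temperature knob controls how much randomness leaks into the mirror.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token from a toy next-token score table.

    Lower temperature sharpens the distribution (near-deterministic);
    higher temperature flattens it (more varied reflections).
    """
    rng = rng or random.Random()
    tokens = list(logits.keys())
    scaled = [logits[t] / temperature for t in tokens]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical scores for the word after "The cat sat on the ..."
logits = {"mat": 4.0, "sofa": 3.0, "roof": 2.0, "moon": 0.5}

# The same prompt can produce different completions on each call:
for _ in range(3):
    print(sample_next_token(logits, temperature=0.8))
```

At temperature near zero the mirror shows essentially one reflection every time; at higher temperatures the less likely completions start to appear — the same dial most LLM APIs expose.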
The fact that AI is a stochastic parrot, ahem, we mean a probabilistic mirror, means that it can create several strata and classes of output within seconds, with only minor discrepancies due to its randomness. Basically, anything that can be summarised as an input and output of data/information could potentially be recreated by AI — fast and cheap. This is what brings the marginal cost of intelligence closer to zero. Mind you, we don’t claim a complete zeroing of the cost of intelligence where every service is cheap. But the reduction in the cost of intelligence will definitely be visible in the years to come.
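A toy back-of-the-envelope model shows why the cost curve bends toward zero. All numbers below are hypothetical: a one-time fixed cost (training or fine-tuning) gets amortised over ever more tasks, so the average cost per task converges to the tiny per-call serving cost — which is the marginal cost.

```python
def cost_per_task(fixed_cost, inference_cost, n_tasks):
    """Average cost of one unit of 'intelligence' after amortising
    a one-time fixed cost (training/fine-tuning) over n tasks."""
    return fixed_cost / n_tasks + inference_cost

# Illustrative, hypothetical numbers only:
FIXED = 100_000.0   # one-time fine-tuning spend ($)
INFERENCE = 0.002   # per-call serving cost ($)

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} tasks -> ${cost_per_task(FIXED, INFERENCE, n):.4f} per task")
```

The cost of the next task is always just the serving cost, regardless of volume — that near-zero marginal cost is exactly what no human-delivered service can match.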
With vertical LLMs trained on specific, fine-tuned data, the quality of output will improve so much that cost reduction will be evident even in sensitive fields. Does this eliminate doctors, lawyers, teachers, accountants and consultants? No. But like Vinod Khosla’s 20% Doctor thesis, we may see a 20% Lawyer, a 20% Teacher, and so on.
In fact, between now and 2030, the falling cost of intelligence will unfold sector by sector, on the back of Layer 4 (AI apps) built using Layer 3 (Tooling/LLM Optimisation Services). Let’s understand which sectors will see the impact first.
A reduction in the marginal cost of intelligence will make knowledge services and coaching accessible to all, pushing the boundaries of what humans can achieve - forever.
Trend 6: Gold Rush is a Shovel Seller’s Market
Historically, wherever there has been a gold rush, it has made the town’s shovel sellers rich.
Here’s what we know:
Layer 3 is anything above an LLM/AI model and below a consumer/end-user AI app. Layer 3 helps companies win the AI gold rush. It’s the modern-day shovel.
Layer 3 of AI Stack is Well Positioned
Layer 3 is already seeing wins. MosaicML’s acquisition by Databricks for a whopping $1.3B is a major example. Both LangChain and Pinecone seem poised to create huge shareholder value too. Other examples include McKinsey and Accenture buying Iguazio and Flutura respectively, both with the intent to build better AI applications for consulting clients looking for AI-specific interventions.
This is part 3 of a 4-part series on long-term trends in Generative AI. Stay tuned for the next post! If you are building in this space or want to bounce around ideas, we are happy to chat.
Connect with the authors: