Crazy multi-billion dollar valuations in AI
It's time we talked about valuation, and what AI is really worth to investors.
Maybe you've seen some of the recent headlines:
I guess Sam Altman called it when he said: "In my little group chat with my tech CEO friends there's this betting pool for the first year that there is a one-person billion-dollar company."
He just didn't know that he was talking about Ilya.
Jokes aside, you may wonder what's going on.
Have investors lost their minds? How can they ever expect any kind of return at these levels of investment?
Chasing Unlimited Returns
Given the size of the valuations cited above, it's self-evident that they are based on hope, perhaps even on faith in what the future may hold. OpenAI's income statement, although impressive, cannot possibly justify the company's current valuation.
The investment hypothesis is subtle, but clear: Artificial General Intelligence (AGI) will generate unlimited returns.
In many ways, this is self-evident. Once you solve AGI, you solve everything. AGI is a spiral that can only go up, and that eats everything in its path. It will eat industries, reshape our world, and force us to change the very economic, moral and philosophical foundations of our society. The winner? Whoever gets to AGI first. The losers? Everyone else.
The investment risk, then, isn't on the side of the potential return. Although returns may not actually be unlimited in practice, for all intents and purposes they might as well be.
Is it then any wonder that the VC market is increasingly defined by deals for AI companies? According to the PitchBook-NVCA Venture Monitor, up to 41% of US VC deals in the first half of 2024 were captured by investments in AI (see figure below).
A bubble? Perhaps, but it sure is a frothy one.
The real risk in all of these investments is on the side of feasibility.
As of today, no one knows if AGI can be achieved by any of the companies cited above. Or by anyone at all, for that matter. It might not even happen during our lifetimes.
How close we can come to AGI over the next few years will depend on just how far scale (which is already showing strongly diminishing returns), smarter architectures (the Next Big Thing after Transformers), and integration with new planning approaches (reinforcement learning, etc.) will get us. And although opinions are divided, no one has the answer.
But even if we can't reach AGI, can we do 10x, 100x or perhaps even 1000x better over the next 5 years?
Can we deliver models so powerful, with a primary economic impact so significant, that returns can be paid on the massive valuations at which investors have bought into these AI companies?
Perhaps - and investors just put billions of dollars on the table saying that we will.
Food for thought
Do any of you remember that scale wasn't supposed to work?
No one took the idea that scale would lead to highly performant LLMs seriously. This isn't hyperbole: the field of AI didn't take the notion seriously - at all. In fact, it was often openly mocked.
Common sense told us that it wouldn't work.
Yet, Ilya Sutskever saw something.
Ilya just knew that scaling our architectures would yield something so unexpected and so powerful that it would change everything.
He is still following his dream today, and he's telling us: we're at the mountain, all we need to do is climb it.
The question is: what did Ilya see?
That's it for now - I'm all out of coffee! Talk soon!