Why OpenAI’s $157B valuation misreads AI’s future

I break down the logic behind the company's towering price tag and why I think the most valuable AI companies have yet to be built.

When OpenAI raised $6.6B earlier this month—the second-largest private funding round in history, topped only by its own $10B raise from Microsoft in January 2023—it wasn't just setting records. It was making an argument about where AI will create value and who stands to capture it.

Understanding this argument, both its logic and its limitations, sheds light on where I believe the most promising opportunities in AI are likely to emerge.

The bull case

OpenAI's growth has been nothing short of meteoric. Monthly revenue reached $300M in August 2024, a 1,700% increase since the start of 2023. 10M users pay $20/month for ChatGPT, and the company projects $11.6B in revenue next year.

This growth, which outpaces even the early days of Google and Facebook, underpins the bull case made by investors like Brad Gerstner. In his view, calling OpenAI a "model company" is like calling Google a "search algorithm company"—it fundamentally misunderstands the scale of the opportunity. ChatGPT isn't just another tech product; it's a fundamental advance in how humans interact with computers, "a hundred times better than the card catalog known as 10 blue links that Google built 25 years ago."

Google did indeed transcend its origins in search, but only by first mastering its core business. Today, OpenAI is attempting to be a research lab, a developer platform, an enterprise solution, and a consumer product company all at once. Its investors are betting that AI is so transformative that the usual rules of focus and specialization don't apply. The first company to achieve AGI will win everything, so the optimal strategy is to pursue every advantage simultaneously.

The bear case

Bad economics

This narrative collides with a stubborn reality: the economics of AI don't work like traditional software. OpenAI is currently valued at 13.5x forward revenue—similar to what Facebook commanded at its IPO. But while Facebook's costs decreased as it scaled, OpenAI's costs are growing in lockstep with its revenue, and sometimes faster.

In traditional software, increasing scale leads to improving economics. A typical software company might spend heavily on development upfront, but each additional user costs almost nothing to serve. Fixed costs are spread across a growing revenue base, creating the enviable margins that make today's tech giants among the most profitable businesses in history.

Generative AI plays by different rules. Each query to a model costs money in compute resources, while each new model requires massive investments in training. OpenAI expects to lose $5B this year on $3.7B in revenue. Their projected losses through 2028 amount to $44B, excluding stock compensation. Computing costs alone will reach $6B this year.

Google's 2004 IPO followed years of profitable operation, with $106M in profit on $962M in revenue. Facebook went public after achieving $1B in profit on $3.7B in revenue. Both companies demonstrated that growth improved their profit margins. OpenAI, by contrast, plans to grow revenue roughly 27-fold to $100B by 2029 while piling up progressively larger losses. This requires maintaining 93% annual growth for five years—a rate achieved by only a handful of companies in history, all of which saw their economics improve with scale.
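The arithmetic here is easy to sanity-check. A minimal sketch using the article's own figures ($3.7B revenue in 2024, a $100B target for 2029, and a $157B valuation against ~$11.6B in projected forward revenue):

```python
# Sanity-check the growth math behind OpenAI's projections.
base = 3.7e9    # 2024 revenue (article figure)
target = 100e9  # projected 2029 revenue (article figure)
years = 5

# Compound annual growth rate required to hit the target
cagr = (target / base) ** (1 / years) - 1
print(f"required growth: {cagr:.0%} per year")  # roughly 93% per year

# Forward revenue multiple implied by the $157B valuation
multiple = 157e9 / 11.6e9
print(f"forward multiple: {multiple:.1f}x")  # roughly 13.5x
```

The ~93% figure matches the article's claim, and it implies a 27x revenue multiple over five years, not 100x—which is why the restated "27-fold" framing is internally consistent.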

The infrastructure needed to sustain AI's progress is staggering. Microsoft, OpenAI's primary partner, plans to spend $80-110B on AI infrastructure next year alone. According to semiconductor analyst Dylan Patel, Microsoft is building computing clusters with 100,000 GPUs and aims to construct a single facility consuming one gigawatt of power by 2026. By 2028, its computing requirements could reach multiple gigawatts—comparable to a small country's entire electricity demand.

To put these numbers in context: At the height of the dot-com bubble, the entire internet economy generated $1.5T in revenue (adjusted to 2024 dollars). Today, generative AI companies produce less than $10B in revenue while planning infrastructure investments that could exceed $1T.

No technical moat

OpenAI's challenges extend beyond economics. Its leadership team has seen near-total turnover over the past year. Eight of its eleven co-founders have left, including chief scientist Ilya Sutskever, and CTO Mira Murati recently departed as well; both are reportedly launching competing ventures. CEO Sam Altman has responded by recruiting experienced executives, like Sarah Friar as CFO and Kevin Weil as head of product. But when nearly all the talent that built your breakthrough technology goes elsewhere, it's worth asking why.

What's more, OpenAI's massive infrastructure investments might not be building a moat—they might just be the cost of staying in the race. As board chair Bret Taylor admits, we're watching "the fastest technology commoditization cycle we've ever seen." GPT-4's pricing per token has plummeted 98% since last year's dev day. The gap between OpenAI's state-of-the-art models and open-source alternatives is narrowing with a speed that should make any investor nervous.

Distribution and openness matter

This brings me to what might be the most important dynamics in AI today: distribution and openness. In tech, the winners aren't always those with the most advanced technology—they're often those who build the most compelling ecosystems.

In a recent memo, Zuckerberg draws a parallel to the early days of high-performance computing, when major tech companies poured resources into proprietary Unix systems. At the time, few imagined that open-source alternatives could win. Yet Linux ultimately prevailed—not because it was better from the start, but because it allowed developers to modify the code freely, run it more securely and affordably, and build a broader ecosystem that enabled more capabilities than any closed system.

Meta is betting AI will follow a similar path. While Llama 2 could only match older, closed models, Llama 3 has reached parity with the frontier. Their Llama 3.1 405B model can reportedly run at roughly half the cost of GPT-4, while giving enterprises something potentially more valuable than raw performance: complete control over their data and freedom from vendor lock-in.

Meanwhile, Meta's consumer distribution is unmatched. Llama 3 currently powers AI features across Facebook, Instagram, and WhatsApp, reaching 1.1B users in 14 countries—and they're only a third of the way through their rollout. While ChatGPT has reached 23% of adults in the U.S., Meta is deploying its AI to billions of users globally. Llama 3 might not match GPT-4 in every research benchmark, but it doesn't need to. Meta has optimized for what most users want: quick, reliable responses on mobile devices, especially in emerging markets where simpler queries dominate.

This creates brutal economics for OpenAI. While OpenAI reportedly plans to raise ChatGPT's subscription price to $44/month over the next five years, Meta can give away its AI for free. As Zuckerberg notes: "Selling access to AI models isn't our business model. Openly releasing Llama doesn't undercut our revenue or ability to invest in research like it does for closed providers."

Other factors in Meta's favor: it’s unencumbered by the legacy issues slowing down competitors like Google, whose search-based advertising business faces an existential threat from generative AI. It also has a founder-CEO willing to prioritize long-term market dominance over short-term profits.

The Microsoft dilemma

OpenAI's relationship with Microsoft introduces another layer of complexity. What started as a lifeline—providing essential capital and computing resources—now risks becoming a constraint. Microsoft receives 75% of OpenAI's profits until its $13B investment is recouped, followed by 49% until its cumulative return reaches $92B.
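This tiered structure matters for how much upside later investors actually see. A simplified sketch of the reported split—note that many terms are undisclosed: the 75%/49% shares, $13B recoupment threshold, and $92B cap come from press reports, and the assumption that profits split pro rata within each tier is mine:

```python
def microsoft_take(profit, ms_cumulative=0.0):
    """Estimate Microsoft's share of a profit distribution under the reported tiers.

    profit: profit (USD) being distributed.
    ms_cumulative: what Microsoft has already received in prior distributions.
    Returns Microsoft's take from this distribution.
    """
    take = 0.0
    # Tier 1: 75% of profits until Microsoft has recouped $13B
    if ms_cumulative < 13e9:
        # profit required for Microsoft to reach the tier-1 threshold at a 75% share
        needed = (13e9 - ms_cumulative) / 0.75
        used = min(profit, needed)
        take += 0.75 * used
        profit -= used
    # Tier 2: 49% of profits until Microsoft's total reaches the $92B cap
    if profit > 0 and ms_cumulative + take < 92e9:
        needed = (92e9 - ms_cumulative - take) / 0.49
        used = min(profit, needed)
        take += 0.49 * used
    # Beyond the cap, residual profit accrues entirely to OpenAI.
    return take

# On the first $1B of profit, Microsoft would take 75%, i.e. $750M.
print(microsoft_take(1e9))
```

The point of the sketch: until Microsoft is made whole, only a quarter of every profit dollar reaches OpenAI's other shareholders—which is part of why the nonprofit-to-for-profit conversion and the renegotiation of Microsoft's stake loom so large.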

Their partnership shows growing signs of strain. Microsoft's $650M acquisition of Inflection AI's team looks less like opportunistic talent acquisition and more like a hedge against overreliance on OpenAI. Meanwhile, OpenAI's $10B computing contract with Oracle (while structured through Microsoft to maintain exclusivity agreements) suggests a push for independence from Microsoft's infrastructure.

Adding to the tensions is OpenAI's pending change from a nonprofit to a for-profit entity—one of the most controversial corporate restructurings in recent history. Both companies have engaged investment banks to negotiate Microsoft's future equity position. OpenAI needs to complete this conversion within two years or risk investors from the latest round demanding their money back. Any major changes will also need approval from the FTC, which has been less than friendly to big tech of late.

Where value in AI will accrue

So where will the most promising opportunities in AI for investors and startups lie? Analyzing the AI ecosystem layer by layer reveals where the most durable value is likely to emerge.

Physical and cloud infrastructure

At the bottom of the AI stack is hardware: vast arrays of GPUs, specialized processors, networking equipment, and massive data centers. Hyperscalers are investing billions here—not for immediate returns, but for strategic control. Each is pursuing vertical integration, developing proprietary silicon to reduce reliance on suppliers like NVIDIA, which is also moving up the stack to compete in models. Several AI chip startups are also competing at this layer.

Foundation models

This layer seems exciting until you look at the economics. Building and improving foundation models requires massive ongoing investment, yet their value erodes faster than perhaps any other technology to date. Hyperscalers can absorb these costs by using models to drive demand for their cloud services, but independent startups face a steeper climb. The future of general-purpose models increasingly resembles a race to the bottom.

Still, there may be a handful of opportunities for model startups to carve out niches, whether by leveraging proprietary data or developing new architectures like state-space models. We have a few stealth investments in this category.

Software infrastructure and developer tools

The next layer up includes companies that help developers build AI-powered software. Startups like Anyscale, Arize, Skyflow, and Turing are players here.

The cloud era saw about 15 companies scale to $1B+ revenue at this layer. I expect a similar pattern to play out in AI, with a few dozen startups building valuable franchises to serve AI developers. Our current investments align with this view, and we're particularly excited about tooling to develop agentic systems.

At the same time, this category is challenging due to the rapid pace of innovation, competition from open source, and bundling by both hyperscalers and proprietary model providers. As one example, vector databases like Pinecone were highly sought after just a year ago, but their appeal has since waned.

AI applications

The top of the stack is where I see the most promise. AI is not just adding features to existing software; it's transforming entire service industries into software products. This shift expands the addressable market from the $350B software sector to the multi-trillion-dollar services industry.

While OpenAI has focused on building general-purpose models, a new wave of specialized startups is addressing specific industry needs with precision. The early dismissal of these companies as "GPT wrappers"—basic interfaces layered over foundation models—now feels outdated. We're seeing the rise of compound AI systems that combine multiple AI models with retrieval mechanisms, external tools, and diverse data sources, orchestrated by advanced control logic.

History suggests this layer will produce the most winners. The cloud era created over 20 application companies with $1B+ revenue. In AI, we believe this number could exceed 100. These new companies will redefine how industries operate and potentially replace legacy systems of record like Salesforce and SAP.

Where I land

New technologies, no matter how revolutionary, don't automatically translate into sustainable businesses. OpenAI's $157B valuation suggests we might be forgetting this lesson.

This isn't to diminish what OpenAI has achieved. They've shown us that AI can do things many thought impossible just a few years ago. They've forced enterprises to rethink how they operate and changed how humans interact with computers. But starting a revolution isn't the same as profiting from it. Today's headline-grabbing AI companies are creating tremendous value, but that doesn't guarantee they'll be the ones to capture it in the long run.

I'd argue that the most valuable companies of the AI era don't exist yet. They'll be the startups that harness AI's potential to solve specific, costly problems across our economy—from engineering and finance to healthcare, logistics, legal, marketing, sales, and more.

Amit Rele

Digital Transformation Executive @ Infineon Technologies | Electrical Engineering & Product Management

2w

The Gen AI space is broken into 3 sectors: consumer, embedded into existing software products and enterprise deployments. My 2c: the enterprise market is the most ripe for investment because they can offset the cost of investment with efficiency improvements. My guess is that most large enterprises are swimming in confidential/proprietary data and if there are service companies out there that can figure out how to help enterprises train off it and make it useful, it’d be a goldmine!

Pradeep Sanyal

AI & Data Leader | Experienced CIO & CTO | AI Transformation | AI CoE | IT, Cloud, Data and AI strategy | LLM & Generative AI

1mo

Brilliant analysis, Ashu. Your comparison of AI economics to traditional software scaling is particularly insightful. One additional perspective to consider: the potential emergence of "AI optimization markets" - similar to how high-frequency trading evolved in finance. As compute costs remain a critical factor, we might see specialized players emerge who focus solely on optimizing model deployment and usage patterns across different providers, essentially arbitraging the efficiency gaps in the AI infrastructure market. Your Linux/Meta parallel is compelling, but I wonder if there's a hybrid future where proprietary and open models coexist in different niches, similar to how Oracle maintained enterprise dominance despite open-source alternatives. The key differentiator might not be the models themselves, but rather the optimization layer between models and applications.
