Value accrual vs. value capture: lessons from Web2
Rahul Mudgal
Growth Leader | Ex-Ripple, Superscrypt, NTT, Mercer | LinkedIn Top Voice | GTM Expert | Advisory Board Member | Transdisciplinarian | #web3, #Fintech, #SaaS
Remember this? My prized possession from 2005 is a Motorola flip phone that predates the iPhone by two years. It now serves as my five-year-old's pretend phone. Why do I bring this up?
I watched him have an animated pretend conversation on his pretend phone as I was scrolling through my social feeds, where everyone seemed to be dissecting the implications of DeepSeek's new R1 large language model for the entire artificial intelligence ecosystem. I saw conclusions such as "foundational LLMs are not moats" and "foundational models will become commoditized," along with claims that the "inference-based wrapper services" being built by dozens of start-ups will create the most value and impact, while the LLMs themselves become, in effect, public goods. A bit early to call the winners, I thought!
I have always been an avid believer in abstracting away complexity, and I love to simplify things by connecting the dots. That is why the Motorola phone in my child's hands, from a time before I got my BlackBerry for work or my first iPhone, made me think: we have been here before.
As the pipes, or the infra layer, mobile carriers and internet service providers invested billions of dollars in capex as the world transitioned from 2G to 3G, on to 4G, and more recently to 5G mobile telephony. Each new generation of slick yet relatively affordable smartphones brought internet access to millions more people. Back in the 2000s, we were not sure which of these layers, i.e., the infra, the hardware, the interface, or the app layer, would capture the most value. Mobile carriers, as we now realize, were slow to counter the explosion of the over-the-top (OTT) ecosystem, which built use cases and engagement without investing in the underlying infrastructure. The application layer, the interface through which we experience video, audio, and ridesharing via OTT services such as Netflix, Spotify, and Uber, is where most of the value ended up. However, it would be a sweeping generalization to infer that user monetization can only ever happen at the interface or application layer. Infrastructure providers now realize that partnering with OTT players via subscription bundling can help offset some of that capex. And beyond category leaders like Netflix, we have also seen OTT services, especially those from linear TV providers such as Paramount, struggle to gain traction.
Only recently did Netflix settle a long-drawn-out lawsuit with a South Korean telecom incumbent, which had alleged that the bandwidth spike driven by the popularity of Squid Game meant Netflix owed an additional fee. This conundrum plays out differently across markets. On the one hand, mobile carriers try to differentiate themselves by the number of OTT services they bundle into their subscriptions; on the other, they want to extract more rent from those same OTT services for using their infrastructure. To this day, mobile carriers in India and some other markets argue that OTT services should pay them an additional network fee, even as they double down on building their own native digital services ecosystems.
Back to the world of AI, then. Will foundational models be the infra layer or the app layer? It is premature to draw any conclusions. Today's inference-based wrappers could become mere features of the foundational models currently in training, and we will see plenty of innovation in this space. DeepSeek's R1 may well have been a catalyst that opened up a new design space for training LLMs. Meanwhile, drawing lessons from recent history and keeping different frames of reference helps one separate the signal from the noise.
Next, I will apply the same lens to examine the world of blockchain and Web3 in terms of protocols and dApps.
What do you think?