Copilot is an Incumbent Business Model

The Copilot business model has been the prevailing enterprise strategy for AI: an assistant that helps you write the same code faster in your IDE, grammar and style assistants that help you write the same documents faster in your word processor, an e-commerce assistant that helps you set up your store or analytics on Shopify.

The “same-but-faster” Copilot model is an incumbent business model: it evolves existing tools by making them faster. That’s not a bad thing, but it’s not disruptive innovation.

Disruptive innovation comes in two flavors: (1) New-market disruption, where the company creates and claims a new segment in an existing market by catering to an underserved customer base, or (2) Low-end disruption, in which a company uses a low-cost business model to enter at the bottom of an existing market and claim a segment.

Copilots don’t create new markets; they make existing workflows more efficient. Companies will make a lot of money extracting efficiency gains from customers who are willing to pay more to do the same work faster (which is just about everyone).

Copilots raise the cost of software. They add an extra $10 or $100 per seat for “AI features”. That will be worth it to many customers (those who want to write emails faster, write code faster, and analyze spreadsheets faster). But that’s not low-end disruption. In fact, raising prices by adding AI features might create a vacuum for a new product to come in and disrupt the low end.

Copilot as an incumbent business model will be successful. You can always trade time for money. Disruptive innovation, however, means radically rethinking the workflows that no longer make sense with AI. Instead of writing code faster, what if we had to write (and, more importantly, maintain) less code? Instead of saving hours writing Excel formulas, what if we didn’t have to write them at all?

It’s much harder to see what the disruptive new markets will be for generative AI. But those markets might be orders of magnitude larger than the ones we have today.

Angus Mitchell

Engineer at Flank

1 year ago

Benn Stancil has an interesting essay on this... He's looking at the data analytics space, and he poses the question -- What if LLMs work much better with one huge table of data than with a normalized SQL database of hundreds to thousands of tables? How many assumptions, how many frameworks, how many products are built on top of the relational model? And I don't think he's arguing that the relational model will be the entrypoint for LLM disruption. I think he's making the same point you are: it's really hard to predict where that entrypoint will be, and how many downstream assumptions will suddenly be violated. https://benn.substack.com/p/the-rapture-and-the-reckoning
