The New Business of AI
Geoffrey Moore
Author, speaker, advisor, best known for Crossing the Chasm, Zone to Win and The Infinite Staircase. Board Member of nLight, WorkFusion, and Phaidra. Chairman Emeritus Chasm Group & Chasm Institute.
Several years ago, just before the pandemic hit, the folks at Andreessen Horowitz wrote a very thoughtful piece about the new business of AI and the unique challenges it faces. As a board member of two AI software companies—WorkFusion, which focuses on intelligent robotic process automation, specifically in relation to regulatory compliance in financial services, and Phaidra, which focuses on industrial applications, specifically in relation to resource efficiency in industrial operations—I have a front-row seat as two excellent management teams take on these challenges. Based on my experience, here is how I would update a16z's observations.
The article begins by calling out three core issues AI faces that matter intensely to venture investors: lower gross margins, challenges in scaling, and weaker defensive moats.
All three are as relevant today as they were then. In the paragraphs that follow, I will use indented italics to quote from the article, then add my commentary.
Gross Margins, Part 1: Cloud infrastructure is a substantial – and sometimes hidden – cost for AI companies
In the old days of on-premise software, delivering a product meant stamping out and shipping physical media – the cost of running the software, whether on servers or desktops, was borne by the buyer. Today, with the dominance of SaaS, that cost has been pushed back to the vendor. Most software companies pay big AWS or Azure bills every month – the more demanding the software, the higher the bill. AI, it turns out, is pretty demanding.
This is the price of poker. Over time, no doubt, there will be cloud computing optimized for AI applications, but it is not there today, and the compute is indispensable. Furthermore, longer term, you should not be surprised to learn your cloud computing vendor is becoming your direct competitor. AI- and ML-enabled applications will be ubiquitous. That all said, there is a silver lining here for some. Many large enterprises have annual commitments to spend a given amount with their primary cloud vendors. In such cases, customers can absorb all or part of the compute cost, and even the purchase cost, of their AI/ML-enabled SaaS applications as part of burning down their contract commitment.
Gross Margins, Part 2: Many AI applications rely on “humans in the loop” to function at a high level of accuracy
Human-in-the-loop systems take two forms, both of which contribute to lower gross margins for many AI startups. First: training most of today’s state-of-the-art AI models involves the manual cleaning and labeling of large datasets. Second: for many tasks, especially those requiring greater cognitive reasoning, humans are often plugged into AI systems in real-time.
Here’s what Peter Cousins, Chief Technology Officer at WorkFusion, has to add about human-in-the-loop:
Human in the loop (HITL) is often seen as a labeling/up-front problem, or as a permanent second-level approval process requirement. The best use is somewhere in between: ongoing training from HITL intervention to improve the model, and smarter HITL invocation using either a risk lens or outlier analysis. This makes it a lighter burden, allows for continuous improvement, and is an important confidence-building/eternal-vigilance measure. A related issue is that the system must be able to explain its processing, or customers will never be comfortable letting it run unattended; part of the HITL review is to understand whether the reasoning is sound and can be trusted.
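The smarter invocation Peter describes—routing a case to a human reviewer only when the model is unsure or the stakes are high—can be sketched in a few lines. This is a minimal illustration with hypothetical field names and thresholds, not WorkFusion's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str          # the model's proposed decision
    confidence: float   # model confidence in the label, 0..1
    risk_score: float   # business risk of acting on this case, 0..1

def needs_human_review(p: Prediction,
                       confidence_floor: float = 0.90,
                       risk_ceiling: float = 0.70) -> bool:
    """Invoke HITL only when the model is unsure (outlier lens)
    or the downside of a wrong decision is high (risk lens)."""
    return p.confidence < confidence_floor or p.risk_score > risk_ceiling

# Routine, low-risk case: processed straight through, no human needed.
print(needs_human_review(Prediction("approve", 0.97, 0.10)))  # False
# High-risk case: escalated even though the model is confident.
print(needs_human_review(Prediction("approve", 0.96, 0.85)))  # True
```

The reviewer's corrections on escalated cases then flow back as labeled training data, which is the "ongoing training from HITL intervention" that steadily shrinks the escalation rate.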
That all said, wherever possible, human-in-the-loop needs to be systematically driven out. Early on this can be treated as a crossing-the-chasm problem to be solved by focusing on vertical market use cases, as both WorkFusion and Phaidra are doing, where training the models can be amortized across many implementations. Because of the commonality of use cases, self-training emerges as an artifact of ongoing operations, and cognitive reasoning becomes embedded into the AI algorithms organically. That’s the whole point of the technology.
Human-in-the-loop is a natural precursor to this state, and with every expansion into new use cases, it will come to the fore. Indeed, we can anticipate an ecosystem of human-in-the-loop service providers emerging to help accelerate the adoption of new use cases, self-organizing around AI/ML companies that are either vertical or horizontal market-makers. Being one of those market-makers is a future all venture-backed enterprises should aspire to. For now, however, as the a16z article warns, the key is to avoid getting stuck in the chasm, chained to a bespoke human-in-the-loop model that simply cannot scale.
Scaling AI systems can be rockier than expected because AI lives in the long tail
For AI companies, knowing when you’ve found product-market fit is just a little bit harder than with traditional software. It’s deceptively easy to think you’ve gotten there – especially after closing 5-10 great customers – only to see the backlog for your ML team start to balloon and customer deployment schedules start to stretch out ominously, drawing resources away from new sales. The culprit, in many situations, is edge cases.
This is not an AI problem. This paragraph describes a company that has had success in the Early Market with a project playbook but is failing to transition to a crossing-the-chasm solution playbook. Edge cases are indeed the challenge, but they are best managed by excluding them via constraining the industries and use cases served. In parallel, the executive team must reorient the whole company to a more operational focus, paying less attention to additional customer asks and more to tooling, automation, and processes for the core use case. In short, it is too soon to seek scaled product-market fit—that is for Tornado markets. Instead, the focus should be on whole-product-use-case fit for high-value, high-urgency use cases in market segments that are, to quote the chasm-crossing formula, “big enough to matter, small enough to lead, and a good fit with our crown jewels.”
Great software companies are built around strong defensive moats.
Some of the best moats are strong forces like network effects, high switching costs, and economies of scale. All of these factors are possible for AI companies, too. The foundation for defensibility is usually formed, though – especially in the enterprise – by a technically superior product. Being the first to implement a complex piece of software can yield major brand advantages and periods of near-exclusivity. In the AI world, technical differentiation is harder to achieve. This does not necessarily mean AI products are less defensible than their pure software counterparts. But the moats for AI companies appear to be shallower than many expected. AI may largely be a pass-through, from a defensibility standpoint, to the underlying product and data.
If we are just talking about AI as an enabling technology, I think this is true, although over time a gorilla vendor is likely to emerge, after which ecosystem power will provide a long-lasting moat. In the meantime, however, AI’s value will be realized at the application layer. Thus, while AI algorithms per se may provide shallow moats, segment-specific use cases that exploit the data-network effects of an expanding customer base catalyzing an ever-improving set of algorithms will be highly defensible, with both high switching costs and economies of scale.
This point is further illustrated by what Peter had to say about defensive moats, very much in the context of WorkFusion’s deep dives into regulatory applications in financial services:
Defensive moats that have increased potential in AI are mostly around the data and the community of customers. Partially this is data used to train models, but it can also include a myriad of data-centric ways to improve outcomes: novel assemblies of data, reference data that is community-curated, sharing of data that completes a picture (e.g., money laundering is only apparent when looking across multiple banks), or sharing of decisions on particular cases to help expedite the same decision at another customer.
Finally, we can conclude by updating the list of best practices the team at a16z proposed at the close of their post:
1. Eliminate model complexity as much as possible.
Yes, and do so by constraining the problem domain as much as you can, confining yourself to one or more core use cases.
2. Choose problem domains carefully – and often narrowly – to reduce data complexity.
Yes, this is the fundamental strategy embedded in the crossing-the-chasm playbook.
3. Plan for high variable costs.
High variable costs are endemic to the fuzzy front end of any new market development. In the Early Market, you take them head-on, using the project model to do “whatever it takes” to get to the desired outcomes. That said, we know the project model does not scale. The whole point of crossing the chasm is to drive down high variable costs through repetition and reuse, leading eventually to automation.
4. Embrace services.
For the Early Market and Bowling Alley phases of the Technology Adoption Life Cycle, vendors of disruptive innovation must always incorporate services-led offers because there is no ecosystem support for their emerging applications. Once market adoption reaches the Tornado phase, third parties will self-organize to provide these services—but not until then. When it comes to pricing these services-led offers, vendors should not yield to the temptation to prop up their product prices by giving away the services. Instead, do the reverse. Allocate the revenue to services first, thereby incenting the customer to curtail their consumption and incenting the product team to reclaim those services dollars by designing out the need for them.
5. Plan for change in the tech stack.
This is easier said than done, as anyone who has coped with platform transitions will testify. Sooner or later you are going to incur serious tech debt, and working your way out of it will consume more and more of your op-ex in the years to come. It is part and parcel of the fourth and final stage of technology adoption, what we call the Main Street market. The major compensating factor is that now you have an installed base, something your more nimble next-gen competitor does not. Should you find such a competitor on your doorstep, your first job is to run a “neutralization” playbook to protect your base, modernizing your operating model by getting to “good enough” fast enough. This buys time to do the longer-haul work of getting onto the new stack and lets your customer base stick with you through the journey.
6. Build defensibility the old-fashioned way.
Can’t say this better than the a16z folks:
The opportunity to build sticky products and enduring businesses on top of initial, unique product capabilities is evergreen.
Indeed, it is.
That’s what I think. What do you think?