How New AI Functionality is Getting Priced: Q&A Follow Up
On Thursday, May 23rd, Mark Stiving, Ph.D. and Steven Forth led a webinar “How New AI Functionality is Getting Priced.” The webinar attracted a lot of interest, with about 500 people across four continents registering.
How are you using AI? Do you also mean an AI-based feature or product that uses other ML architectures besides transformer-based models?
The focus of our research has been on generative AI applications that use Large Language Models (LLMs) built on transformers. This includes related approaches such as Generative Adversarial Networks (GANs), Retrieval Augmented Generation (RAG), and other emerging architectures.
That said, there are many other important ways to use deep learning to enhance value, and not all AI is or will be built using deep learning. One important example is Google DeepMind’s AlphaFold for predicting protein structures. The underlying approach can be applied to many other domains where combinatorial explosion is combined with constraints.
The most impactful solutions may well be those that combine more than one approach to AI. An early example is how Wolfram and generative AIs can be connected.
Do you think "democratized" systems should be free or low-cost?
This is a very big question, one of the key questions facing all of us over the next five years. It is perhaps too big to be answered here. Personally, I (Steven) believe that having full access to generative AIs will be as necessary as access to electricity, and that learning how to work with them is as important as learning how to read. The best open source LLMs should perform as well as or better than any of the closed models, and governments may need to subsidize access.
One of the challenges with AI is that so much can be done at such a low price relative to the cost of inputs. Value-based pricing could reasonably put AI products in line with value over human costs. However, as AI becomes ubiquitous, do you expect pricing to drop closer to the cost of inputs?
I don’t think this is a full representation of the costs of developing and operating generative AI based applications.
Conventional economic theory holds that prices in markets with perfect competition will fall until they are close to unit costs. Will B2B SaaS markets move closer to perfect competition (symmetric information, perfect substitutes)?
Will AI bring us closer to symmetric information and perfect substitutes? I am skeptical.
The other part of this question is what will happen to the cost of generative AI applications and wages if (when) human and AI performance converges. Given that current tax regimes favour AI there will need to be big changes to taxation if there is to be a level playing field.
Will pricing for enhanced AI capabilities eventually be pressured downward with general market deployment (bundled with the product and incorporated into product pricing tiers, rather than priced separately)?
I think what will happen is a bit different. Successful SaaS companies will be forced to re-platform so that AI is the platform upon which other applications are built. I think this is importantly different from AI functionality becoming part of existing functionality (Mark likely disagrees with me on this though, we will dig into this in one of the Impact Pricing Podcasts).
Pricing power will come from being able to offer differentiated value. That will not change.
Will generative AI make it more or less difficult to create differentiated value?
That is the big question. The safe assumption is that it will still be possible to create differentiated value, so do everything in your power to create that differentiated functionality, and then find the pricing model that allows you to capture your fair share of that value. (Karen Chiang and Rashaqa Rahman will be talking about this in their June 5 PeakSpan Master Class, Maximizing Value: The Art of Measurement.)
How can you verify that a synthetic user can provide a valid input?
This question came up in the context of Synthetic Users and their claims about how well synthetic users can stand in for research with real users.
For any of these claims to be true the synthetic users need to provide responses that give insight into actual users (at least until synthetic users become a class of users and buyers).
The only way to test this is with double-blind studies. Several companies are carrying out such studies (including Ibbaka) but the results are not yet public. I hope to see an independent body take on such work and share the results. We will let readers of this blog know what Ibbaka finds out.
Maturing to performance based pricing has been challenging for certain organizations; in terms of pricing revolution, how is AI expediting or fueling the ability to get to performance based pricing?
I think there are three barriers to the adoption of outcome-based pricing: attribution, predictability, and accountability.
To what extent will AI help address these barriers?
Causal models will be needed to manage attribution. This is already well advanced in healthcare and HEOR (Health Economics and Outcomes Research). Doing this at scale for B2B SaaS will require new toolsets. One that I am tracking is PyWhy, a family of open-source Python libraries for causal machine learning.
Predictability is being solved with improved predictive models. The additional data being made available by AI will improve prediction. I think this will be solved in the next few years.
Accountability will be more difficult to manage. It is going to require trust and shared commitments enforced through smart contracts. Smart contracts are likely to be a key enabling technology.
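The three barriers above can be made concrete with a toy calculation. Everything here is a hypothetical illustration, not something from the webinar: a causal model supplies an attribution share, the contract defines the vendor's share of attributable value, and a cap keeps the fee predictable.

```python
# Hypothetical sketch of an outcome-based pricing calculation.
# All function names and figures are illustrative assumptions.

def outcome_based_fee(measured_outcome: float,
                      attribution_share: float,
                      vendor_share: float,
                      fee_cap: float) -> float:
    """Fee = outcome attributable to the vendor's solution,
    times the agreed vendor share, capped for predictability."""
    attributable_value = measured_outcome * attribution_share
    return min(attributable_value * vendor_share, fee_cap)

# A customer realizes $500,000 in measured gains; a causal model
# attributes 40% of that to the solution; the contract gives the
# vendor 20% of attributable value, capped at $50,000 per period.
fee = outcome_based_fee(500_000, 0.40, 0.20, 50_000)
print(fee)  # 40000.0
```

The cap is one way to reconcile outcome-based pricing with the predictability barrier: the customer's downside is bounded even when the measured outcome swings.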
Gartner is saying that accounting reporting is becoming continuous. Do you think AI will upgrade the connection between pricing and reporting, to drive development of nimble real-time pricing adjustments?
Yes. I think that is going to be essential and will be one of the biggest changes to enterprise pricing. If data is available in real time, configuration becomes real time, and systems have more and more complex interactions, then pricing will need to be real time as well.
This does not mean dynamic pricing, at least not as it is currently imagined. We need a new model for this, something that I think of as generative pricing. This graphic from the webinar is meant to signal that.
A SaaS solution where implementation project is needed and is charged to the client separately (which is currently based on Time and Material model). Customer is expecting reduction in T&M with AI usage. How can this be overcome to have minimal effect on such project revenue?
The time needed for configuration, and the cost of configuration, is about to contract, in some cases from months to minutes (this is what Totogi is claiming and why we included it in the webinar).
Given this, a time-and-materials approach to configurations will NOT work going forward. If your revenue model depends on this, get ready to change.
A better question is “What is the value of a configuration?” and “How do we create additional value through our configuration services?” This goes beyond configuration to strategy, adoption and customer success.
Understand the value of configuration and adoption services and then price to share that value.
Steven, bias is important in pricing, and people have biases about AI. How do you think about overcoming this potential double bias for an AI-enhanced feature added to a product?
We have seen this in some work we did last year on a pilot system for price changes. The system used an LLM (Mistral) and a RAG architecture to generate price change recommendations. Some people accepted these recommendations because they were generated by an AI; others resisted them because they were made by an AI.
There are a few things that we need to do.
I’m not understanding real-time, configuration-driven pricing. Will you please elaborate or give an example?
Generative AI is being used to power real-time configuration of complex business applications; Totogi was given above as an example. Real-time configuration implies real-time pricing. If your system can be stood up and configured in minutes, buyers will not want to wait hours for a pricing proposal.
As we figure out pricing and take costs into consideration, if development and support costs come down while spending on AI goes up, will there be a balancing out?
This is an open question at present and will no doubt differ by SaaS vertical and solution. We will explore this more in the AI Monetization in 2025 Research Report that will be published in January 2025. You can get the 2024 report here.
I do not think development costs will actually go down, rather we will be able to do more and be expected to do more, for the same development budget. It is possible that we will also see a move to continuous development where R&D becomes a variable operating cost.
I do expect compute costs to stay relatively high. Price per token (how most LLM companies price access to their models) may go down, but the number of tokens used in inputs and outputs is likely to increase quickly.
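A quick arithmetic sketch of why falling per-token rates need not mean falling bills. The rates and token counts below are invented for illustration, not any vendor's actual prices.

```python
# Illustrative per-token cost arithmetic; all rates are assumptions.

def request_cost(input_tokens: int, output_tokens: int,
                 in_rate_per_m: float, out_rate_per_m: float) -> float:
    """Cost of one LLM call, with rates quoted per million tokens."""
    return (input_tokens / 1_000_000) * in_rate_per_m \
         + (output_tokens / 1_000_000) * out_rate_per_m

# Even if per-token rates halve, doubling token volume (longer
# prompts, retrieved context, longer outputs) cancels the saving.
before = request_cost(2_000, 500, 10.0, 30.0)    # $0.035 per call
after  = request_cost(4_000, 1_000, 5.0, 15.0)   # still $0.035 per call
print(before, after)
```

In this toy case the rate cut and the volume growth exactly offset, which is the dynamic described above: price per token down, tokens per request up.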
Support costs seem to be an open question. Most routine support will be provided by AI. Does that mean support costs will go down or will customers simply up their expectations? I suspect that both will happen and this will become a way to segment customers and offers.
Isn’t the predicted increase in computing costs for vendors similar to what happened with hosting costs for SaaS?
No, I think this is quite different. Hosting costs per unit of usage fell fairly steadily, while, as noted above, the number of tokens consumed is likely to grow faster than the price per token falls, keeping compute costs high.
Do you think revenue recognition patterns might change as a result? Wall Street likes recurring, predictable revenue, and if AI moves away from this there will be an impact. So no matter what we think, will there be a bias toward predictable value delivery, even if AI suggests an outcome that might be less predictable?
Revenue recognition is an accounting issue and is well defined, so I don’t see a change here.
The real question is valuation metrics and benchmarks. For example, there are suggestions that for the rule of 40, investors are weighing profitability over growth. I was recently told by Geoff Hansen at Garibaldi Capital that the profit margin is weighted at 1 1/3 and growth at 2/3.
If standard operating margins go down because of the cost of computing and the cost of accessing the large language models (most solutions will be built by enhancing a third-party model, open source or not) this will lead to a recalibration of valuations and new benchmarks. This has not happened yet, but it will, and will take several years to work out.
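As a sketch, here is the classic Rule of 40 next to the weighted variant quoted above (profit margin weighted at 1 1/3, i.e. 4/3, and growth at 2/3). The company figures are made up for illustration.

```python
# Classic Rule of 40 vs. the weighted variant mentioned above.
# Weights (4/3 on margin, 2/3 on growth) are as quoted; companies
# and their figures are hypothetical.

def rule_of_40(growth_pct: float, margin_pct: float) -> float:
    return growth_pct + margin_pct

def weighted_rule_of_40(growth_pct: float, margin_pct: float) -> float:
    return growth_pct * (2 / 3) + margin_pct * (4 / 3)

# Two companies that both score exactly 40 on the classic rule:
high_growth = (35, 5)    # 35% growth, 5% operating margin
high_margin = (5, 35)    # 5% growth, 35% operating margin

print(weighted_rule_of_40(*high_growth))  # ~30: penalized
print(weighted_rule_of_40(*high_margin))  # ~50: rewarded
```

Under the weighting, two companies that tie on the classic rule diverge sharply, which is the point of the quoted shift: profitability is being rewarded over growth.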
Both Mark and Steven, considering the potential long-term impacts of generative AI on company valuations, how should B2B SaaS companies prepare for changes in investor expectations and market dynamics? What metrics should they prioritize to ensure sustained growth and attractiveness to investors in an AI-driven market?
This reads like a follow-up to the above question.
As Mark has said, the fundamentals have not changed, although the relative importance of each metric is changing (see note above on Rule of 40).
The most important metric is value to customer (V2C). Karen Chiang and Rashaqa Rahman are giving a webinar on June 5.
The other metrics remain important, notably growth and profitability as captured in the Rule of 40.
What are the best metrics for delivering a GPT-like vertical solution?
Value metrics. These will depend on the specific vertical and the solution. Each solution will have its own metrics that represent its differentiation, as well as some standard metrics common to the vertical. CRM applications, for example, will include metrics on number of contacts, pipeline velocity, conversion rates, and so on.
In other words, the solution needs a value model. A properly trained and configured generative AI can do a lot of the heavy lifting here.
Other than that, the metrics mentioned above are relevant for any B2B SaaS solution, based on generative AI or not.
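As a minimal sketch of what such a value model might look like, here is a toy value-to-customer (V2C) calculation for a hypothetical CRM add-on. Every driver, quantity, and rate below is an assumption for illustration.

```python
# Toy value model for a hypothetical CRM add-on.
# Drivers, quantities, and unit values are all invented.

value_drivers = {
    # driver: (units per year, value per unit in $)
    "extra conversions from pipeline velocity": (120, 900.0),
    "hours of manual data entry avoided":       (2_000, 45.0),
    "churned accounts prevented":               (8, 6_000.0),
}

def value_to_customer(drivers: dict) -> float:
    """Estimated annual value to customer (V2C)."""
    return sum(units * unit_value for units, unit_value in drivers.values())

v2c = value_to_customer(value_drivers)
price = 0.15 * v2c   # price to capture, say, 15% of modeled value
print(round(v2c), round(price))  # 246000 36900
```

The value metrics named in the answer (contacts, pipeline velocity, conversion rates) would feed the quantities in such a model, and the pricing metric is then chosen to track the largest drivers.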
Gartner is saying that accounting reporting is becoming continuous. Do you think AI will upgrade the connection between pricing and reporting, to drive the development of nimble real-time pricing adjustments?
Yes. And I think this will be necessary as we move to real-time and dynamic configuration. Given the complexity of the task, this means that pricing will depend on an AI as well.
This approach to pricing will be very different from the current dynamic pricing systems used for revenue and yield management. They will be built off value models and integrate any type of data.
As we figure out pricing and take costs into consideration, if development and support costs come down while spending on AI goes up, will there be a balancing out?
This is a critical question. I think it will differ from vertical to vertical and solution to solution. Over time we will learn to be more parsimonious in our prompts and outputs, and computing will become more cost efficient, bringing down compute costs. But competition, and the need for more and more sophisticated solutions that integrate more and more data of different types, will push those costs back up.
On support costs, basic support costs are likely to go way down, but more sophisticated forms of support that help customers achieve strategic objectives will become more common. So I am not sure that support/success costs will go down, though the skills of the people providing the support will change.
Overall, it is going to be interesting to see how this plays out over the next decade. There will be a lot of work for strategic pricing experts!
#ai #machinelearning #generativeai #llms #transformers #gans #rags #deeplearning #alphafold #googledeepmind #combinatorialexplosion #aiapproaches #wolframai #democratizedai #opensourcellms #generativemodels #airesearch #aivalue #valuebasedpricing #redqueengame #aiapplications #computecosts #economictheory #symmetricinformation #perfectcompetition #aipricing #replatforming #differentiatedvalue #syntheticusers #userresearch