How would you price generic AI services?

I recently developed some of these ideas in a post on AI content generation.

I also asked ChatGPT about pricing ChatGPT in How to price AI.

This is the first in a series of exploratory pieces on pricing and new technology that I am working on while on medical leave. The second is "Will MACH enable new pricing models? (Microservices based, API-first, Cloud-native SaaS and Headless)."

Artificial Intelligence has become an expected part of many applications. It crops up in all sorts of places, expected and unexpected. There are AI-powered chatbots and service bots. All sorts of recommendation engines are powered by AIs, with recommendations ranging from whom to hire to what content to surface. (For some interesting developments here see Scott Rosenberg's Sunset of the social network.) AIs are almost always the first step in analyzing any sort of imagery, from chest x-rays to satellite image data. And of course pricing has long used different versions of AI and related technologies to implement dynamic pricing and make price recommendations.

Commodification of AI and Implications for Pricing

What we have seen over the past decade is the commodification of AI. There is now a standard approach to AI that is widely adopted, along with accepted ways to measure accuracy. This ‘accepted approach’ is the deep learning style of AI developed by Geoffrey Hinton at the University of Toronto, Yoshua Bengio at the Université de Montréal and Yann LeCun at New York University, along with many, many collaborators.

The result is the emergence of what one could call generic AI. Generic AI is quite different from General AI and may not even be on the path to this larger goal. General AI is the goal of developing an AI that can solve many different problems and demonstrate creativity beyond what individual humans are capable of. Generic AI is much less ambitious. It is the application of standard deep learning technologies to many different prediction, classification and recommendation problems.

Most of us encounter applications of generic AI to specific problems rather than working with the AI directly. These problems come in all shapes and sizes, and there is no one approach to pricing. AI contributes to the effectiveness of the solution, but it is the solution itself that matters and that determines the price. Standard value-based pricing approaches work best for applications: begin pricing design by developing a value model, then design a pricing model that tracks value to customer (V2C).

Many of these problem-specific AI applications are built on a generic AI. These are commoditized AI services that can be used to develop many different classification, prediction and recommendation models. They are all based on the same general principles and work in the same way. One could also refer to this as commoditized AI, or say that Generic AI is well along the path to commoditization.

Commoditized does not mean that things are standing still. The major generic AI solutions are constantly improving their services. They are engaged in a ‘Red Queen Game.’ The term comes from Lewis Carroll’s Through the Looking-Glass: you have to run as fast as you can just to stay in the same place, and if you want to get anywhere you must run twice as fast as that. Innovation and improvements are a constant in a Red Queen game, but they do not lead to competitive advantage. There are some important new ways to configure connected AI models to drive faster learning. An example is Generative Adversarial Networks, or GANs. In this approach one AI model generates examples and a second model tries to classify them. The two compete with each other and both improve. AIs competing with each other to train themselves will play a big role in AI over the next few years. We will see this in pricing, where M2M (Machine to Machine) pricing is growing in importance.

Many of the Generic AI solutions are based on one or other of the major open source deep learning libraries. Examples are TensorFlow, PyTorch, Theano and so on (Wikipedia has a good comparison table). There are also open source tools for organising AI workflows and building solutions on top of these libraries, such as Keras. So one approach to pricing Generic AI, and to establishing a baseline, is the well known ‘open source plus hosting plus services’ model established by Red Hat and other first generation open source companies.

Given this context, how would you approach pricing Generic AI? Pricing experts, and anyone else interested, are invited to propose pricing designs for this critical part of the go-forward information infrastructure. Share your ideas in the comments!

Some possible pricing metrics to consider are as follows; a small illustrative sketch combining several of them follows the list.

  • Resources - the cost of resources or infrastructure used to provide the service (deep learning AIs are computationally expensive and run best on hardware and software architected for their specific performance requirements; this is one reason for the sharp increase in demand for chips from Nvidia and other graphics chip makers)
  • Inputs - the data used for training, the number of training runs (providing data for deep learning is an emerging business in its own right); the amount of data that will be consumed by the AI once it is in production
  • Complexity of the model - number of layers, degree of back propagation, advanced models like Generative Adversarial Networks and so on
  • Outputs - number of models, number of applications of model (classifications, predictions, recommendations)
  • Workflows - number of workflows using the model, complexity of the workflows
  • Performance - accuracy of classifications, predictions, recommendations (i.e. do people act on the recommendations)
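
These metrics are not mutually exclusive; a real price list would typically blend several of them. The sketch below is a minimal, hypothetical illustration of such a blend, with placeholder rates and a simple accuracy adjustment; none of the numbers reflect any vendor's actual pricing.

```python
# Hypothetical sketch: blending several of the candidate pricing metrics above
# into a single monthly price. All rates and weights are illustrative only.

def monthly_price(
    gpu_hours: float,          # Resources: accelerator hours consumed
    training_records: int,     # Inputs: records used to (re)train the model
    model_layers: int,         # Complexity: a crude proxy for model size
    predictions: int,          # Outputs: classifications/predictions served
    accuracy: float,           # Performance: measured accuracy, 0.0 - 1.0
) -> float:
    RATE_PER_GPU_HOUR = 2.50          # assumed infrastructure rate
    RATE_PER_1K_RECORDS = 0.10        # assumed data/training rate
    RATE_PER_1K_PREDICTIONS = 0.40    # assumed output rate
    COMPLEXITY_SURCHARGE = 5.00       # assumed surcharge per layer

    base = (
        gpu_hours * RATE_PER_GPU_HOUR
        + (training_records / 1_000) * RATE_PER_1K_RECORDS
        + (predictions / 1_000) * RATE_PER_1K_PREDICTIONS
        + model_layers * COMPLEXITY_SURCHARGE
    )
    # Performance-based adjustment: discount below an assumed SLA target,
    # premium above it (an outcomes-based flavour).
    sla_target = 0.90
    return round(base * (1.0 + (accuracy - sla_target)), 2)


if __name__ == "__main__":
    print(monthly_price(gpu_hours=120, training_records=500_000,
                        model_layers=24, predictions=2_000_000, accuracy=0.93))
```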


Domination by the Large Cloud Vendors and How They Price

The large cloud infrastructure vendors (Platform as a Service, Infrastructure as a Service) dominate the generic AI market. Cloud services are a very large market, about US$200 billion, of which AI services are a very small percentage today. But competition is driving down prices for cloud services, and AI is seen as strategically important to keeping profits and demand up.


How are the major cloud vendors pricing generic AI?

Amazon has a deep AI stack ranging from services such as Vision, Speech and Chat, to various platforms, built on different open source engines. Beneath this is the infrastructure needed to operate the higher levels of the stack.


There are many pricing examples on the Amazon website for each of these different levels; some examples are shown below.

Most of these pages have handy calculators to help you come up with your price. They need these calculators because the pricing is actually quite complex, with a lot of different dependencies.

Browsing through these different pricing pages one can see that most of the pricing is driven by the number of inputs: images, pages, objects. The services rely on models that Amazon develops and is constantly improving.

An example of the Textract pricing calculator.


I like that this calculator gives the option of showing how the calculations are made and showing the payment details. And US$44.50 per month to process 10,000 pages is a pretty competitive cost.
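
To make the arithmetic concrete: US$44.50 for 10,000 pages implies an effective rate of about US$0.00445 per page. The sketch below illustrates this kind of input-based (per-page) pricing; the single blended rate is an assumption for illustration, not Amazon's published price list, which varies by API and volume tier.

```python
# Input-based (per-page) pricing sketch. The US$44.50 / 10,000 pages figure
# above implies an effective rate of roughly US$0.00445 per page; that rate
# is treated as an assumption here, not an actual published price.

EFFECTIVE_RATE_PER_PAGE = 44.50 / 10_000   # ~US$0.00445 per page (assumed)

def monthly_document_cost(pages_per_month: int) -> float:
    return round(pages_per_month * EFFECTIVE_RATE_PER_PAGE, 2)

print(monthly_document_cost(10_000))   # 44.5
print(monthly_document_cost(250_000))  # 1112.5
```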

Google Vertex AI is much more about building and training AIs than about the specific services or generic engines that Amazon is focussed on. The pricing is framed in terms of the amount of data and the training time, as measured in hours (note this is training of the model, not training for the people developing or implementing it). One also has a great deal of choice of the systems (machine types) you want to build your models on. See the Vertex AI Pricing Page.

This is a very different positioning than Amazon. It appeals to organisations that want to build their own models rather than use other people’s models. There is room for both approaches.

As an example, Google prices Consumed ML (Machine Learning) units.

"The Consumed ML units (Consumed Machine Learning units) shown on your Job details page are equivalent to training units with the duration of the job factored in. When using Consumed ML units in your calculations, use the following formula:

(Consumed ML units) * (Machine type cost)

Example:

  • A data scientist runs a training job on an e2-standard-4 machine instance in the us-west1 (Oregon) region. The field Consumed ML units on their Job details page shows 55.75. The calculation is as follows:
  • 55.75 consumed ML units * 0.154114
  • For a total of $8.59 for the job.

See source."
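
A minimal sketch of that formula, using the numbers from Google's own worked example (55.75 Consumed ML units at US$0.154114 per unit for an e2-standard-4 machine in us-west1); any other machine rates you plug in would be assumptions.

```python
# Sketch of the Consumed ML units formula quoted above:
# job cost = (Consumed ML units) * (machine type cost per unit).

def training_job_cost(consumed_ml_units: float, machine_rate: float) -> float:
    return round(consumed_ml_units * machine_rate, 2)

# Google's worked example: e2-standard-4 in us-west1 (Oregon)
print(training_job_cost(55.75, 0.154114))  # ~8.59
```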

MS Azure - The Microsoft approach sits between that of Amazon and Google. There is access to a lot of powerful capability here, some of it domain-specific (Microsoft Genomics for example), some of it for specific functionality (conversation bots or advanced search functionality).


Azure Machine Learning is a complete AI modelling solution with first-class support for the open source framework PyTorch (which was originally developed at Facebook and is supported by Meta AI). If you are willing to commit to it as your ML platform, it is a very good solution indeed. Like Amazon, the pricing is based on the hardware you will be using. As you gather more and more data and build more complex models, you will generally need to upgrade your hardware.


IBM Watson - IBM has taken a different approach, building on its own proprietary AI technologies, known collectively as Watson. There are many domain-specific solutions available, from advertising and business operations to health and finance.

There are a lot of different techniques included in Watson; it is not a straight deep learning technology. The Wikipedia article gives a good summary of its development.


Watson Studio is where one can use Watson to develop AI models. The pricing here, like that of Amazon and MS Azure, is tied to the hardware made available.


I am not convinced that tying the price of AI and DL (Deep Learning) services to the underlying hardware they run on makes a lot of sense. It seems to me like a hangover from the old AWS (and Rackspace) model of leasing virtual servers in an ‘elastic cloud.’ It is an IT-centric view of the world, and one that is much closer to cost-plus pricing than to value-based or outcomes-based pricing. There is a lot of room for innovation and change here.

There are Specialist Alternatives for Generic AI

The big cloud infrastructure companies may dominate generic AI services, but they do not own the market. Let’s look at what some smaller and more focussed vendors are doing. Two companies that come to mind are H2O.ai and OpenAI.

H2O.ai can be thought of as enterprise open source and ‘no code AI.’ They have a very attractive positioning.

“H2O.ai is the leading AI cloud company, on a mission to democratize AI for everyone. Customers use the H2O AI Cloud platform to rapidly make, operate and innovate to solve complex business problems and accelerate the discovery of new ideas.”

Price optimization is given as a use case for their AI platform and there are some case studies on the website worth reading through. The focus is retail pricing but the approach could easily be extended to B2B.

I was able to find mention of a 90-day free trial for H2O.ai, but I could not find any guidance on how the solution will be priced after the trial. It appears that they use some form of solution pricing, where the hosting, functionality and support are bundled together into a subscription that tends to run to six figures. This is the approach a consulting firm might take to winning subscription revenues for providing a complete AI solution.

OpenAI takes a different approach. It supports a narrower set of use cases, focussed on natural language processing and text-to-code applications. OpenAI provides APIs to its models that can be used to develop many different applications (IBM Watson also works this way for many of its use cases). OpenAI prices per token. A token is a piece of a word. To get a feel for this, they provide a Tokenizer Tool. This paragraph has 93 tokens.
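
If you want to count tokens programmatically rather than with the web tool, one option is OpenAI's open source tiktoken tokenizer; a minimal sketch follows. The choice of encoding ("r50k_base", used by the first-generation GPT-3 models) is an assumption, so check which encoding matches the model you are pricing.

```python
# Counting billable tokens with the open source tiktoken library
# (pip install tiktoken). The encoding name is an assumption; verify it
# against the model you are actually pricing.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")

text = "OpenAI prices per token. A token is a piece of a word."
tokens = enc.encode(text)
print(len(tokens))  # number of billable tokens for this text
```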


Pricing is based on two dimensions, the input and the model. The input pricing factor is the number of tokens. The model pricing factor depends on two aspects of the model, the power of the model and the speed of processing, with a tradeoff between the two. Pricing is well presented on the website.


If you need to fine-tune the model there is an additional charge for training. The more tokens used in training, the higher the price.
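
Putting the two dimensions together, the basic structure is price = tokens × per-token rate, with the rate set by the model, plus an optional training charge for fine-tuning. The sketch below uses placeholder rates for illustration only; the actual per-1,000-token prices are on OpenAI's pricing page and change over time.

```python
# Sketch of OpenAI-style token pricing: usage cost depends on the model's
# per-1,000-token rate; fine-tuning adds a training charge that grows with
# the number of training tokens. All rates and the number of training passes
# are assumptions for illustration, not OpenAI's actual price list.

USAGE_RATE_PER_1K = {            # assumed illustrative rates, US$ per 1,000 tokens
    "ada": 0.0004,
    "babbage": 0.0005,
    "curie": 0.0020,
    "davinci": 0.0200,
}
FINE_TUNE_TRAINING_RATE_PER_1K = {  # assumed illustrative training rates
    "ada": 0.0004,
    "babbage": 0.0006,
    "curie": 0.0030,
    "davinci": 0.0300,
}

def usage_cost(model: str, tokens: int) -> float:
    return tokens / 1_000 * USAGE_RATE_PER_1K[model]

def fine_tune_cost(model: str, training_tokens: int, passes: int = 4) -> float:
    # More training tokens (and more passes over them) means a higher price.
    return training_tokens * passes / 1_000 * FINE_TUNE_TRAINING_RATE_PER_1K[model]

print(round(usage_cost("davinci", 750_000), 2))      # 15.0 with these rates
print(round(fine_tune_cost("curie", 1_000_000), 2))  # 12.0 with these rates
```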


OpenAI can also generate rich numerical representations of a piece of text that can be used in other applications. They refer to this as embedding.


The embedding service uses output pricing, as you pay for the tokens plus metadata that you can then use in other applications.

Conclusions on Pricing Generic AI

People engaged in pricing generic AI services have some of the most interesting jobs in pricing today. They will frame how we think about the value of generic AI, and how and how much we end up paying for these services. As AI is going to be used in almost all applications, this is an important place to be.

Application-specific uses of AI will be priced using value-based pricing and outcomes-based pricing.

For the large vendors, the dominant trend is to price based on the hardware and data storage needed to operate the AI solution. This is a form of marginal cost pricing, which suggests that price will decline to the marginal cost. The vendors with the lowest marginal costs will win, which generally means that the largest vendors will win and that Amazon will dominate the market for Generic AI. This does not have to be the outcome though. Vendors that find ways to tie pricing to workflows, outputs or outcomes will have a strategic advantage over those that rely on infrastructure or input based pricing.

H2O.ai represents a different approach, in which a solution is priced holistically. The price will be different depending on the specific solution, making it difficult to publish pricing. Solution pricing is most often used for complex technical solutions, which AI can be. It is also a well understood business model in the open source community, which H2O.ai contributes to.

OpenAI is an example of input pricing combined with model pricing. Tokens are the input, and the four different models (Ada, Babbage, Curie, Davinci) determine how much you pay per token.


At this point I have not found generic AI services that are priced per output, outputs being the number of classifications, predictions or recommendations, but this is an obvious pricing model and will no doubt surface in the next couple of years if it has not already. From there it is a short step to pricing based on the accuracy of the outputs.

In the academic world, AIs are measured by their performance on set tasks and data sets. It was the success of a CNN (Convolutional Neural Network) in the ImageNet challenge that led to the current widespread adoption of the deep learning approach. There are three things to note here: the rapid dominance of CNNs, the rapid improvement in accuracy, and the compression in the results, with most of the entrants delivering roughly the same level of accuracy. This happened very quickly, in the eight years from 2010 to 2017.


Not surprisingly, there has been a lot of work on the quality of AI-based outcomes in the healthcare industry, where AIs are being used to classify x-rays and other images, select patients for treatments, optimise scheduling, and many other applications. For some recent work, and to get an idea of the general approach, see Quality assessment standards in artificial intelligence diagnostic accuracy systematic reviews: a meta-research study by Shruti Jayakumar et al. from January 2022.

In the SaaS world, pricing often varies with service level agreements (SLAs). The same approach could be used for generic AI, where the accuracy of the outcomes could serve as a pricing metric. In many cases this is more appropriate for domain-specific applications that use AI than it is for generic AIs, but for standard applications with established performance criteria, outcomes-based pricing is likely to become the winning pricing model.
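
As a thought experiment, here is a speculative sketch of what such an outcomes-flavoured model might look like: a per-output charge with a rebate when measured accuracy falls below an SLA commitment. Every number in it is an assumption, since, as noted above, I have not yet found a generic AI service that prices this way.

```python
# Speculative sketch of output-based pricing with an accuracy-linked SLA
# adjustment. All rates and thresholds are assumptions for illustration.

def output_based_charge(predictions: int, measured_accuracy: float) -> float:
    BASE_RATE_PER_1K_OUTPUTS = 0.50   # assumed base price per 1,000 outputs
    SLA_ACCURACY = 0.95               # assumed accuracy promised in the SLA

    charge = predictions / 1_000 * BASE_RATE_PER_1K_OUTPUTS
    if measured_accuracy < SLA_ACCURACY:
        # Outcomes-based adjustment: rebate part of the charge when the
        # service misses its accuracy commitment.
        charge *= measured_accuracy / SLA_ACCURACY
    return round(charge, 2)

print(output_based_charge(3_000_000, 0.97))  # full charge: 1500.0
print(output_based_charge(3_000_000, 0.90))  # rebated: ~1421.05
```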

Piotr Kulaga

UX Analyst + Designer (Systems Thinking), M.Des.Sc. (Des.Comp.)


Well beyond the key issue of 'pricing', the context of attributing value to capabilities or services provided by this analysis would serve very well to put some sense into public debate, and in particular the commonplace misrepresentations and hyperbole of statements relating to AI initiatives. While somewhat confusing against the fallacy of 'general' AI, referring to off-the-shelf 'narrow' AI implementations as generic is very instructive, especially in terms of misplaced expectations of making inroads towards possibilities that could only be realised with General AI, so far an impossible dream. Without getting into a debate on misinterpretations of a supposed continuum from current narrow AI techniques to something we do not even know how to approach, formalising an approach to 'pricing' is a great start to a sober discussion. In other words, discernment in the level of capability and competency involved in AI projects, e.g. using services on a ready-made 'consumer' platform versus bespoke deployments employing still generic yet more broadly customisable AI libraries and frameworks, represents an undertaking at a different level of possibilities and, in turn, potential decisions about authentic or unique value propositions.

Vignesh Thiyagarajan

Monetization| Commercial and Pricing Strategy| Product Management| Global Executive| Power BI enthusiast| AI Solution seeker


Wow, a thorough article explaining AI models and infrastructure services in today's, and possibly the near-term, landscape. Thank you and well done.
