Generative AI Business Models - How OpenAI makes money and why it matters
Kamalika Poddar
Award-Winning FinTech Product Leader | Grew to 1.5M+ women (~₹32cr. AUM) | Solving for frictionless wealth transfer | Global AI & Finance leader | TradFi -> Innovation | Fintech Chronicler for 70k+ pros | Animal Lover
With the explosion of ChatGPT use cases, have you wondered how OpenAI, the company behind it, makes money? This newsletter edition explores the business models of Generative AI companies, and what each of them means for the future of this nascent industry. Read on to know more.
Over the last 2 months, I would’ve downloaded and read at least 50 different PDFs, ranging from ChatGPT for Superhuman Productivity to ChatGPT for First-Time Product Managers. (The latter, I was hoping, would help me in my new role, but never mind.)
And of course, I started using a lot more AI-generated art for my content as well. But all that exuberance got me thinking: how is this ecosystem really going to mature? And what is the business model of companies like Stability AI or OpenAI? Because a lot of what happens behind the scenes will help shape consumer sentiment too!
Unfortunately, I could barely find any business case written on them. Instead, my ad recommendations now look something like this:
So, I decided to take the next best course of action. Try researching and writing about it myself.
But before you dive in, a warm welcome to all the readers of Fintech Chronicler. This edition is going to be a little different from all my usual ramblings and ruminations on the world of Fintech, Crypto and Web3. So I am eagerly looking forward to your comments and thoughts on this piece.
Also, because it’s so far out of my comfort zone, I would love to know if you feel I didn’t cover some aspects fully, or if I got something incorrect in this edition.
So I have looked at 5 aspects: Customer acquisition strategy, Pricing Models, Costs involved, Network Effects and Product Differentiation.
Customer Acquisition
When ChatGPT launched, there was a huge lineup to access their beta product. There were stats everywhere about how other internet companies took years to reach their first million customers, and here OpenAI did it in just under a month.
Except they all got the definition of Customer wrong there!
Because at that point in time, ChatGPT was free to use, meaning they were all trying out the product, not making a commitment to buy it. In the physical world, we’d call that the population to whom a free sample was distributed!
And that’s the thing with sampling. You never know how many will end up liking your product enough to pay for it. Which is why, despite having crossed 100 million users, OpenAI still hasn’t made the news for the number of people who’ve signed up for their $42-a-month service.
While everyone knows that to attract retail customers you’ll need to throw in freebies and other goodies until you burn away like a supernova, there is another customer segment that, given the right product proposition, wouldn’t be so allergic to paying.
Businesses.
OpenAI and Microsoft’s partnership is a step in that direction. And it makes so much sense.
For starters, OpenAI can leverage Microsoft’s existing business clients to pitch an upgrade to the suite of Microsoft products they already have. Imagine if, in addition to Office 365, they bundle in GPT to help Product Managers like me churn out PRDs like a pro, in as little time as possible! Or, for us lazy folks, to create a stunning presentation of our product vision to pitch to the leadership team and get them to rally behind it?
But all that would depend on the value they put on the table for the money people pay.
Which takes us to the next section.
Pricing Strategy
When ChatGPT soft-launched on 30 Nov ’22, throngs of us signed up for the experience. And for us early adopters, it was a free ride. That is, till mid-Jan ’23 at least, when the servers started acting erratic and the service faced frequent outages. This was followed by the $42-a-month subscription plan for a “Pro” tier, which promised to avoid such outages.
This brings us to the pricing of Generative AI platforms.
Broadly, there are 2 pure models out there, each with its own unique set of advantages, along with a hybrid of the two.
Pay-per-Use Model
This is what Stable Diffusion has on offer, where each image you generate will cost you $0.20.
The advantages for customers are obvious! Especially for dilettante creators like me, who need only one or two images a week for their newsletter. Like this random scenery.
So, wearing the business hat, that also means I can keep my customer acquisition costs low. And as the business scales, infrastructure costs will be the only ones I need to bear in order to support my customers.
However, planning and adding infrastructure on the fly (read: cloud servers to manage the data loads and run the models seamlessly) will be difficult, because in this case revenues are a lagging indicator of demand!
Subscription based Pricing Model
This brings us to the other model: a subscription-based fee model. Here I charge my customers upfront for using my platform in the coming months. And I know their max usage limits, because that’s part of the subscription plan. So, as a business, I can plan ahead better and not have much downtime!
However, the fee (like the $42 a month for ChatGPT Pro) may be an inhibitor for new users to come and try my product. Which means the business has to constantly lure new customers with freebies and other offers, jacking up the customer acquisition cost as the business scales.
And honestly, I can still see this working and providing a steady revenue stream for the business. But only after the hype for GPT-based content dies down, and users have found that one sweet spot to use the model for.
This means that while people may initially pay for the services out of FOMO, after the first few months, if a strong use case does not emerge, we will see massive attrition of users from the platform.
Tiered Pricing Model
Then of course there are hybrid pricing models.
You pay for XXX credits a month, and if you happen to use that up, well you can pay per use.
This to me is a classic model, because it gives generative AI companies a sense of the load, and they can also scale dynamically as the load goes up.
However, the challenge in this case will be pricing the subscription plan. Place it too low, and you’d have a stream of customers always paying per use, and often getting frustrated in the process. This could lead to attrition, especially given the number of similar competitors out there in the market. All that is needed is for a competitor to undercut you by a sliver.
Or the business ends up pricing its subscription so high, with crazy-high usage levels, that it alienates most of the population, resulting in a trickle of a revenue stream.
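To make the trade-off between the three pricing models concrete, here is a minimal sketch in Python. All the numbers (the $0.20 per image, the $42 flat fee, the tiered plan’s $10 base with 100 included credits) are illustrative assumptions for this article, not any vendor’s actual rate card:

```python
def pay_per_use(images: int, price_per_image: float = 0.20) -> float:
    """Pure usage-based billing: cost scales linearly with usage."""
    return images * price_per_image

def subscription(images: int, monthly_fee: float = 42.0) -> float:
    """Flat fee regardless of usage (up to whatever cap the plan sets)."""
    return monthly_fee

def tiered(images: int, monthly_fee: float = 10.0, included: int = 100,
           overage_price: float = 0.20) -> float:
    """Hybrid: a base fee covers some included credits,
    and anything beyond that is billed per use."""
    overage = max(0, images - included)
    return monthly_fee + overage * overage_price

# A light user (10 images/month) vs. a heavy user (400 images/month):
for images in (10, 400):
    print(images, pay_per_use(images), subscription(images), tiered(images))
```

Running this shows exactly the tension described above: the light user is cheapest on pay-per-use, while the heavy user is better off on a flat or tiered plan, so where the business sets the tier boundary decides which customers it keeps.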
Cost Structure
I would bucket costs into 2 categories here. The first would be the fixed costs of having developed the infrastructure; the second, the dynamic costs associated with scaling the services.
Let’s look at the fixed costs first, taking the case of Midjourney to delve into the details.
So Midjourney opened their beta sometime in mid-2022, but before that, in early 2022, they put out a tweet inviting people to submit their pictures along with detailed captions describing them. They also used their Discord channel to source the same.
So acquiring the base data was free in this case, but that may not always be so. Take Shutterstock, for example. They typically pay out a commission to their community of contributors, although it’s a small percentage of what they make. And Shutterstock AI continues to pay its contributing artists, both for the stock images and for training its models.
Alright, acquiring is just one part; the next is storing all those massive, high-resolution pictures without compromising on quality. This leads to buying a lot of cloud storage. And as the data sets expand, so does the need for more storage (so this is also a part of the dynamic costs).
Next is the cost of developing the models. This means being able to attract and retain top AI talent, and in 2021 through 2022, that did not come cheap. Even now, AI engineers, despite the layoffs, command a hefty premium!
But to build these models, we need superior computing power. After all, even Michelangelo couldn’t have created those sculptures if he didn’t have the right quality of marble and chisels in front of him. That is where customised chips and microprocessors come into play, and Nvidia is a company that has strongly benefitted from this race to build the next biggest Generative AI model. But with Moore’s law, building future AI models will not be as expensive as it was in 2020.
Once we have the model ready, and taught it to learn through a nice little feedback loop, the R&D costs can be amortised. But then we start incurring other variable costs in order to scale the business and ensure the product serves a need.
This is where talented Project and Product Managers are going to be needed.
Then again, there are the infrastructure costs of adding additional servers in anticipation of demand.
Also, there are the costs of future upgrades and maintenance in order to stay relevant to our target audience.
And finally, the biggest chunk of the costs. Marketing and acquiring new users onto the platform.
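One way to see how the fixed and variable buckets above interact is a back-of-the-envelope unit-economics sketch. The figures here ($10M of fixed model R&D, $2 per user per month to serve) are invented for illustration, not OpenAI’s or Midjourney’s actual numbers:

```python
def cost_per_user(fixed_costs: float, variable_cost_per_user: float,
                  users: int) -> float:
    """Fixed costs (model R&D, training compute) are amortised across
    all users; variable costs (inference servers, storage, support,
    marketing) accrue per user."""
    return fixed_costs / users + variable_cost_per_user

# Hypothetical: $10M to build the model, $2/month to serve each user.
for users in (10_000, 100_000, 1_000_000):
    print(users, round(cost_per_user(10_000_000, 2.0, users), 2))
```

The point the arithmetic makes: as the user base grows, the per-user cost collapses toward the variable-cost floor, which is why scale matters so much to these businesses.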
Network Effects
So what are some leading indicators of which business will make it, and stand the tests of time for the next decade or so? That is where network effects come into play, and why these generative AI companies are racing against each other.
The “network effect” is the phenomenon where a product, service, or platform becomes more valuable as more people use it. In most cases, the value of an offering grows in proportion to the number of individuals who use it, whether those individuals are buyers, sellers, or users.
As Professor Bharat Anand of Harvard Business School puts it in his lecture notes: “in other words, the willingness to pay, for a buyer, increases as the number of buyers or sellers for the business expands.”
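A textbook way to quantify this intuition (my addition here, not from Prof. Anand’s notes) is Metcalfe’s law: a network of n users has n(n-1)/2 possible pairwise connections, so its value grows roughly with the square of its user count:

```python
def metcalfe_value(users: int) -> int:
    """Number of possible pairwise connections among `users` participants,
    a common proxy for network value (Metcalfe's law)."""
    return users * (users - 1) // 2

# Doubling the user base roughly quadruples the connection count:
print(metcalfe_value(100))   # 4950
print(metcalfe_value(200))   # 19900
```

For generative AI, the “connections” are less literal: the value loop runs through users feeding usage and feedback back into the model, as the next section discusses.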
So how does that apply to Generative AI models?
At its core, Generative AI is nothing but unsupervised and semi-supervised machine learning algorithms used to generate new material from existing content such as text, audio and video files, images, and even code. The goal is to produce novel artefacts that pass for the actual human-created thing in most ways. In general, you can choose between two families of models:
GANs, or Generative Adversarial Networks, are tools that can generate artwork from text as well as images, or unstructured datasets
or
Models built on the Transformer framework can use data scraped from the web to generate content such as blog posts, news releases, and white papers.
And how can these unsupervised or semi-supervised models get better? Well, by getting more users to try them and feed their feedback back to the machine!
Can it become a winner-takes-all market, thanks to these network effects? I do not think so.
Product Differentiation
And that is because most generative models know what the attributes of their models are. The quality of their outputs depends on the different weights they assign to the thousands of different attributes, and each outcome varies with the slightest variation in these weights.
So, then why do we need so many generative AI models?
It is a widely held belief that the performance of AI models will eventually level off. After speaking with app developers, it is evident that this has not yet taken place, as there are clear leaders in both the text and image model categories. Their advantage stems not from their model designs being the only ones of their kind, but rather from expensive capital needs, proprietary product interaction data, and the limited availability of AI talent. Will this be a competitive moat that holds up over time?
Not unless generative AI models can find a niche area to focus on and get better at. However, with everyone trying to get closer to Artificial General Intelligence, I guess we will have to wait for the industry to mature before we see such use-case- or industry-specific AI models emerge.
Until then, I think I’ll stick to writing my newsletters myself!
PRESIDENT at ACME IMPORTS
At the current prices being bandied about, there are very few businesses that will say no to $20 or $40 if it helps your efficiency by even 5% or 10%, as that's not even 1 hour of labor in most markets. Compared to the cost of Microsoft Office for each desk, this is a mere pittance, and that is why they will probably get a huge following over the next year or two.
Building Datavio | IIT BHU
I think I would tend to disagree with respect to the network effect, because the main idea of a network effect is that the presence of the network helps the platform grow itself (e.g. Swiggy grows by network effect, since more customers mean more restaurants, and vice versa). OpenAI has not yet talked about improving the product by using the network of users and the prompts they generate. And even though a huge number of prompts are now being generated, they don't translate into training data. One important aspect which I felt should be focused on much more is the B2B angle. The fact that you can use the gpt-3 model, priced per token, for your own use cases is the most important aspect of the business model. This translates well into the idea of `Intelligence-as-a-service`, where different businesses can use the intelligence of the model to build custom solutions for their business problems.
Product Manager @ Unilever | Exploring to get 1% better
Very well written Kamalika Poddar. Adding my views here - it makes more sense for small to medium scale companies to build on the existing dataset of ChatGPT or Bard instead of building their own dataset to carve a niche. For generating images for your content, you might want to look at DALL-E 2 by OpenAI. It will create custom images for you as per your description (pictures that never even existed on the internet :P)
GenAI EM, Flipkart. IITB. Exited AI Startup.
Loved the deep-dive into the business aspect of OpenAI. Not many people talk about it. Building a business based on industry-specific AI models actually makes more sense to me too. That's why I have been experimenting with running ChatGPT on my own company data.