The generative AI bill is coming due, and it’s not cheap

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m looking at the high costs facing enterprises and other organizations trying to integrate generative AI tools into their businesses. Also, new research sheds light on how AI-generated disinformation might complicate next year’s elections.

If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at [email protected], and follow me on X (formerly Twitter) @thesullivan.


The generative AI bill is coming due, and it’s not cheap

As AI developers try to commercialize and monetize their models, their customers are coming to grips with the fact that the technology is expensive. The up-front costs of developing AI models are significantly higher than those associated with developing traditional software. Developing large AI models requires highly talented (and highly paid) researchers. Training the models requires huge numbers of expensive (often Nvidia-powered) servers. And, increasingly, AI developers will have to pay for the text, image, and knowledge-base data used to train models. The operating costs are steep, too: SemiAnalysis analyst Dylan Patel has estimated that running ChatGPT costs OpenAI about $700,000 a day. And a recent Reuters report says that in the first few months of 2023, Microsoft was losing about $20 per user per month on GitHub Copilot, its AI coding assistant, for which users pay $10 per month.
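Those Copilot numbers imply a striking gap between price and cost. Here’s a minimal back-of-envelope sketch in Python; the only inputs are the figures from the Reuters report, and everything derived below is simple arithmetic, not reported data:

```python
# Back-of-envelope sketch of the per-user Copilot economics described above.
# The $10/month price and ~$20/month loss come from the Reuters report;
# the derived numbers are plain arithmetic, not inside knowledge.

subscription_price = 10.00      # what a Copilot user pays per month (USD)
reported_loss_per_user = 20.00  # average loss per user per month, per Reuters

# Implied cost of actually serving one user for a month:
implied_serving_cost = subscription_price + reported_loss_per_user
print(f"Implied serving cost per user: ${implied_serving_cost:.2f}/month")

# At that cost, the break-even price would be a multiple of today's price:
break_even_multiple = implied_serving_cost / subscription_price
print(f"Break-even price is about {break_even_multiple:.0f}x the current price")
```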

As the developers try to commercialize their models, those high costs must eventually be passed on to customers. The prices of the first available AI products for enterprises are already getting attention. Both Microsoft and Google have announced that they will charge $30 per user per month for their respective AI assistants within their productivity suites. That’s on top of the licensing fees customers already pay. Enterprises can also access large language models from companies like OpenAI, Anthropic, and Cohere by calling them through an application programming interface (API). Those calls are typically metered by the token, for both the prompt sent to the model and the output it generates, and the charges can add up quickly.
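To see how per-token metering compounds at enterprise scale, here’s a minimal cost-estimation sketch in Python. The per-token prices are illustrative assumptions, not any provider’s actual rates, and monthly_api_cost is a hypothetical helper, not part of any vendor SDK:

```python
# A hedged sketch of how per-call LLM API costs accumulate. The token
# prices below are illustrative placeholders, not current rates; check
# your provider's pricing page before relying on any number here.

PRICE_PER_1K_INPUT = 0.0015   # USD per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.002   # USD per 1,000 output tokens (assumed)

def monthly_api_cost(calls_per_day: int,
                     input_tokens_per_call: int,
                     output_tokens_per_call: int,
                     days: int = 30) -> float:
    """Estimate a month of LLM API spend for a steady workload."""
    per_call = (input_tokens_per_call / 1000 * PRICE_PER_1K_INPUT
                + output_tokens_per_call / 1000 * PRICE_PER_1K_OUTPUT)
    return per_call * calls_per_day * days

# Example: 1,000 employees each making 20 calls a day, with 1,500 input
# tokens (a long prompt plus context) and 500 output tokens per call.
estimate = monthly_api_cost(calls_per_day=1000 * 20,
                            input_tokens_per_call=1500,
                            output_tokens_per_call=500)
print(f"Estimated monthly spend: ${estimate:,.2f}")
```

Even at fractions of a cent per call, that workload lands near $2,000 a month on these assumed rates, and longer prompts or pricier models scale the figure up fast.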

For its part, OpenAI seems to be making a successful business out of selling subscriptions to ChatGPT and selling API access to its GPT-3.5 Turbo and GPT-4 LLMs. Bloomberg reported in late August that the company is generating $80 million per month, putting it on track for about $1 billion in annual revenue. In 2022, the company lost $540 million developing ChatGPT and GPT-4, The Information reported.

But the economics described above apply to the commercialization of huge, general-purpose models designed to do everything from summarizing long emails to writing computer code to discovering new cancer drugs. OpenAI, for example, explains that it’s trying to offer enterprises a generalized “intelligence layer” that can be used across business functions and knowledge areas. But that’s not the only approach. Many in the open-source community believe that enterprises can build and use a number of smaller, more specialized models that are cheaper to train and operate.

Clem Delangue, CEO of the popular open-source model-sharing platform Hugging Face, tweeted Tuesday: “My prediction: in 2024, most companies will realize that smaller, cheaper, more specialized models make more sense for 99% of AI use-cases. The current market & usage is fooled by companies sponsoring the cost of training and running big models (especially with cloud incentives).”
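Here’s a minimal sketch of what that smaller-model approach looks like in practice, using Hugging Face’s transformers library to run a compact, task-specific model locally, with no per-token API charges. It assumes pip install transformers torch; the model named below is a real, publicly hosted example, though an enterprise would pick (or fine-tune) one for its own task:

```python
# A minimal sketch of the "smaller, specialized model" approach: pull a
# compact task-specific model from Hugging Face and run it locally.
# Assumes `pip install transformers torch`.

from transformers import pipeline

# DistilBERT fine-tuned for sentiment analysis: roughly 67M parameters,
# small enough to run on a CPU, versus the vast general-purpose LLMs
# served behind metered APIs.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("The generative AI bill is coming due, and it's not cheap.")
print(result)  # e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]
```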


AI disinformation in 2024: New details about what U.S. voters might encounter next year

Senator Mark Warner, one of the smartest members of Congress when it comes to AI, fears AI-generated disinformation could wreak havoc during election season next year. “[Russia's actions were] child's play, compared to what either domestic or foreign AI tools could do to completely screw up our elections,” he told Axios.

A new study from Freedom House puts some facts behind the fear. The researchers found that generative AI has already been used in at least 16 countries to “sow doubt, smear opponents, or influence public debate.” Surprisingly, the two most recent examples of widely distributed AI-generated disinformation were audio. Politico notes that, ahead of Slovakia’s recent parliamentary election, right-wing operatives released fake audio clips depicting the voice of a liberal candidate talking about plans to rig the election and raise the price of beer. And Poland’s centrist opposition party used AI-generated audio clips mimicking the country’s right-wing prime minister in a series of attack ads.

Generative AI tools, including image, text, audio, and even meme generators, have quickly become more available, more affordable, and easier to operate over the past few years. And social media platforms such as X and Facebook provide ready distribution networks that can reach millions of people very quickly. To make matters worse, the U.S. and many other countries have no binding regulations requiring the developers and users of these tools to make clear that their output is AI-generated.


Americans want the government to develop its own AI braintrust, not rely on Big Tech and consulting firms

New polling on AI policy from the Vanderbilt Policy Accelerator finds that most people want the government to develop its own braintrust for regulating AI, and for deciding how federal agencies should use the technology. The government has traditionally relied on tech companies and consulting firms for the technical expertise those tasks require, but much of the public seems to believe that the stakes of AI regulation are too high to let tech companies define regulation, or regulate themselves. More than three-quarters (77%) of the 1,000-plus respondents support the creation of a dedicated team of government AI experts to improve public services and advise regulators. That number dropped to 62%, though, when respondents were confronted with the argument that such a team might amount to “big government.”



Dr. Thomas P.J. Feinen

Managing Director | Consumer, Media & Digital | eCommerce | D2C

11 months ago

Thanks for shedding light on the growing costs and challenges of integrating generative AI. It's evident that as AI technology advances, the expenses associated with its development and implementation are substantial. The call for the government to have its own AI experts for regulation is an interesting shift in perspective. It raises important questions about who should define AI policies and how we can ensure transparency and public trust in AI's governance.

Danny Bammer

Textile Mill Worker at Shaw Industries

1 year ago

I'm not so sure I trust either the government or Big Tech when it comes to establishing an A.I. system.

Mark Sullivan

Senior Writer at Fast Company

1 year ago

(Would love to see more discussion here, folks. Please toss in some questions and opinions; there are no wrong ones, and we're all learning. It could help me understand things that I'm missing in my reporting.)
