Is OpenAI becoming too big to fail?
VentureBeat
In 2009, financial journalist Andrew Ross Sorkin's acclaimed book "Too Big to Fail," chronicling the collapse of Lehman Brothers and the onset of the 2008 global financial crisis (GFC), hit store shelves.
Since then, the phrase "too big to fail" — originally coined in 1984 by then-U.S. Congressman Stewart McKinney (R-Conn.) in reference to Continental Illinois bank — has remained in common currency to describe a corporation or organization whose impact is so large and so important to the functioning of society that governments will do anything, and pay nearly any sum, to ensure it does not collapse in on itself the way Lehman Brothers did 16 years ago.
There's mounting evidence that the term now applies to OpenAI , as well.
Despite an army of vocal critics and doubters — including many former researchers who have left the company over what they see as a lack of appropriate focus on AI safety and "existential risk" or "x-risk" — OpenAI has continued to climb in both its number of users and its importance on the global stage.
Just this week we got word that OpenAI's signature chatbot ChatGPT has increased its total weekly active users from 100 million (already a staggering number for a product less than two years old) to 200 million in the last 10 months.
Furthermore, the company told us that its products — which include the underlying GPT large language models (LLMs) powering ChatGPT and several other major models and services — are in use by 92% of Fortune 500 firms.
So even while there are prominent doubters in the entire generative AI craze — including on Wall Street — the money shows that most of Corporate America's biggest firms are reliant on OpenAI, at least for the time being, and that its technology is becoming more commonplace among them than even Microsoft Office 365.
Add to that the fact that two of the top five most valuable firms in the world by market capitalization — Nvidia and Apple — are reportedly eyeing investing in OpenAI directly in a new round that would value it at more than $100 billion, and you begin to see how the startup that ushered in the generative AI era isn't going anywhere.
That's despite a big rise in competition in the form of open source AI models from the likes of Meta — which now counts 40 million daily users for its Meta AI chatbot introduced only four months ago — and many other companies, including Nvidia itself and Chinese e-commerce giant Alibaba's Cloud services division.
It's not an official measure of status, but the fact that OpenAI leader Sam Altman is set to appear in a new TV special on AI hosted by Oprah Winfrey is also evidence of its mounting importance to media and American culture.
But the biggest indication that OpenAI is cementing itself as a "too big to fail" firm is in the news that it and rival Anthropic inked a deal with the U.S. National Institute of Standards and Technology (NIST), an agency in charge of what it sounds like — coordinating standards for technology relied upon by the nation and its people — to provide researchers at NIST's AI Safety Institute with pre-release versions of new AI models for safety evaluation.
A number of people theorized that OpenAI was getting cozier with the government when it announced the appointment to its board of retired U.S. Army General Paul M. Nakasone, the former head of the National Security Agency (NSA) — yes, the spy agency that was secretly surveilling millions of Americans' phone and web data through a program called PRISM, exposed in 2013 by whistleblower Edward Snowden.
The news this week that it is giving NIST's AI Safety Institute researchers preview access to unreleased models only further solidifies that narrative. It's interesting timing, too, coming ahead of what is likely to be another hotly contested U.S. presidential election in which the incumbent is not even running. Typically, new presidents bring their own policies and new government officials into federal agencies such as NIST, so wouldn't it make more sense for Anthropic and OpenAI to wait until after the election to see who they will be dealing with going forward?
To me, the timing suggests that no matter who is in charge of the federal government come January 2025, OpenAI and Anthropic want to stay close and on the right side of it.
And it goes both ways: the government wants to keep an eye on OpenAI and Anthropic, too, because of how powerful and integral to the country's economy generative AI is becoming. It sees both companies as potential leaders in the space. And it is sensible for it not to pick a winner or pick favorites at this stage.
But with the money lining up behind OpenAI, it seems to me that the Sam Altman-led company in particular is rapidly becoming "too big to fail."
I'll end with a caveat that some prior tech giants such as IBM and even my old employer Xerox were once thought of similarly, and have since faded from view in terms of economic importance and in receiving attention from the government, media and regular people. But even those companies are still around — albeit in a diminished form from their heydays.
All of which is to say — those hoping for OpenAI's downfall or a sudden cash crunch are probably going to be waiting a while.
That's all for this week. Thanks for reading, subscribing, commenting, sharing, and being you.
Have a nice Labor Day Weekend if you're in the U.S.
Info Systems Coordinator, Technologist and Futurist, Thinkers360 Thought Leader and CSI Group Founder. Manages The Intelligence Community and The Dept of Homeland Security LinkedIn Groups. Advisor
6 months ago: It is too big to fail now, with competition continuing. Great thoughts, VentureBeat and Louis C.
Strategic/Legal Consultant with unique business expertise. Focusing on blockchain, AI, tech and finance. Providing workshops and consulting. Attorney (former SEC and major firm) and exited founder. Visit Damsker.com
6 months ago: The entire concept of "too big to fail" is ridiculous. NOTHING is too big to fail — it is only in the interest of particular companies or governments that these concepts even exist. Even institutions can, and should, fail when they no longer serve their purpose, or cannot provide goods or services at a price the market is willing to pay and make a profit doing so. Too big to fail means that the company, the market, and/or the government have collectively suppressed or failed to support competition such that no alternatives were permitted to enter the market. It does not mean that these companies or institutions "won." It means that all alternatives were and are forced to lose. Then the failing husks of these companies and institutions are awarded life support in the form of unreturned taxpayer dollars (unreturned to taxpayers) with questionable benefits and no vote by referendum. I won't even get into the issues with vote manipulation (lobbyists shouldn't be able to manipulate elected officials, because they are merely proxies and have no vote to offer), and our profound issues with mortality that attempt to negate the fact that companies, governments and humans are not meant to survive indefinitely.