Sam Altman Returns To OpenAI: How The Chaos Changes The AI Field
Alex Kantrowitz
Founder of Big Technology | Tech Newsletter and Podcast | CNBC Contributor
Sam Altman is back. Improbably and dramatically, the ex-OpenAI CEO returned as CEO late Tuesday. Altman’s counter-coup swept out three board members who sparked his firing and included an agreement to investigate what went down this past weekend. The new board — which now includes Larry Summers and Bret Taylor — will expand to up to nine members, likely including someone from Microsoft.
The AI field will not go back to ‘normal’ after this. OpenAI was already vulnerable coming into the chaos and will now have to work harder to maintain its lead while facing inspired competition. Though the narrative might frame this as a major win for OpenAI and Microsoft, the reality, as always, is a bit more nuanced. Here’s how the AI field changes after this:
Sigh Of Relief For Microsoft
18,000 Microsoft customers use its OpenAI service on Azure, and the disintegration of OpenAI would’ve left them scrambling. Microsoft had to return its OpenAI partnership to some order, and it could not have hired OpenAI’s entire staff and kept the OpenAI service running. So this is a positive resolution and a relief after a tense few days. There are still some governance issues to resolve. Microsoft doesn’t have an OpenAI board seat after all this. But this was the least bad option for Satya Nadella & co., who can now press forward with their industry-leading AI efforts, even if having Altman in-house would’ve paid dividends over time.
Golden Opportunity For OpenAI Competitors
Companies building on OpenAI technology freaked out this past week. They trusted in a company that almost evaporated in a weekend. So today, those building on OpenAI are putting contingency plans in place should the situation repeat. The era of model agnosticism is really here. Soon, any serious AI company will be able to swap OpenAI out for Anthropic or any other competitor. Startup founders using OpenAI have already told me they’ve started work on those plans this week. OpenAI competitors are already trying to exploit the situation. “Utterly insane weekend. So sad. Wishing everyone involved the very best,” Inflection CEO Mustafa Suleyman wrote this week. In the next breath, he said: “Come run with us!”
AI Talent Wars Heat Up Big Time
OpenAI sold the world’s top AI researchers on a vision and a safety valve: Join us, help us get closer to human-level artificial intelligence, and if things get unsafe, the board will step in. It was a win-win proposition that was ultimately a sham. The OpenAI board was poorly structured, almost blew up the company, and the new structure will be less safety-focused. This will open avenues for competitors to recruit researchers who otherwise might’ve gone to OpenAI. Meta chief AI scientist Yann LeCun is already endorsing the case that his team’s open-source focus will make it an unlikely winner. He might be right.
OpenAI’s Lobbying Efforts Hampered
Altman has pushed an AI ‘safety’ agenda in Washington and globally, becoming a lobbying force. OpenAI’s corporate structure lent legitimacy to his efforts. The implicit message: We’re the AI safety company, not the for-profit, please listen to us and consider the following rules. With the non-profit board’s decision so quickly reversed after pressure from investors (and well-compensated employees), that myth will take a hit. OpenAI will now become one of the pack, without its special sheen, which will change its ability to influence policy.
AI Safety’s Muddy Future
OpenAI’s board was supposed to save us from an AI apocalypse. Then, it couldn’t think three steps ahead in a boardroom coup. Much of the blame rests with the specific individuals. But more broadly, it’s hard to imagine anyone will have confidence in our ability to stop harmful AI should we develop it. (And what if the board’s concerns in this area were legitimate?) The future of the AI safety field is in flux.
Chaos Is A Ladder
OpenAI’s chaos may be its own ladder. It moves forward with a board more sympathetic toward accelerating AI development. It will work more closely with Microsoft under the new structure, with fewer speedbumps along the way. And it may have some incredible products en route. But the chaos will also be a ladder for the competitors OpenAI once had on their heels. And some competitor — whether it’s Anthropic, Inflection, Google, or others — will inevitably exploit the moment and rise.
Quote Of The Week
You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king.
Y Combinator co-founder Paul Graham in 2008
Some Good Stories On The OpenAI Debacle
Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding | New York Times
Altman Alternative Facts | Puck
Thread on Microsoft’s complicated position | Matthew Prince
Listen To The Latest On Big Technology Podcast
Box CEO Aaron Levie will be on later today to discuss the OpenAI resolution. You can subscribe and listen here:
Spotify: spoti.fi/32aZGZx
Apple: apple.co/3AebxCK
Your podcast app of choice: https://pod.link/1522960417/
Thanks again for reading. Please share Big Technology if you like it!
And hit that Like Button if you’re ready for a break from all this drama.
Questions? Email me by responding to this email, or by writing [email protected]
News tips? Find me on Signal at 516-695-8680