Is Sam Altman returning as CEO at OpenAI?
Michiel Frackers
Developing solutions that are good for the bottom-line, the community and the planet.
That went quickly. The Verge just reported "The OpenAI board is in discussions with Sam Altman to return to the company as its CEO, according to multiple people familiar with the matter. One of them said Altman, who was suddenly fired by the board on Friday with no notice, is “ambivalent” about coming back and would want significant governance changes."
Update: The Verge reports that a source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.
Play it again, Sam?
My guess is Altman will wait a bit for the board that fired him to quit. It's a lot of hassle to start a new company; just opening a new bank account takes ages these days. And there are some large checks coming in if Altman returns... Altman surely wants a structure in which he cannot be ousted so easily again by a new board.
Panic among the remaining board members
The panic among the remaining board members is evident: president Greg Brockman resigned on Friday over Altman's dismissal, three reputable OpenAI researchers departed yesterday, and renowned investors in the company such as Ron Conway and ex-Google CEO Eric Schmidt have publicly voiced their support for Altman.
Brockman's tweet, which he had obviously drafted with Altman because in it he speaks about both Altman and himself in the third person, must have been especially painful for all of OpenAI's employees, investors and directors. This places the expected new funding round in jeopardy, and with no new money and departing staff, there would be little left of a technology company to build on, even at OpenAI. The request for Altman to return, within 48 hours of his ousting, must be seen in this perspective.
Brief match report from OpenAI against Sam Altman
An almost inexhaustible stream of reports, theories and especially rumors has been unleashed since Friday about the resignation of Sam Altman as CEO of OpenAI, known as the creator of ChatGPT, the stunning service that introduced AI to a global audience. I will try to summarize events as briefly as possible and then share my initial thoughts.
Last week, ChatGPT had another top week. The DevDay for developers, with the new 'Do It Yourself ChatGPT' product, turned out to be a huge success worldwide. The story is that the new round of funding will take place at a valuation of around $90 billion, triple that of earlier this year, when Microsoft invested a quick $10 billion and, after an earlier 2019 investment of a billion, purchased a 49% stake in OpenAI at a valuation of $30 billion.
And then suddenly, on Friday afternoon California time, this message appeared on OpenAI's site, announcing Sam Altman's immediate dismissal without any pleasantries. It was soon leaked that Microsoft knew nothing about it and had been notified only minutes before the news of his ungraceful exit was announced.
Microsoft CEO Satya Nadella furious
Microsoft quickly came out with a bizarre statement of a few curt sentences expressing support for OpenAI; it could do nothing else after already investing a total of over $11 billion in the company whose applications also run entirely in Microsoft's cloud environment. The absurdity was that Microsoft did not mention Altman's name a single time; it's like writing an article about a family event that takes place in late December when children receive presents from an old man with a bunch of reindeer, but avoiding the name Santa Claus.
New interim CEO Mira Murati was mentioned by Microsoft only as "Mira". Because, oh well, the founder and CEO was fired so we no longer mention his name, but hey, we're so West Coast cool that we only use first names, you know, because that's how we roll, bro. This Mira did do something strange herself: she deleted her LinkedIn profile, to which I had linked back in July when I first wrote about her.
Before I lapse into a chronological summary of this absurd Friday as if it were a Tour de France stage, I refer followers of this power struggle to two excellent articles:
Organization structure of OpenAI absolutely unworkable
As far as I am concerned, it is too early to discuss whether Altman is coming back and who is right or wrong in the religious struggle within OpenAI, because the reporting still relies too much on rumors. The question that must be asked is: how can a company apparently worth close to $90 billion, developing such a fundamentally important product for our society worldwide, operate in such a ridiculously amateurish way? The answer lies in its organizational structure.
OpenAI has an unusual structure in which its commercial arm is owned and operated by a nonprofit charitable organization. Until Friday, that nonprofit was controlled by a board of directors that included CEO Sam Altman, President Greg Brockman, Chief Scientist Ilya Sutskever and three others who are not OpenAI employees: Adam D'Angelo, the CEO of Quora; Tasha McCauley, an adjunct senior management scientist at RAND Corporation; and Helen Toner, director of strategy and basic research grants at Georgetown's Center for Security and Emerging Technology. Currently, only Sutskever, D'Angelo, McCauley and Toner remain.
Like CEO Altman and president Brockman, Sutskever, D'Angelo, McCauley and Toner own no shares in OpenAI. Investors find that unpleasant, because it means the team members almost always earn less at OpenAI than at any other job where they would get shares, which makes them vulnerable to good offers elsewhere. But those investors, including such absolute legends as Vinod Khosla (Sun Microsystems, Juniper), Reid Hoffman (founder of LinkedIn) and Eric Schmidt (ex-CEO of Google), have as much say at OpenAI as Santa Claus' reindeer.
No doubt they only agreed to this lack of control because OpenAI was so clearly winning the battle in the AI market that they were willing to accept this deal.
Et tu, Ilya?
A complicating factor is the American form of governance, with a Board of Directors that combines executives who work full-time at the company with a number of external directors.
So at OpenAI on Friday morning there were six directors, three from OpenAI and three external. Since Altman was fired without Brockman's knowledge, it was immediately clear that Chief Scientist Ilya Sutskever had either abstained, as cowardly countries tend to do in the United Nations, or had voted for the dismissal of his own colleague and CEO Sam Altman. It's going to be a fun moment if and when Altman returns and they run into each other at the coffee machine. But who is Ilya Sutskever, anyway?
Ilya Sutskever is an AI fundamentalist and that's a good thing
The name Ilya Sutskever and his Russian-Israeli-Canadian background suggest a double life as a villain in an old James Bond movie, complete with a creepy cat on his lap. I love his old-school personal homepage. I don't know him personally, but what I read from and about Sutskever is many times more interesting than anything I've heard coming out of Sam Altman's mouth so far. For example, read this excellent recent piece by Nirit Weiss-Blatt, who spoke with Sutskever at an event this summer. A few quotes:
'When asked about specific professions – book writers/ doctors/ judges/ developers/ therapists – and whether they are extinct in one year, five years, a decade, or never, Ilya Sutskever answered (after the developers’ example):
“It will take, I think, quite some time for this job to really, like, disappear. But the other thing to note is that as the AI progresses, each one of these jobs will change. They'll be changing those jobs until the day will come when, indeed, they will all disappear. My guess would be that for jobs like this to actually vanish, to be fully automated, I think it's all going to be roughly at the same time technologically. And yeah, like, think about how monumental that is in terms of impact. Dramatic."
Weiss-Blatt concluded:
'He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.” The snapshots above cannot capture the lengthy discussion. The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.'
This idea of AI causing a total economic apocalypse, with the disappearance of all jobs based on information analysis and decision-making, is not new in itself; it has more often been proclaimed by science fiction writers, techno-utopians and people carrying a can of beer through the park in the early morning.
Only Ilya Sutskever is not a crackpot, alcoholic or villain from a James Bond movie; he is the Chief Scientist of OpenAI. And you don't get that job through a phone call from your dad to a friend. After his time at the University of Toronto and work at Google, he started at OpenAI back in 2016, and in everything, Sutskever seems thoughtful and responsible. Call him the anti-Zuckerberg.
The world has no use for a trillion-dollar OpenAI
Despite all possible efforts to limit OpenAI's profits and the resulting warped organizational structure, I do believe in the mission Sutskever sees for OpenAI. More than in the muddled vision of lobbyist Altman, who travels the world meeting politicians and then talks about accountability and regulation, but in actual fact does everything he can to dethrone Google.
But how will this benefit the world? What do we gain from yet another American company with enormous power and influence over the way we deal with knowledge, information and communication and which may eventually take over our jobs? Have we learned nothing from Facebook and Cambridge Analytica?
AI must love people like we love babies
According to Sutskever, AI must learn to love people. He uses the term "imprinting", borrowed from ethology, for the phase in which an AI system must learn to recognize and conform to certain values, goals or behaviors.
AI systems such as ChatGPT, according to Sutskever, must learn to behave in ways that are beneficial or non-harmful to humans, even as the system becomes more intelligent and autonomous. It is a strategy proposed to mitigate risks associated with advanced AI by establishing a positive, protective relationship with humans from the start.
Sutskever: "The bottom line is that eventually AI systems will become very, very, very capable and powerful. We will not be able to understand them. They will be much smarter than we are. By that time, it is absolutely critical that the imprinting be very strong, so that they feel toward us as we feel toward our babies."
Keep that in mind when Sutskever is portrayed in the media as the evil genius who secretly maneuvered that lovable treasure Sam Altman out of the company they cofounded.
And now it's time for the Formula One race in Las Vegas. An absurdist spectacle in the desert, with $15,000 tickets to the Paddock Club described in a way no AI system could have dreamed up: "Come and enjoy a recovery brunch, with aerial champagne pours and silent meditation." My favorite Formula One analyst is a mohawked Englishman who clearly puts too much sugar in his tea before recording his videos. See you next week!