OpenAI: A Four-Day Sprint Retrospective

"Meet the new boss. Same as the old boss." - The Who (1971)

During The Who's set at Woodstock, activist Abbie Hoffman grabbed the mic to incite the crowd with his message of revolution. Guitarist Pete Townshend chased him off the stage.

A few years later, in an interview, Townshend said, "I wrote 'Won't Get Fooled Again' as a reaction to all that – 'Leave me out of it: I don't think your lot would be any better than the other lot!' All those hippies wandering about thinking the world was going to be different from that day. As a cynical English arsehole, I walked through it all and felt like spitting on the lot of them, and shaking them and trying to make them realise that nothing had changed and nothing was going to change."

Pete Townshend after chasing Abbie Hoffman off the stage at Woodstock

Not exactly the flower-power sentiment you'd expect from one of the artists performing at Woodstock, but did he ever nail the drama that would unfold at OpenAI some 54 years later. What happened in the last week? Has anything changed? Is anything going to change?

In the case of OpenAI, the "new boss" turns out to be the "old boss." Sam Altman and Greg Brockman have triumphantly reclaimed their place atop the AI rocket ship. And the impetus for this four-day Silicon-Valley-meets-Succession dramedy seems a lot like the idealist Abbie Hoffman trying to grab the mic, only to get spit on by an angry Townshend.

I got sucked into the drama, and I don't want to get fooled again. The stakes are too high. So let's break down what happened, what we know, and what we don't know. If you've followed at the headline level, it's a pretty simple story:

Crazy, activist board members make a historically bad decision to oust the rock-star CEO, who returns on the shoulders of 750 employees doing their best impersonation of the Notre Dame football players when coach Dan Devine refused to let Rudy suit up (at least as it was portrayed in the movie).

Joseph Campbell would be proud. Young Luke Skywalker Altman has been thrust into an unknown world, nearly pushed into the arms of big tech company Microsoft, but he has returned to the known world with the magic elixir, having defeated the dark side board. He is now well poised to usher in this era of abundance. "We are so back," Brockman posts. Roll credits and start working on the sequel.

If only things were that simple.

Here's my take on the situation, starting with…

Key Unanswered Questions

What did Altman do to precipitate the firing?

The Board's original blog post stated that he was "not consistently candid in his communications." There should be tension between a Board and a CEO, and in some ways, this Board was set up to ensure a high degree of tension. Hell, Altman just did a several-month victory lap where he talked about their "unique governance structure" as one of the ways that they would ensure safe release of AI and eventually AGI. But tension is no excuse for a lack of candor. We still don't know what that means, and apparently the Board wouldn't even offer interim CEO Emmett Shear specifics.

Some speculate that this had to do with his approach to Dev Day or some of the ways that he's been looking to invest in a chip maker. It's also possible that he just pushed forward with releases with the full knowledge that the Board disagreed, leaning into that tension and recognizing his role was to commercialize the product and raise money for OpenAI's insatiable compute needs.

Other reports suggest Altman attempted to remove Board members (e.g., Helen Toner, whose research painted OpenAI's approach in a negative light). Is that acceptable political maneuvering or a fireable offense in light of OpenAI's unique charter? Hard to say.

With the charter of the Board not to maximize shareholder value but to protect humanity, it's understandable why a lack of candor could trigger a nuclear reaction, particularly if this has been a persistent issue, and the perceived stakes are the future of humanity. Without this puzzle piece, we need to be cautious when we draw conclusions and we need to not accept the simple story.

Was there some internal discovery (like huge advances in Q-learning) that suggests OpenAI is on the brink of a major breakthrough? Do they see a reasonable path to AGI in the next two years?

This has warranted much speculation and has sent people scouring recent Altman speeches in search of clues. Real or perceived, it's a big deal. Check out this section of OpenAI's charter:

"We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be 'a better-than-even chance of success in the next two years.'"

It's become fairly common to characterize the safety community as almost religious in their fervor, and, as such, dismiss them. That's dangerous. We don't know what we don't know on this front, and we'd be wise to assign some level of probability to reaching AGI in the nearer term. For me, I assign a low probability, but it's much higher than it was twelve months ago, and I don't have the benefit of knowing what Ilya Sutskever knows.

What We Think We Know (how I'd interpret the events vs. the simple story)

Simple Story: The Board made a terrible decision to fire Altman.

My Take: Without some answers to the questions above, we don't know if the Board made a terrible decision. What we can say without question is that the Board could not have been more inept in its execution of that decision. They botched the timing, the communication, and they clearly didn't read the room. If only they had access to some model that would help them learn from past firings, teach them about game theory, and maybe even simulate the various ways this could play out. Maybe they could have even asked such a model for a thoughtful plan.

Simple Story: Sam Altman is the vanquished and now returning hero.

My Take: Again, it's hard to say with incomplete information. There are winners and losers, and Sam Altman comes down on the winning side, but it's hard to know whether the winners are heroes or antiheroes. Altman has proven a very effective operator. He's great at commercializing a product and great at raising money. He's also proven very effective at winning political battles. OpenAI is a story of schisms (Elon Musk, an initial investor, walked away; Dario Amodei left to found Anthropic), and so far, Sam has emerged victorious every time. In time, we'll better understand his role and whether we should paint him as conquering hero, masterful politician, or both.

It's been reported that the new Board will investigate his behavior and, at least for now, he's lost his Board seat. Much to learn.

Simple Story: The employees rallied around Altman because he is an inspiring leader. They would follow him anywhere.

My Take: The employees rallied around Altman for a lot of reasons. Some of them would follow him anywhere. Others likely saw the implosion of OpenAI as the end of their dream. Even if they were lukewarm on Sam, it was fast becoming clear the company would not survive his exit. They also had financial incentives to bring the company back together and get through the next valuation.

I suspect some still have grave concerns about the pace of commercialization, but recognize that if OpenAI implodes, the race doesn't stop. And while they might vehemently disagree with how Sam was executing the strategy, they trust "good people" at OpenAI to act in humanity's best interest more so than competitors that they've no doubt villainized during their race for AI dominance. It's what companies and people do. Finally, some likely signed the petition because of pressure (overt or covert). "If you're not for us, you're against us." Make no mistake, the outpouring of support for Altman was impressive. I'd assume there were a range of motives, and this outpouring doesn't mean 95% of the company agrees with the speed vs. safety approach.

Simple Story: The board should have been more forthcoming/transparent relative to what Altman did.

My Take: Without knowing what Altman did, it's impossible to say whether the public should know more. Did it have to do with something that could be deemed a trade secret? Would disclosing specifics undermine OpenAI's position in the market? Or would it just expose the pettiness of the players and chip away at OpenAI's credibility? The Board's charter is not to satisfy the press or the public's need for answers. They are there to protect the company. They might have "won" their battle had they exposed Altman in some way, but in winning that battle, they could have undermined the company's position in the marketplace.

Simple Story: Ilya Sutskever waffled. He saw the error in his ways and came around to Team Sam.

My Take: Ilya Sutskever doesn't appear to get politics and power. He naively thought that the Board could eject Altman and the company would persist under new leadership. Once it became clear that he'd be cleaning out his desk and working for Google or Meta or Microsoft, he realized that his best course of action would be to support Altman's return. I suspect he still has the same safety concerns he had last Friday. He's just more pragmatic about how he can best influence the company's direction.

Simple Story: Nadella nearly pulled off the acqui-hire of all acqui-hires. He could have essentially bought OpenAI for nothing.

My Take: Nadella likes the arms-length relationship with OpenAI. It gives him a buffer when it comes to the current and future safety questions. His announcement that he would hire Altman and Brockman was masterful. It boxed out competitors, sent a clear message to the OpenAI employee base, and bought time to maneuver. But I'd suggest it was a move designed to end with OpenAI persisting and Altman and Brockman back in the fold. This ended exactly where Nadella wanted it to end.

My other take: if Altman and Brockman had joined Microsoft, they would not have stayed for the long haul. These are start-up guys who would not like working for Big Tech. Satya knew this. Most of the actors in this drama played checkers while Sam Altman played chess. Satya was playing three-dimensional chess. Oh, and the Board was playing Tic-Tac-Toe. Badly.

Microsoft didn't emerge with a Board seat. Yet. I suspect that Nadella didn't want that to become a talking point, but I'd be surprised if that doesn't change as new members are added.

Simple Story: The zealots who fear extinction are out of touch with reality.

My Take: Beware of the techno-optimists and beware of the safety zealots. In the end, there are not two sides to the question of safety and existential risk. There's a continuum, and we'd be wise not to keep dismissing "the other side" as its most polar extreme.

For this, and for so many things, we need to find Aristotle's golden mean. We don't know what we don't know, and just because past industrial revolutions have created jobs and improved most measures of quality of life, there's no guarantee this one will. Black swans exist. Externalities happen.

One other question: what percent chance of a plane crash dissuades you from getting on a plane? If the safety zealots are largely wrong in their catastrophizing, but there's a 1% chance that they are right, is that enough to apply brakes and guardrails? I don't step onto a plane with those odds.

Simple Story: The mixing of a for-profit company inside of a non-profit was destined to fail.

My Take: The mixing of a for-profit company inside of a non-profit was worth a shot, even if it was likely to fail. This Turducken of a corporate structure forced tough conversations and more intentional decisions than are likely happening at the other big tech companies. It's disappointing that it went down in flames, and it will take some time to sort out whether the cause was the structure or pettiness and egos of the players. I suspect it was both. I also suspect this will become a cautionary tale for those looking to supplant the more traditional, shareholder-driven role of a company and its Board. That is concerning.

Simple Story: Nothing's changed. OpenAI persists with most of the same players. The beat goes on.

My Take: This will go down as one of the most consequential inflection points in the AI revolution. The techno-optimists have emerged stronger. Microsoft has emerged stronger. The race will accelerate. Altman senses his strength and mandate. Ilya and the safety wing of OpenAI got a cup of shut up. Meta disbanded its Responsible AI team while this drama was unfolding.

So Where Does This Leave Us?

If "Won't Get Fooled Again" serves as the opening theme song for this future miniseries (Four Days in the Valley? Ctrl-Altman-Delete?), I'd propose Springsteen's "Thunder Road" play as the credits roll at the end of the series.

"Well now, I'm no hero, that's understood

All the redemption I can offer, girl, is beneath this dirty hood

With a chance to make it good somehow

Hey what else can we do now?"

The only thing to do now is press forward. Better, faster, cheaper. Optimize the models. "Heaven's waiting on down the track."

The pace of change and the pace of releases will accelerate. If Altman fired the starting gun almost a year ago with the release of ChatGPT (built on GPT-3.5), we've just heard the next gun go off. Let's hope this isn't the bell lap and that we have ample time to sort out some of the central questions raised in this drama.

As referenced above, Microsoft, the techno-optimists, and Altman are the winners of the week. But the real winner: Moloch. Moloch, the ancient god of child sacrifice from the Hebrew Bible, is Allen Ginsberg's personification of our never-ending quest for more. Moloch sets up multi-polar traps and convinces each side to go faster, even if they fear the pace is not healthy or sustainable. Moloch discourages cooperation and collaboration and governance. Moloch pushes everyone to dismiss safety and alignment as nice-to-haves in a world where others might get there first.

In the end, our real challenge isn't about technology. Our real challenge is to not destroy ourselves while a victorious, laughing Moloch pulls puppet strings. That destruction might not be some machine takeover. It could be exponentially scaling our attention economy. It could be propagating misinformation during our next election.

We've spent the last year learning about vectors and large language models and the dangers of hallucination, but this drama highlights the human nature of the road ahead. Humans have egos and are fallible and subject to so many biases. Our incentive structures will continue to push the commercialization of these products. They hold almost limitless potential to solve some of our most vexing problems.

That said, the last week tells us how far we have to go to solve the Alignment Problem. We have to find ways to inject a common set of human values into these tools. We have to find ways to do this in a collaborative, cross-company, cross-border way. That sounded daunting even before six people with a shared charter and personal relationships proved unable to manage it within a single company.

We could easily be fooled by accepting the simple story of OpenAI's near-failed coup. I don't think anything is that simple. I'll leave the last words to Townshend and violently agree with his post-revolution hope that we don't get fooled again:

"I'll tip my hat to the new Constitution

Take a bow for the new revolution

Smile and grin at the change all around

Pick up my guitar and play

Just like yesterday

Then I'll get on my knees and pray

We don't get fooled again."

Jessi Guenther

Leader | Wife | Boy Mom | Optimist

1y

That you drew an analogy to a Turducken might be my favorite among the many masterful analogies in this piece.

Terrance M.

Founder @ SequenceStack, helping companies execute.

1y

As usual, excellent post. The Moloch reference is strong. Thank you, Andy.

Joseph McGowan

Strategic Finance Executive - Tax & Treasury | Inspiring Leader | Mergers & Acquisitions | Finance Transformation | Risk Management

1y

Great quote

Kevin Holland

Girl Dad | Husband | GTM Leader

1y

This is the most coherent and cogent take I have read by a country mile. Beware of the overzealous and the ill-informed.

Ranjit Doshi

Country Manager @ RE India, helping Start-up's and SMEs scale for max ROI.

1y

I wonder what ChatGPT would write about this saga of Altman's resurrection at OpenAI. As of now, ChatGPT is only updated through January 2023.

More articles by Andrew Hilger

  • We Can Be Heroes...
    What George from Modesto and Joe from West Chester County Can Teach Us About AI, the Future of Work, and Making Sense…
  • A Hundred-Year-Old Man Shares the Key to the Universe
    You ask me what I've learned. So much really.
  • Welcome to The United States of Ticketmaster (note: prices do not include fees)
    It's a New Day. Whether you like or hate the new administration's policies, we're witnessing a populist revolution.
  • Favorite Books of 2024
    Every so often, something you read fires off a constellation of neurons that results in an intense moment of clarity…
  • A Eulogy I'd Like to Deliver
    Dearly beloved, We are gathered here today To pay our respects to a leadership style Borne out of a desire for safety…
  • The Intelligence Race
    From the Cockroach Perspective A few weeks ago, Dario Amodei (Anthropic founder) told Lex Fridman human-level AI could…
  • Keep Smiling...
    ST LOUIS, December 1991 – A dozen co-workers and I rolled into a hotel ballroom gala with the false bravado of a crew…
  • SERENITY PRAYER REMIX
    (2024 Campaign Season Edition) God grant me serenity To accept that I can't change some jackass's…
  • The Revolution Will Not Be Brought To You by Live Nation and White Claw
    Reflections on Oceans Calling, Industry Maturation, Brain Hemispheres, Music as Revolution, and Our Declining…
  • The Absolute Lunacy of Sports Fandom
    BOSTON, Mass., September 2024 – Sophia sent some pre-game pictures from outside Fenway Park.
