Game Theory for a different perspective on the OpenAI/Altman drama & Spatial Computing News!
Gimbal Cube
Innovation & Interactivity for Brand Experiences | Experiential Marketing | Retail | Digital & Physical Activations
Welcome to the Digital Explorers Diary #60!
A curated collection of thought-provoking topics about interactive technologies, AI, web3, sense-making, entrepreneurship, and psychology.
Executive summary:
Using Game Theory for a different perspective on the OpenAI/Altman drama
As of today, November 20th, 2023, you have probably heard about the drama unfolding at OpenAI. I won’t dive deep into the details, but in short, the board fired the CEO, Sam Altman, only to face a massive backlash from almost everybody. Then, Altman was rumored to return, with the board to be replaced. But in the end, he is joining Microsoft with his fellow co-founder Greg Brockman.
I see a lot of surprise and astonishment at what happened, with claims like, “How can a board with such high-profile people make such stupid decisions?”
In this article, I’ll use game theory concepts to analyze the dynamics at play and draw lessons for the future of corporate governance.
The game
First, we must remember that OpenAI’s structure is very unusual. The board of directors’ role is not to maximize profitability; it’s to ensure that OpenAI stays true to its original mission: “ensure that artificial general intelligence benefits all humanity.” Sam Altman, the most famous and prominent voice in AI, has advocated for this mission by calling on governments and companies to regulate AI.
But at the same time, since its 2019 restructuring to accept corporate capital, and especially since GPT-3, OpenAI has relentlessly released new features under a risky strategy of “release first, correct later.”
Combine that with a highly competitive market featuring companies such as Meta, X, and Anthropic, and we can start to see the game’s complexity.
Yes, everything is a game, and each player has incentives to follow or bend the rules. In such a competitive environment, one might think everyone acts with deep reflection and rationality, but game theory does account for seemingly irrational decisions. Let’s dive in.
The board’s decision - was it irrational?
With hindsight bias, it’s trivial to dismiss the board’s decision as stupid, but to them, it wasn’t - they wouldn’t have done it otherwise, right? Here are a few tools to help understand:
The big picture
Zooming out, the backlash faced by the board is easily explained by a game theory concept called multi-polar trap.
This situation arises in most competitive environments: each player is incentivized to make decisions that benefit them in the short term but have negative consequences for everyone overall.
In the race towards artificial general intelligence, all actors gain in the short term by releasing new models and features as fast as possible, even if this is detrimental to ensuring safety and alignment for the benefit of all humanity. A concrete sketch of this payoff structure follows below.
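To make that incentive structure concrete, here is a minimal sketch in Python of a multi-polar trap as a two-player game with prisoner’s-dilemma-style payoffs. The strategy names and numbers are purely illustrative assumptions, not data about the actual labs; the point is only that "rush" is each lab's individually best reply no matter what the other does, so the game settles into the collectively worse outcome.

```python
# A minimal sketch of a multi-polar trap as a two-player game.
# Payoffs are illustrative only (hypothetical numbers, not real data).

# Payoffs (lab_a, lab_b) for each pair of strategies.
# "rush"    = release fast, skip safety work
# "careful" = slow down, invest in safety/alignment
PAYOFFS = {
    ("careful", "careful"): (3, 3),  # best collective outcome
    ("careful", "rush"):    (0, 4),  # the careful lab falls behind
    ("rush",    "careful"): (4, 0),
    ("rush",    "rush"):    (1, 1),  # everyone races, safety loses
}

def best_response(options, their_choice):
    """Pick the strategy that maximizes lab A's payoff, given lab B's choice."""
    return max(options, key=lambda mine: PAYOFFS[(mine, their_choice)][0])

options = ("careful", "rush")

# Whatever the other lab does, "rush" is the individually rational reply...
for their_choice in options:
    mine = best_response(options, their_choice)
    print(f"If the other lab plays {their_choice!r}, my best response is {mine!r}")

# ...so the game lands on (rush, rush) with payoff (1, 1),
# even though (careful, careful) with (3, 3) is better for everyone.
```

Running this prints "rush" as the best response in both cases, which is exactly the trap: individually rational moves, collectively worse outcome.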
Assuming that the OpenAI board was true to its mission, firing Sam Altman must mean that he was no longer aligned with creating an artificial general intelligence that benefits all humanity. But for everyone else, employees and investors alike, the short-term goal is to ensure that OpenAI stays the leader in the face of competition from Meta, Anthropic, or X.ai. Therefore, pushing back against the board is their best move to secure those short-term goals, to the detriment of the long-term big picture.
A success or a failure of safe corporate governance?
Using this lens to examine the events of the past days, we can see that if Altman had come back as CEO and the board had been fired, it would have been a genuine failure of the structure put in place to safeguard the company’s mission. Instead, the board stood firm on its decision, and Altman is now working for OpenAI’s biggest investor.
The big question is: contrary to popular opinion, does the conclusion of this drama show that this company structure actually worked? Did the board succeed in preserving the company’s mission?
And everyone criticizing the board for being stupid might have been, in turn, irrational, mistaking their own short-term goals for the long-term mission of a safe, aligned artificial general intelligence.
Of course, this is highly speculative and relies on many assumptions about unknown information, but these are valuable tools for gaining a different perspective.
Speaking of different perspectives, reach out here or on LinkedIn to brainstorm with me!
This Week's News:
From the Podcast
Every week, I’m teaming up with Guillaume Brincin and Sébastien Spas on the Lost In Immersion podcast. This week:
Watch the podcast below, tune in live at 10 am UTC on Tuesdays on Twitch, or listen on YouTube, Spotify, Apple Podcasts, Google Podcasts, or Amazon!
Did this newsletter spark some ideas? There are many ways we can work together: