Game Theory for a different perspective on the OpenAI/Altman drama & Spatial Computing News!

Cross-posted from our Substack!

Welcome to the Digital Explorers Diary #60!

A curated collection of thought-provoking topics about interactive technologies, AI, web3, sense-making, entrepreneurship, and psychology.

Executive summary:

  • Using Game Theory for a different perspective on the OpenAI drama,
  • Apple Vision Pro updates,
  • Microsoft Teams gets Virtual Reality meetings,
  • Meta’s Butterscotch prototype,
  • And a fascinating study about social VR!


Using Game Theory for a different perspective on the OpenAI/Altman drama

As of today, November 20th, 2023, you have probably heard about the drama unfolding at OpenAI. I won’t dive deep into the details, but in short, the board fired the CEO, Sam Altman, only to face a massive backlash from almost everybody. Then, Altman was rumored to return, with the board being replaced. But finally, he’s joining Microsoft with his former partner Greg Brockman.

I see a lot of surprise and astonishment at what happened, with claims like, “How can a board with such high-profile people make such stupid decisions?”.

In this article, I’ll use game theory concepts to analyze the dynamics at play and draw lessons for the future of corporate governance.

The game

First, we must remember that OpenAI’s structure is very unusual. The board of directors’ role is not to maximize profitability; it’s to ensure that OpenAI stays true to its original mission: “ensure that artificial general intelligence benefits all humanity.” Sam Altman, the most famous and prominent voice in AI, has advocated for this mission by calling on governments and companies to regulate AI.

But at the same time, since its 2019 restructuring to accept corporate capital, and especially since GPT-3, OpenAI has relentlessly released new features under its risky strategy of “release first, correct later.”

Combine that with a highly competitive market, facing companies such as Meta, X, or Anthropic, and we can start to see the complexity of the game.

Yes, everything is a game, and each player has incentives to follow or bend the rules. In such a competitive environment, one might think everyone acts with deep reflection and rationality, but game theory does account for seemingly irrational decisions. Let’s dive in.

The board’s decision - was it irrational?

With hindsight, it’s trivial to dismiss the board’s decision as stupid, but to them, it wasn’t - they wouldn’t have made it otherwise, right? Here are a few tools to help understand:

  • Bounded Rationality. The board might have had limited information, limited time to make the decision, and limited cognitive processing ability. They might have acted based on the information and resources available to them at the time.
  • Incomplete Information. The board might have decided without completely understanding or anticipating the responses of Sam Altman, the employees, the investors, or the public (see the sketch after this list).
  • Risk Aversion versus Risk Seeking. Their decision might reflect a risk-seeking approach, prioritizing a specific objective (AGI safety) over the stability of maintaining the status quo.
  • Non-Expected Utility Theory. The board's decision may have been influenced by factors other than utility maximization—for example, internal politics, personal biases, or misaligned visions of the company's future and mission.
  • Playing the long game. This event might make more sense when viewed as part of a broader strategy or a series of moves over time.
  • Signaling and Reputation. Dismissing Altman might be meant as a strong signal that his alleged actions were unacceptable and serious enough to warrant removal.
  • Emotional and Psychological Factors. The board's decision could have been made under pressure, fear of missing out on new opportunities, or the desire to assert control, pushed by their mission.
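To make the “Bounded Rationality” and “Incomplete Information” points concrete, here is a minimal Python sketch with entirely made-up probabilities and utilities (nothing here reflects the board’s actual reasoning): the same choice flips between “keep” and “fire” purely as a function of how likely the decision-maker believes misalignment to be.

```python
# Hypothetical illustration only: invented utilities on a -10..+10 scale,
# where -10 is "mission fails" and +10 is "mission succeeds".

def expected_utility(p_misaligned, utility_if_aligned, utility_if_misaligned):
    """Expected utility of an action given a subjective belief about misalignment."""
    return (p_misaligned * utility_if_misaligned
            + (1 - p_misaligned) * utility_if_aligned)

# Invented payoffs: keeping a misaligned CEO is the worst case for the mission;
# firing always costs stability but limits the potential damage.
KEEP = {"aligned": 8, "misaligned": -10}
FIRE = {"aligned": -3, "misaligned": 4}

for p in (0.2, 0.6):  # two different subjective beliefs
    eu_keep = expected_utility(p, KEEP["aligned"], KEEP["misaligned"])
    eu_fire = expected_utility(p, FIRE["aligned"], FIRE["misaligned"])
    best = "fire" if eu_fire > eu_keep else "keep"
    print(f"belief p(misaligned)={p}: EU(keep)={eu_keep:.1f}, EU(fire)={eu_fire:.1f} -> {best}")
```

With a low belief in misalignment the numbers say “keep”; with a higher belief they say “fire”. Observers and the board can both be “rational” while reaching opposite conclusions, simply because they hold different information and beliefs.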

The big picture

Zooming out, the backlash faced by the board is easily explained by a game theory concept called multi-polar trap.

This situation arises in most competitive environments: each player is incentivized to make decisions that benefit them in the short term but have negative consequences for everybody overall.

In the race towards general artificial intelligence, all actors gain in the short term by releasing new models and features as fast as possible, even if this is detrimental to ensuring safety and alignment for the benefit of all humanity.

Assuming that the OpenAI board was true to its mission, firing Sam Altman must mean that he was no longer aligned with creating an artificial general intelligence that benefits all humanity. But for everyone else, employees and investors alike, the short-term goal is to ensure that OpenAI stays the leader in the face of competition from Meta, Anthropic, or X.ai. Therefore, lashing out at the board is their best move to secure their short-term goals, to the detriment of the long-term big picture.
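Here is a minimal sketch of the multipolar trap as a two-lab game, with made-up payoffs (higher is better for that lab). Racing dominates cooperating on safety for each lab individually, yet everyone racing leaves both labs worse off than if everyone had cooperated:

```python
# Toy "race vs. cooperate on safety" game with illustrative payoffs.
# payoffs[(my_action, rival_action)] = (my_payoff, rival_payoff)

ACTIONS = ("safe", "race")

payoffs = {
    ("safe", "safe"): (3, 3),   # everyone slows down: best collective outcome
    ("safe", "race"): (0, 4),   # I slow down, the rival ships first: worst for me
    ("race", "safe"): (4, 0),   # I ship first: best short-term outcome for me
    ("race", "race"): (1, 1),   # everyone cuts corners: bad for everybody
}

def best_response(rival_action):
    """The action that maximizes my payoff, given what the rival does."""
    return max(ACTIONS, key=lambda mine: payoffs[(mine, rival_action)][0])

for rival in ACTIONS:
    print(f"If the rival plays '{rival}', my best response is '{best_response(rival)}'")

# Racing is the best response either way, so (race, race) is the equilibrium,
# even though (safe, safe) would give everyone a higher payoff: the multipolar trap.
```

In this framing, the backlash against the board is simply each player choosing “race” from inside the trap.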

A success or a failure of safe corporate governance?

Using this lens to examine the events of the past few days, we can see that if Altman had returned as CEO and the board had been fired, it would have been an actual failure of the structure put in place to ensure the company’s mission. But instead, the board stayed firm on its decision, and Altman is now working for OpenAI’s biggest investor.

The big question is: contrary to popular opinion, does the conclusion of this drama show a successful example of that company structure? Did the board succeed in preserving the company’s mission?

And everyone criticizing the board for being stupid might have been, in turn, irrational, mistaking their own short-term goals for the long-term mission of a safe, aligned artificial general intelligence.

Of course, that is highly speculative, and many assumptions about unknown information are made, but these are valuable tools for gaining a different perspective.


Speaking of different perspectives, reach out here or on LinkedIn to brainstorm with me!

(Image: Midjourney)



This Week’s News:

  • A couple of pieces of news about the Apple Vision Pro: At $3,500, it’s pretty expensive, but it gets worse for developers who want to use the Unity game engine: a MacBook costs $2,000+, the Apple developer account is $100/year, and the Unity license starts from $2,400… That’s at least $8,000 upfront (see the quick sum below), a massive hurdle for small businesses and creators! Relatedly, the cheaper version of the Vision Pro, called Project Alaska, is set to be released at the end of 2025 or the beginning of 2026. Finally, Apple registered a patent for a “privacy cloak”: by getting close to someone in a Virtual Environment, the conversation switches to a private mode that only that close group can hear. I’m unsure why this would need a patent, but it’s a great idea!
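For the curious, the $8,000 figure is simply the sum of the prices quoted above, a rough back-of-the-envelope rather than an official quote:

```python
# Back-of-the-envelope sum of the upfront costs quoted above, in USD.
costs = {
    "Apple Vision Pro": 3500,
    "MacBook (entry-level)": 2000,
    "Apple developer account (1 year)": 100,
    "Unity license (entry tier)": 2400,
}
total = sum(costs.values())
print(f"Upfront cost to start developing: at least ${total:,}")  # at least $8,000
```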

(Image: XR News)


  • In 2022, Microsoft stopped supporting AltspaceVR, but it is now resurrecting it inside Microsoft Teams. Starting in January, coworkers will be able to meet in a Virtual Reality environment directly from Microsoft Teams. Microsoft Mesh, the backbone behind this environment, didn’t seem to have the expected success; by integrating it into Teams, Microsoft hopes to bring virtual meetings to its 300+ million users. Will this corporate version of AltspaceVR succeed? Oh, and you won’t have legs. But does it matter?

(Image: The Verge)


  • Last week, I had the fantastic opportunity to try Butterscotch, a VR headset prototype by Meta's Reality Labs, and its most impressive feature: varifocal displays. What does that mean? Currently, in most VR headsets, the image you see is focused at a fixed distance, usually a few meters. Your eyes can look at objects at different distances but can’t properly focus on them, causing eye strain and a loss of object sharpness. This prototype solves that issue by tracking your eyes, detecting which object you are looking at, and physically moving the displays to adjust the focus distance (a simplified sketch of that logic follows below). Does it work? In short, yes! The sharpness of the objects is impressive at any distance, and the focus adjusts dynamically and very quickly; it’s almost unnoticeable and is helped by the high resolution of the displays at 56 pixels per degree, close to the 60 PPD of the human eye. Will it hit the market? Maybe, although producing such mechanics at scale might be challenging; a combined hardware-software solution is more likely viable for a mass-market headset. In summary, it’s a very impressive prototype and a preview of how sharp the future of Virtual Reality will be!
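For readers curious about the mechanism, here is a hypothetical, highly simplified sketch of the varifocal logic: estimate where the two eyes converge from eye-tracking angles, then drive the display’s focal distance toward that target. Every number and function name is illustrative; this is not Meta’s actual system.

```python
import math

IPD_M = 0.063  # assumed average interpupillary distance, in meters

def vergence_distance(left_gaze_deg, right_gaze_deg):
    """Rough focus-distance estimate from each eye's horizontal gaze angle
    (measured from straight ahead, positive toward the nose)."""
    vergence_rad = math.radians(left_gaze_deg + right_gaze_deg)
    if vergence_rad <= 0:
        return float("inf")  # eyes parallel: looking at infinity
    # Small-angle approximation: the two gaze rays cross roughly
    # IPD / vergence_angle in front of the face.
    return IPD_M / vergence_rad

def update_display_focus(current_m, target_m, max_step_m=0.05):
    """Move the display's focal distance toward the target, limited by how far
    the mechanics can physically travel in one frame."""
    step = max(-max_step_m, min(max_step_m, target_m - current_m))
    return current_m + step

# Example: the wearer glances from a far object to progressively nearer ones.
focus = 2.0  # current focal distance in meters
for left, right in [(0.2, 0.2), (1.8, 1.8), (3.6, 3.6)]:
    target = vergence_distance(left, right)
    focus = update_display_focus(focus, target)
    print(f"gaze target ~ {target:.2f} m, display focus now {focus:.2f} m")
```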

(Image: Gimbal Cube)



From the Podcast

Every week, I’m teaming up with Guillaume Brincin and Sébastien Spas on the Lost In Immersion podcast. This week:

(Image: Humane)


  • A fascinating study came out a couple of weeks ago, analyzing the behavior of users of social-oriented Virtual Worlds. Here are the most exciting results: Most users are male (~80%), but most avatars are feminine (~75%). An intriguing 41% claim to have fallen in love on these platforms. Most users are willing to spend money on VR content, but only a minority monetize their activities. Related, and a great insight: ~35% of users have purchased a physical product after experiencing a tour or demonstration in VR. Finally, most users have reported phantom sense during sessions, pointing out how strongly immersive Virtual Reality can be. This is encouraging, but to me, there is still a long way to go to get more people from diverse age ranges and demographics involved.

(Image: NemChan)


Watch the podcast below, tune in at 10 am UTC on Tuesdays on Twitch, and listen to the podcasts on YouTube, Spotify, Apple Podcasts, Google Podcasts, or Amazon!


Did this newsletter spark some ideas? There are many ways we can work together!
