The OpenAI Debacle: e/acc versus e/a
A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I selected the articles because they are of interest. The selections often include things I entirely disagree with. But they express common opinions, or they provoke me to think. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video below.
This Week’s Video and Podcast:
Thx to: @kwharrison13, @Om, @karaswisher, @eladgil, @vkhosla, @RonConway, @satyanadella, @elonmusk, @sama, @jason, @BasedBeffJezos, @jon_victor_, @steph_palazzolo, @anissagardizy8, @KateClarkTweets, @rex_woodbury, Liam @LevelVC, @jasonlk, @stevemollman, @jonstewart, @Kantrowitz, @emollick, @cdouvos, @kirstenkorosec, @davemcclure
Editorial
It’s a day late for That Was The Week. In mitigation, what a day it was: the firing of Sam Altman at OpenAI; the demotion, then resignation, of Greg Brockman; and the resignations of Jakub Pachocki, the company’s director of research, Aleksander Madry, head of a team evaluating potential risks from AI, and Szymon Sidor, a seven-year researcher at the startup.
The dust is beginning to settle, and my best interpretation of the events comes from these x posts by Kara Swisher and Elad Gil, focused on effective altruism (e/a) and the e/acc belief in unfettered AI.
The history of OpenAI reinforces this interpretation. In February this year, one of the founders, Elon Musk, tweeted:
“Not what I intended at all” sums it up, and the org chart in a recent x post from Jason Calacanis shows how the attempt to fit a for-profit company inside a not-for-profit shell led to power residing with the not-for-profit Board of Directors.
At the root of this bifurcation is a heated schism over vision in Silicon Valley between the e/acc supporters and the e/a supporters. e/acc stands for “effective accelerationism” and is focused on innovation without limits, or at least without regulatory constraint. e/a is the more familiar of the two: it stands for “effective altruism” and is focused on innovation for good rather than for its own sake or for profit.
The Center for Effective Altruism defines it this way:
Everyone wants to do good, but many ways of doing good are ineffective. The EA community is focused on finding ways of doing good that actually work.
This led to many ideas like the following:
You can read a longer version here:
e/acc - Effective Accelerationism - is identified with the somewhat hilariously named Beff Jezos (@BasedBeffJezos). His Substack is here:
His best-known supporter is Marc Andreessen, but supporters of the broader Silicon Valley techno-optimist manifesto are aligned with e/acc.
Sam Altman and Greg Brockman are heroes to e/acc and villains to e/a.
Here is a taste:
"Science, technology and intelligence still have very far to go, saying that we should seek to maintain humanity and civilization in our current state in a static equilibrium is a recipe for catastrophic failure and leaving behind huge potential benefits of dynamic adaptation
Effective accelerationism (e/acc) in a nutshell:
It combines a belief in technology with some fairly “out there” post-humanist views.
The e/a supporters weighed in on AI this week by signing a “Responsible AI” letter. And e/acc supporters also weighed in, describing the signatories (many VCs) as people to avoid taking money from.
Martin Casado, a good investor at Andreessen Horowitz, published an x post explaining the rift and siding with the e/acc worldview.
At OpenAI, the technical lead (Ilya Sutskever), hired by Elon Musk, fought what appears to be a rearguard action on behalf of the e/a worldview to oust Sam Altman and Greg Brockman. Because the independent board at the top of the OpenAI hierarchy had a non-profit mandate, it sided with the “slow down” and “take fewer risks” view. That resulted in the earthquake felt around the world yesterday.
The Board of Directors is now squarely in the firing line of all supporters of unrestricted innovation. You can assume I am more minded to support that group than the e/a group and read accordingly.
Ron Conway, a prolific and respected angel and seed investor, said it best:
The e/a lobby represents a pessimistic view of AI's likely, or at least potential, outcomes and is biased toward protecting us all from AI. They seem to buy the rhetoric around the “dangers” of AI rather than be excited by its potential. Insecurity dominates their mood, and fear of rapid innovation is at their core. This is all disguised in a worthy-sounding shell that masquerades as humanist. But its real goal is to slow or stop innovation. They also assume that the profit motive is incompatible with good outcomes. Human history seems to disagree. The drive for profit has fueled much innovation - alongside the passion for science and discovery.
The e/acc point of view is varied and isn’t an organized “group” at all. It is a loose conglomeration of science enthusiasts who believe restrictions on innovation can only lead to worse outcomes. It is comfortable with for-profit companies and has no issues with the recent commercialization at OpenAI.
There is no time for a deep dive here, but there are many investors in OpenAI, and it was about to raise new capital, apparently valuing the company at almost $90 billion. One of those investors is Vinod Khosla, who posted on x this morning, Pacific time.
Aside from the “self-goal” glitch (haha, Vinod, you should have used a cricket metaphor like a run-out), Vinod has it right. But a Board atop a non-profit, taking decisions that impact the for-profit part of OpenAI, is actively destroying value and slowing future value creation in service of a myth (the dangers of AI software). I would be shocked if there were no consequences for those Board members.
Kyle Harrison nails it in this week’s Essay of the Week (emphasis mine):
Also, there will be plenty of poking into the dynamics of the board. It’s a wild group of people that have no business being in control of the most important company in AI. Investors like Reid Hoffman, who could have helped balance this situation, stepped off the board of OpenAI to avoid conflicts of interest. Turns out when one company represents critical infrastructure for a huge swath of AI companies, it’s hard to also be investing in those companies.
But that’s another argument for why the fate of OpenAI shouldn’t have been left to a bunch of randos, all of whom Ilya was more than capable of pushing around, right?
Randos. Love that.
Aside from that, there is a lot on venture capital this week. I loved this quote from Chris Douvos:
Chris Douvos, an LP in early-stage venture funds, told me earlier this week that an upcoming surge of down rounds will give firms no choice but to mark down investments.
There is a different kind of video this week as Andrew is traveling. Join me for a “Walk in the Park”.
Late Update: There is an attempt to reverse course and ask Sam to return as CEO. He is saying he will if the entire Board is replaced. You can’t make this stuff up.
Contents
Editorial: e/acc versus e/a