Tech's Tug-of-War: Billionaires, Power, and the Future of AI
Lauren "??" Vriens
Scaled Startup to $50M in 1.5 Years | AI Obsessor | Startup Advisor | Ex-Revel General Manager, Fulbright Scholar *All sarcasms are my own*
The OpenAI drama of last month is not another tale of power-drunk boards and boomerang CEOs (a la Steve Jobs).
It is much more interesting.
There's a schism in tech ideology brewing. It’s a story filled with intrigue, billionaires, and the looming threat of whether AI will kill us all.
If you work in tech, startups, or policy, you probably want to read this. If you're a board member, you probably want to read this.
Which ideology comes out ahead will shape the industry landscape (and possibly, the world) for the next couple decades.
As Daily Show comedian Ronny Chieng put it: “Am I going to have a robot sex slave or am I going to be the robot sex slave?…I just want to know.”
It may not be that scenario, but there is a real tension here between worldviews on whether AI will bring dystopia or utopia - and it's driving some interesting behavior.
If you missed the OpenAI drama because you (smartly) stayed off your devices during the holidays, here’s a humorous primer re-told through X/Twitter posts.
So what is this grand battle that is brewing? Let’s start by reviewing the players.
Effective Altruism: Billionaires Betting on Terminator AI
Say hello to Effective Altruism (EA). Also known as long-termism, EA is concerned with the long-term survival of the human race. According to these folks, Artificial Intelligence is more of a threat to us than drought, pandemics, rogue meteorites, or nuclear war.
EA became NYTimes-famous because of Sam Bankman-Fried, the former crypto king and founder of FTX. Dustin Moskovitz, billionaire ex-Facebook co-founder (and Asana CEO), is doing his part. Elon Musk is also part of this crew.
EAs believe the solution to keeping AI from destroying the human race is slow, cautious development. They prefer regulation that keeps highly powerful AI in the hands of a few players.
These folks have been spending serious dollars and podcast hours making sure none of us sleep well at night.
In September 2023, a parade of tech titans, featuring Elon Musk, Mark Zuckerberg, Bill Gates, and Sundar Pichai of Alphabet, descended on Capitol Hill for the first of seven AI Insight Forums.
In the closed-door session, they got free rein to scare the living daylights out of our governmental leaders. One senator's sole takeaway was the words "civilizational risk."
We can surmise what was covered, like the Paperclip Maximizer thought experiment, which goes as follows:
If you tell AI to make as many paperclips as possible, it will plow through available resources. Eventually it will realize humans contain the same elements. Then it will grind us up to make more paperclips. What a way to go, right?
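If you'd rather see the failure mode than imagine it, here's a toy Python sketch (entirely made up, not anyone's actual AI) of the same idea: the objective function counts paperclips and literally nothing else, so nothing stops the greedy loop from consuming every resource on the board, humans included.

```python
# Toy sketch of the Paperclip Maximizer thought experiment (hypothetical
# numbers): the objective counts paperclips and nothing else.

resources = {"steel": 10, "trees": 5, "humans": 3}      # made-up world state
CLIPS_PER_UNIT = {"steel": 100, "trees": 20, "humans": 60}

paperclips = 0
while any(resources.values()):
    # Greedy policy: consume whichever remaining resource yields the most clips.
    best = max((r for r in resources if resources[r] > 0),
               key=lambda r: CLIPS_PER_UNIT[r])
    resources[best] -= 1
    paperclips += CLIPS_PER_UNIT[best]
    # Note what's missing: no term in the objective says "leave the humans alone."

print(f"Paperclips: {paperclips}, world left over: {resources}")
```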
There are several fun ways AI may end up killing us all, but I will save that for a subsequent piece.
A sub-section of the EA group believes that AI systems are currently keeping a list of the people who are "nice" to them. Remember that the next time you threaten to feed Siri to the New York City subway rats.
We will get into what this has to do with OpenAI. But first, the counter-movement: Effective Accelerationism (e/acc).
Effective Accelerationists: Silicon Valley Racing towards AI-for-All
Effective Accelerationists are the libertarians of AI combined with Silicon Valley morals - let’s move fast and break things. (Look, if we didn’t break a few eggs, we wouldn’t have life-changing things like Snapchat and 15-minute delivery!)
Marc Andreessen, billionaire co-founder of VC firm Andreessen Horowitz, is a strong believer. (There are A LOT of billionaires in this story). Many in the startup community have even started to put “e/acc” in their social media bios.
They believe the solution to our human problems - like pandemics, cancer, low wages, bioterrorism - is to allow unbridled progression of AI.
Andreessen, in his Techno-Optimist Manifesto rant, lays out the core beliefs of e/acc.
While this feels like a battle between two juggernauts, there is another, smaller ‘naut. They have fewer billionaires (that I know of) and no fancy name that starts with E. So I hereby dub them the Excluded Humanists.
Excluded Humanists: Advocating for Inclusive AI
The Excluded Humanists are concerned that AI was created in a room filled with only white male engineers (though this isn’t entirely true). They also believe AI will likely determine the socio-economic winners of the future (likely true).
Any woman who has ever had to ask a man to tell Alexa to turn off the lights is familiar with this concern.
This group has a long list of AI fears: black box discrimination, uneven accrual of wealth, the entrenchment of predatory capitalism. I lay out their concerns in greater detail in this past piece.
The Excluded Humanists are pushing for broad-based inclusion in the creation of LLMs and in the distribution of AI-generated wealth.
Summarizing the AI Ideological Divide
Okay, so in summary:
- Effective Altruists: AI could wipe us out, so develop it slowly and keep the powerful stuff in a few trusted hands.
- Effective Accelerationists: AI will solve our biggest problems, so build it as fast as possible with minimal regulation.
- Excluded Humanists: AI will pick tomorrow's winners, so build it inclusively and share the wealth.
Who will win? And why should we care? Let’s dive into the consequences of these different positions.
A Scare-Tactic Power Play? The Politics of AI Regulation
The Effective Altruists believe that to keep us from being turned into paperclips, regulation is required. Only a few trusted companies should be allowed to do the advanced stuff.
This would likely entrench power for the behemoths and front-runners - OpenAI, Anthropic, Meta, Microsoft.
These players will then be able to stifle innovation for smaller startups.
If anything goes wrong, these same players will be the only ones that can help. “Oh, sorry Mr. Congressman, you don’t like that your toaster is yelling slurs at your dog? We will send over our experts right away.”
The subtext here is that the Effective Altruists aren’t actually afraid that AI will kill us all. They are using the fear to scare politicians into gifting them a regulatory moat. Then again, they could be genuinely scared…but why not benefit from the panic either way? Who knows.
To help drum up fear, Elon Musk likes to tell the story of how accelerationist Larry Page of Google/DeepMind called him a "species-ist." As in, overly attached to the survival of our species. Implying that e/acc folks are not.
As an aside, I really hope "species-ist" catches on as a tech-bro insult: “don’t talk sh*t about Microsoft Teams - you’re such a species-ist.”
But Growth is Good, Right? The Accelerationist Creed
As you can imagine, the accelerate-it-all e/acc folks want less regulation.
Their perspective is that the clumsy politicians should stick to regulating Big Pharma and Big Sandwich and allow the tech industry heroes to solve all the problems.
In fact, they believe existing laws already cover the full scope of possible crimes, so there’s absolutely no need to add any new ones.
But existing laws may not be sufficient to deal with novel AI use cases. For example: AI-generated nudes or AI-conducted warfare.
If we outsource the trigger to end human life to AI, who is liable for war crimes under the Geneva Convention? If an autonomous Tesla makes the decision to mow over your grandma instead of endangering its passenger, who pays for the funeral?
E/acc folks sound a little reminiscent of oil and gas companies in the 2000s: “did you know that petroleum products are in medical equipment? So if you regulate us, you must hate medical miracles!” (Y’all don't remember that?)
In an interview with America's fav professor Andrew Huberman, Andreessen does concede that AI may give rise to new crimes, and that we can regulate those when they happen. Okay, fine. Sorry to the poor sucker who ends up as ground zero on that one.
The e/acc group does have a point about the drawbacks of a curmudgeonly government regulating technology, however.
In 2022, to cut down on e-waste, Europe passed a new rule requiring most portable gadgets to charge via USB-C (take that, Apple).
Problem is, the legislation specified USB-C, so now no one will innovate on chargers for the next, uh, forever. A heavy government hand tends to stymie innovation and progress.
OpenAI Drama: A Microcosm of the Global AI Power Struggle
In the OpenAI saga, some speculate Sam Altman was ousted for not prioritizing AI safety, including how quickly OpenAI released ChatGPT and continues to push new features to the public.
It is easy to see how there could be tension between Sam Altman and his now-defunct board (which everyone speculates was more EA-leaning).
OpenAI started as a non-profit with a mission to ensure that AI smarter than humans (Artificial General Intelligence) benefits all of humanity. To minimize risks, the organization favors gradual progress.
Altman, raised in the Y Combinator startup world, is accustomed to the “move fast and break things” Silicon Valley mantra. He deeply believes AI will bring utopia. But to do so, it needs to get into the hands of the people quickly so society has time to acclimate.
In October, ex-board member Helen Toner, director of strategy at Georgetown's Center for Security and Emerging Technology, co-wrote a paper praising competitor Anthropic for its safety positioning and the slow release of its AI chatbot Claude. Altman was allegedly not happy about having a shade-throwing board member.
Two days before his dismissal, Altman admitted his frustration on the Hard Fork podcast for being seen as a villain for his views: “all gas, no brakes, certainly not.” The podcast host referred to Altman as “accelerationist-adjacent.” Altman did not disagree.
The firing feels like EA slapping e/acc across the face, then being forced to capitulate. X/Twitter believes the reinstatement of Altman means e/acc has won the war.
I suspect this was only one battle and the war is far from over.
The war will likely take place on Capitol Hill. I also suspect board members are having similar debates in boardrooms across the world: the tension between winning the race and not being the company that ended the world.
Even if causing civilizational harm is out of reach for a given company, there will be no shortage of embarrassing front-page stories about AI-generated snafus to worry about.
AI's Future: Concluding Thoughts on Tech's Biggest Battle
In conclusion, the OpenAI saga is about more than a mere power struggle; it offers a window into a pivotal moment in tech history, one that will shape the future of AI and, by extension, our world.
Who do you think is going to win? The Effective Altruists' cautious approach, the Accelerationists' rapid all-out strategy, or the Excluded Humanists' call for inclusive development?
Or to put it another way, are we going to be AI sex robots or are we going to have AI sex robots? Just kidding!
Hope you enjoyed this humorous take on tech’s biggest trends.
Follow me for the next installment: "Is AI really going to kill us all?" where I unpack how AI could possibly kill us all and why on earth it would want to.