OpenAI has grand ‘plans’ for AGI. Here’s another way to read its manifesto
VentureBeat
This is The AI Beat, one of VentureBeat’s newsletter offerings. Sign up here to get more stories like this in your inbox every week.
Am I the only one who found OpenAI’s latest blog post, “Planning for AGI and Beyond,” problematic?
From its inception in 2015, OpenAI has always made it clear that its central goal is to build artificial general intelligence (AGI). Its stated mission is “to ensure that artificial general intelligence benefits all of humanity.” To be fair, the blog post was no different — it discussed how the company believes the world can prepare for AGI, both in the short and long term.
Some found the manifesto-of-sorts, which has a million “likes” on Twitter alone, “fascinating.” One tweet called it a “must-read for anyone who expects to live 20 more years.” Another tweet thanked Sam Altman, saying “more reassurance like this is appreciated as it was all getting rather scary and felt like @openai was going off-piste. Communication and consistency is key in maintaining trust.”
Others found it, well, less than appealing. Emily Bender, professor of linguistics at the University of Washington, said:
“From the get-go this is just gross. They think they are really in the business of developing/shaping ‘AGI.’ And they think they are positioned to decide what ‘benefits all of humanity.'”
And Gary Marcus, professor emeritus at NYU and founder and CEO of Robust AI, tweeted, “I am with @emilymbender in smelling delusions of grandeur at OpenAI.”
Computer scientist Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), went even further, tweeting:
“If someone told me that Silicon Valley was ran by a cult believing in a machine god for the cosmos & “universe flourishing” & that they write manifestos endorsed by the Big Tech CEOs/chairmen and such I’d tell them they’re too much into conspiracy theories. And here we are.”
OpenAI’s prophetic tone
Personally, I think it’s notable that the blog post’s verbiage, which remains remarkably consistent with OpenAI’s roots as an open, nonprofit research lab, gives off a far different vibe today in the context of its current high-powered place in the AI landscape. After all, the company is no longer “open” or nonprofit, and recently enjoyed a reported infusion of $10 billion from Microsoft.
In addition, the release of ChatGPT on November 30 vaulted OpenAI into the public consciousness. Over the past three months, hundreds of millions of people have been introduced to OpenAI — but surely most have little inkling of its history with, and attitude toward, AGI research.
Their understanding of ChatGPT and DALL-E has likely been limited to using them as a toy, a source of creative inspiration or a work assistant. Does the world understand how OpenAI sees itself as potentially influencing the future of humanity? Certainly not.
OpenAI’s grand message also seems disconnected from its product-focused PR of the past couple of months, around how tools like ChatGPT or Microsoft’s Bing might help in use cases like search results or essay writing. Thinking about how AGI could “empower humanity to maximally flourish in the universe” made me giggle — how about just figuring out how to keep Bing’s Sydney from having a major meltdown?
With that in mind, to me Altman comes across as a kind of wannabe biblical prophet. The blog post offers revelations, foretells events, warns the world of what is coming, and presents OpenAI as the trustworthy savior.
The question is, are we talking about a true seer? A false prophet? Just profit? Or even a self-fulfilling prophecy?
With no agreed-upon definition of AGI, no widespread agreement on whether we are near AGI, no metrics on how we would know whether AGI was achieved, no clarity around what it would mean for AGI to “benefit humanity,” and no general understanding of why AGI is a worthwhile long-term goal for humanity in the first place if the “existential” risks are so great, there is no way to answer those questions.
That makes OpenAI’s blog post a problem, in my opinion, given the many millions of people hanging onto Sam Altman’s every utterance (to say nothing of the millions more waiting impatiently for Elon Musk’s next existential AI angst tweet). History is filled with the consequences of apocalyptic prophecies.
Some point out that OpenAI has some interesting and important things to say about how to tackle challenges related to AI research and product development. But are they overshadowed by the company’s relentless focus on AGI? After all, there are plenty of important short-term AI risks to tackle (bias, privacy, exploitation and misinformation, just to name a few) without shifting focus to doomsday scenarios.
The Book of Sam Altman
I decided to take a stab at reworking OpenAI’s blog post to deepen its prophetic tone. It required assistance — not from ChatGPT, but from the Old Testament’s Book of Isaiah:
1:1 – The vision of Sam Altman, which he saw concerning planning for AGI and beyond.
1:2 – Hear, O heavens, and give ear, O earth: for OpenAI hath spoken, our mission is to ensure that artificial general intelligence (AGI) — AI systems that are generally smarter than humans — benefits all of humanity.
1:3 – The ox knoweth his owner, and the ass his master’s crib: but humanity doth not know, my people doth not consider. For lo, if AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.
1:4 – Come now, and let us reason together, saith OpenAI: AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.
1:5 – If ye be willing and obedient, ye shall eat the good of the land. But if ye refuse and rebel, on the other hand, AGI would also come with serious risk of misuse, drastic accidents and societal disruption.
1:6 – Therefore saith OpenAI, the mighty One of Silicon Valley, because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.
1:7 – And the strong shall be as tow, and the maker of it as a spark, and they shall both burn together, and none shall quench them. We want AGI to empower humanity to maximally flourish in the universe. We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity. Take counsel, execute judgment.
1:8 – And it shall come to pass in the last days, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence — a gradual transition to a world with AGI is better than a sudden one. Fear, and the pit, and the snare, are upon thee, O inhabitant of the earth.
1:9 – The lofty looks of man shall be humbled, and the haughtiness of men shall be bowed down, and OpenAI alone shall be exalted in that day. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.
1:10 – Moreover OpenAI saith we will need to develop new alignment techniques as our models become more powerful (and tests to understand when our current techniques are failing). Lift ye up a banner upon the high mountain, exalt the voice unto them, shake the hand, that they may go into the gates of the nobles.
1:11 – Butter and honey shall he eat, that he may know to refuse the evil, and choose the good. The first AGI will be just a point along the continuum of intelligence. We think it’s likely that progress will continue from there, possibly sustaining the rate of progress we’ve seen over the past decade for a long period of time.
1:12 – If this is true, the world could become extremely different from how it is today, and the risks could be extraordinary. Howl ye; for the day of AGI is at hand.
1:13 – With arrows and with bows shall men come thither; because all the land shall become briers and thorns. A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too. The earth mourneth and fadeth away.
1:14 – Behold, successfully transitioning to a world with superintelligence is perhaps the most important — and hopeful, and scary — project in human history. And they shall look unto the earth; and behold trouble and darkness, dimness of anguish; and they shall be driven to darkness. And many among them shall stumble, and fall, and be broken, and be snared, and be taken.
1:15 – They shall not hurt nor destroy in all my holy mountain: for the earth shall be full of the knowledge of OpenAI, as the waters cover the sea. Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us. Therefore shall all hands be faint, and every man’s heart shall melt.
1:16 – And it shall come to pass, that we can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet. And now, O inhabitants of earth, we hope to contribute to the world an AGI aligned with such flourishing. Take heed, and be quiet; fear not.
1:17 – Behold, OpenAI is my salvation; I will trust, and not be afraid.
Read more from Sharon Goldman, Senior AI Writer, on VentureBeat.com.
1 年Very good Sharon. The culture of OpenAI is scary -- conversion from nonprofit to super profit is highly unethical, and then an exclusive deal that effectively gives control to MSFT even scarier. Very few people seem to have their arms around this, including I think very likely OpenAI and MSFT. My Feb EAI newsletter on systems of integrity in AI (or lack thereof in LLMs) https://www.dhirubhai.net/pulse/ai-systems-integrity-mark-montgomery And my post this morning warning about Windows promoting Bing's version of ChatGPT: https://www.dhirubhai.net/posts/markamontgomery_microsofts-new-windows-11-update-adds-a-activity-7036358635328868353-293a