No, We Really Shouldn't Regulate AI like Nukes.

(original post)

Well that was something. Yesterday the Center for AI Safety, which didn’t exist last year, released a powerful 22-word statement that sent the world’s journalists into a predictable paroxysm of hand-wringing:

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

Oh my. I mean, NUCLEAR WAR folks. I mean, the END OF THE WORLD! And thank God, I mean, really, thank the ever-loving LORD that tech’s new crop of genius Great Men — leaders from Microsoft, Google, OpenAI and the like — have all come together to proclaim that indeed, this is a VERY BIG PROBLEM and not to worry, they all very much WANT TO BE REGULATED, as soon as humanly possible, please.

The image at top is how CNN responded in its widely read “Reliable Sources” media industry newsletter, which is as good a barometer of media groupthink as the front page of The New York Times, which also prominently featured the story (along with a requisite “the cool kids are now doing AI in a rented mansion” fluff piece. Same ice cream, different flavor).

But as is often the case, the press is once again failing to see the bigger story here. The easy win of a form-fitting narrative is just too damn tasty — confirmation bias be damned, full steam ahead!

So I want to call a little bullshit on this whole enterprise, if I may.

First, a caveat. Of course we want to mitigate the risk of AI. I mean, duh. My goal in writing this post is not to join the ranks of those who believe AI will never pose a dire threat to humanity, or of those waiting by their keyboards to join the singularity. My point is simply this: When a group of industry folks drop what looks like an opportunistic A-bomb on the willing press, it kind of makes sense to think through the why of it all.

Let’s review a few facts. First and foremost, the statement served as a coming-out party for The Center for AI Safety, a five-month-old organization that lists no funders, no phone number, and just a smattering of staff members (none of whom are well known beyond its academic director, a PhD from Berkeley who also joined five months ago). Its mission is “to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.” Well, OK, that’s nice, but…who exactly is doing all that equipping? And where might their loyalties and incentives lie? And do they have any experience working with real-life governments or policy?

Hmm. Did The New York Times, CNN, or The Verge ask about this in their coverage yesterday? Nope. Strange, given that the last time we saw a similar effort, the organization behind it turned out to be funded in part by Elon Musk. The golden rule of journalism is Follow The Damn Money.

OK, next. Look at the signatories. A ton of well-meaning academics and scientists, and plenty of previously vocal critics of AI (Geoffrey Hinton being the most notable among them). OpenAI’s network is all over the list — there are nearly 40 signatories from that company alone. OpenAI partner Microsoft only mustered two, but they were the two that mattered — the company’s CSO and CTO. Google clocked in with nearly 20. But not a one from Meta, nor Amazon, Apple, IBM, Nvidia, or Snowflake.

Hmmm. Did any of the mainstream media pieces note those prominent non-signatories, or opine on what they might imply? Only in the case of Meta’s Yann LeCun, who is already on record stating that AI doomsday scenarios are “completely ridiculous.”

So what’s this really all about? Well, in a well-timed blog post just last week about how best to regulate AI, OpenAI’s CEO Sam Altman called for “an International Atomic Energy Agency for superintelligence efforts.” There’s that nuclear angle, once again — this AI stuff is not only supremely complicated and above the paygrade of mere mortals, it’s also as dangerous as nuclear fissile material, and needs to be managed as such!

Altman’s testimony before Congress two weeks ago, his blog post equating AI with nukes the week after, and then this week, the newly minted Center for AI Safety’s explosive statement — come on, journalists: Can you not see a high-level communications operation playing out directly in front of your credulous eyes?

Before I rant any further, let me concede that two apparently contrary ideas can in fact both be true. I am told by folks who know Altman that he truly believes “super-intelligent” AI poses an existential risk to humanity, and that his efforts to slap Congress, the press, and the public awake are in fact deeply earnest.

But it can also be true that companies in the Valley have a deep history of using calls for regulatory oversight as a strategy to pull the ladder up behind themselves, ensuring they alone have the right to exploit technologies and business models that otherwise might encourage robust innovation and, by extension, competition. (Cough cough privacy and GDPR, cough cough.) Were I in charge of comms and policy at OpenAI, Google, or Microsoft, the three current leaders in consumer and enterprise AI, I’d be nothing short of tickled pink with Altman’s heartfelt call to arms. Power Rangers, Unite!

I’ve written before, and certainly will write again, that thanks to AI, we stand on the precipice of unf*cking the Internet from its backasswards, centralized business and data models. But if we declare that only a few government-licensed AI companies can pursue the market — well, all we’ve done is extend tech’s current oligarchy, crushing what could be the most transformative and innovative economy in humankind’s history. I’ll save more on that for the next post, but for now, I respectfully call bullshit on AI’s “Oppenheimer Moment.” It’s nothing of the sort.

You can follow whatever I’m doing next by signing up for my site newsletter here. Thanks for reading.

Igor Portugal

Technology Innovator | Fractional CxO | Best Selling Author | Investor | AI | Cyber Security | Cloud Computing | Empowering businesses, enriching lives with technology and human insight for a smarter, safer world.

1y

Corporations want to regulate AI to protect their monopoly. Regulation will push AI out of the hands of the open-source community and give monopoly power to large corporations. To that extent, regulation like licensing or patenting AI will have a disastrous effect. Imagine living in a world where the only people controlling AI are Elon Musk, Vladimir Putin and Kim Jong Un. This is why AI regulatory restriction is a bad idea and we must reject it at all costs. The only thing worth legislating is ensuring there is a human always liable for any action of AI. This thesis explores this further: https://liberty-by-ip.blogspot.com/2023/05/why-ai-regulation-is-bad-idea.html I am interested to hear your feedback!

Bruce W.

Retired Business Manager | Possessing a strong environmental & community purpose | Interests: Technology, Nuclear Energy, Photography and Sailing

1y

Surely, this is pure panic driven by bad SciFi where an AI goes rogue. The AIs we are talking about today are just extremely complex software programs that can only do what they are programmed to do...correct? The Aha! moment when a sentient AI is developed hasn't happened, but when it does, that's when we need to be concerned. Wouldn't it be prudent to establish a framework today to account for this eventuality?
