AI politics: From pausing to regulating AI, it’s all about winning hearts and minds

“The Letter” was just the beginning. Welcome to the AI politics show. Grab some popcorn, or better yet, get in the ring.

“I got a letter from the [government] Future of Life Institute the other day
I opened and read it, it said they were suckers
They wanted me for their army or whatever
Picture me giving a damn, I said never
Here is a land that never gave a damn
About a brother like me and myself because they never did
I wasn’t with it, but just that very minute it occurred to me
The suckers had [authority] power”

From “Black Steel in the Hour of Chaos” by Public Enemy

The connection between Public Enemy and the state of AI today may not be immediately obvious. But if you swap “government” for “Future of Life Institute” and “authority” for “power”, those lyrics can be a pretty good metaphor for what’s happening in AI today.

“The Letter”, as it has come to be known on Twitter, is an Open Letter compiled by the Future of Life Institute (FLI) and signed by an ever-growing number of people. It calls for a pause on the training of AI models more powerful than GPT-4 in order to “develop and implement a set of shared safety protocols for advanced AI design and development”.

FLI’s letter mentions that “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” That is a statement few would disagree with – including people who raised justified concerns about “The Letter”.

I signed “The Letter” too. When I did, it had fewer than 1,000 signatories. Today, it has about 50,000, according to FLI’s FAQ. I did not sign because I fully agree with FLI or its framing – far from it. I also have reservations about the above statement, and I am extremely aware and critical of the so-called AI hype.

I signed “The Letter” expecting that it could raise attention and get a much-needed conversation going, and it did. The only other time I can recall an AI backlash fueling such heated debate was in 2020, when Google fired a number of researchers who had raised concerns about the practice of building ever-bigger AI Large Language Models in a paper known as “Stochastic Parrots”.

Of course, 2.5 years is a lifetime in AI. That was pre-ChatGPT, before AI broke into the mainstream. But that does not necessarily mean the issues are widely understood today either, even if they are hotly debated.

The Future of Life Institute and TESCREAL

A first line of criticism against “The Letter” cites its origins and the agendas of the people who drafted and signed it – and rightly so. Indeed, the Future of Life Institute is an Effective Altruism, Longtermist organization.

In a nutshell, that means people who are more concerned about a hypothetical techno-utopian future than about the real issues the use of technology is causing today. Even though FLI’s FAQ tries to address present harms too, somehow, when Peter Thiel and Elon Musk types cite “concentration of economic power” as a concern, it does not sound very convincing.

Philosopher and historian Emile P. Torres, who was previously a Longtermism insider, has coined the acronym TESCREAL to describe Longtermism and its family of ideologies. Claiming that we need to go to Mars to save humanity from destroying Earth, or that we need super-advanced AI to solve our problems, speaks volumes about TESCREAL thinking.

These people do not have your best interest at heart, and I certainly did not see myself signing a letter drafted by FLI and co-signed by Elon Musk. That said, it’s also hard not to engage. The amount of funding, influence and publicity Elon Musk types garner is hard to ignore, even for their critics.

Funding and goals

Case in point: DAIR, the Distributed AI Research Institute, set up by AI ethicist Timnit Gebru. Gebru was one of the people fired from Google in 2020. DAIR was founded in 2021 to enable the kind of work that Gebru wants to do.

DAIR is “rooted in the belief that AI is not inevitable, its harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes it can be beneficial”. That sounds commendable.

DAIR employs a number of researchers to work on its mission and has raised $3.7 million from the Ford Foundation, the MacArthur Foundation, the Kapor Center, George Soros’ Open Society Foundation and the Rockefeller Foundation. Research has to be funded somehow. But perhaps it’s worth pondering the source of this funding too.

[Image: Compromising and playing the influence game is part of AI politics]

Gebru is aware of the conundrum and has spoken about “Big Tech billionaires who also are in big philanthropy now”. Presumably, DAIR’s founders believe that using these funds towards goals they find commendable may be more important than the origins of the funds. But should this line of thinking be reserved exclusively for DAIR?

DAIR published a “Statement from the listed authors of Stochastic Parrots on the ‘AI pause’ letter”. In this otherwise very thoughtful statement, its authors write that they are “dismayed to see the number of computing professionals who have signed this letter, and the positive media coverage it has received”.

Motives, harms and politics

While I know and have worked with some of the professionals who signed FLI’s letter, I can’t speak for anyone but myself. But I do think it would be fair to give them the benefit of the doubt.

Some, like Gary Marcus, have stated that while they do not fully endorse “The Letter”, they signed in order to achieve a specific goal they find very important. Sound familiar?

People have questioned the motives of the signatories, claiming that some may simply wish to stall those currently leading AI in order to catch up. Case in point: Elon Musk is setting up a new AI company called x.ai. And OpenAI now says that maybe ever-larger AI models are not the way to go.

But not everyone who signed is motivated by self-interest. And the harms resulting from the deployment of AI systems today are real.

Worker exploitation and massive data theft; reproduction of systems of oppression and the danger to our information ecosystem; the concentration of power. The harms that DAIR cites are all very real.

The powers that be are either actively promoting or mindlessly enabling these harms via AI. Building coalitions to raise issues, draw attention and undermine Big Tech’s march is the pragmatic thing to do.

If that sounds like politics, it’s because it is, as people have noted. That means it’s about “opinions, fears, values, attitudes, beliefs, perspectives, resources, incentives and straight-up weirdness” – plus money and power.

That’s what it’s always been about. Gebru is no stranger to this game, having tried to change things from inside Google before setting out to play the influence game from the outside.

Read the rest of the article on Orchestrate all the Things

Nikolaj Van Omme

Developing a better AI that optimises complex industrial problems by 20-40% in production!

1y

This really talks to me, George Anadiotis (as it should to anyone working in the field). As one of the "unheard" (although some do hear me, like you and some others in this post), I have been advocating for many years now that there are other ways to do AI (but who cares? Actually, our customers do! ;-) ). That said, even with a better AI that can be controlled, the questions remain exactly the same. As you said, this is political, as everything is. What kind of society do we want? And maybe the more depressing question: "What kind of society are we able to reach?" I'm also reaching out to those who want to help foster a better society. 1. AI can be controlled, and some of us know how. 2. AI could be used to significantly depollute our planet, but not with ML only. Message in a bottle... ;-)

Dr Alex Antic

Author of 'Creators of Intelligence' | Honorary Professor (Data Science & AI) | AI Flaneur | Speaker

1y

Great article George! As I keep saying, we're at a pivotal point in history, with an opportunity to shape what happens next.

