Let’s not bomb those AI data centres just yet
B-52 bomber cockpit view by SSGT Scott Stewart (NARA & DVIDS Public Domain Archive)


The past work week has been heavy with news about artificial intelligence risks and how to manage them, sometimes by unconventional means such as military force. Do the newsmakers and commentators make sense? Let’s see.

Eliezer Yudkowsky’s article: a good way to start a moral panic

The most audacious was arguably the Time article by Eliezer Yudkowsky. A man whom we all thought of as an AI alignment expert seems to have embraced (hopefully not for long) the role of an international politics pundit and an authority on conventional and non-conventional warfare.

In his latest piece, Mr. Yudkowsky seems to have gone into full panic mode, advocating that all AI labs immediately stop training any AI system more powerful than GPT-4. Not doing so now, in his view, will at some unknown point in the future leave us facing a “smarter-than-human intelligence” that has not been aligned with human values and is therefore a threat.

Mr. Yudkowsky argues that right now we don’t have a bulletproof plan to tackle the AI value alignment problem. Because of that, in his view, we need an indefinite and worldwide moratorium on new large language model training, enforceable by means of “immediate international agreements” backed, if necessary, by the willingness to “destroy a rogue datacenter by airstrike”, even at the risk of nuclear retaliation.

Future of Life Institute letter: still too futuristic?

On a less militant but still somewhat overly futuristic side of the spectrum, we have an open letter from the Future of Life Institute (FLI), the one that originally proposed a global moratorium on training “AI systems more powerful than GPT-4”.

The letter has a lot of valid points. Namely, it reasonably highlights the deficiencies in the currently mainstream way of adopting new AI systems — with little public oversight, in the absence of effective regulation and without safeguards such as implementation of AI risk management frameworks and independent third-party audits of high-risk AI systems.

I myself share these concerns and highlight them — alongside relevant implementable solutions to tackle these issues — during my Responsible AI webinars.

However, the FLI letter is not without its flaws — it seems to have an unfortunate techno-solutionist bent. Already in its second paragraph it poses rhetorical questions, such as whether we should really “develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us”.

The authors therefore, perhaps unintentionally, lend credibility to the claim that it is the further development of the large language models — similar to ChatGPT or GPT-4 in architecture — that will lead us to artificial general intelligence (AGI). These claims are unproven and are actively questioned.

Moreover, by focusing on alleged risks posed by the next generation of AI and machine learning systems — after GPT-4 — the FLI letter authors obscure the problems we already have with GPT-4 and other already existing algorithmic systems.

In a rush to monetise these systems, some of their developers and operators seem happy to externalise the costs and risks that stem from deficiencies in the development process, architectural limitations, and a lack of governance and risk management. The results include undesirable bias, privacy threats, the serving of non-factual information, occasional misuse, and damage to public health and children’s wellbeing.

“General purpose AI” is not AGI

It might be true that, as suggested by the AI alignment researcher Robert Miles, there is “nontrivial overlap” between the above practical problems with the current machine learning based systems and the research into broader value alignment issues associated with future, more capable, AI.

But we need to put this straight: current “general purpose AI” systems, exemplified by the recently popular large language models and derivative products such as ChatGPT, Bard and the new AI-powered Bing, are not examples of artificial general intelligence (AGI). These systems do not possess internal representations of the world they operate in. They are just products: artefacts that cannot reason, cannot generalise, and cannot approach an indefinite set of intellectual tasks at least on par with humans, as a theoretical AGI would.

As Robert Miles correctly notes further, finding a solution to the problems with current machine learning based systems (hallucinations, bias, etc.), or, even better, identifying and shifting to a paradigm that doesn't have these problems, “puts us in a much better place to try to make AGI that doesn't kill everyone”.

Yet, I would argue that we need to remain realistic in both how we assess AI risks and how we approach possible mitigations.

Although in a certain possible world a global temporary AI training moratorium might be the way to go, in the version of reality where you’re actually reading this article it seems a highly implausible endeavour.

Things we could actually do

1. Get rid of a techno-solutionist worldview

Under this viewpoint, which still often prevails, societal problems are approached as if they all had technological solutions: to solve anything, just build and use more and better technology, AI in particular. This is, frankly speaking, a dangerous delusion, as Yannick Meneceur correctly details in his recent piece.

And unsurprisingly, techno-solutionist views are propagated through AI systems themselves, not necessarily because their developers intend it, but very likely because of bias in the data these systems were trained on.

Consider a fresh example. As an OpenAI insider, Reid Hoffman was granted early access to GPT-4 last summer and used it to write a book. At some point, Hoffman asked GPT-4 to describe the optimistic, mixed and negative scenarios around AI use in education (page 44).

(Un)surprisingly, the scenarios the system generated differed only in how extensively AI would be used, with the negative scenario being the one with the least AI use.

This is funny and sad at the same time, because the pro-technology bias in this answer is so obvious.

What comes to my mind instead is a kind of Aristotelian framework:

  1. overuse or abuse of [whatever],
  2. moderate, appropriate use of [whatever],
  3. underuse of [whatever],

where the second scenario would be the golden mean, the optimistic scenario, and scenarios 1 and 3 the negative, pessimistic ones.

Sure, maybe that is just because of my bias towards Greek philosophers versus the ambassadors of techno-solutionism. But it seems a safer bet.
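
To make the contrast concrete, here is a minimal sketch in Python of how one could bake this three-part framing into a prompt, so that the scenario axis becomes “how appropriately is the technology used” rather than “how much technology is used”. The function name and wording are my own illustrative assumptions, not tied to any particular model or API.

```python
# A minimal, illustrative sketch: encode the Aristotelian framing as an
# explicit prompt template, so the scenarios differ in how appropriately
# the technology is used, not merely in how much of it is used.
# All names and wording here are assumptions for illustration only.

ARISTOTELIAN_MODES = (
    "overuse or abuse",
    "moderate, appropriate use",
    "underuse",
)

def scenario_prompt(subject: str) -> str:
    """Build a prompt asking for one scenario per mode of use."""
    lines = [f"Describe three future scenarios for {subject}:"]
    for i, mode in enumerate(ARISTOTELIAN_MODES, start=1):
        lines.append(f"{i}. A scenario characterised by {mode} of {subject}.")
    lines.append(
        "Treat scenario 2 as the golden mean, and describe the concrete harms "
        "in scenarios 1 and 3 rather than just the amount of technology involved."
    )
    return "\n".join(lines)

if __name__ == "__main__":
    print(scenario_prompt("AI in education"))
```

Whether a given model then actually produces a balanced answer is an empirical question, but at least the framing no longer presupposes that more AI is better.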

2. Use common sense

Every time you think about using a certain AI system for a business or personal matter, think again. Step back and reflect: what is the benefit of relying on the technology in the first place, and what are the downsides? We all know that when you constantly over-rely on some technology, you eventually lose your natural skills in the same domain.

Think about GPS navigation. If you always rely on it, you will never be able to confidently navigate the streets on your own.

Same for writing. As George Orwell aptly put it: “If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them.” The more thinking you delegate to a machine, the less adept at thinking you become in the long run. Brains need constant training, just as muscles do. So if your goal is to remain a prolific thinker (and probably also to delay the onset of dementia), I would not advise over-relying on machines to do your writing.

That doesn’t mean you can’t be more productive by using some writing assistance from ChatGPT, or bouncing some ideas off of it, for example. Surely you can, provided you use common sense.

Furthermore, let’s assume you’ve identified a use case where employing a certain AI system seems to make sense. Let’s further assume that the apparent benefits outweigh the downsides for you — and, importantly, for other people.

In this case, I would still think about the following points, especially for high-stakes decisions (a rough code sketch follows the list):

  1. Are you using the right kind of technology for the job? What evidence do you have that using the technology in this case is science-based and actually makes sense?
  2. Are you competent to verify the quality of the outputs the technology produces? Objectively competent, as attested by diplomas, tests, peers, and people who pay you for this kind of work. If nobody pays you for it, you are not a professional and thus not competent to verify the technology’s outputs.
  3. Are you comfortable taking legal liability and moral culpability for any missed errors in the technology-generated outputs? The question is relevant whenever you use these outputs in real life and this might affect someone besides yourself.
  4. Aren’t you over-relying on the technology, trusting it blindly, because of automation bias? Algorithmic outputs may seem authoritative, and research shows you might even disregard evidence to the contrary. How are you making sure this is not the case?
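
As a rough sketch only (the class and field names are hypothetical, and a real assessment is obviously not reducible to four booleans), the checklist above could be kept as a small pre-use gate in code:

```python
# A rough, hypothetical sketch of the four-point checklist as a pre-use gate.
# Field names are my own; a real assessment is richer than four booleans.

from dataclasses import dataclass, fields

@dataclass
class AIUseChecklist:
    evidence_based_fit: bool       # 1. right, science-based tool for the job?
    competent_to_verify: bool      # 2. objectively competent to check the outputs?
    accept_liability: bool         # 3. willing to own any errors that slip through?
    automation_bias_checked: bool  # 4. actively guarding against blind trust?

    def cleared_to_rely(self) -> bool:
        """Rely on the system's output only if every answer is an honest 'yes'."""
        return all(getattr(self, f.name) for f in fields(self))

if __name__ == "__main__":
    check = AIUseChecklist(
        evidence_based_fit=True,
        competent_to_verify=False,  # e.g. no domain expertise to verify the output
        accept_liability=True,
        automation_bias_checked=True,
    )
    print("Cleared to rely on the output:", check.cleared_to_rely())  # False
```

The point of the gate is simple: a single honest “no” means you should not be relying on the output for a high-stakes decision.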

3. In a business setting, consider proper AI risk management

If you’re developing or using algorithmic systems in a business setting, you will need more than a simple personal checklist. You will have to consider whether your organisation has put frameworks and procedures in place to identify, evaluate and mitigate AI/ML-related risks, and whether it has actually implemented the resulting safeguards and mitigations.
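
For illustration only, and deliberately simplified (the fields and the likelihood-times-impact scoring are my own assumptions, not a reproduction of ISO 31000, the NIST AI RMF or any other framework), an entry in an AI risk register might look roughly like this:

```python
# A deliberately simplified, hypothetical AI risk register entry, assuming a
# basic likelihood x impact scoring model; real frameworks (ISO 31000,
# NIST AI RMF, etc.) are far richer than this sketch.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AIRiskEntry:
    system: str                   # which AI/ML system the risk relates to
    description: str              # e.g. "biased outputs in CV screening"
    likelihood: int               # 1 (rare) .. 5 (almost certain), assumed scale
    impact: int                   # 1 (negligible) .. 5 (severe), assumed scale
    mitigations: List[str] = field(default_factory=list)
    owner: str = "unassigned"     # accountable person or role

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def needs_escalation(self, threshold: int = 12) -> bool:
        """Flag risks whose score meets or exceeds an agreed threshold."""
        return self.score >= threshold

if __name__ == "__main__":
    risk = AIRiskEntry(
        system="CV screening model",
        description="Disparate impact on candidates from protected groups",
        likelihood=4,
        impact=4,
        mitigations=["bias testing before release", "human review of rejections"],
        owner="Head of HR Tech",
    )
    print(risk.score, risk.needs_escalation())  # 16 True
```

The point is not the code itself but the discipline it implies: every identified risk has an owner, a score and named mitigations, and anything above an agreed threshold gets escalated rather than quietly accepted.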

It would not be an overstatement to say that, in terms of trustworthiness (accuracy, reliability, resilience and so on), many AI/ML systems are still in their infancy compared to classic engineered systems.

Organisational awareness and adoption of AI-specific risk management is also nowhere near the level it needs to be, given how widely AI systems themselves are being adopted.

There is some demand for this already: some clients come to me proactively with questions on how to manage AI-specific legal compliance and ethics risks in their line of business. But the general understanding that organisations need AI risk management, and of how to do it, is not yet good enough in the business community, and even less so among start-ups.

The results of this low level of awareness and adoption are evident: significant AI incidents are already happening regularly, and the further AI systems progress, the more incidents there will be, unless the organisations that develop and operate AI systems start taking AI safety and AI risk management seriously.


The author of the article is an IAPP Certified Information Privacy Professional and a Certified Contributor to ForHumanity, a non-profit entity engaged in developing an ISO 31000 based AI Risk Management Framework and certification schemes for the independent audits of AI systems based on the requirements of the forthcoming EU AI Act.

If you’d like to find out more about AI risk management, check out my Responsible AI webinars or consult my website. As my newsletter subscriber, you’re eligible for a 15 percent discount on all webinars (use promo code NEWS).

Lydia K.


1 year ago

Indeed, I share these concerns. I was also wondering where all the public outrage was when even Facebook thought that the whole data-for-training circus had gone too far: that was when Clearview scraped publicly available Facebook pages to train their models. There was no “letter” back then, except a cease and desist letter from Facebook, LOL
