Worry about the dumb machines as well as the smart ones

We have been warned about the dangers of intelligent machines for over 150 years. In Samuel Butler's satirical novel Erewhon, published in 1872, the protagonist visits a mythical country where machines have been outlawed. The novel quotes the fictional Book of the Machines, the supposed trigger for a civil war between 'machinists' and 'non-machinists': 'I fear none of the existing machines; what I fear is the extraordinary rapidity with which they are becoming something very different to what they are at present. No class of beings have in any time past made so rapid a movement forward.'

This novel seems to have predicted today’s debate about the balance between AI safety and AI innovation. I think it is right to seek a balance between reckless advance and fearful inertia: we must figure out how to put new AI tools to work, and also figure out how to secure and control them. Doing the former responsibly is part of doing the latter.

However, I also think that we shouldn’t forget about the machines which still run all of the systems on which we rely every day. The biggest threat to those systems is not that computers are too intelligent, or that they elude our control, but that they are perfectly stupid, and they do exactly what we tell them.

It does not always feel like this. When we are grappling with a poorly designed computer system, or waiting for a laptop to finish updating, or stuck in a cycle of authentication ('I can't tell you how much I have in my account because I can't get into the app to see it - and you won't reset my password unless I tell you how much I have in my account!'), computers can feel malicious or capricious. It is common for us to shout at, swear at and plead with a screen or an app as if it could hear us. This frustration is particularly acute for people who have never had the chance to learn how to program computers: to them, the system behind the screen is a force of impenetrable and malign mystery.

But those of us who spend time programming machines know just how stupid they are - even if we, too, find ourselves yelling at our code as if it could hear us. We know that the computer will do exactly what we tell it - and that it goes wrong because we tell it to do the wrong things.

The dumb obedience of computers manifests itself in three dangers, each of which we can guard against, as long as we understand and anticipate them - and as long as those who sponsor the creation of systems allow the necessary time and work.

The first, and most obvious, danger is that we make mistakes when we instruct the machine: we create bugs by writing code which doesn't say what we mean it to say. Bugs can be as crude as a line of code which adds instead of subtracting, or as subtle as a thousand lines of code which get lost in loops of logic. But they are all human creations, and the result of machines doing exactly as they are told.

You may wonder why technical people make so many mistakes. The answer is that writing code is an inherently error-prone activity. The defence against bad code is not simply to get better and make fewer mistakes - it is to write automated tests and to share your code with other, critical humans.
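To make that concrete, here is a minimal sketch in Python (the function and test names are invented for illustration, not taken from any real system) of the crude bug described above - an addition where a subtraction was meant - and the kind of automated test which catches it:

```python
# A deliberately buggy function: the author meant to subtract,
# but told the machine to add. The computer obeys exactly.
def balance_after_withdrawal(balance: float, amount: float) -> float:
    return balance + amount  # BUG: should be balance - amount


# An automated test records what we meant the code to say, so the
# machine can mechanically check our instructions against our intent.
def test_withdrawal_reduces_balance():
    assert balance_after_withdrawal(100.0, 30.0) == 70.0


if __name__ == "__main__":
    try:
        test_withdrawal_reduces_balance()
        print("test passed")
    except AssertionError:
        print("test failed: the machine did what we told it, not what we meant")
```

The test fails until the stray plus sign is corrected - which is the point: the machine cannot know what we meant, but it can check one statement of our intent against another.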

The second danger is that, unless we put special measures in place, computers will obey the instructions of anyone who has access to them - including people who have bad intent. Despite the portrayal in films and TV programmes of hackers engaged in intense battles of wits, the simple goal of a cyber attack is to get a machine to obey your instructions.

All of the security mechanisms we put in place - identity and authorisation schemes, firewalls, malware detectors - are to protect our stupid machines from doing what they are told by someone who is not supposed to be able to tell them. If you wonder why we spend so much time and effort on defences, simply imagine your unprotected system obediently following someone else’s instructions.
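To illustrate the principle (the token store, names and messages here are invented for this sketch, not any real product's API), almost every such mechanism reduces to a guard which runs before the machine obeys:

```python
# A minimal sketch of the idea behind authorisation checks: the machine
# will happily run any command, so we interpose a guard which first asks
# "is this caller allowed to tell me this?"
AUTHORISED_TOKENS = {"alice-secret-token": "alice"}  # stand-in for a real identity store


def execute_command(token: str, command: str) -> str:
    user = AUTHORISED_TOKENS.get(token)
    if user is None:
        # Refuse instructions from anyone we cannot identify.
        raise PermissionError("unrecognised caller: command refused")
    # Only now does the obedient machine do what it is told.
    return f"running {command!r} for {user}"


print(execute_command("alice-secret-token", "list accounts"))  # allowed
# execute_command("stolen-token", "drop database")  # would raise PermissionError
```

Everything else, from firewalls to malware detectors, is layered around that same question.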

And the third danger is that, even if we could write perfect code and build perfect defences (neither of which is possible), we can still design and build systems which are simply bad. Those apps and websites (and the computer-controlled call centres and computer-generated correspondence) which make you so frustrated and angry are not bad because of some weird malice in the heart of the machine - they are bad because humans made them bad.

All of the work of user research, design and user testing that we do is intended to help us make things which are good - or at least less bad.

AI is powerful, and may or may not, in its current form, deserve the label ‘intelligence’ - this is a disputed topic which we don’t have time to cover here. But, just like every other computer system in the world, it is built on top of dumb machines which do exactly what they are told. For all of these systems, we have to spend the time to make sure that we are telling them to do the right things, that we have stopped bad actors from telling them what to do, and that we have told them to do things which are useful for their users. We know how to control computers - we just have to take the trouble.

(Views in this article are my own.)

Mohammed Brueckner

Strategic IT-Business Interface Specialist | Microsoft Cloud Technologies Advocate | Cloud Computing, Enterprise Architecture

6 hours ago

Your "perfectly stupid" machines reveal a profound truth about human systems. Organizations, like computers, faithfully execute flawed instructions, creating institutional bugs through unquestioning compliance. Both vulnerabilities share a common root: systems optimized for obedience rather than intelligence. This parallel suggests our organizational challenges mirror technical ones—not from people thinking too independently, but from structures designed to encourage perfect, mechanical execution regardless of outcome.

Jim Johnston

Lecturer at University of the West of Scotland

6 hours ago

David, the comments remind me of a book called Dangerous Enthusiasms - I have forgotten the authors and I'm not at home now, but I will share. A fourth risk is that computers can magnify the impact of blunders by doing stupid tasks faster than humans can do them manually.

Joseph Connor

Lead: AI Investment, governance, design, alignment, safety and research. Funder: Public Interest AI. Professor: AI innovation. Focus: Building value each day.

9 hours ago

I do wish commissioners would start with 'how wrong can we afford to be with AI?', especially in healthcare, and then start a meaningful and powerful conversation with the supply chain, as much of the power to do good and bad things with AI sits there.

Daniel Susser

I help teams improve. Writer, podcaster, agile coach, delivery manager, generally useful person to have around.

9 hours ago

When you think about how 'dumb' and literal computers are - doing exactly what we tell them, like the fabled monkey's paw - you realise how much of *human* intelligence is about correctly inferring the right thing from context. When we set up our organisations to give people - both leadership and people on the ground - rich and nuanced context, we tend to do better. When organisations make decisions based on just cold, rational data, we run the risk of emulating the dumb and literal machines that we administer.
