Brainchildren: Exploding robots & AI
My home-built RC2014 computer, capable of running Eliza (from 1981) on CP/M in 64K of RAM. Also not an AI.

This article was originally written and published by me in November 2018 on my now-defunct blog. With the renewed AI hype of the last few weeks, it seems an appropriate time to republish my thoughts.

It also makes sense to define what I believe an AI "is". For me, it is any system that can efficiently simulate human cognition, thereby helping to explain how intelligence emerges from the brain.

This is not the problem that most current supposed AIs attempt to solve.


I've recently been dipping into Brainchildren: Essays on Designing Minds, by the philosopher Daniel C. Dennett. The essays in the book were written between the mid-1980s and 1998, and a whole section is dedicated to artificial intelligence, hence my interest. It's instructive to look at this topic from a philosophical rather than a purely technological perspective. It certainly makes a pleasant change from being constantly bombarded with the frenzied marketing half-truths of the last couple of years. I mean you, shouty Microsoft man.

My conclusion from reading Brainchildren is that many of the problems with AI, known in the 80s, have not been addressed. They've simply been masked by the rapidly increasing computing power (and decreasing costs) of the last three decades. Furthermore, the problems that beset AI are unlikely to be resolved in the near future without a fundamental shift in architectural approaches.

Exploding Robots - The Frame Problem

One such hard problem for AI is known as the frame problem. How do you get a computer program (controlling a robot, for example) to represent its world efficiently and to plan and execute its actions appropriately?

Dennett imagines a robot with a single task - to fend for itself. The robot is told that the spare battery it relies on is in a room with a bomb in it. It quickly decides to pull the cart its battery sits on out of the room. The robot acts and is destroyed, as the bomb is also on the cart; it failed to realise a crucial side effect of its planned action.

A rebuilt (and slightly dented) robot is programmed with the requirement to consider all potential side effects of its actions. It is set the same task and decides to pull the cart out of the room. However, it then spends so much time evaluating all of the possible implications of this act - Will it change the colour of the walls? What if the cart's wheels need to rotate more times than it has wheels? - that the bomb explodes before it has had time to do anything.

The third version of the robot is designed to ignore irrelevant side effects. It is set the same task, decides on the same plan, but then appears to freeze. The robot is so busy ignoring all of the millions of irrelevant side effects that it fails to find the important one before the bomb explodes.
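To make the combinatorial trap concrete, here's a minimal Python sketch of the second and third robots. Everything in it is invented for illustration - the fact names, the numbers, the one-second fuse; Dennett's essay contains no code - but the shape of the failure is the same: enumerating side effects to consider, and enumerating side effects to ignore, both explode exponentially while the bomb ticks.

```python
import time
from itertools import combinations

# Toy stand-in for Dennett's thought experiment (names and numbers are
# invented for illustration). The world is just N boolean facts, and the
# planner must work out which of them "pull the cart" might change.
N_FACTS = 30
FACTS = [f"fact_{i}" for i in range(N_FACTS)]
DEADLINE_SECS = 1.0  # the bomb's fuse

def robot_v2(facts):
    """Considers *all* potential side effects: every subset of facts is
    a candidate consequence of the action. 2**30 subsets means the fuse
    runs out long before the loop does."""
    deadline = time.monotonic() + DEADLINE_SECS
    considered = 0
    for r in range(len(facts) + 1):
        for _combo in combinations(facts, r):
            considered += 1
            if time.monotonic() > deadline:
                return f"BOOM (evaluated {considered:,} candidate side effects)"
    return "plan executed"

def robot_v3(facts):
    """Ignores irrelevant side effects - but only by examining each
    candidate and explicitly tagging it as irrelevant, which is just the
    same enumeration wearing a different hat."""
    deadline = time.monotonic() + DEADLINE_SECS
    ignored = 0
    for r in range(len(facts) + 1):
        for combo in combinations(facts, r):
            if "bomb_on_cart" not in combo:  # never true for our toy facts
                ignored += 1                 # busily ignoring...
            if time.monotonic() > deadline:
                return f"BOOM (busy ignoring {ignored:,} irrelevancies)"
    return "plan executed"

print(robot_v2(FACTS))
print(robot_v3(FACTS))
```

Adding facts makes it worse at a vicious rate: each new fact doubles the number of subsets, so faster hardware only nudges the deadline rather than removing it.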

AI is impossible to deliver using 20th-century (or current) technologies

Dennett concludes that an artificially intelligent program needs to be capable of ignoring most of what it knows or can deduce. As the robot thought experiments show, this can't be achieved by exhaustively ruling out possibilities - in other words, not by the brute-force, data-churning algorithms commonly used by chess-playing and chatbot programs, or presumably by this fascinating system used in the NHS for identifying the extent of cancer tumours. Something far more elegant is required before we truly have an AI that can properly simulate human cognition and help us to explain how it operates.

The hardest problem for an AI isn't finding enough data about its world. It's making good decisions - efficiently - about the 99% of data held that isn't relevant.

Human brains perform this filtering task incredibly well, using a fraction of the computing power available to your average mobile phone. Artificial "brains", unless ridiculously constrained, simply don't perform with anything like the flexibility required.

My belief is that one problem lies with the underlying computing architectures used for current "AI" systems. These architectures - essentially the von Neumann model - have been fundamentally unchanged since the 1940s. An entirely new approach to system architecture (hardware and software) is required, as the computational paradigm is unsuitable for the task of faithfully simulating human cognition.

Steve Ponting

Regional Director passionate about: Transformational Leadership | Personal, Professional, and Business Growth | Operational Excellence | Customer Centricity | Inspiring People-first Cultures

1y

What a great piece Tim. You’ve captured the challenges of solving the general AI problem really well. Although I do feel sorry for the crash test robot!
