Turing's Mistake
Image created by OpenAI's DALL·E

Or, what are we talking about when we talk about Artificial General Intelligence?

First off, let me provide the disclaimer that I regard Alan Turing as a truly remarkable visionary, one whose insights and achievements catapulted us into the digital age. Nothing in what follows is meant to take anything away from that. But I do believe he made an intellectual error, one which has seriously impacted every subsequent conversation about digitally based, aka artificial, intelligence. I believe this is a pot that needs to be stirred, so please bear with me.

Turing’s Insight: From Numeric Computation to Information Processing

Turing, amongst his many accomplishments, had the key insight that numeric computation could be a substrate for information processing. In other words, numbers could be used to represent all sorts of information, and therefore the logical and arithmetic manipulation of numbers, which is what computers do incredibly quickly, could be used to programmatically reason about information. This insight provided the connective tissue between information theory and the digital realm. Operating on numbers might be helpful in calculating a payroll, but reasoning on information can lead to decisions, and that is a huge leap. Based on that, he went further and made the connection between information processing and “intelligence”.
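
To make the contrast concrete, here is a minimal sketch of my own (not Turing's, and with hypothetical record fields and thresholds) in Python. The same numeric machinery that computes a payroll figure can, once the numbers are treated as encoded information about a person's week, support a logical inference that amounts to a decision.

```python
# A hypothetical employee record: to the machine, every field is ultimately
# numbers (even the strings are stored as character codes).
employee = {"name": "Ada", "hours_worked": 52, "hourly_rate": 40.0}

# Numeric computation: arithmetic over numbers yields a payroll figure.
gross_pay = employee["hours_worked"] * employee["hourly_rate"]

# Information processing: the same numbers, read as information about a
# working week, drive a logical inference that leads to a decision.
OVERTIME_THRESHOLD = 40  # hypothetical policy limit
needs_overtime_review = employee["hours_worked"] > OVERTIME_THRESHOLD

print(f"Gross pay: {gross_pay}")                    # 2080.0 -- a calculation
print(f"Flag for review: {needs_overtime_review}")  # True   -- a decision
```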

The Slippery Slope of Defining Intelligence

But here’s where the terrain gets treacherous, because “intelligence” is a slippery term that is almost always used without a rigorous, or even consistent, definition, and is often thrown around more for its rhetorical impact than for its clarity. It’s a catchall term that expands and contracts depending on circumstances, often used as a stand-in for a wide range of cognitive abilities, including problem-solving, learning, memory, adaptability, decision-making, and various forms of reasoning. Just consider the endless debates about what exactly IQ tests measure. It’s so easy to make the link between computation and intelligence, and then slide into the presumption that this is the same as establishing a link between computation and “mind”.

Redefining Intelligence: A Minimalist Approach

To escape this trap, a good, reasonably rigorous, and therefore more minimalist and constrained, definition of “intelligence” is required. I humbly suggest that such a definition is “the ability to process information in a context”. A word on context: information is just data without a setting within which it is imputed to have meaning. That setting is the context. We humans impute meaning to data from our senses in the context of the world we experience through our brains; it’s a pretty broad context. Is it the same as “reality”? That’s another conversation.

For programmatic information processing, the context is embedded in the program design or, in the case of machine learning, the training goals. These are much narrower contexts. For example, there are systems designed to predict failures on a piece of machinery. These systems reason over data streams from sensors on that machine. The context may be a physics-based model of the parts of the machine, or a trained machine learning model. Either way, the data becomes information only in the context of the model, and its possible meanings, its information content, are constrained by the concerns of the model. Our brains have an extraordinarily broad range of concerns, and a correspondingly complex model of this thing we call the universe. The equivalent bit of sense data might produce a dramatically larger information content because of the scale of the context.
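
As a rough illustration of how the context, rather than the data itself, fixes the information content, consider the sketch below. It is my own example rather than a description of any real system; the sensor reading, limits, and baseline statistics are all hypothetical.

```python
# One raw reading from a hypothetical vibration sensor -- by itself, just data.
vibration_mm_per_s = 7.2

# Context 1: a crude physics-based rule, e.g. a generic severity limit for this
# class of machine. In this context the reading means "possible bearing damage".
GENERIC_LIMIT = 4.5  # hypothetical limit
physics_model_alarm = vibration_mm_per_s > GENERIC_LIMIT  # True

# Context 2: a statistical model trained on this particular machine's history.
# In this context the same number means "well within normal behaviour".
BASELINE_MEAN, BASELINE_STD = 6.8, 0.9  # hypothetical learned parameters
z_score = (vibration_mm_per_s - BASELINE_MEAN) / BASELINE_STD
learned_model_alarm = abs(z_score) > 3.0  # False: about 0.44 standard deviations

# Same data, two contexts, two different pieces of information.
print(physics_model_alarm, learned_model_alarm)  # True False
```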

Intelligence in the Natural World

With this definition, “intelligence” is a pretty basic feature of all living things, from the simplest bacteria on up. It might even be the essential definition of life (another longer discussion). It’s a definition of intelligence that is likely to make typical social usage a lot more cumbersome (rigor has a way of doing that), but it has the advantage that it aligns well with the way in which programmatic numeric computation supports information processing. If we accept this definition, then the “general” part of artificial general intelligence really is about expanding the context of digital intelligence from the concerns of, say, my digital watch to the concerns of even the simplest living entity.

Bacteria: Organic Machines vs. Minds

This would be no small achievement. But does a bacterium possess a mind? Or at least, does it possess a mind remotely on the order of a human mind? It certainly doesn’t seem to me that it does. A bacterium is a remarkable organic machine that does impressive things like synthesizing complex proteins and adapting to changing circumstances. But it is not a “thinking” machine, not in any human, or even mammalian, sense of the term. It seems to me that “thinking” goes beyond processing information and responding to it. My cat thinks; my gut bacteria don’t. This gives my cat a far wider adaptive ability than any single member of my gut bacteria. To adapt, my bacterial friends must evolve, which is usually bad for individual members. In other words, thinking, whatever it actually is, provides the individual a level of adaptive capacity that intelligence alone doesn’t, and is categorically different. I’ll return to this a little further on.

Turing and the Equivocation Fallacy

Getting back to Turing. There is, in rhetoric, something called the equivocation fallacy, wherein an ambiguous term is demonstrated to have truth under one meaning or understanding, and is then asserted to have truth under a different understanding of the same term. Politicians do this deliberately all the time; the goal in science is to avoid it. But things can get tricky. And so, when Turing says “I propose to consider the question, Can machines think?”, he is, while certainly no politician, nevertheless guilty of the fallacy of equivocation. Sorry Alan, I mean no disrespect.

The Hard Problem: Intelligence vs. Mind

So, what is the gap between “intelligence” and “mind”? I’m definitely not prepared to answer that. Physicists and philosophers (to the extent they still exist) refer to the “hard problem” of consciousness, and this is certainly related. In the 1970s and early ’80s, I watched Marvin Minsky and his team at MIT struggle to produce a computational theory of mind that bridged this gap. They produced many interesting results (see The Society of Mind for a summation), but ultimately failed to achieve anything practical, and that failure ushered in a prolonged dark period for the world of AI. I can remember a time when one simply didn’t use the term AI in polite technology society without risking scathing contempt.

I can’t say I know definitively why Minsky’s work never yielded much in the way of practical results, but I’d guess at least part of it is that a “mind” isn’t a simple extrapolation from “intelligence”, and while intelligence is relatively easy to model computationally, mind is not. I also think that it is by no means a given that the digital realm by itself is an adequate substrate for mind. Having said that, the extraordinary expansion in computational capacity in the intervening years, coupled with a determined focus on pragmatic “brain-like” techniques like neural networks and knowledge graphs, has returned AI to respectability. But it also has, with equal determination, ignored the question of whether intelligence is the same thing as mind.

The AI Debate: End of the World, or Utopia?

Why does this matter? Much energy today is devoted to the ping-pong over whether AI is the key to the future or to humanity’s doom. For this to not be an enormously destructive distraction, it’s essential to be clear about whether we are talking about, and worrying about, superhumans or digital bacteria. Both have enormous potential for good and harm, but the paths to dealing with them are very different. In the superhuman case, well, call it evolution and hope the new masters of the world like pets. The good news is that, based on everything I’ve said here, I’d argue that this is not even remotely on the table. Not without some very fundamental new discoveries about the stuff that lies beyond intelligence, and maybe not even then.

Developing Symbiotic Cybernetic Entities

However, in the latter case, everything depends on our ability to develop useful cybernetic entities with which we, as humans, can have effective symbiotic relationships. The difference here is between unleashing the equivalent of world-killing bacteria and developing the equivalent of superior gut bacteria that become part of us, and we of them, creating a whole greater than the sum of its parts. This adaptation could be fairly radical, and might in fact be very painful: existing civil structures could be destroyed and things could get seriously ugly. Still, one of these scenarios is coming, and I strongly prefer the symbiotic one over the alternative.

