The Chinese room argument, or Why Artificial Intelligence Doesn't Really Understand Anything
There was an American philosopher, John Searle: he had a squint in one eye and studied speech as a social phenomenon. The 1980s brought a boom of discoveries in the field of artificial intelligence and, like me, John couldn't pass it by and started studying it. The results didn't take long to arrive: his "Chinese Room" thought experiment is still the subject of heated debate in scientific circles. Let's find out where the cat-wife is hiding, and whether John deserves a bowl of rice.
Why did John explode?
John Searle was an exponent of analytic philosophy, which, in short, is when thinking is not just free-floating, but is backed up by rigorous chains of logic, analysis of semantics, and does not run counter to common sense.
Even before the Chinese Room, he was known for his account of the indirect speech act.
You know, when instead of "Give me money," they say, "Could I borrow some from you?"
That is, they use the form of a question instead of a request, while in fact they are not waiting for an answer to the question.
They are waiting for the money. And preferably sent straight to their card, no questions asked.
So, while John was digging into language and into the reasons for humanity's special love of all kinds of manipulation, a number of important advances in the field of Artificial Intelligence arrived in the 1980s.
This wave of discoveries, as is often the case, generated a huge amount of talk, professional and not so professional, in kitchens and at conferences, but all about the same thing:
Are we on the verge of creating that very, scary, yet delightful, artificial intelligence? And will it have consciousness?
Conversations in kitchens did not bother Searle much, but the scientist could not walk quietly past his colleagues' claims:
In 1977, Roger Schank and colleagues (we'll skip the details) developed a program designed to mimic the human ability to understand stories.
It was based on the assumption that if people understood stories, they could answer questions about those stories.
"So, for example, imagine being given the following story: "A man went into a restaurant and ordered a hamburger. When the hamburger was served, it turned out to be burnt, and the man left the restaurant in a rage without paying for the hamburger or leaving a tip." And so if you're asked: "Did the man eat the hamburger?" you will probably answer, "No, he didn't." Likewise, if you are presented with the following story: "A man went into a restaurant and ordered a hamburger; when the hamburger was served, he really liked it; and when he left the restaurant, he gave the waitress a big tip before paying the bill," and will be asked: "Did the man eat his hamburger?" you will apparently answer, "Yes, he did."
John Searle (Minds, Brains, and Programs, 1980)
Schank's program was quite successful at answering such questions, from which a number of fans of strong AI (I mean AGI) drew two conclusions:
1. The program can literally be said to understand the stories and the answers it gives.
2. What the program does explains the human ability to understand stories and answer questions about them.
This is where Johnny blew up:
"It seems to me, however, that Schenk's work in no way supports either of these two assertions, and I will now attempt to show it"
John Searle.
Chinese Room Argument.
So, the experiment:
1. I am locked in a room and given a huge text in Chinese. I don't know Chinese at all; to me it's just a bunch of meaningless squiggles.
2. Then I'm given a second batch of Chinese text, but now with a set of rules (in a language I understand) for correlating this batch with the previous one.
3. Then I'm given a third batch of Chinese text, again with instructions that allow me to correlate elements of the third text with the first two, and also instructions on how to compose a new text in Chinese out of all of this, arranging the characters in a certain order.
The first text in Chinese is called a "script," the second a "story," and the third "questions".
And what I compose in Chinese is "answers".
But I don't know all this, because I still don't know or understand Chinese.
So, starting with the third batch, I begin handing back perfectly readable Chinese texts. And the further it goes, the better, because I learn to match these squiggles faster, and to copy them out to hand back as answers.
For the purity of the experiment, let's add a parallel story: I also receive the same three types of texts in my native language, and I return answers to them as well.
From the outside it will seem that my "answers" to the Chinese "questions" are indistinguishable in quality from those I give out in my native language.
However, in the case of the Chinese "answers," I produce them only by manipulating the order of unknown squiggles, strictly according to the instructions.
That is, I behave like an ordinary computer program: executing an algorithm, performing calculations.
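To make the "ordinary computer program" comparison concrete, here is a minimal sketch in Python. The rule table, the Chinese strings, and the operator() function are all invented for illustration; they stand in for Searle's rulebook, not for Schank's actual program or any real system.

# Toy model of the room: the "operator" matches squiggles to squiggles by
# following a rulebook and never interprets any of them.
RULES = {
    # (fragment of the "story", fragment of the "questions") -> "answer" squiggles
    ("汉堡包烧焦了", "他吃了汉堡包吗"): "没有，他没有吃",
    ("他非常喜欢这个汉堡包", "他吃了汉堡包吗"): "是的，他吃了",
}

def operator(story: str, question: str) -> str:
    """Produce an "answer" by pure pattern matching, with zero understanding."""
    for (story_key, question_key), answer in RULES.items():
        if story_key in story and question_key in question:
            return answer       # copy out the squiggles the rulebook prescribes
    return "我不知道"            # fallback squiggles, equally meaningless to the operator

# From outside, this looks like a fluent reply to a question about the story.
print(operator("一个人点了汉堡包，汉堡包烧焦了，他生气地走了。", "他吃了汉堡包吗？"))

From the outside the output is a sensible Chinese answer; inside there is only string matching, which is exactly the situation the experiment describes.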
For the conclusions of this experiment I will quote John - our styles are very similar:
"And so AGI's claim is that the computer understands stories and, in a sense, explains human understanding. But we can now examine these claims in light of our mental experiment:
1. Regarding the first claim - it seems quite obvious to me that in this example I do not understand a single word in the Chinese stories.
My input/output is indistinguishable from that of a native Chinese speaker, and I can be running any program you like, and yet I understand nothing. On the same grounds, Schank's computer understands nothing about any story: Chinese stories, English stories, whatever. Because in the case of the Chinese stories, the computer is me; and in the cases where the computer is not me, it possesses nothing more than I possessed in the case in which I understood nothing.
2. As to the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions for understanding, since the computer and the program work, and yet there is no understanding."
Johnny-bro
The most observant and ruthless of you have correctly noted that this proof, while logical, is far from exhaustive. In fact, it is risky to call it a proof at all.
However, this example is only meant to show the implausibility of claims about the presence of Understanding in Artificial Intelligence.
Criticisms and commentators
Let me say in advance: this experiment is relevant even now. Especially now. It has been discussed for 43 years, and I believe it will continue to be discussed.
I will name only the main claims and brief comments to them:
1. If we load a machine with all information at once - in all languages - and it can behave indistinguishably from a human - will this mean understanding?
2. If we load such a program into the robot, add computer vision and control - would that be true understanding?
3. If we create a program that not only follows a script, but also fires neurons in the right sequence, mimicking the firing in the brain of a native Chinese speaker - what then?
(If that is what it takes, we are still a long way from any risk of creating AGI.)
4. And if we combine the three objections into one - a robot with a computer brain, with all the synapses, whose behavior perfectly duplicates ours - can it then lay claim to Understanding?!
So far there is only one working example - Man.
What, then, is the difference between us and AI?
Here we need a definition of the word intentionality.
Intentionality is the ability of consciousness to relate to, represent, or express things, properties, and situations in some way.
So the difference is that no manipulation of symbol sequences is intentional in itself. It means nothing by itself.
In fact, it is not even manipulation of symbols, because these symbols do not symbolize anything for the machine/program.
All conversations about Consciousness in Artificial Intelligence rest on that same intentionality - but only on the intentionality of those who actually possess it:
the people who make the requests/prompts, and who receive and interpret the answers. And that is what Consciousness and the capacity for Understanding are all about.
Extra level
If you've made it all the way here, congratulations! We went from the simple to the complex, and for you I will separately describe the purpose of the experiment:
With it, we were able to see that when we put something genuinely intentional into the system - a person - and run the program on top of it, running the program creates no additional intentionality at all!
That is, whatever was Conscious and Human in this machine is all that remains. It does not multiply.
Discussions of this experiment are still going on. But I agree with Searle that the very emergence of such a discussion rather indicates that its initiators are not too well versed in the concept of "information processing". Believing that the human brain does the same thing as a computer in terms of "information processing" is patently false.
After all, a computer answering "2x2" = "4" has no idea what "four" is and whether it means anything at all.
And the reason for this is not the lack of information, but the absence of any interpretation in the sense in which Man does it.
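As a one-line illustration of that point (a toy of my own, not from Searle or Schank):

# The machine maps the token "2x2" to the token "4"; nowhere in the process
# does "four" stand for a quantity - it is just another squiggle to return.
answers = {"2x2": "4"}
print(answers["2x2"])   # prints "4" with no notion of number behind it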
Otherwise we would start attributing Consciousness to any telephone receiver, fire alarm, or, God forbid, a dried-up cookie.
But that is a topic for a new article.