The Chinese Room Argument, or Why Artificial Intelligence Doesn't Really Understand Anything
[Cover image: "Consciousness" by Matthew Willis]


There once was an American philosopher, John Searle: he had a squint in one eye and studied speech as a social phenomenon. The 1980s saw a boom of discoveries in the field of artificial intelligence, and, like me, John couldn't walk past it and took up the subject. The results were not long in coming: his "Chinese Room" thought experiment is still the subject of heated debate in scientific circles. Let's find out where the catch is hiding, and whether John deserves his bowl of rice.

Why did John explode?

John Searle was an exponent of analytic philosophy, which, in short, means thinking that isn't free-floating but is backed by rigorous chains of logic and semantic analysis, and that doesn't run counter to common sense.

Even before the Chinese Room, he was known for his account of the indirect speech act.

You know, when instead of "Give me money," someone says, "Could you lend me some?"

That is, they use the form of a question instead of a request, while in fact they aren't waiting for an answer to the question.

They are waiting for money. And preferably sent straight to their card, no questions asked.

So, while John was digging into language and the reasons for humanity's special love of manipulation, the 1980s brought a number of important advances in the field of artificial intelligence:

  • The first expert systems appeared, able to model expert knowledge in various fields and use it to make decisions;
  • New neural-network training algorithms were developed, laying the groundwork for the networks that now threaten to take our jobs;
  • The first industrial robots were built, giving a boost to modern robotics;
  • The first computer vision systems emerged, ancestors of the ones that can now find where to buy your favorite mug from a single photo.

Such a wave of discoveries, as often happens, generated an enormous amount of talk, professional and not so professional, in kitchens and at conferences, all about the same thing:

Are we on the verge of creating that scary yet delightful thing, artificial intelligence? And will it have consciousness?

Kitchen conversations didn't bother Searle much, but he couldn't walk quietly past his colleagues' claims:

In 1977, Roger Schank and colleagues (we'll skip the details) developed a program designed to mimic the human ability to understand stories.

It was based on the assumption that if people understand stories, they can answer questions about those stories.

"So, for example, imagine being given the following story: "A man went into a restaurant and ordered a hamburger. When the hamburger was served, it turned out to be burnt, and the man left the restaurant in a rage without paying for the hamburger or leaving a tip." And so if you're asked: "Did the man eat the hamburger?" you will probably answer, "No, he didn't." Likewise, if you are presented with the following story: "A man went into a restaurant and ordered a hamburger; when the hamburger was served, he really liked it; and when he left the restaurant, he gave the waitress a big tip before paying the bill," and will be asked: "Did the man eat his hamburger?" you will apparently answer, "Yes, he did."

John Searle (Minds, Brains, and Programs, 1980)
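For intuition, here is a minimal sketch of how a script-based program in the spirit of Schank's might pull this off (hypothetical Python of my own; the cue table and the default are my inventions, not Schank's actual system). It never reasons about food or anger: it just matches surface cues against a canned "restaurant script" and reads the answer off a slot.

```python
# A toy slice of a "restaurant script": surface cues in a story are
# mapped to a canned outcome for the "did the customer eat?" slot.
CUES = {
    "liked": True,           # satisfied customer -> presumably ate
    "big tip": True,
    "burnt": False,          # angry exit -> presumably didn't eat
    "rage": False,
    "without paying": False,
}

def did_he_eat(story: str) -> str:
    verdict = None
    for cue, ate in CUES.items():
        if cue in story.lower():
            verdict = ate  # in this toy version, the last matching cue wins
    if verdict is None:
        verdict = True  # script default: a restaurant visit ends with a meal
    return "Yes, he did." if verdict else "No, he didn't."

print(did_he_eat("The hamburger was burnt; he left in a rage without paying."))
# -> No, he didn't.
print(did_he_eat("He really liked it and gave the waitress a big tip."))
# -> Yes, he did.
```

All the apparent comprehension lives in whoever wrote the cue table; the program itself is just matching strings, which is exactly the gap Searle is about to exploit.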


Schank's program was quite successful at answering such questions, and from this a number of fans of strong AI (read: AGI) drew the following conclusions:

  • The program can be said to understand the stories and answer the questions;
  • What the program does explains the human ability to understand stories and answer questions about them.


This is where Johnny blew up:

"It seems to me, however, that Schenk's work in no way supports either of these two assertions, and I will now attempt to show it"

John Searle.


The Chinese Room Argument

So, the experiment:

1. I am locked in a room and given a huge text in Chinese. I don't know Chinese, not a single word of it; to me it's just a bunch of meaningless squiggles.

2. Then I'm given a second batch of Chinese text, together with a set of rules (in a language I understand) for correlating this batch with the first.

3. Then I'm given a third batch of Chinese text, again with instructions, this time allowing me to correlate elements of the third text with the first two, plus rules for composing a new text in Chinese out of them by arranging the characters in a certain order.

The first Chinese text is called a "script," the second a "story," and the third "questions."
And what I compose in Chinese is called "answers."
But I know none of this, because I still don't know or understand Chinese.

So, starting with the third batch, I begin to hand back perfectly readable Chinese texts. And the further it goes, the better they get, because I learn to match the squiggles faster, and to copy them out faster when handing them back.

For the purity of the experiment, let's add a parallel condition: I also receive the same three kinds of texts in my native language, and I return answers to those as well.

From the outside, my "answers" to the Chinese "questions" will seem indistinguishable in quality from the ones I give in my native language.

However, in the case of the Chinese "answers," I produce them purely by manipulating the order of unknown squiggles. According to the instructions.

That is, I behave like an ordinary computer program: executing an algorithm, performing computations.
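In program form, the room might look something like this (a deliberately crude sketch of my own; the "rule book" entries are made up). The operator, like a CPU, only looks squiggles up and copies squiggles out:

```python
# The room's "rule book": purely formal rules pairing input squiggles
# with output squiggles. Whoever executes them needs no Chinese at all.
RULE_BOOK = {
    "他吃了汉堡吗？": "没有，他没吃。",    # "Did he eat the hamburger?" -> "No, he didn't."
    "他给小费了吗？": "给了，很大一笔。",  # "Did he leave a tip?" -> "Yes, a big one."
}

def chinese_room(question: str) -> str:
    # Match the incoming batch of symbols and copy out the symbols the
    # rules prescribe. No translation or interpretation happens anywhere.
    return RULE_BOOK.get(question, "请再说一遍。")  # fallback squiggles: "Please say that again."

print(chinese_room("他吃了汉堡吗？"))  # fluent Chinese out, zero understanding inside
```

From the outside the answers look fluent; inside there is nothing but table lookup, which is the whole point of the experiment.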

For the conclusions of this experiment I'll hand over to John himself; our styles are very similar anyway:

"And so strong AI (read: AGI) claims that the computer understands stories and that, in a sense, it explains human understanding. But we can now examine these claims in light of our thought experiment:

1. Regarding the first claim: it seems quite obvious to me that in this example I do not understand a single word of the Chinese stories.

My inputs and outputs are indistinguishable from a native Chinese speaker's, and I can be given any program you like, and yet I understand nothing. On the same grounds, Schank's computer understands nothing of any story whatsoever: Chinese stories, English stories, whatever. Because in the case of the Chinese stories the computer is me; and in the cases where the computer is not me, it possesses nothing more than I possessed in the case in which I understood nothing.

2. As to the second claim, that the program explains human understanding: we can see that the computer and its program do not provide sufficient conditions for understanding, because the computer and the program work, and yet there is no understanding."

Johnny-bro


For the most observant and ruthless among you: yes, you noted correctly that this argument, while logical, is far from exhaustive. In fact, it would be risky to call it a proof at all.

However, this example is only meant to show the implausibility of claims that Artificial Intelligence possesses Understanding.


Criticisms and replies

Let me say up front: this experiment is still relevant now. Especially now. It has been discussed for 43 years, and I believe it will go on being discussed.

I will list only the main objections, each with a brief reply:

1. If we load a machine with all information at once, in every language, and it can behave indistinguishably from a human, will that amount to understanding?

  • No, because the ability to reproduce is not understanding. So if the machine had no understanding before, it has none now.

2. If we load such a program into a robot and add computer vision and motor control, would that be genuine understanding?

  • No, because the robot in this case is no different from the machine in objection #1.

3. What if we create a program that doesn't just follow a script, but also fires neurons in the right sequence, mimicking the activity in the brain of a native Chinese speaker?

  • One has to wonder who would make such a claim, since the whole idea behind creating AGI is that we don't need to know how the brain works in order to know how the mind works.

(Otherwise, we are still a long way from any risk of creating AGI.)

4. And if we combine all three objections into one: a robot with a computer brain, with all the synapses, with perfectly human-like behavior, surely that can lay claim to Understanding?!

  • Yes. Okay. But no one knows how to build it.

So far there is only one working example: Man.


What, then, is the difference between us and AI?

Here we need a definition of intentionality.

Intentionality is the capacity of consciousness to be about, to represent, or to stand for things, properties, and states of affairs.

So the difference is that no manipulation of symbol sequences is intentional in itself. It means nothing.

In fact, it isn't even manipulation of symbols, because for the machine or program these symbols don't symbolize anything.

All the talk about Consciousness in Artificial Intelligence rests on that same intentionality, and it is supplied only by those who actually possess it: the people who make the requests and prompts, and who receive and interpret the answers. That is what Consciousness and the capacity for Understanding are all about.


Extra level

If you've made it all the way here, congratulations! We went from the simple to the complex, and for you I'll spell out the point of the experiment separately:

With it, we were able to see that even if we put something genuinely intentional into a system, running that system's program creates no additional intentionality at all!

That is, whatever was Conscious and Human in this machine stays exactly as it was. It does not multiply.


Discussions of this experiment are still going on. But I agree with Searle that the very emergence of such a discussion rather indicates that its initiators are not too well versed in the concept of "information processing." The belief that the human brain does the same thing as a computer when it comes to "information processing" is patently false.

After all, a computer that answers "2 × 2" with "4" has no idea what "four" is, or whether it means anything at all.
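To see that in code, here is a deliberately dumb sketch (mine, not anyone's real calculator): the machine rewrites the token "2*2" into the token "4" by pure table lookup, and nothing in it refers to the number four.

```python
# Purely syntactic arithmetic: one string of symbols is rewritten into
# another by table lookup. The tokens could be any squiggles at all.
REWRITE = {"2*2": "4", "2+2": "4", "3*3": "9"}

print(REWRITE["2*2"])  # prints "4" without any notion of "four"
```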

And the reason for this is not a lack of information, but the absence of any interpretation in the sense in which Man performs it.

Otherwise we would start attributing Consciousness to any telephone receiver, fire alarm, or, God help us, a dried-up cookie.

But that is a topic for a new article.

#research #ai #agi #artificialintelligence

