Jacques Attali's vision of a world ruled by AI - Debunked: Understanding What It Can and Cannot Do.

After trying his hand at ChatGPT-4, Jacques Attali shares his vision of the future of AI in his March 30th post entitled "ChatGPT 25", projecting a scenario of "AI taking over the world". Let's see how to respond reasonably to the fears he raises.

First, if I may, I would say the method is questionable. It is a bit as if, "after having tried a Tesla car", he were seized by a spontaneous vision of the impact of autonomous cars over the coming decades. But after all, everyone is free to share their opinion. So that is what I will do in response.

Complete misunderstanding

Let's focus on this extract: "These software programs [i.e., ChatGPT-like systems] will then be able to take initiatives. To begin with, they will be able to write emails in your style, using your mailbox, or that of a private or public leader, to give orders, reveal secrets, order embargoes. By multiplying, they will be able to create indescribable disorders. And even worse, artificial intelligences could join forces to use such applications against humans, or against humanity in general."

"The *only* reason people are hyperventilating about AI risk is the myth of the 'hard take-off': the idea that the minute you turn on a super-intelligent system, humanity is doomed. This is preposterously stupid and based on a *complete* misunderstanding of how everything works." — Yann LeCun

The Wachowskis and James Cameron have exploited these ideas in films like The Matrix and Terminator, but it is perhaps useful to reason a little further.

To begin with, it is essential to keep in mind that AI tools like ChatGPT cannot take initiatives or act autonomously. They are just models (computer programs, to keep it simple) designed to generate text from patterns in their training data. They have no self-awareness, no emotions and no decision-making abilities of their own.
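To make this concrete, here is a deliberately crude sketch of the principle. It is a toy bigram model, nothing like a real LLM (real models are neural networks with billions of parameters), but it illustrates the same kind of mechanism: the program only ever computes a plausible next token from patterns in its training data.

```python
import random
from collections import defaultdict

# A tiny "training corpus". Real LLMs are trained on terabytes of text,
# but the principle is the same: learn which tokens tend to follow which.
corpus = (
    "the model predicts the next word from the previous word "
    "and the model has no goals no plans and no awareness"
).split()

# "Training": record which words follow which in the data.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Produce a continuation one token at a time, by sampling."""
    word, output = start, [start]
    for _ in range(length):
        options = transitions.get(word)
        if not options:                 # nothing was learned after this word
            break
        word = random.choice(options)   # pick a statistically plausible next token
        output.append(word)
    return " ".join(output)

print(generate("the"))
# At no step does this loop form an intention or make a decision in any
# meaningful sense; it only answers "which token tends to come next?"
```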


LLMs are programmed but have no consciousness

Under no circumstances can LLMs act outside the boundaries of their programming. They are unable to take any initiative whatsoever. They therefore cannot "decide" to send emails, perform actions or give orders (let alone harm humanity!) without being given specific instructions.
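The point can even be expressed in a few lines of code. The sketch below is a simplification, with a hypothetical call_language_model function standing in for any LLM API; the commented-out smtplib lines mark the spot where a human developer would have to intervene for anything to happen in the real world.

```python
# The architectural point: a language model returns a string, and a
# string, by itself, does nothing.

def call_language_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call: text in, text out, no side effects."""
    return "Subject: Re: your message\n\nThank you, I will get back to you shortly."

draft = call_language_model("Write a short reply to this email.")
print(draft)

# For an email to actually be sent, a human must deliberately write,
# deploy and run code such as the lines below. The "initiative" lives
# entirely in this human-authored glue code, never in the model:
#
# import smtplib
# with smtplib.SMTP("smtp.example.com") as server:
#     server.sendmail("me@example.com", "you@example.com", draft)
```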

[As far as cybersecurity and private-data protection are concerned, the fact remains that individuals, companies and governments must implement and maintain security measures to prevent accounts and private data from being manipulated or abused, whether by humans (!) or by the algorithms they design.]

In the current state of the technology, and in any projection we can reasonably make, the notion of a "coalition" of artificial intelligences belongs to science fiction. AI systems are designed to perform specific tasks; they do not communicate with one another on their own initiative.


Analogy to autonomous driving

To understand the difference between human intelligence and that of AI tools, Yann LeCun (Meta's Chief AI Scientist) offers an illuminating comparison: "A teenager who has never sat behind a steering wheel can learn to drive in about 20 hours, while the best autonomous driving systems today need millions or billions of pieces of labeled training data and millions of reinforcement learning trials in virtual environments. And even then, they fall short of a human's ability to drive a car reliably."

Let's be reasonable: if machines need dizzying amounts of data and processing just to drive a car, and even then with only middling reliability, how can we imagine them outcompeting humanity? According to Yann LeCun, we should not worry about machines taking over the world, because that would assume computers have human failings, such as greed or the tendency to become violent when threatened. An interesting point, isn't it?


Competence without comprehension

This perhaps explains why laypeople in advanced technology, however erudite in other fields, fall into the trap of overestimating the danger, or the genius, of AI. In these times of generative AI applied to language (ChatGPT and other LLMs), the fascination stems precisely from the appearance that these systems master language.

But it is not so. Melanie Mitchell, Professor at the Santa Fe Institute, explains that "[LLMs] have some of the attributes of language: they're able to spit out convincing paragraphs or dialogues or whatever, but it doesn't seem like they have the understanding of language that humans have. Philosophers have a term for this: it's competence without comprehension."

To go further in this direction, I invite the reader to look up a thought experiment devised by John Searle called "The Chinese Room". This philosopher of language asked, back in the 1980s, whether giving language capabilities to a computer system could give it a mind. In his scenario, he is isolated in a room with a set of instructions for responding to Chinese symbols slid under the door. Despite understanding no Chinese, Searle follows the prescribed procedures for manipulating symbols, much as a computer would. As a result, he produces accurate strings of Chinese characters that are sent back under the door, misleading those outside into believing that a Chinese speaker is present in the room.
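For readers who like code, the intuition behind the Chinese Room fits in a few lines. To be clear, LLMs are statistical models, not lookup tables; this toy rulebook only illustrates Searle's point that formal symbol manipulation can look competent from outside the room.

```python
# Searle's rulebook, reduced to a toy: map incoming symbols to prescribed
# replies. The entries below are invented for illustration.
RULEBOOK = {
    "你好吗?": "我很好,谢谢。",       # "How are you?" -> "Fine, thank you."
    "你会说中文吗?": "会,一点点。",    # "Do you speak Chinese?" -> "Yes, a little."
}

def chinese_room(symbols_under_the_door: str) -> str:
    """Apply the prescribed rules to the incoming symbols.
    Neither this function nor the person executing it needs to know Chinese."""
    return RULEBOOK.get(symbols_under_the_door, "对不起,我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗?"))  # From outside, it looks as if a Chinese speaker replied.
```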

This experiment tends to show that such an artificial intelligence can only be a "weak" artificial intelligence, able merely to simulate consciousness: it cannot possess authentic mental states, consciousness or intentionality.


Conclusion

While we're at it, we might as well push the point to its extreme, as did Peter Doocy, American journalist and White House correspondent for Fox News, when he relayed the words of the Machine Intelligence Research Institute: "The most likely result of a superhumanly intelligent AI is that literally everyone on Earth will die."

This kind of posture, which consists in stating something as truth and putting the interlocutor in the position of having to prove the contrary, is a form of intellectual dishonesty known as "reversing the burden of proof". In law as in science, the proof must be supplied by the one who makes the assertion. The author asserts here that "artificial intelligences will be able to coalesce". What studies, what tests, what research allow him to say so, beyond his few sessions with ChatGPT?

But beyond the form, the content appears to rest on irrational fears, a lack of understanding of the technology and its workings, or a willingness to adopt an alarmist posture for personal reasons.

As far as we are concerned, nothing prevents us from informing ourselves, learning, understanding, and turning to real experts such as Yann LeCun or Melanie Mitchell, to name but two.


I will leave it at that for today, but rest assured:

I'll be back!



AI technologies are becoming more and more common in our professional and even our personal lives. Setting aside the extremes, whether of fear or of expectation, feel free to share a few words about your own vision.

References:

Jacques Attali, "ChatGPT 25" [link]
Melanie Mitchell, "Seemingly 'sentient' AI needs a human in the loop" [link]
Yann LeCun, "Robots won't seek world domination" [link]
Stanford Encyclopedia of Philosophy, "The Chinese Room Argument" [link]
