“Artificial Intelligence” is dangerous because it is neither intelligent nor artificial[1]

ANALYSIS

Hector Casanueva[2]

The term “Artificial Intelligence (AI)” has become popular, moving from scientific and academic circles into everyday language. It is now so firmly established that I do not believe it can be changed. Still, I want to make some clarifications and draw attention to the nature of this tool, which we almost thoughtlessly assume to be intelligent and artificial when it is neither. In my opinion, that is precisely what makes it more dangerous: what is dangerous about AI is that it is neither intelligent nor artificial.

The name has been useful and easy to grasp as a label for a tool, an advanced computing system, based on “science and engineering that makes machines act intelligently,” in the classic definition of the man who introduced the term sixty-seven years ago, the American scientist and mathematician John McCarthy, considered one of the field's founding fathers. In 1956, a group of scientists and experts met at Dartmouth College, in New Hampshire, to explore the possibility of creating machines that could exhibit “human-like” intelligent behavior.

It must be clear that this is a tool for large-scale task automation and data processing, usable in the most varied areas of our lives. It consists of a set of techniques and technologies designed to perform specific tasks with greater speed, precision and scope than a human being could, but it is limited by its programming and by the data with which it works and is trained. It certainly produces astonishing results and facilitates decision making, scientific research, data science, medicine, law, art, access to information, transportation in autonomous aerial or ground vehicles, construction, agriculture, virtual assistants, collaborative robots, and many more current and forthcoming applications. It is increasingly used in education, artistic creation and academia. With the development of generative AI, whose most popular expression is ChatGPT in its various versions, its impact has quickly become massive. But so has the proliferation of dangerous fake news in audio, text and images. Its questionable application to the development of sophisticated “autonomous” nuclear weapons is one of the greatest concerns this instrument generates.

Why is it neither intelligent nor artificial?

According to the generally accepted definition, it is the science and engineering of “making” machines (read: computers) act “intelligently”; that is, of giving them “human-like” intelligent behavior. Clearly, then, it is not the same as human intelligence. Let us see why:

First, if we stick to what the scientific community and philosophical thought consider the attributes of intelligence, we can see that the fundamental attributes that distinguish human beings are not present in the computing tool we call “Artificial Intelligence.” Consider some of the main characteristic attributes of human intelligence: self-awareness, that is, understanding oneself and being conscious of oneself and one's relationship with the world; the ability to make informed judgments about real situations; the solving of problems of varying degrees of complexity; logical thinking; the capacity to learn, memorize, accumulate and process knowledge and experiences; adaptability; the capacity for abstraction; the development of language as an expression of one's own ideas; the creativity to generate original and innovative ideas; emotionality; and social skills.

Some of these attributes may be present in AI, such as complex problem solving; the capacity to learn, memorize, accumulate and process knowledge (but not experiences); and the development of language (but not as an expression of its own ideas). However, it lacks, and cannot acquire, the fundamental ones: self-awareness, the capacity for abstraction, the generation of original ideas, emotionality and social skills.

Second, we cannot say that it is an “artificial” intelligence, since everything it generates is the product of deliberate human action aimed at certain results. AI may appear intelligent in performing certain functions, even with greater speed, precision and scope than a human being, but it lacks the deep understanding, awareness and creativity of human intelligence, since everything it generates comes from, and depends on, the programming, the data, the software it is fed and the hardware that houses it. Even if software and hardware capabilities (quantum computing, for example) have in themselves the potential to drive continuous, progressive development of the tool (“learning from data”), it will always depend on programming and data supplied by humans.

Why then do we consider AI dangerous and potentially threatening to humanity?

First, because certain functions may be handed over to AI to be carried out automatically and autonomously, for example its application to the use of nuclear weapons under predetermined circumstances, as in the much-questioned introduction of LAWS (lethal autonomous weapons): weaponry that, lacking discernment, reasoning, emotionality and intuition, could not make a decision different from the prefigured one if circumstances changed at the last moment. It is worth recalling here the case of Stanislav Petrov, the Soviet officer who averted a nuclear war with the United States in 1983 by sensing that the data the Soviet nuclear early-warning systems were providing, about missiles supposedly launched by the United States against Soviet territory, was erroneous. That information authorized a Soviet response with nuclear missiles against US territory, which would have triggered a catastrophe. Petrov doubted the information, relying on his intuition and logical reasoning, and decided not to report it until he had reconfirmed the data, which in the end averted the automatic Soviet response. It is known as the case of “the man who saved the world.” With an AI-driven LAWS system, that nuclear catastrophe would have occurred.

The same could happen in robotic surgery, disaster alerts, data processing, city administration and elsewhere, if decisions are handed over to machines that are not intelligent, without human control and intervention.

Second, because the systems and data with which the tool works are the product of human decisions, of human intentionality, with their whole load of judgments and prejudices, even when these are not introduced deliberately. This has already been demonstrated, for example, in facial recognition systems used for crime detection, which fail in up to 95% of cases, or in the application of selection tests. The machine acts with the same ethical parameters, prejudices, animosities and discrimination present in the information provided by its programmers. As Roser Martínez and Joaquín Rodríguez, of the Autonomous University of Barcelona, point out (“The dark side of artificial intelligence,” Idees magazine, May 2020, Centre for the Study of Contemporary Issues of the Generalitat of Catalonia), there is “a myth that machines can adopt ethical-moral behaviors if they are correctly codified. But it is evident that a machine cannot have ethics, morals or intuition of its own. In any case, it may have the ethics of whoever coded it. It will be a simulation of the programmer's ethics, a replica of the engineer's, or a combination of the data found in the cloud.”

Where to put the focus of governance and regulation?

Considering the risks and threats, but also the positive potential of this tool, it is difficult to get right the type of governance and the degree of regulation that, in any case, clearly needs to be undertaken. Experts, academics, scientists and politicians from various organizations, universities and think tanks are engaged in this in-depth analysis. The Millennium Project has just issued a report with ideas and opinions from 55 experts and leaders in the sector; the US Senate is taking up the issue from a regulatory standpoint; the EU is doing the same; and the Security Council and the Secretary-General of the United Nations are encouraging the creation of an agency or a global AI governance system, for which a study group has been convened to outline the initiative so that a decision can be adopted at the 2024 Summit of the Future. From the private sector, the platforms Google, Microsoft and OpenAI, in an attempt at self-regulation, have just announced the creation of the Frontier Model Forum to guide the development of this tool.

There is awareness of the danger posed by a tool of such impact and potential. Yet warnings that it could get out of hand, make decisions on its own and surpass human intelligence are implausible. What can get out of hand and constitute a strategic, even existential, threat is its misleading and perverse programming, along with uses contrary to security and human rights; that is, by human decision. For this reason, we must decide where to put the focus of governance and regulation, being clear about its nature: an automatic system under human power, neither intelligent nor artificial, which should be barred from certain critical and sensitive tasks and whose other applications should be duly regulated. This means that research and development of the system must follow precise, sufficient, binding and supervised parameters, based on an international consensus of the scientific, academic, political and social community within the framework of the United Nations.

THE END


[1] This text is a translation of an article published in Spanish on the InfoBae news portal. Its content is the sole responsibility of the author and does not reflect the opinion of the institutions to which he belongs.

[2] Professor of International Relations and Strategic Foresight. Researcher at the University Institute of Economic and Social Analysis (IAES) and in History and Foresight at the University Institute for Research in Latin American Studies (IELAT) of the University of Alcalá, Spain. Vice President of the Chilean Council for Foresight and Strategy and member of the Planning Committee of the think tank The Millennium Project.


