Human AI, all too human

A question-and-answer session with Federico Cabitza, professor of Human-AI Interaction at the University of Milano-Bicocca.

Reading time: 10 minutes.

For any comment or feedback, please contact me at federico.cabitza [at] unimib.it

Q) The spread of non-human intelligent systems has given rise to plausible and predictable fears about 'our place in the world'. Beyond the technologies involved, what are the new elements (if any) of Luddism 4.0 compared to those that characterized the original phenomenon?

A) I appreciate the term 'Luddism 4.0' and see how it could spread to indicate resistance or skepticism towards the automation of work through emerging technologies such as cloud computing, AI, IoT and robotics.

Although it is difficult to draw direct parallels with historical Luddism, I find a point of contact in the common goal of social justice. The original Luddites were not simply protesting against technology, but against a system that saw their wages fall while factory profits rose.

Today, we might see a 'Luddism 4.0' driven by similar concerns about the fair distribution of the value generated by new technologies. In drawing this tenuous parallel, however, I want to stress how important it is, in a historical context where social sectors and socio-technical systems are tightly interconnected, to pursue a balanced narrative, even in the metaphors we choose, so that we are not distracted from the overriding issues of human, social, economic and environmental sustainability raised by the automation and digitization of work.

Q) The discourse on the ethical use of computational machines is gaining more and more space. What solutions are being adopted? And which countries are the most advanced on this front?

A) I find that there is too much discussion of ethics, when it should simply be taken for granted that people are also, and above all, motivated by ethical considerations (call me Candide!). Joking aside, I find it dangerous to pit ethics and innovation against each other, as if they were incompatible. In my book (Intelligenza Artificiale, with Prof. Luciano Floridi, Bompiani, Milano, 2021), I contested the idea of embedding ethical evaluations into machines, an approach often called 'algorethics'. Although algorethics is feasible to a certain extent, I do not see why we should delegate a type of human evaluation to machines.

I would like us to start discussing deontology, i.e. the professional code of software analysts and AI designers. We do not have the same attitude towards these professionals as we do towards doctors or engineers, from whom we expect a professional code and constant supervision by representative bodies. The ACM has developed a professional code of ethics that could foster awareness of, and sensitivity to, such issues in future developers.

One example I cited in my book concerns researchers who developed a system to 'detect sexual orientation from facial images'. They should have asked themselves "What could possibly go wrong?" and considered the potential abuses of their system, which has in fact been likened by some to a 'gaydar'.

I would like the discourse of 'machine ethics' to be countered by putting the ethics of machine builders in the spotlight. Algorethics, as currently proposed, looks more like an alibi for humans, and it reflects the idea that machines can be better than us.

Q) By the way, does ethics end up slowing technological development?

A) Having addressed the ethical discourse, which is particularly close to my heart in my role as a teacher and trainer, I can respond more directly to your previous question. I believe that we are making progress on the ethical path of developing and adopting Artificial Intelligence at the regulatory level. I am glad to see that the European Union is becoming a potential model for other international actors such as the United States and the Russian-Chinese bloc. I am looking at an agenda that I believe is cutting edge, but which remains firmly rooted in the inalienable principles that European citizens and rulers have considered fundamental for centuries, principles that form the basis of our democratic societies.

However, I do not think we are doing enough on a cultural level, both in terms of information and training, for our secondary school and university students, market operators and citizens. We should focus on better information for citizens and increase funding for good training for our students, with up-to-date and verified introduction programs in computational thinking.

In this direction, I am happy to participate, in my small role as a lecturer, in the creation of two new degree courses jointly organized by the universities of Pavia, Milan and Milano-Bicocca, for both the sciences and the humanities. However, we can do much more, considering that less than a quarter of Italian citizens are university graduates. Human capital is a country's most important resource and even this can be subject to erosion and degradation.

Q) In order to understand the revolution, including the existential revolution, dictated by artificial intelligence systems, a thinker such as Hans Jonas, father of the 'Responsibility Principle', was brought into the debate. What are the most relevant concepts that can be borrowed from his thought today?

A) This question gives me the opportunity to introduce a well-known distinction among those who reflect on ethics and its application to science, art and the professions: the distinction between virtue ethics, value ethics, deontology and consequentialist ethics.

Value ethics emphasizes values instead of rules or consequences. Virtue ethics emphasizes virtues or moral character, in contrast to the approach that focuses on duties or rules (deontology) or emphasizes the consequences of actions (consequentialism).

Consequentialist ethics holds that an action is right only if it produces the best possible result (obviously, the effects must also be evaluated in some inter-subjective, or more or less objective, way). It rests on two principles: the rightness or wrongness of an action depends only on its results, and the more positive consequences an action produces, the better or more right that action is.

Jonas' principle of responsibility starts from the consequentialist approach to ethics to state that we must consider the impact of our actions not only in the short and medium term, but also in the long term, thinking of future generations and the environment. In short, the principle of responsibility argues that we must act responsibly, considering the long-term consequences of our actions and adopting a prudent and conservative approach.

I wonder how much clearer and more effective the ethical message around technology development could be if, instead of devoting so much energy to defining underlying ethical values and comparing approaches based on these principles or so-called 'values', we adopted the more concrete and pragmatic approach of consequentialism and the theory of responsibility.

Q) Technology is commonly regarded as a tool at our disposal. Twentieth-century philosophy and philosophical anthropology, from Heidegger to Plessner or, more recently, Sloterdijk, has instead highlighted how technology is itself at the origin of humanization processes. What do you think about this? Can we consider artificial intelligence as a further stage in this process of humanization?

A) I like this question very much because I too feel close to the thinking of the philosophers mentioned, whom I consider participants in the great philosophical stream made possible by the courage and acumen of Friedrich Nietzsche, the philosopher I felt closest to during my formative years and who continues to be a constant point of reference for me today.

First of all, we need to understand what the process of humanization means: it is not only the anatomical and cultural evolution of human beings unfolding over the millennia, distinguishing us more and more from our ancestors and from the species phylogenetically closest to us; humanization is also how our conception and understanding of what it means to 'be human' changes over time.

In this sense, it is absolutely true that Artificial Intelligence, being the latest way in which human beings extend their intelligence and capacity to act in the world, represents a phase of humanization. This means that we express our tendency to surpass ourselves, to create new ways of dealing with the world or to transform it according to our needs, and thus to redefine our own nature through our technologies.

Whenever we reflect on technology, and in particular the most powerful technologies at our disposal such as genetic engineering, nuclear energy, information and communication technologies, artificial intelligence and robotics, we should ask ourselves how these affect our relationship with the natural world and what the potential dangers of irrational, unsustainable or unethical use of technology might be.

With the development and integration of these technologies, including AI, into our societies and lives, it is increasingly urgent to ask these questions because the answers we formulate will shape the future trajectory of our humanization process, both for better and for worse.

In this debate, Francesco Varanini, an expert in business culture from an anthropological perspective, and I introduced the concept of the 'cybork', i.e. 'cyborg work', arguing that the human being does not exist as a "naked animal", deprived of tools and knowledge, nor even as an isolated individual. We are always members of teams, groups or communities whose purpose is realized in making, creating, producing and continuously transforming the environment in which we live, to turn it into 'our' world, our "work" (cf. the Italian "opera").

Therefore, any narrative that casts AI as a helper or an adversary set against us is incorrect: AI is like an axe or a stick, with which human beings transform their world or tame themselves and the other inhabitants of this planet.

Q) You emphasize the 'non-neutral' character of artefacts and thus also of artificial intelligence systems. What do you mean by this? And what does such non-neutrality imply?

A) What I am arguing (like many others before me) is that there are no neutral technologies. Every technology offers humans a range of possibilities and uses, which also include applications that are potentially harmful to others or to the environment. Those who argue for technology neutrality often refer to tools such as a knife, which can be used both to cut bread and to harm someone, or a gun, which can be used for both crime and self-defense. This view can be extended to any weapon and, indeed, to Artificial Intelligence as well.

In this perspective, each tool brings with it a wide range of possible uses and, consequently, facilitates both beneficial and potentially harmful applications. The crucial point then becomes the choice: are we willing to take the risk of suffering the effects of the negative uses in order to have the opportunity to benefit from the positive uses? Or, in some cases, do we feel that the risk is too high and therefore unacceptable?

This is the same reasoning that led the European Parliament to ban certain applications of AI, such as the use of biometric recognition in real-time and in public contexts. Although these technologies can in some cases help law enforcement to prevent crimes or bring perpetrators to justice, it was felt that they could also be used in ways incompatible with fundamental human rights. Therefore, despite the potential benefits, the risk was considered too high to justify their use.

Q) According to some observers (I am thinking of Luciano Floridi, with whom you recently wrote a book), information technology does not merely describe the world, but transforms it. Digitalisation is not just a technological or epistemological phenomenon, but one with ontological impacts. What do you think?

A) I fully share Floridi's opinion and would like to add a further point of reflection: information technology, including Artificial Intelligence, is an extension of normativity 'by other means'. Like the law, it imposes and establishes certain relations of power and control.

Mireille Hildebrandt has observed, and I agree with her, that many design decisions and choices condition and limit the choices and actions that we, as users of these systems, can make. The fundamental difference between law and computer systems lies in the fact that law is enacted by a democratic legislature through written legal rules that anyone can read and challenge. In contrast, computer systems are often created behind closed doors and their code is not accessible to those interested in them.

Therefore, it is true that information technology does not merely describe the world but transforms it. This also means that it has an impact on the social world: it is an intervention in it, designed and promoted so as to create and strengthen specific relationships of influence, control and power. In doing so, it can weaken and conceal other relationships and ways of being, imposing a worldview that favors a status quo desired by some and opposed by others.

In this perspective, Artificial Intelligence is simply the most powerful of our control technologies. It is the invisible machine glue with which human beings “domesticate” themselves and their fellow humans, making us not only more informed, more connected and more capable, but also more docile, less rebellious and ultimately better citizens and peaceful members of the complex societies we are building.

Q) What is your favourite book and why? And what reading would you recommend on new technologies?

A) Although in the last two or three years I have looked at dozens of books on AI and the changing times we are going through, few have really impressed me. Many of them, in my opinion, lose sight of the concrete aspect of these technologies and their potential. If I have to choose a title, I would opt for Kate Crawford's "Atlas of AI". This is a clear, convincing and courageous book, based on rigorous worldwide research into the human work behind the development of artificial intelligence. This work is not limited to research and development, i.e. writing code, but involves extractive and industrial processes with great impact on our world, often at the expense of individuals in vulnerable and poor conditions.

It is not the first, nor will it be the last, book that explores the human and material aspect of new technologies, the only dimensions that should really interest us. However, Crawford has hit the nail on the head at the right time and in the right way, in a debate that is in danger of being monopolized by those who would rather talk about machine ethics and distant existential risks than address more tangible and immediate issues.

Kate Crawford reminds us that AI is neither 'intelligent', in the cognitive sense of the term, nor 'artificial', in the sense of being different from any other human, technological, artistic or cultural expression. Going back to Nietzsche, we can conclude that AI is 'human, all too human'. Through AI and its innumerable manifestations, human beings influence, direct and control other human beings: it is just the latest 'human use of human beings'.

Milan, 24 August 2023



I am grateful to Dr. Cristian Fuschetto, Head of Editorial Co-ordination at Infosfera, for conceiving the original questions and giving me the opportunity to share this interview. The original version (longer and in Italian) was published in the Infosfera journal, issue no. 1/23, available online at https://www.edih-pride.eu/download/2304/?tmstv=1690814450&v=2305.

Opening illustration created by MS Bing

This version is shared under the Creative Commons license BY-NC.
