Why we should (not) fear AI
Yannick MENECEUR
Personal account | Digital law and regulation | Magistrate | Associate lecturer | Doctoral candidate
The opinions expressed herein are solely those of the author and do not reflect any official position of the Council of Europe.
Draft translation of the article published in French on LinkedIn, "Pourquoi nous devrions (ne pas) craindre l'IA". Comments and improvements welcome.
In announcements, debates, conferences, articles, reports and books, artificial intelligence (AI) is constantly presented as the breakthrough technology of the decade, perhaps even of the century. Unlimited opportunities seem to be opening up for humanity, provided we can prevent the risks in a context of ever more widespread use. The European Commission's white paper on AI, published on 19 February 2020, takes this observation on board and sets out the main policy options for framing this technology in the years to come.
Yet fantasies continue to flourish in the countless speeches devoted to this technology, and the formidable technical complexity animating discussions between experts fills the public space with representations into which everyone projects a little of their own subjectivity and worldview. Having rendered the term "big data" old-fashioned, and being sometimes confused with computer science, digital technology or even the Internet (or blockchains!), the term "AI" (used here in quotation marks as a substitute for the more appropriate phrase "applications of artificial intelligence") has become, at the beginning of this century, the reference word for technology, carrying with it a fairly wide variety of concerns.
"AI", a plastic term that has become synonymous with progress
It must be said that this rhetorical coup de force by John McCarthy and Marvin Minsky, forged in 1955-1956, continues to unfold with great vigour because of its plasticity, not to say its imprecision. The more one takes an interest in the subject, the more one learns to keep a distance from the term "AI"... and the more incomprehensible one becomes to a wide audience: who cares about the respective benefits and limitations of deep neural networks, support vector machines, Bayesian networks, decision trees and expert systems, apart from the technicians of these subjects? We would therefore be living in a genuine era of approximation, in which the precision of terms and the actual scope of the technologies matter less than what we hope to make of them, if only to feel perpetually in step with our times.
More disturbing still, the benefits of using these sets of technologies, in the most intimate aspects of our daily lives, are no longer even questioned, and no one bothers to argue for their virtues. "AI" has become progress, and progress is not questioned. In this sense, Antoinette Rouvroy notes a certain impoverishment of public debate and reminds us that "algorithmic governmentality, although sometimes supported by speeches resurrecting the idea of progress, no longer presents itself as an alternative to other forms of government so much as their inescapable destiny" (A. Rouvroy, "Adopt AI, Think Later - La méthode Coué au secours de l'intelligence artificielle", 2020).
Whether we consider ourselves "for" or "against" (or even above these debates), we should first of all shake off the collective stupefaction that has seized us. It seems urgent to revitalize the debates on the use of this "AI" by returning to very simple foundations, in order to begin organizing a critical, constructive and ambitious way of thinking, capable of effectively protecting individuals and society from a certain form of scientific and mercantile drift that paralyzes democratic debate and societal choices.
What is "AI"?
According to the definition given by the Commission for the Enrichment of the French Language, artificial intelligence is the "interdisciplinary theoretical and practical field whose purpose is to understand the mechanisms of cognition and reflection, and their imitation by a hardware and software device, for the purpose of assisting or substituting human activities". We thus find ourselves in the field of cognitive science, at its intersection with computer science, whose general ambition to automate tasks can easily be confused with the specific ambition of imitating the functioning of the human brain in order to achieve that automation. To put it another way, "AI" is a particular form of application of computer science, whose technological reality has evolved with the trend of the moment: a descriptive, symbolic approach in the 1970s and 1980s, in which meaningful logical rules were written by hand, and a connectionist approach today, in which the machine is left to "discover" correlations between phenomena translated into data (this is what we mean when we say it "learns").
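The contrast between the two approaches can be sketched in a few lines of code. This is a purely illustrative toy, not drawn from the article: the loan-review task, the sample data and all function names are hypothetical, and the "learning" step is a minimal stand-in for statistical machine learning.

```python
# Symbolic approach (1970s-80s style): a human expert writes an
# explicit, meaningful logical rule that can be read and debated.
def symbolic_flag(income, debt):
    """Flag an application when the debt-to-income ratio exceeds 0.5."""
    return debt / income > 0.5

# Connectionist/statistical approach (today): the rule is not written
# but "discovered" from past cases translated into data. Here, a
# minimal stand-in: learn the threshold that best separates the
# labelled examples.
def learn_threshold(cases):
    """cases: list of (debt_to_income_ratio, was_problematic) pairs."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(r for r, _ in cases):
        acc = sum((r > t) == bad for r, bad in cases) / len(cases)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical historical data: (ratio, outcome).
history = [(0.2, False), (0.3, False), (0.6, True), (0.8, True)]
t = learn_threshold(history)

def learned_flag(income, debt):
    """Same decision, but the threshold came from data, not an expert."""
    return debt / income > t
```

The point of the contrast: in the first function the criterion is legible and contestable; in the second, the criterion is whatever the data happened to yield, which is precisely why the data feeding such systems belongs in the scope of regulation.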
What should interest us far more, in terms of regulation, are the questions posed by complex systems of algorithms in general, rather than any specific technology, and for one application in particular: automated decision-making without human intervention. Moreover, the data feeding these systems should be systematically included within the scope of reflection. Unlike the fuel that powers a car's combustion engine, data plays an increasingly structural role in its algorithmic engine, particularly with machine learning. What we should fear, then, is not an empowered machine turning on its designer as in a bad science fiction movie, but rather an overconfidence in the power of these technologies to make, in all circumstances, better decisions than we do.
Going beyond the simple benefit/risk balance
While debates most often settle for a speculative balance between probable benefits and risks, this simplistic approach fails to question the ability of these machines to handle efficiently the concepts on which they operate. The use of objective, quantifiable data in computing leads to far more robust results than the use of subjective data (which requires interpretation to be transformed into data) or qualitative data. Inferring from AlphaGo's success a revolutionary potential in every field of human activity is therefore an over-optimism to be avoided. Much more restrictive frameworks should be imposed on systems handling potentially hazardous concepts, or concepts without a serious scientific basis.
The second issue that is often avoided is that of the large intellectual debt accumulating as we pile up complex systems whose reasoning can no longer be reconstructed. Asserting, as Yann LeCun does, that explainability is unimportant if one can prove that the model works as it is supposed to, in fact encourages the abandonment of any ambition to build solid scientific theories. It also puts results before knowledge and favours short-term objectives to the detriment of the longer-term investment that alone can lay foundations solid enough to take us beyond the fragile mechanisms of machine learning.
The third and final issue that is downplayed by focusing on the balance between the benefits and dangers of "AI" is the fundamental question of what kind of society we really want to live in. To hear most people talk about digital technology and "AI", data is now the main source of future economic development. The discourses claim to be human-centred and concerned with respect for fundamental rights, but in reality they rest on a digital mystique (Rouvroy, 2020) in which all problems seem solvable, directly or indirectly, by this means. "AI" has thus become an objective in itself - and not a simple tool for achieving an objective - and will contribute, if we are not careful, to further weakening our democratic institutions, which are already heavily discredited. By promoting a digital environment that automates decision-making in order to expunge the biases of human operators, we are in fact helping to undermine the foundations of a society based on deliberation and the Rule of law, in favour of a "State of algorithms" that mathematizes social relations.
The ethical overflow and the need for a legal response
Faced with these "AI" issues, an extensive and dense ethical response has been developed since the mid-2010s. According to the European Union Agency for Fundamental Rights, more than 260 non-binding documents, texts and charters had been produced worldwide by December 2019. Strongly inspired by bioethics, the resulting principles fall into a few now well-identified categories such as transparency, justice and impartiality, beneficence and non-maleficence, autonomy, responsibility, respect for privacy, robustness and security. Without entering the debates on the subjectivity of ethics, it should simply be noted that this intense production has served the digital industry to shift the discourse from the necessary regulation of "AI" into a more flexible and less constraining field. Without sanctions, ethics is in fact a convenient instrument of self-regulation whose benefits should of course not be minimized, but whose scope remains above all declaratory.
The other weakness of "AI" ethics is perfectly revealed by meta-analyses of the existing frameworks (see for example A. Jobin, M. Ienca, E. Vayena, "The global landscape of AI ethics guidelines", Nature Machine Intelligence, 2019). This ethics is far from unambiguous, and many of its principles are polysemic, with no interpretative institution to ensure consistency (as the courts do when interpreting rules of law). Many public organisations, both national and international, have published and will likely continue to publish non-binding texts, thereby helping to stabilise the debates; but here again we remain far removed from binding standards accompanied by rigorous monitoring mechanisms and sanctions in the event of non-compliance.
It is in this respect that the mandate of the Council of Europe's Ad Hoc Committee on Artificial Intelligence (CAHAI) is original and constitutes, to date, the best opportunity to establish a legal framework for the application of this technology that respects the fundamental values of our societies: human rights, the Rule of law and democracy. It should be recalled that enacting legal standards in this area falls fully within the mandate of this international organisation, which has already made its mark with Convention 108 on data protection in 1981 - the "grandmother" of the GDPR - and the Budapest Convention on Cybercrime in 2001. This experience legitimises the Council of Europe's intervention, in co-ordination with the European Union, the OECD and the United Nations (including UNESCO), to establish high-level, cross-cutting and non-specialised legal bases, on which specific sectoral texts can then be drawn up, with a level of constraint (both ex ante and ex post) commensurate with the foreseeable impact on individuals and society.
Only a binding legal response can give sufficient substance to the discourse on the human being, create trust, and thus deflect the criticism that technology is being laundered through ethics. Nor should we forget to include the question of the environmental impact of digital technology, which will also be one of the major issues of our time.
These arguments are developed at greater length in the book to be published in May 2020 (in French): L'intelligence artificielle en procès, coll. Macro law - Micro law, Bruylant.