Emerging Systemic Litigation (ESL) and AI
Marie-Anne Frison-Roche
Director at the Journal of Regulation & Compliance | Economic Law, Procedural Law, Regulation and Compliance Law
In the series of conference-debates exploring Emerging Systemic Litigation (ESL), organised by the Cour d'appel de Paris (Paris Court of Appeal), the Cour de cassation (French Court of Cassation), the Cour d'appel de Versailles (Versailles Court of Appeal), the École nationale de la magistrature (French National School for the Judiciary) and the EFB (Paris Bar School),
under the scientific direction of Marie-Anne Frison-Roche,
following the conference-debate of March 29, 2024 on the very definition of Emerging Systemic Litigation,
see the article reporting on this conference in English: "Importance and specificity of the Emerging Systemic Litigation",
the conference-debate of April 26, 2024 on Vigilance, a new field of Emerging Systemic Litigation,
and the conference-debate of May 27, 2024, which focused on the technical control of risks on platforms and the resulting Systemic Litigation,
see the article reporting on this conference: "Technical risks controls on platforms and disputes arising from them".
The conference-debate of June 24, 2024, held at the Paris Court of Appeal, focused on:
ARTIFICIAL INTELLIGENCE, NEW FIELD OF SYSTEMIC LITIGATION
The speakers were Marie-Anne Frison-Roche, Professor of Law and Director of the Journal of Regulation & Compliance (JoRC); Sonia Cissé, Partner at Linklaters Paris; and Emmanuel Netter, Professor of Law at the Université de Strasbourg (Unistra).
This article reports on the three presentations; the debate with the audience remains a spontaneous exchange between the speakers and the many people who attended this in-person event.
The presentations unfolded in three parts before the debate with the audience began:
In the general presentation on the theme itself, I underlined
THE TWO MEETINGS BETWEEN ARTIFICIAL INTELLIGENCE AND SYSTEMIC LITIGATION
The focus of this conference is not the state of what is usually called Artificial Intelligence, but rather how to correlate AI and "Emerging Systemic Litigation" (ESL).
This involves recalling what "Systemic Litigation" is (1), then looking at the contribution of Artificial Intelligence to dealing with this type of litigation (2), before considering that the algorithmic system itself can be a subject of Systemic Litigation (3).
1. What is the Systemic Litigation that we see Emerging?
On the very notion of "Emerging Systemic Litigation" (ESL), proposed in 2021, read:
M.-A. Frison-Roche, "The Hypothesis of the Category of Systemic Cases Brought Before the Judge", 2021
Emerging Systemic Litigation concerns situations that are brought before the Judge and in which a System is involved. This may involve the banking system, the financial system, the energy system, the digital system, the climate system or the algorithmic system.
In this type of litigation, the interests and future of the system itself are at stake, "in the case". The judge must therefore "take them into consideration".
Fr. Ancel, "Compliance Law, a new guiding principle for the Trial?", in M.-A. Frison-Roche (ed.), Compliance Jurisdictionalisation, 2023
In this respect, "Emerging Systemic Litigation" must be distinguished from "Mass Litigation". "Mass Litigation" refers to a large number of similar disputes. The fact that they are often of "low importance" is not necessarily decisive, since these disputes matter to the people involved, and the use of AI must not erase the specificity of each one. The criterion for Systemic Litigation remains, however, the presence of a system. A mass litigation may call into question the very interest of a system (for example, value-date litigation), but more often the Systemic Litigation we see emerging is, unlike mass litigation, a very specific case: one party formulates a very specific claim (e.g., asking that considerable works be halted) against a multinational company, and thereby "calls into question" an entire value chain and the obligations incumbent on the powerful company to safeguard the climate system. That system is therefore present in the proceedings; this does not entitle it to make claims, but it must be taken into consideration.
2. The contribution of Algorithmic Power in the conduct of a Systemic Litigation
In this respect, AI can be a useful, if not indispensable, tool for mastering such Systemic Litigation, which is new and which is brought before the Ordinary Law Judge for adjudication.
Indeed, this type of litigation is particularly complex and time-consuming, with evidentiary issues at the heart of the case and one expert appraisal following another. Such appraisals are difficult to carry out. AI can therefore be a means for the judge to control the expert dimension of Systemic Litigation, curbing the increased risk that experts capture the judge's decision-making power.
The choice of AI techniques raises the same difficulties as have always applied to experts. Certification mechanisms, analogous to registration on lists of experts, are likely to be put in place, unless the tools are built by the courts themselves (or by the government, which may pose a problem for the independence of the judiciary), or unless control is sought over tools provided by the parties themselves, with regard to the principle of equality of arms, given the cost of these tools.
3. When the Algorithmic System itself is the subject of a Systemic Litigation: its place is then rather on the side of the defence
Moreover, the algorithmic system itself gives rise to Systemic Litigation, in that individuals may bring a case before the courts claiming to have suffered damage as a result of the algorithmic system's operation, or seeking enforcement of a contract drawn up by the system. It is in the realm of the Ordinary Contract and Tort Law that the system may find itself involved in the jurisdictional proceedings.
It is noteworthy that, compared with the hypotheses hitherto favoured in previous conference-debates, notably that of April 26, 2024 on Emerging Systemic Litigation linked to the Duty of Vigilance, the systems involved have so far been taken into consideration behind the claims articulated by the plaintiffs, who allege that a system has been harmed: it is then "civil society" acting against the company. In the case of the algorithmic system, by contrast, the initial litigation is made up of allegations accusing the system of infringing rights (e.g. copyright, the right to privacy, etc.).
However, the proceedings change if the system is no longer presented as the potential "victim" but rather as the potential "culprit". In particular, it is much less clear what type of intervener in the proceedings, who is not necessarily a party to the dispute, should speak to explain the system's interest, particularly with regard to the sustainability and future of the AI system.
This is an area for further consideration by heads of courts.
Following this initial presentation, Sonia Cissé outlined the first observable systemic disputes involving artificial intelligence.
Sonia Cissé outlined litigation already resolved or in progress, particularly in intellectual property (IP), where the information used for machine learning is protected by copyright or where the question arises of possible IP protection for "works" produced by machines.
Firstly, she referred to a dispute decided in China in 2019, in which a court held that an article, even one entirely generated by an AI tool, was protected by copyright for the benefit of the person who published it. She pointed out that, conversely, American and European courts have ruled that intellectual property cannot cover a work in which there is no human authorship, and which is therefore neither a work of the mind nor an invention. She mentioned the 2023 decisions of the District of Columbia court and the Prague Municipal Court. She noted that, beyond Literary and Artistic Property Law, Patent Law does not offer protection either, the UK Supreme Court, for example, having refused to classify an AI-powered machine as an inventor.
She then discussed the mass litigation that could arise from automated decisions in the event of algorithmic error, and in particular the question of the attribution of responsibility (developer, deployer, user?). She noted that the first legal cases, notably in the United States and Australia, placed responsibility for the error on the user.
She went on to show that AI is a tool that courts are using, and will increasingly use, to deal with the mass litigations to which they must respond. This efficiency is leading lawyers, companies and judges to use AI ultimately to decide, which opens up the question of responsibility for the decision, for example the court decision, or for the writings, if a breach is observed, for example in the rules or precedents referred to. Precisely, the European AI Regulation qualifies as "high risk" the design and use of AI for jurisdictional decision-making, including in the organisation of amicable solutions. This opens up the prospect of litigation that is and will be systemic in nature.
Emmanuel Netter then explained the impact of the regulatory texts adopted by the European Union, returning to the very nature of the AI system.
Emmanuel Netter explained the impact of these texts, not only the AI Act but also the GDPR, such that not only the Administrative Regulatory Authorities but also the Ordinary Law Judge are called upon to implement them, notably through liability actions.
The speaker made two preliminary remarks. Firstly, he stressed that commentators see AI either as the solution to everything or as the source of all future disasters.
Secondly, he emphasised that there are two kinds of algorithmic power: one that can be thought of as no more than a tool for deduction based on rules and principles, and another that can be thought of as the power to accumulate particular solutions without any deduction, so that solutions for the next particular case can be found, or even invented ("machine learning", "deep learning").
As for the texts, the speaker began by recalling that the 2016 GDPR, without dealing with artificial intelligence, laid down in Article 22 the principle of the prohibition of "solely" automated processing. In this regard, he noted that although exceptions to this prohibition are provided for, the Court of Justice adopts a broad interpretation of this article and of the notion of "solely automated processing", a position resolutely protective of the human person (see in particular CJEU, December 7, 2023, case C-634/21, SCHUFA).
He then discussed the regulation establishing harmonised rules on artificial intelligence, known as the "Artificial Intelligence Act", and focused on so-called "high-risk" AI and the obligations arising from this qualification: analysis of reasonably foreseeable risks and prior testing (Art. 9); "data governance" obligations (Art. 10) and related evidential obligations; transparency (Art. 13); proportionate human oversight (Art. 14); accuracy and robustness (Art. 15), etc.
All this information enabled the speaker to spell out the systemic disputes that will arise from these clashes between overall understanding and the regulatory logic applied to technological systems. In particular, this will take the form of civil liability litigation. He underlined the evidentiary stakes involved.
At the end of this conference, the listener can appreciate that mass litigation may, with great caution, find support in AI, because successive cases are analogous.
On the other hand, Emerging Systemic Litigation is characterised precisely by the fact that it mobilises a System: the situation it presents (e.g. a case of platform supervision, a case of vigilance in a value chain) is new and involves the interests and future of a system by "calling into question" principles. Because it is "emergent", it is essentially the fruit of the inventiveness of companies, stakeholders, lawyers and judges, as singular human beings.