Balancing innovation in AI with ethical responsibility
AI systems may pose unprecedented challenges to ethical standards because of their very specific features. Not only may AI use massive amounts of data, including personal data, as inputs, but, thanks to machine learning and unlike traditional software and algorithms, it is capable of self-adaptation and of inferring predictions, content, recommendations and decisions without human intervention. As the outputs created by AI in turn nourish future inputs for further improvement of the system, it works autonomously and may slip beyond human control. The potential benefits of AI are equally unprecedented and may contribute critical added value across the entire spectrum of industries and social activities. Thanks to its ability to process massive data and to improve, at scale and better than any human workforce, statistical predictions and analysis as well as the allocation of resources, it will produce high benefits in healthcare, agriculture, food safety, education, media, culture, infrastructure management, energy, transportation, security, justice, climate change and beyond, i.e. in all economic, public and social activities.
AI is already massively used. Large Language Models have brought generative AI into our daily lives, and its use will expand despite its limits and inaccuracies. As a matter of fact, these limits and inaccuracies are positive developments in that they demonstrate that human oversight of AI systems is needed not only for human beings to remain in control, but also to ensure that AI systems actually deliver the benefits for which they are made. Balancing technical innovation with ethical standards by placing human control at the core of AI is therefore “a must” for the sake of humanity, obviously, but also for the sake of the technology itself, at least for the time being.
In 2019, the AI High-Level Expert Group appointed by the EU Commission developed 7 core principles for trustworthy and ethically sound AI: 1) human agency and oversight, 2) technical robustness and safety, 3) privacy and data governance, 4) transparency, 5) diversity, non-discrimination and fairness, 6) societal and environmental well-being, and 7) accountability. Those 7 principles have been embedded in the set of rules enacted on June 13th, 2024, by EU Regulation 2024/1689, which provides for harmonized rules on artificial intelligence among EU Member States.
In a nutshell, the EU AI regulation is the first comprehensive regulation applying to the development and deployment of AI systems. It will apply to all AI systems whose outputs are accessible by EU operators, whether they are developed by EU-based providers or by providers situated outside the EU. This extra-territorial reach of the regulation is expected to have a spill-over effect (the so-called “Brussels effect”) on AI systems developed by non-EU developers to the extent they want access to the European market. However, a question mark remains as to whether this spill-over effect will be sufficient to make the European standard for balancing technological innovation and ethical principles a role model for a worldwide approach to this crucial subject.
The EU regulation classifies AI systems into four categories:
· 1) Prohibited systems, which cover systems deploying subliminal techniques, exploiting vulnerable populations, applying social scoring, predicting criminal behaviour, untargeted expansion of facial recognition databases, emotion recognition at work or in educational environments, and real-time biometric identification in publicly accessible locations (with derogations).
· 2) High-risk AI systems, which are defined in a modifiable annex to the regulation and which include: authorized biometric identification systems, critical infrastructure safety systems, educational admission and evaluation systems, employment and worker management systems, systems used for access to essential private and social services, law enforcement, migration and border control, and systems used for justice and democratic processes.
· 3) Specific AI systems (article 50), which are designed to interact directly with natural persons.
· 4) Other, minimal-risk AI systems, which are out of the scope of the regulation.
In addition, the EU AI Act provides for special rules applicable to “general-purpose AI models,” which are defined as models trained on a large amount of data using self-supervision, displaying significant generality and capable of competently performing a wide range of distinct tasks. Typically, generative AI systems fall into this category. Specific transparency rules apply to such models to ensure, for instance, compliance with EU law on copyright, and specific risk evaluation and control processes apply when they may present systemic risks, assessed on the basis of the computation capacity involved.
With such sophistication built into the EU regulation, some commentators may fear that it will place the EU at a disadvantage in terms of innovation capacity, notably as the regulation will considerably increase the administrative burden on providers and deployers. However, the regulation creates a level playing field: it will be equally burdensome for all providers, whether EU or non-EU, to the extent that they want to deploy their systems within the EU.
Beyond these strict rules, which will be enforced as early as 2025 for the provisions dealing notably with prohibited AI practices, and 2026 for the remainder, the regulation includes several components with a clear objective to foster further innovation.
Firstly, the regulation does not apply to systems developed solely for scientific research and development activities (article 2 paragraph 6). Nor does it apply to research, development or testing activities carried out before systems are placed on the market or put into service. It does not apply to AI systems released under free and open-source licenses unless they are high-risk AI systems, deploy prohibited functions, or are designed to interact with natural persons. Military and national security AI systems are also out of the scope of the regulation.
Secondly, chapter VI of the regulation provides specific support for innovation. In particular, it creates regulatory sandboxes under the responsibility of Member States, where AI developers will receive regulatory support and be allowed to operate within a protected legal framework for a limited time, in order to facilitate their development activities and testing in real-world conditions. Eligible SMEs and start-ups are given priority access to the sandboxes.
More broadly, the detailed rules of the regulation aim at creating an environment for AI where risks are anticipated and prevented through specific procedures for describing risks and mitigation activities, including required human interventions, proportionate to the level of risk associated with the AI systems concerned. Data governance rules are also provided, notably to ensure the effective application of the EU regulation on copyright by general-purpose AI systems. More generally, GDPR rules apply alongside the rules established by the AI Act. Transparency provisions also make it compulsory for general-purpose AI systems to produce outputs that are clearly marked as machine-generated, to avoid any confusion with authentic human content.
Finally, several bodies are established to secure this regulatory framework: an AI Office at the level of the EU Commission, an AI Board to oversee the actual implementation of the Act, an Advisory Forum with third-party stakeholders, and an independent scientific panel of experts to provide insights on future developments. Each Member State is also responsible for designating a competent national authority to supervise the local implementation of the Act and to interact with local operators.
Overall, the EU AI Act is indeed a comprehensive and quite burdensome set of regulations, especially for high-risk and general-purpose AI systems. However, given its granularity, it is certainly unfair to claim that it is excessive and endangers EU competitiveness. In view of the 7 principles agreed for trustworthy AI, it is certainly needed, in particular to maintain human control, transparency and respect for the fundamental rights recognized in the EU Charter. In the author’s view, the question should rather be whether it will be sufficient to serve its objectives, in view of several remaining weaknesses, including:
· Its inapplicability to defense and national security AI systems, which are certainly the number one area of concern that AI systems may escape human and democratic control, even in Europe.
· The effectiveness of human control procedures once AI systems are sufficiently trained and self-fueled by future data outputs, becoming progressively smarter than human beings.
· The limits of the extra-territorial reach of the Act, which is not applicable to AI systems deployed outside the EU, even though such systems may have been nourished by EU-based data and, in particular, by the unique cultural content produced in Europe over centuries.
· The regulatory capacity in practice to ensure effective compliance with the rules, given the magnitude of the AI systems already deployed and those still to come to the marketplace with further sophistications such as reasoning functionalities.