The AI Act Newsletter
GPD - In strategic alliance with Troutman Pepper Locke
Law and tax firm
Part I: Introduction
The AI Act was introduced by the European Commission in order to:
“foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.”
It is also the first-ever comprehensive legal framework for Artificial Intelligence anywhere in the world.
Part II: Vocabulary
These terms are used throughout the AI Act and should be clearly established:
AI system: a machine-based system that operates with varying levels of autonomy and infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions.
Providers: persons or entities that develop AI models or systems.
Deployers: also referred to as “users,” persons or entities that use AI systems in their business; they do not develop the models or systems themselves.
The Act’s provisions apply to AI models and systems placed on the market or used in the European Union, regardless of where the provider or deployer is established, so it also reaches third-country actors.
Part III: Risk Assessments
The Act classifies AI systems into four risk categories, illustrated in the sketch that follows this list:
Unacceptable Risk: these systems are strictly prohibited (for example, social scoring and systems that manipulate people or exploit their vulnerabilities).
High Risk: these systems are permitted, but regulated.
The Act classifies as high-risk those systems which are safety components of products covered by the EU harmonisation legislation listed in Annex I, or which fall within the use cases listed in Annex III.
Annex III systems: AI used in areas such as biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration and border control, and the administration of justice.
Limited Risk: AI systems that raise transparency concerns, for example when a person does not know that they are interacting with an AI system.
Minimal Risk: the AI Act allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
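To make the four tiers concrete, the following minimal Python sketch maps a few hypothetical example systems to the Act’s risk categories. The RiskTier enum and the example classifications are illustrative assumptions, not text from the Act, and a real classification depends on the Act’s detailed criteria and annexes.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers described in the AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, but subject to strict requirements"
    LIMITED = "permitted, subject to transparency obligations"
    MINIMAL = "free use"


# Hypothetical example systems, classified for illustration only.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in hiring": RiskTier.HIGH,   # Annex III: employment
    "customer-service chatbot": RiskTier.LIMITED,        # must disclose it is AI
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    for system, tier in EXAMPLE_CLASSIFICATIONS.items():
        print(f"{system}: {tier.name} ({tier.value})")
```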
General Purpose Artificial Intelligence (GPAI) models are subject to a separate set of risk evaluations. A GPAI system is an AI system that is based on a GPAI model.
“GPAI systems may be used as high risk AI systems or integrated into them. GPAI system providers should cooperate with such high risk AI system providers to enable the latter’s compliance.”
Free and open license GPAI models, that is, those "whose parameters, including weights, model architecture and model usage are publicly available, allowing for access, usage, modification and distribution of the model," only have to comply with the latter two of the GPAI obligations (see Part IV), unless the free and open license GPAI model is systemic.
"GPAI models present systemic risks when the cumulative amount of compute used for its training is greater than 10^25 floating point operations (FLOPs). Providers must notify the Commission if their model meets this criterion within 2 weeks. The provider may present arguments that, despite meeting the criteria, their model does not present systemic risks. The Commission may decide on its own, or via a qualified alert from the scientific panel of independent experts, that a model has high impact capabilities, rendering it systemic."
Part IV: Compliance
Requirements for high-risk AI systems: a risk management system; data governance and quality controls for training data; technical documentation; record-keeping and logging; transparency and instructions for deployers; human oversight; and accuracy, robustness, and cybersecurity, together with a conformity assessment before the system is placed on the market.
Limited-risk AI models: subject to transparency obligations, such as informing people that they are interacting with an AI system and labelling AI-generated or manipulated content.
All providers of GPAI models must: maintain technical documentation of the model; provide information and documentation to downstream providers that integrate the model into their own AI systems; put in place a policy to comply with EU copyright law; and publish a sufficiently detailed summary of the content used for training.
Free and open license GPAI models must only conform to the bottom two GPAI obligations.
Systemic risk GPAI models must also: perform model evaluations, including adversarial testing; assess and mitigate possible systemic risks; track and report serious incidents to the AI Office; and ensure an adequate level of cybersecurity protection. The sketch below illustrates how these obligation sets combine.
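To illustrate how these obligation sets layer on top of one another, here is a minimal sketch. The obligation wordings are paraphrased from the Act, while the function and variable names are assumptions made for illustration; this is not legal advice.

```python
# The four baseline GPAI provider obligations (paraphrased from the Act).
GPAI_OBLIGATIONS = [
    "maintain technical documentation of the model",
    "provide information and documentation to downstream providers",
    "put in place a policy to comply with EU copyright law",
    "publish a sufficiently detailed summary of the training content",
]

# Additional obligations for GPAI models with systemic risk (paraphrased).
SYSTEMIC_OBLIGATIONS = [
    "perform model evaluations, including adversarial testing",
    "assess and mitigate possible systemic risks",
    "track and report serious incidents to the AI Office",
    "ensure an adequate level of cybersecurity protection",
]


def obligations_for(free_and_open_license: bool, systemic_risk: bool) -> list[str]:
    """Return the obligation set for a GPAI provider under the layering above
    (illustrative only)."""
    if free_and_open_license and not systemic_risk:
        # Free and open license models only keep the last two baseline obligations.
        base = GPAI_OBLIGATIONS[-2:]
    else:
        base = list(GPAI_OBLIGATIONS)
    return base + (SYSTEMIC_OBLIGATIONS if systemic_risk else [])


if __name__ == "__main__":
    for duty in obligations_for(free_and_open_license=True, systemic_risk=False):
        print("-", duty)
```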
Part V: Rules & Future Implications
"This Act establishes the European Commission’s AI office to monitor the effective implementation and compliance of GPAI model providers.
Codes of Practice: the AI Office will facilitate codes of practice to help GPAI model providers demonstrate compliance with their obligations until harmonised standards are in place.
The Act will enter into force 20 days after it is published in the Official Journal of the European Union; publication is expected in July 2024.
After entry into force, the AI Act will apply by the following deadlines (the sketch after this list turns these offsets into concrete dates):
6 months: prohibitions on unacceptable-risk AI apply.
9 months: codes of practice must be ready.
12 months: the GPAI obligations and governance rules apply.
24 months: most remaining provisions, including those for Annex III high-risk systems, apply.
36 months: obligations for high-risk systems embedded in products regulated under Annex I apply.
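As a minimal illustration of how these offsets translate into dates, the sketch below adds each deadline to an assumed entry-into-force date. The 1 August 2024 date and all names in the code are assumptions for illustration; the newsletter itself only notes that publication was expected in July 2024.

```python
from datetime import date
import calendar


def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after `d`, clamping the day."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)


# Assumed entry-into-force date, for illustration only
# (20 days after publication in the Official Journal).
ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES_IN_MONTHS = {
    "Prohibitions on unacceptable-risk AI apply": 6,
    "Codes of practice must be ready": 9,
    "GPAI obligations and governance rules apply": 12,
    "Most remaining provisions apply": 24,
    "Annex I embedded high-risk obligations apply": 36,
}

if __name__ == "__main__":
    for milestone, months in MILESTONES_IN_MONTHS.items():
        print(f"{milestone}: {add_months(ENTRY_INTO_FORCE, months)}")
```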
Source:
European Union Artificial Intelligence Act 2024 (EU)