The AI Act Newsletter

Part I: Introduction

The AI Act was introduced by the European Commission in order to:

“foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.”

It is also the first-ever comprehensive legal framework for artificial intelligence anywhere in the world.


Part II: Vocabulary

These terms are used throughout the AI Act and should be clearly established:

AI system: a machine-based system that operates with some degree of autonomy and infers from its inputs how to generate outputs such as predictions, content, recommendations or decisions.

Providers: those who develop AI models or systems.

Deployers: also referred to as “users,” the people or entities that use AI in their business models. They do not develop the models or systems themselves.

The Act’s provisions apply to AI models and systems that are placed on the market or used in the European Union, regardless of where the provider or deployer is established, so it also reaches third-country actors.


Part III: Risk Assessments

The Act classifies AI systems into the following risk categories:


Unacceptable Risk: these systems are strictly prohibited (for example, social scoring systems and manipulative AI that exploits people’s vulnerabilities).


High-Risk Models: these models are permitted, but regulated.

Classified by the Act as those systems which are:

  • "used as a safety component or a product covered by EU laws in Annex I AND required to undergo a third-party conformity assessment under those Annex I laws; OR
  • those under Annex III use cases (below), except if:
  • AI systems are always considered high-risk if it profiles individuals, i.e. automated processing of personal data to assess various aspects of a person’s life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement.
  • Providers whose AI system falls under the use cases in Annex III but believes it is not high-risk must document such an assessment before placing it on the market or putting it into service."
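To make the branching above concrete, here is a minimal sketch of the classification logic in Python. The function and parameter names are hypothetical, and the booleans stand in for what is really a detailed legal analysis under Article 6 and Annexes I and III; this is an illustration, not legal advice.

    # Illustrative sketch of the high-risk classification logic described above.
    # Hypothetical names and inputs; not a substitute for legal analysis.
    def is_high_risk(
        annex_i_safety_component: bool,   # safety component/product under Annex I EU laws
        third_party_assessment: bool,     # third-party conformity assessment required
        annex_iii_use_case: bool,         # falls under an Annex III use case
        narrow_exception_applies: bool,   # e.g. performs only a narrow procedural task
        profiles_individuals: bool,       # automated profiling of natural persons
    ) -> bool:
        # Annex III systems that profile individuals are always high-risk.
        if annex_iii_use_case and profiles_individuals:
            return True
        # Annex I route: safety component AND third-party conformity assessment.
        if annex_i_safety_component and third_party_assessment:
            return True
        # Annex III route, unless one of the narrow exceptions applies
        # (which the provider must document before market placement).
        return annex_iii_use_case and not narrow_exception_applies

For example, a CV-screening tool used in hiring (an Annex III area) that profiles candidates comes out high-risk under this sketch, regardless of any exception.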


Annex III use cases: biometrics; critical infrastructure; education and vocational training; employment and workers’ management; access to essential private and public services; law enforcement; migration, asylum and border control; and the administration of justice and democratic processes.

Limited Risk:

AI models that raise transparency concerns, for example where a person does not know that they are interacting with an AI system.

  • Providers must disclose and ensure that their end-users are aware that they are interacting with artificial intelligence.
  • This category includes chatbots and deep-fakes.

Minimal Risk models: The AI Act allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

  • These systems are unregulated, but that is changing with Generative AI.


General Purpose Artificial Intelligence (GPAI) models are subject to different risk evaluations. A GPAI system is an AI system based on a GPAI model.

“GPAI systems may be used as high risk AI systems or integrated into them. GPAI system providers should cooperate with such high risk AI system providers to enable the latter’s compliance.”

Free and open license GPAI models – those "whose parameters, including weights, model architecture and model usage are publicly available, allowing for access, usage, modification and distribution of the model" – only have to comply with the last two GPAI obligations listed in Part IV (the copyright policy and the training-content summary), unless the model presents systemic risk.


"GPAI models present systemic risks when the cumulative amount of compute used for its training is greater than 10^25 floating point operations (FLOPs). Providers must notify the Commission if their model meets this criterion within 2 weeks. The provider may present arguments that, despite meeting the criteria, their model does not present systemic risks. The Commission may decide on its own, or via a qualified alert from the scientific panel of independent experts, that a model has high impact capabilities, rendering it systemic."


Part IV: Compliance

Requirements for high-risk AI models:

  • "Establish a risk management system throughout the high risk AI system’s lifecycle;
  • Conduct data governance, ensuring that training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose.
  • Draw up technical documentation to demonstrate compliance and provide authorities with the information to assess that compliance.
  • Design their high risk AI system for record-keeping to enable it to automatically record events relevant for identifying national level risks and substantial modifications throughout the system’s lifecycle.
  • Provide instructions for use to downstream deployers to enable the latter’s compliance.
  • Design their high risk AI system to allow deployers to implement human oversight.
  • Design their high risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity.
  • Establish a quality management system to ensure compliance."


Limited-risk AI models:

  • "For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can take an informed decision to continue or step back. Providers will also have to ensure that AI-generated content is identifiable. Besides, AI-generated text published with the purpose to inform the public on matters of public interest must be labeled as artificially generated. This also applies to audio and video content constituting deep fakes."


All providers of GPAI models must:

  • "Draw up technical documentation, including training and testing process and evaluation results.
  • Draw up information and documentation to supply to downstream providers that intend to integrate the GPAI model into their own AI system in order that the latter understands capabilities and limitations and is enabled to comply.
  • Establish a policy to respect the Copyright Directive.
  • Publish a sufficiently detailed summary about the content used for training the GPAI model."

Free and open license GPAI models need only comply with the last two obligations above (the copyright policy and the training-content summary), unless they present systemic risk.


Systemic risk GPAI models must also:

  • "Perform model evaluations, including conducting and documenting adversarial testing to identify and mitigate systemic risk.
  • Assess and mitigate possible systemic risks, including their sources.
  • Track, document and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay.
  • Ensure an adequate level of cybersecurity protection."


Part V: Rules & Future Implications

"This Act establishes the European Commission’s AI office to monitor the effective implementation and compliance of GPAI model providers.

  • Downstream providers can lodge a complaint regarding an upstream provider’s infringement with the AI Office.
  • The AI Office may conduct evaluations of the GPAI model to: assess compliance where the information gathered under its powers to request information is insufficient and investigate systemic risks, particularly following a qualified report from the scientific panel of independent experts."

Codes of Practice:

  • "[The Act] will account for international approaches.
  • [The Act] Will cover but not necessarily limited to the above obligations, particularly the relevant information to include in technical documentation for authorities and downstream providers, identification of the type and nature of systemic risks and their sources, and the modalities of risk management accounting for specific challenges in addressing risks due to the way they may emerge and materialize throughout the value chain.
  • AI Office may invite GPAI model providers, relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers and independent experts may support the process."


The Act will enter into force 20 days after it is published in the Official Journal of the European Union. It is expected to be published in July 2024.

After entry into force, the AI Act will apply by the following deadlines:

  1. 6 months for prohibited AI systems. (approx. January 2025)
  2. 12 months for GPAI. (approx. July 2025)
  3. 24 months for high risk AI systems under Annex III. (approx. July 2026)
  4. 36 months for high risk AI systems under Annex I. (approx. July 2027).

Codes of practice must be ready 9 months after entry into force.
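The arithmetic behind these approximate dates can be made explicit with a few lines of Python. The entry-into-force date below is an assumption for illustration (1 August 2024, i.e. 20 days after a mid-July publication); the milestones shift with the actual publication date.

    # Approximate application dates for the deadlines listed above,
    # assuming (for illustration) entry into force on 1 August 2024.
    from datetime import date

    ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumed; depends on Official Journal publication

    def add_months(d: date, months: int) -> date:
        total = d.month - 1 + months
        return date(d.year + total // 12, total % 12 + 1, d.day)

    milestones = [
        ("Prohibited AI systems", 6),
        ("Codes of practice ready", 9),
        ("GPAI obligations", 12),
        ("High-risk AI systems under Annex III", 24),
        ("High-risk AI systems under Annex I", 36),
    ]
    for label, months in milestones:
        print(f"{label}: {add_months(ENTRY_INTO_FORCE, months)}")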


Source:

European Union Artificial Intelligence Act 2024 (EU)
