Legal regulation of AI in Europe, mid-2023

As of mid-2023, there is a multitude of existing and proposed EU regulations on AI. The reason is simple: the EU aims to establish itself as a global leader in human-centric and trustworthy AI, and it is pursuing this through a combination of mandatory requirements for high-risk AI applications, voluntary codes of conduct, safety and transparency obligations, and liability rules founded on ethical principles. Here is a brief overview:

Artificial Intelligence Act (April 2021) - proposed by the European Commission as part of its ambition to make the EU a global leader in ethical AI.

The Act introduces harmonized rules for the development, placing on the market and use of AI systems in the EU. It applies to providers of AI systems in the EU, to users of AI systems in the EU, and to providers and users located in a third country where the output produced by the system is used in the EU. The Act prohibits certain artificial intelligence practices that pose unacceptable risk:

1. AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behavior to circumvent users' free will (with exceptions for the prevention of self-harm), that exploit children and vulnerable people, social scoring by governments, and the use of 'real-time' remote biometric identification in public spaces (with exceptions for security and medical emergencies).

2. High-risk AI applications will be subject to strict obligations before they can be put on the market:

  • High-risk AI systems are defined based on the intended purpose of the AI system, sectoral requirements, the degree of harm that could result from the system, and the system's ability to interact with humans or otherwise manipulate its environment. They include AI applications in critical sectors such as health, transport, energy, and parts of the public sector.
  • Such systems must meet requirements related to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and cybersecurity.
  • Users must monitor systems on an ongoing basis through governance mechanisms like quality management systems.

3. Limited-risk AI applications will need to comply with transparency obligations without undergoing ex-ante conformity assessments.

  • Providers must ensure systems are transparent, provide clear information to users, and enable human oversight.
  • Includes wellness apps, accessibility tools, and conversational systems such as chatbots.
  • Most AI will fall under this category.

4. Voluntary codes of conduct to strengthen transparency and compliance with ethical principles are encouraged for non-high-risk AI systems.

  • The EU and national authorities will support the development of codes by industry and other stakeholders by facilitating the sharing of expertise and networking.
  • Codes of conduct can specify sector-specific or use-case-specific requirements.

Timeline for the Artificial Intelligence Act:

a) in April 2021, the European Commission proposed the Artificial Intelligence Act;

b) 2022-2023 is the targeted timeline for the legislative process and negotiations on the Act;

c) the Act will enter into force 20 days after its final publication in the Official Journal of the EU; under the original proposal, its provisions become applicable only after a broad transition period (24 months for most obligations).

Artificial Intelligence Liability Directive (August 2021) - proposed by the European Parliament's Special Committee on Artificial Intelligence in a Digital Age; it introduces a civil liability regime for damage caused by AI systems operating autonomously and interacting with third parties. Key aspects:

1. Strict liability for operators of high-risk AI systems that cause damage to the life, health or physical integrity of a person or damage to property. Operators are exonerated only in cases of force majeure, third-party interference, or where the harmed person intentionally acted to cause the damage.

2. Fault-based liability for other AI systems: the operator is liable if it fails to comply with its duties of care.

3. A special liability regime applies to AI systems exceeding certain capabilities, such as self-learning. Operators are strictly liable for damage unless they can prove that the AI system's design enabled compliance with their duties of care.

4. Personality rights are proposed for AI systems exceeding certain capabilities that indicate human-like qualities such as consciousness, enabling damages for harm to these rights.

5. National supervisory authorities are proposed to monitor the application of the regulation.

Timeline for Artificial Intelligence Liability Directive:

a) August 2021 - proposed by the European Parliament committee;

b) the legislative process in the European Parliament and the Council is ongoing.

Ethics Guidelines for Trustworthy AI (April 2019) - developed by the High-Level Expert Group on Artificial Intelligence, set up by the European Commission in 2018. The Guidelines identify seven key requirements that AI systems should meet in order to be considered trustworthy:

1. Human agency and oversight: the ability of humans to oversee the AI system through governance mechanisms.

2. Technical robustness and safety: resilience to risks and the ability to work reliably.

3. Privacy and data governance: ensuring proper privacy protection and data management.

4. Transparency: traceability, explainability and open communication about the AI system.

5. Diversity, non-discrimination and fairness: avoidance of unfair bias, accessibility.

6. Societal and environmental well-being: ensuring sustainability.

7. Accountability: mechanisms to ensure responsibility and accountability for AI systems and their outcomes.

  • The Guidelines provide practical guidance for implementing these principles.
  • They preceded the concrete legislative initiatives on AI by outlining their ethical underpinnings.

Timeline for Ethics Guidelines for Trustworthy AI:

a) April 2019 - Guidelines published, based on the work of the expert group from 2018 onwards;

b) the Guidelines provide the foundation for subsequent legislative action.

It is safe to assume that one of the most significant discussions among regulators concerns the transparency requirements for AI systems, especially complex machine learning models. The 'black box' nature of many AI systems, such as deep neural networks, poses challenges for transparency. The regulations nevertheless emphasize traceability and explainability of AI systems as preconditions for trust.

Some key aspects of the regulatory issues regarding the transparency of AI systems:

  • The Artificial Intelligence Act requires high-risk AI systems to be designed and developed in a manner that allows for humans to interpret the system's functioning and outputs. Technical documentation must be maintained to demonstrate compliance.
  • For limited-risk AI systems, the provider must ensure the system is transparent, provides clear information, and enables human oversight. But no conformity assessment is required.
  • The Ethics Guidelines also highlight the need for traceability, explainability and communication related to AI systems, while acknowledging that there are limits to transparency depending on the complexity of the system.
  • The Guidelines recommend that, at a minimum, AI systems should be interpretable to developers and that meaningful information be provided to users. But there are disagreements over the level of explainability required, especially for public-sector systems.
  • The difficulty of applying transparency requirements to complex, self-learning algorithms like neural networks is recognized. Alternate methods like model cards, code transparency and algorithm auditing are emerging solutions.
  • There are calls for proportionate obligations based on risk, use case and context. A right to explanation as part of fundamental rights is also debated.


We see that transparency of AI systems is a complex topic under ongoing discussion. The regulations aim to strike a balance between ensuring interpretability, traceability and oversight on the one hand and recognizing the technology's limitations on the other. But a very significant question remains open: HOW can developers of AI satisfy regulators when complex machine learning models tend to grow in complexity on their own, without direct developer input? Here are some ways companies could potentially provide traceability for their AI systems, as per the discussions around the proposed regulations:

  1. Maintain extensive documentation and technical descriptions of the AI system's development process, including its design choices, development methodology, data sources and labels, testing procedures, validation results, and governance.
  2. Implement algorithm auditing mechanisms and maintain version histories so each update to the system is documented.
  3. Adopt explainable AI (XAI) techniques such as generating explanations along with outputs, developing interpretable models, or using example-based explanations; these have limitations for complex AI like deep learning (see the first sketch after this list).
  4. For machine learning models, maintain metadata around features, hyperparameters, model architecture, and training data. Explain broader model insights even if individual predictions are hard to interpret.
  5. Provide model cards that outline intended use cases, key metrics, safety considerations, social context, and other details about the AI system (see the second sketch after this list).
  6. Implement measures for recording the decisions or recommendations made by the AI system, along with the inputs, outputs and relevant context for each decision (see the third sketch after this list).
  7. Test AI systems through techniques like systematically evaluating different model versions, perturbing inputs, and simulating user scenarios to assess behaviors (see the fourth sketch after this list).
  8. Commission external algorithm audits by independent third parties to evaluate the properties and reliability of the AI system.
  9. Adopt frameworks like DARPA's Explainable AI (XAI) program, which promotes techniques for making model behavior understandable across the AI lifecycle.
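
To make point 3 concrete, here is a minimal sketch of one widely used post-hoc explainability technique, permutation feature importance, built with scikit-learn. The synthetic dataset and the random-forest model are illustrative placeholders, not a statement of what any regulation requires:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic data stands in for a real (possibly regulated) dataset.
    X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: shuffle one feature at a time and measure how
    # much the held-out score drops; large drops flag influential features.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature_{i}: {result.importances_mean[i]:.3f}")

Feature-level summaries of this kind explain the model's overall behavior even when individual predictions remain hard to interpret, which is the spirit of point 4 above.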
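For points 4 and 5, a model card can be kept as structured metadata next to the model artifact. Below is a minimal sketch assuming a hypothetical credit-scoring system; every field name and value is illustrative, not a regulatory schema:

    import json
    from dataclasses import asdict, dataclass

    @dataclass
    class ModelCard:
        model_name: str
        version: str
        intended_use: str
        out_of_scope_uses: str
        training_data: str        # provenance and description of training data
        evaluation_metrics: dict  # key metrics, ideally broken down by subgroup
        safety_considerations: str
        contact: str

    card = ModelCard(
        model_name="credit-risk-scorer",  # hypothetical system
        version="2.3.1",
        intended_use="Pre-screening of consumer credit applications",
        out_of_scope_uses="Employment or insurance decisions",
        training_data="Anonymized loan applications, 2018-2022 (internal)",
        evaluation_metrics={"auc": 0.87, "auc_female": 0.86, "auc_male": 0.88},
        safety_considerations="Human review required for borderline scores",
        contact="ml-governance@example.com",
    )

    # Persist the card alongside the model so every release is documented.
    with open("model_card_v2.3.1.json", "w") as f:
        json.dump(asdict(card), f, indent=2)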
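For point 6, decision-level record keeping can be as simple as an append-only JSON-lines audit log. A minimal sketch follows; the field names and the suggestion to redact personal data are assumptions, not prescribed by the regulations:

    import json
    import logging
    from datetime import datetime, timezone

    audit_log = logging.getLogger("ai_audit")
    handler = logging.FileHandler("decisions.jsonl")
    handler.setFormatter(logging.Formatter("%(message)s"))
    audit_log.addHandler(handler)
    audit_log.setLevel(logging.INFO)

    def log_decision(model_version, inputs, score, decision, context):
        """Record one automated decision with its inputs, output and context."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,      # consider hashing/redacting personal data
            "score": score,
            "decision": decision,
            "context": context,    # e.g. channel, operator, override flags
        }
        audit_log.info(json.dumps(record))

    # Example call for the hypothetical credit scorer above:
    log_decision("2.3.1", {"income": 42000, "age": 31}, 0.81,
                 "approve", {"channel": "web", "human_review": False})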
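And for point 7, a crude robustness-testing sketch: perturb the inputs slightly and count how often the model's decision flips. Here `model` is assumed to be any fitted estimator with a .predict method (for instance the random forest from the first sketch), and the noise scale is an illustrative choice, not a regulatory threshold:

    import numpy as np

    def flip_rate(model, X, eps=0.05, trials=20, seed=0):
        """Fraction of predictions that change under small input noise."""
        rng = np.random.default_rng(seed)
        base = model.predict(X)  # reference predictions on clean inputs
        flips = 0.0
        for _ in range(trials):
            noise = rng.normal(scale=eps, size=X.shape)
            flips += np.mean(model.predict(X + noise) != base)
        return flips / trials

    # e.g. flip_rate(model, X_test) -> a high value flags unstable behavior
    # worth documenting and investigating before deployment.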
