Responsibilities in the AI value chain: part 2 — the Provider

The EU AI Act includes a comprehensive regulatory framework for the safe development and deployment of artificial intelligence (AI) systems. The AI Act addresses stakeholders throughout the AI value chain, each of whom plays a crucial role in ensuring AI systems are developed, deployed and maintained in compliance with the Act’s requirements.

In the first part of this two-part series, we discussed the obligations of the deployer. In this second part, we outline the provider’s obligations and set out considerations for organizations when determining their preferred role in the AI value chain.

The responsibilities of the provider

While, in practice, the deployer is responsible for ensuring safe use of an AI system, the provider is responsible for making sure that the AI system is a safe product when it’s placed on the market. Therefore, the provider’s obligations are aimed at increasing the safety of AI systems.

Most of the AI Act’s obligations apply to the providers of high-risk AI systems, which are the systems deemed to pose a significant risk to individuals’ health, safety and fundamental rights. One of the overarching obligations for high-risk AI providers is to perform a conformity assessment to verify whether all obligations of the AI Act are met, then to put a Conformité Européenne (CE) marking on the AI system. Providers should execute these steps before systems are placed on the market.

Providers of high-risk AI systems have obligations from both organizational and system perspectives.

Organizational requirements

The provider is obligated to implement a quality management system in its organization, resulting in strategy and policy documentation that describes, for instance, how compliance with the AI Act is ensured; how AI shall be developed, tested, validated and monitored by the required post-market monitoring system; and how serious incidents and malfunctioning of an AI system are reported to supervisory authorities. The providers of high-risk AI systems also have more administrative tasks, such as registering their high-risk AI systems in an EU database and drafting a declaration of conformity for those systems.

System requirements

Obligations from a system perspective include drafting technical documentation on the AI system, such as information on the AI system’s intended purpose and detailed specifications, for example, versions of the relevant software or firmware and a description of the user interface. Another important obligation for providers of high-risk AI systems is to implement data governance measures, such as measures to detect, prevent and mitigate biases. High-risk AI systems should allow for the automatic recording of logs to verify the traceability of the AI system’s functioning and should be designed and developed to guarantee accuracy, robustness and cybersecurity throughout their lifecycle. As mentioned above, high-risk AI providers should also implement a post-market monitoring system that tracks AI systems’ performance and risks after deployment.

Providers of high-risk AI systems have some obligations for the benefit of deployers. For example, AI systems should be developed in a manner that ensures their operation is sufficiently transparent, enabling deployers to interpret the system’s output and use it appropriately. The provider should draft user instructions that include the purpose of the AI system, the technical capabilities, the level of accuracy, and any circumstances that may lead to health and safety risks. The provider’s affixation of a CE marking to the high-risk AI system is meant to assure deployers that the AI system they’re using has been designed, developed and implemented in accordance with the AI Act’s requirements.

Providers of general-purpose AI models (GPAIs) are subject to similar obligations as high-risk AI providers, especially for GPAIs that may pose systemic risks. Their obligations include maintaining technical documentation, including documentation regarding training; establishing policies to comply with intellectual property rights, such as copyrights; and making available a sufficiently detailed summary about the content used to train the model. Models that pose a systemic risk need measures to mitigate such risks, including cybersecurity measures.

Apart from limited transparency obligations, the AI Act does not impose requirements on providers of AI systems that are neither high-risk nor GPAIs. Transparency obligations generally apply to AI systems that interact directly with natural persons.

Conclusion

In this two-part series, we highlighted the most important obligations of providers and deployers throughout the AI value chain. In short, where the provider is required to ensure that it only places safe, robust and compliant AI systems on the market, the deployer must make sure that it only uses the system in line with the intended purpose and is transparent about the use of AI. Organizations should consider the allocation of these responsibilities when deciding to either develop an AI system or buy it from a third party.

Wout Olieslagers, HVG Law B.V.

Dr. Peter Katko, licencié en droit, CIPP/E, EY AI Law Leader

This publication contains information in summary form and is therefore intended for general guidance only. It is not intended to be a substitute for detailed research or the exercise of professional judgment. Member firms of the global EY organization cannot accept responsibility for loss to any person relying on this article.
