3 types of HRAIS, and "intended purpose".

This is number 4 in a series of bite-sized chunks on the AIA.


A previous edition of AI Legal explained that the AIA is essentially product safety legislation, and is best understood as part of the EU’s product safety regime.

That’s also true of High Risk AI Systems (HRAIS). The AIA provides for 3 different types of HRAIS.


The first is where the AI system is intended to be used as a safety component of a separate product which is itself covered by EU safety legislation (eg. lifts, medical devices, etc), and that separate product is sufficiently risky that it needs a third-party conformity assessment before it can be lawfully put on the market. (For lower-risk products, the manufacturer can self-certify the assessment; for higher-risk products, you need an independent assessment.)

The second is where the AI system is not just a safety component of a product covered by EU safety legislation, but is itself one of those products (eg. lifts, medical devices, etc), and, like the first type, is sufficiently risky that the EU safety regime requires a third-party conformity assessment before it can be lawfully put on the market.

The third type is the AI system that is most commonly talked about in connection with the AIA. It’s the type of AI that carries out the high-risk activities set out in Annex III of the AIA: processing of biometric data, critical infrastructure, and so on.

What’s different about the third type is that, unlike most other product types envisaged, it requires the concept of “intended purpose”. For example, you don’t need a concept of intended purpose for a lift. A lift can only be used as a lift: you can’t do much else with it. The same is true for medical devices, cars, etc – ie. all the products which are presently covered by the EU safety regime.

But a software-enabled AI system is different: it’s much more versatile, and that’s why the Annex III definitions of high-risk systems (nearly) all make reference to intended purpose (for example: “AI systems intended to be used to evaluate learning outcomes…”), and all the compliance requirements in the AIA make reference to “intended purpose”.

“Intended purpose” even has its own definition: “the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.”

In case you’re wondering, the phrase “intended purpose” occurs 50 times in the AIA!
