The EU Artificial Intelligence Act: what might be its effects outside the EU?
The European Parliament passed the EU Artificial Intelligence Act (the Act) on 13 March 2024, and the European Commission has also proposed an AI Liability Directive to mitigate risks posed by evolving AI tools. What might be the effect of the Act outside the EU, and will it affect us in the UK?
What is artificial intelligence?
Until now, there has been no agreed legal definition of ‘artificial intelligence’, not least because the term can cover such a wide range of technologies, but Article 3 of the Act now defines an AI system as:
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The Act also defines:
a general-purpose AI model as ‘an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market’, and
a general-purpose AI system as ‘an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems’.
There are other laws applicable in the EU that affect the development or use of AI, including the EU General Data Protection Regulation (GDPR) and the Product Liability Directive (which grants those harmed by software a right to compensation), as well as protections for intellectual property.
In this blog, we will look at how the Act works and whether it will influence AI regulation in the UK and further afield.
What does the EU AI Act say?
The Act sets out its scope and where it applies, the compliance roles performed by organisations, the categories of risk posed by AI, and the obligations placed on those with compliance roles in relation to those risks.
Scope
Article 2 of the Act defines the scope as applying to:
- providers placing AI systems on the market or putting them into service in the EU, wherever those providers are established;
- deployers of AI systems that are established or located in the EU;
- providers and deployers established or located outside the EU, where the output produced by the AI system is used in the EU; and
- importers, distributors, authorised representatives of providers, and manufacturers placing an AI system on the market together with their product.
The Act applies to all sectors.
Compliance roles
The Act sets out in Article 3(3) to (8) a number of compliance roles, each of which comes with its own compliance obligations: provider, deployer, authorised representative, importer, distributor and operator.
Categorising risk
The Act is intended to promote the uptake of the technology while ensuring a high level of protection for health, safety, fundamental rights and the rule of law. To do this, the Act classifies AI systems and imposes requirements according to the different levels of risk posed by the various types of AI.
Article 5 prohibits outright those systems that present an unacceptable risk, including the ‘placing on the market, putting into service or use’ of systems which:
- deploy subliminal, manipulative or deceptive techniques that materially distort a person’s behaviour;
- exploit vulnerabilities related to age, disability or social or economic situation;
- carry out social scoring that leads to detrimental or unjustified treatment;
- predict the risk of a person committing a criminal offence based solely on profiling or personality traits;
- build facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
- infer emotions in the workplace or in educational institutions, except for medical or safety reasons;
- use biometric categorisation to infer sensitive characteristics such as race, political opinions or sexual orientation; or
- use ‘real-time’ remote biometric identification in publicly accessible spaces for law enforcement, other than in narrowly defined circumstances.
Article 6 goes on to classify the risk levels for those applications which are not prohibited.
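To illustrate how the tiers fit together, the sketch below models the Act’s risk-based structure as a simple triage function. The dictionary flags (article_5_practice, annex_iii_use_case and so on) are our own illustrative stand-ins for the detailed legal tests of Articles 5 and 6, not a real compliance checklist:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (Article 5)"
    HIGH = "high risk (Article 6): strict pre-market obligations"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no additional obligations"

def classify(system: dict) -> RiskTier:
    # Article 5 practices (e.g. social scoring, manipulative techniques) are banned.
    if system.get("article_5_practice"):
        return RiskTier.UNACCEPTABLE
    # Safety components of regulated products and Annex III use cases
    # (e.g. employment, credit scoring, law enforcement) are high risk under Article 6.
    if system.get("safety_component") or system.get("annex_iii_use_case"):
        return RiskTier.HIGH
    # Systems that interact directly with people (e.g. chatbots)
    # must disclose that the user is dealing with AI.
    if system.get("interacts_with_people"):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify({"annex_iii_use_case": True}).name)  # HIGH
```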
Regulation in practice
The Act requires each EU member state to establish a ‘notifying authority’ and a ‘market surveillance authority’, and to ensure they have the necessary technical abilities, funding and staffing to fulfil their duties under the Act.
The notifying authority will set up and carry out assessment and designation procedures.
The market surveillance authority will report to the Commission and enforce compliance at a national level.
An AI Office within the Commission will enforce common rules across the EU and will advise and assist member states on the consistent and effective application of the Act.
Enforcement
Monetary penalties range from up to €7.5m or 1% of global annual turnover (for the supply of incorrect, incomplete or misleading information) to up to €35m or 7% of global annual turnover (for non-compliance with prohibited AI practices), in each case whichever is higher. This enforcement model will be familiar to those working with the EU GDPR, although the upper limits of the fines are even higher than under the GDPR.
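A minimal sketch of the fine arithmetic (the function and the €2bn turnover figure are our own illustration; note that for SMEs the Act applies the lower, rather than the higher, of the two amounts):

```python
def fine_ceiling(fixed_eur: int, turnover_pct: float, worldwide_turnover_eur: int) -> float:
    """Upper limit of an administrative fine for an undertaking: the fixed
    amount or the percentage of total worldwide annual turnover for the
    preceding financial year, whichever is higher."""
    return max(fixed_eur, turnover_pct * worldwide_turnover_eur)

turnover = 2_000_000_000  # hypothetical company with €2bn worldwide annual turnover

# Prohibited AI practices: up to €35m or 7% of turnover
print(f"€{fine_ceiling(35_000_000, 0.07, turnover):,.0f}")  # €140,000,000

# Incorrect, incomplete or misleading information: up to €7.5m or 1% of turnover
print(f"€{fine_ceiling(7_500_000, 0.01, turnover):,.0f}")   # €20,000,000
```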
What is the UK approach?
In February 2024, the UK government published its response to its white paper consultation on AI regulation. It proposes a regulatory framework led by the Department for Science, Innovation and Technology (DSIT), underpinned by core principles designed to address the key risks posed by AI systems. Like the EU Act, it is cross-sector and applies a technology-neutral approach.
In contrast to the EU Act, there is no formal definition of AI; instead, an outcomes-based approach focuses on adaptivity and autonomy for its interpretation. Existing regulators such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA) and Ofcom can interpret adaptivity and autonomy to create specific definitions if they wish, but this raises the concern that different regulators could apply different interpretations, creating confusion for those operating across regulators or sectors.
The framework distinguishes between:
Principles
Clause 10 of the framework sets out five cross-sector principles for existing regulators to interpret and apply within their own remits:
- safety, security and robustness;
- appropriate transparency and explainability;
- fairness;
- accountability and governance; and
- contestability and redress.
The strategy for implementing these core principles is predicated on:
The framework approach is in contrast to the more prescriptive, legislation-driven approach of the EU and the US, but there is a growing recognition that legislation may be required in future. However, the previous government did not propose specific legislation or mandatory measures, going as far as to say that legislation would be introduced only if specific conditions were met: it would need to be confident that existing legal powers were insufficient and that voluntary transparency and risk management measures were ineffective.
The recent change of government may have some impact on how (and how quickly) the UK approach evolves; the King’s Speech in July 2024 stated that the newly elected Labour government will ‘seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models’, but stopped short of committing to a specific AI Bill. Meanwhile, the Prime Minister’s introduction mentioned ‘[harnessing] the power of AI as we look to strengthen safety frameworks’. The details of the Labour government’s overall approach to AI regulation therefore remain unclear; however, there does appear to be a departure from that of the Conservatives.
The current framework approach has flaws, in that some areas of AI operation may fall outside the scope of existing regulators. Furthermore, regulators are already hard-pressed to deliver existing regulation; the ICO’s regulation of data protection and the Freedom of Information Act, for example, is weak in some areas and negligible in others. Whether regulators can collaborate closely enough to provide complete, blanket regulation across all sectors remains to be seen.
What might be the consequences of the EU AI Act outside the EU?
Let us first consider the effect of the EU AI Act in the UK. Any UK company developing or working with AI systems for the EU market will have to consider the new Act. Following Brexit, the UK government’s stance on AI has been to use the ‘freedom’ gained from EU legislation to enable innovation, applying as light a touch as possible to legislation and regulation in order to cut the cost of doing business. But if businesses must meet the more prescriptive EU legislation to operate there, how much advantage will the UK approach deliver, except for the few businesses domiciled and operating wholly within the UK?
For those examining the effects of the UK AI framework, there is currently little clarity from regulators as to what exactly will be required to achieve compliance.
The very wide territorial and sectoral scope of the Act means that organisations developing and using AI in the EU will be covered by it. We have seen in the past how EU legislation can change the approach taken by businesses. This was especially true of the GDPR, which led to global changes as businesses complied with it to retain access to a European market of 450 million potential customers, not least because business processes can be simpler and cheaper if one approach is taken worldwide.
Nations with developing markets understand this and are keen to shape their own legislation in ways that attract global business. We have already seen other data protection regimes align with the GDPR, such as Brazil’s Lei Geral de Proteção de Dados (LGPD), India’s Personal Data Protection Bill (PDPB) and Nigeria’s Data Protection Regulation (NDPR), the last in common with many other members of the African Union. Many countries that have found it convenient to follow the EU’s approach to data protection may do the same for AI.
It has also been the case that EU regulation raises the profile of its subject, and the effects of the AI Act can already be seen in the actions of other countries keen to be at the forefront of technology and to be seen using AI to drive the new economy. For example, China, keen to be a leading AI innovation centre, has developed its Cybersecurity Law and New Generation AI Development Plan to provide measures for data protection and cyber security in AI, emphasising compliance and risk management. Canada has introduced key government-led programmes, such as the Pan-Canadian AI Strategy and the Canadian AI Ethics Council, to advocate for the responsible development of AI and to address ethical issues in the AI sector; these initiatives sit alongside its Personal Information Protection and Electronic Documents Act (PIPEDA), which regulates the collection, use and disclosure of individuals’ personal information by AI technologies. In Australia, the National Artificial Intelligence Ethics Framework is the cornerstone of AI regulation, setting out the ethical principles that guide the development and implementation of AI systems, with the Australian Competition and Consumer Commission (ACCC) playing a role in enforcement.
The EU’s action is a clear commitment to regulating AI: to protect its citizens, to allay European technology companies’ fears of falling behind the US companies that dominated early technology markets, and to offer a stable environment in which AI companies can operate.
What does all this mean for organisations developing and using AI?
Whilst the development of AI systems and technology has so far taken place in a fragmented and threadbare regulatory and legislative environment, that is now changing. Some retrospective work may be necessary for existing players in the market, and newcomers should use the opportunity to prepare for future AI development. That should include:
How URM can help
For organisations looking to develop, provide or deploy AI systems in full compliance with regulations and frameworks such as the EU AI Act, conformance to ISO 42001, the International Standard for Artificial Intelligence Management Systems (AIMS), is the ideal starting point. Whilst conformance to ISO 42001 will not guarantee compliance with the EU AI Act, there is overlap between the two in terms of requirements, and both are concerned with ensuring AI systems are developed and used responsibly and ethically. As such, this AI standard can be a significant help in enabling you to achieve AI Act compliance.
With nearly two decades of experience delivering governance, risk and compliance training and consultancy, URM can provide ISO 42001 training that will ideally position you to undertake AI impact assessments (AIIAs) and to develop and implement an ISO 42001-conformant AIMS. Leveraging the expertise gained from supporting over 400 management system implementation projects in line with a range of relevant ISO standards, our one-day Introduction to ISO 42001 Course will equip you with the skills and knowledge necessary to govern and effectively manage an AI system as per the requirements of ISO 42001.