The EU AI Act Challenges - So what is AI anyway?!
As the European Union finalises its Artificial Intelligence Act (AI Act), one of the key challenges is the very definition of what constitutes an "AI system." How AI is defined in the legislation has far-reaching implications for the scope, applicability, and effectiveness of the regulatory framework.
The Specifics
The final agreed text of the AI Act aligns its definition of an "AI system" with the OECD's, describing it as "a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
Potentially too narrow...
For instance, if "adaptiveness after deployment" is read as a defining characteristic (the text only says a system "may" exhibit it, which leaves room for interpretation), rule-based algorithms that don't update or evolve their logic after initial deployment could be exempt, even where machine learning was used to build them. A loan approval system that makes decisions using a fixed set of rules, even if those rules were originally derived from machine learning, may not be covered by the Act.
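To make the distinction concrete, here is a minimal, hypothetical sketch of such a system (the function name and thresholds are invented for illustration): the thresholds may once have been learned by a model, but the deployed code is plain conditional logic that never updates and performs no inference at runtime.

```python
# Hypothetical loan-approval logic. The thresholds below could have been
# derived from a machine-learning model during development, but the
# deployed system is a fixed set of rules: no runtime inference, no
# adaptiveness after deployment.

def approve_loan(income: float, credit_score: int, debt_ratio: float) -> bool:
    """Static decision rules, frozen at deployment time."""
    if credit_score < 620:
        return False
    if debt_ratio > 0.45:
        return False
    return income >= 30_000


print(approve_loan(income=42_000, credit_score=700, debt_ratio=0.30))  # True
```

Whether a regulator would treat this as an "AI system" arguably hinges on whether what counts is the learning that produced the rules, or the behaviour of the deployed artefact.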
Similarly, systems that simply apply filtering or transformation operations to data inputs, without "inferring" how to generate outputs that influence their environment, may not qualify as AI under this definition. For example, a social media content moderation system that uses predefined keywords to filter out inappropriate content might not be considered an AI system, even though it has a significant impact on users' online experiences.
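A sketch of what such a filter might look like (the blocklist terms are placeholders): it applies a fixed lookup to its input rather than inferring how to generate an output.

```python
# Hypothetical keyword-based moderation: a fixed blocklist applied
# verbatim. The system filters content, but there is no statistical
# "inference" from inputs to outputs anywhere in the pipeline.

BLOCKLIST = {"spamword", "scamlink"}  # placeholder terms

def is_allowed(post: str) -> bool:
    """Reject any post containing a blocklisted word."""
    return not (set(post.lower().split()) & BLOCKLIST)


print(is_allowed("hello world"))       # True
print(is_allowed("buy spamword now"))  # False
```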
The definition's focus on systems that influence "physical or virtual environments" could also exclude simulations or models that aim to replicate real-world environments without directly influencing them, such as traffic simulation software used for urban planning or climate models used to study global warming patterns, both of which carry inherent risk.
Furthermore, the emphasis on statistical inference from data could leave out reasoning systems based on explicit knowledge representations and logical inferences, known as symbolic AI. These systems, such as expert systems used in medical diagnosis or legal analysis, may fall outside the scope of the Act despite their potential impact on critical decision-making processes.
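As an illustration, here is a toy forward-chaining rule engine of the kind classical expert systems are built on (the rules are invented for this example): it derives conclusions by applying explicit if-then rules to symbolic facts, with no statistical learning from data anywhere.

```python
# Toy forward-chaining inference in the spirit of classical expert
# systems: explicit symbolic rules, repeatedly applied until no new
# facts can be derived. Nothing here is learned from data.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),                      # illustrative only
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Apply RULES until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived


print(forward_chain({"fever", "cough", "short_of_breath"}))
```

Systems like this plainly "infer" in the logical sense; whether they infer "from the input [they receive]" in the statistical sense the definition seems to gesture at is exactly the ambiguity.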
Lastly, the definition's use of the term "machine-based" could create ambiguity around unconventional computational paradigms like biological neural networks or chemical computers, which may not be clearly covered by the Act.
Scarily too broad…
In the most extreme case, the vague and all-encompassing definition of "AI systems" in the EU AI Act could lead to a regulatory nightmare. The broad scope could potentially subject a vast array of conventional software systems to burdensome and unnecessary oversight, stifling innovation and growth in the technology sector.
Imagine a scenario where any software that employs even the most basic forms of statistical modelling, data-driven optimisation, or adaptive behaviour is suddenly classified as an "AI system." This could include everything from simple linear regression models used for demand forecasting to optimisation algorithms for supply chain management. The resulting regulatory burden could be immense, forcing companies to navigate a complex web of compliance requirements for systems that pose little to no risk.
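To illustrate how low that bar could sit, here is a complete demand forecast using ordinary least squares (the sales figures are synthetic): under a maximal reading, even this handful of lines "infers, from the input it receives, how to generate outputs such as predictions."

```python
# A trivial demand forecast: fit a straight line to six months of sales
# and extrapolate one month ahead. Synthetic data for illustration.
import numpy as np

months = np.array([1, 2, 3, 4, 5, 6])
units_sold = np.array([100, 110, 118, 131, 140, 152])

slope, intercept = np.polyfit(months, units_sold, deg=1)  # least-squares fit
print(f"Forecast for month 7: {slope * 7 + intercept:.0f} units")
```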
The AI Act's definition could create a chilling effect on software development in the EU. Companies may become hesitant to incorporate even the most rudimentary forms of intelligent behaviour into their systems for fear of triggering regulatory scrutiny. This could put EU businesses at a significant disadvantage compared to their counterparts in other regions with more targeted AI regulations.
Moreover, the broad definition could lead to a flood of litigation as companies struggle to interpret and comply with the Act's requirements. The legal uncertainty surrounding terms like "adaptiveness" and "autonomy" could give rise to a cottage industry of lawsuits, with companies facing penalties and reputational damage for systems that most would not consider true AI.
The definition could unintentionally hamper the development and deployment of beneficial AI technologies. The regulatory burden and legal risks could deter investment and innovation in areas like healthcare, environmental sustainability, and public safety, where AI has the potential to make a positive impact.
Why this?
One possibility is that the drafters intentionally aimed for a broad, flexible definition to future-proof the legislation against the rapid evolution of AI technologies. By focusing on functional characteristics rather than specific techniques, the Act could stay relevant as new AI capabilities emerge.
However, it's also plausible that the lack of technical specificity stems from a lack of deep expertise in the nuanced landscape of AI among the legislative drafters. Crafting laws around such a complex, multifaceted domain is undeniably challenging, and simplification may have been seen as the pragmatic path forward.
Moving forward
The EU has the opportunity to refine the definition and provide clearer guidance to mitigate these risks. However, the potential for unintended consequences underscores the need for a more targeted and nuanced approach to defining AI systems in the regulatory context.
If there is a will… then hopefully, there is a way.
Note: The EU Law is 88k words. That's a lot of words. I/we/it spent much of a day pulling it apart, trying to understand it and looking for 'Legitimate Interest' style loopholes in it. There are five significant ones that I/we/it identified, and we've been doing a dive into each; this is the final part of the series. It's all a bit miserable, I know; I'm a recovering goth. At Obsolete we're developing a framework to help businesses navigate this, which we'll be announcing soon. If you've read this far... you might want to get in touch.
Research: This was all Claude. Sorry GPT, you weren't required.
Narrative: Jon 60% Claude 40% after much discussion.
Obsolete.com | Work the Future