The EU AI Act Challenges - So what is AI anyway?!
Image: DALL·E 3 - Which one is AI?


As the European Union moves forward with its Artificial Intelligence Act (AI Act), one of the key challenges is the very definition of what constitutes an "AI system." How AI is defined in the legislation has far-reaching implications for the scope, applicability, and effectiveness of the regulatory framework.

The Specifics

The final agreed text of the AI Act aligns its definition of an "AI system" with the OECD, stating that it is "a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

Potentially too narrow…

For instance, the reference to systems that "exhibit adaptiveness after deployment" could exempt rule-based algorithms that never update or evolve their logic once deployed, even if that logic was originally derived from machine learning. A loan approval system applying a fixed set of rules, even rules distilled from a trained model, may not be covered by the Act.
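To make the edge case concrete, here is a minimal, hypothetical sketch of such a system. The thresholds and field names are invented for illustration; the point is that the deployed artefact is just static rules (imagine them extracted offline from a decision tree), with no adaptiveness after deployment.

```python
# Hypothetical loan-approval logic. The thresholds below stand in for rules
# that were originally derived from a trained model, but the deployed code
# is a fixed rule set that never updates itself after deployment.
def approve_loan(income: float, debt_ratio: float, credit_score: int) -> bool:
    if credit_score < 580:      # static cutoff, e.g. from an offline model
        return False
    if debt_ratio > 0.43:       # static cutoff
        return False
    return income >= 25_000     # static cutoff

approve_loan(40_000, 0.30, 700)  # approved
approve_loan(40_000, 0.50, 700)  # rejected: debt ratio too high
```

Whether this "infers how to generate outputs" or merely executes fixed logic is exactly the interpretive gap the Act leaves open.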

Similarly, systems that simply apply filtering or transformation operations on data inputs without explicitly "inferring" mappings to influence environments may not qualify as AI under this definition. For example, a social media content moderation system that uses predefined keywords to filter out inappropriate content might not be considered an AI system, even if it has a significant impact on users' online experiences.
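A toy sketch of that kind of filter, with placeholder keywords, shows how little "inference" is involved: the system is a pure set-membership check, yet it still decides what users see.

```python
# Hypothetical keyword-based moderation: a pure filtering operation on the
# input, with nothing learned and nothing inferred.
BLOCKED_KEYWORDS = {"spamword", "scamlink"}  # placeholder terms

def should_hide(post: str) -> bool:
    """Return True if the post matches any blocked keyword."""
    words = set(post.lower().split())
    return not words.isdisjoint(BLOCKED_KEYWORDS)

should_hide("hello world")        # allowed through
should_hide("buy spamword now")   # filtered out
```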

The definition's focus on systems that influence "physical or virtual environments" could also potentially exclude simulations or models that aim to replicate real-world environments without directly influencing them. This might include traffic simulation software used for urban planning or climate models used to study global warming patterns, both of which carry inherent risks of their own.

Furthermore, the emphasis on statistical inference from data could leave out reasoning systems based on explicit knowledge representations and logical inferences, known as symbolic AI. These systems, such as expert systems used in medical diagnosis or legal analysis, may fall outside the scope of the Act despite their potential impact on critical decision-making processes.
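The distinction matters because symbolic systems reason over explicit rules rather than statistical mappings. Here is a minimal forward-chaining sketch in the style of a classic expert system; the facts and rules are invented for illustration.

```python
# Minimal forward-chaining inference over explicit knowledge, in the style
# of a symbolic expert system. Nothing here is learned from data: the rules
# are hand-written (premises, conclusion) pairs.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "refer_to_doctor"),
]

def infer(facts: set) -> set:
    """Repeatedly apply rules until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

infer({"fever", "cough", "high_risk_patient"})
# derives "flu_suspected" and then "refer_to_doctor"
```

This plainly performs inference in the logical sense, yet it sits awkwardly against a definition centred on inferring outputs from input data.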

Lastly, the definition's use of the term "machine-based" could create ambiguity around unconventional computational paradigms like biological neural networks or chemical computers, which may not be clearly covered by the Act.

Scarily too broad…

In the most extreme case, the vague and all-encompassing definition of "AI systems" in the EU AI Act could lead to a regulatory nightmare. The broad scope could potentially subject a vast array of conventional software systems to burdensome and unnecessary oversight, stifling innovation and growth in the technology sector.

Imagine a scenario where any software that employs even the most basic forms of statistical modelling, data-driven optimisation, or adaptive behaviour is suddenly classified as an "AI system." This could include everything from simple linear regression models used for demand forecasting to optimisation algorithms for supply chain management. The resulting regulatory burden could be immense, forcing companies to navigate a complex web of compliance requirements for systems that pose little to no risk.
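For a sense of how rudimentary the borderline cases are, here is an ordinary least-squares fit of a straight line, the sort of "basic statistical modelling" a demand forecast might use. The data is made up; the question is whether a few lines of arithmetic like this "infer, from the input, how to generate predictions."

```python
# Ordinary least-squares fit of y = a*x + b, computed in closed form.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Toy demand history: period -> units sold.
a, b = fit_line([1, 2, 3, 4], [10, 20, 30, 40])
forecast = a * 5 + b  # predicted demand for period 5
```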

The AI Act's definition could create a chilling effect on software development in the EU. Companies may become hesitant to incorporate even the most rudimentary forms of intelligent behaviour into their systems for fear of triggering regulatory scrutiny. This could put EU businesses at a significant disadvantage compared to their counterparts in other regions with more targeted AI regulations.

Moreover, the broad definition could lead to a flood of litigation as companies struggle to interpret and comply with the Act's requirements. The legal uncertainty surrounding terms like "adaptiveness" and "autonomy" could give rise to a cottage industry of lawsuits, with companies facing penalties and reputational damage for systems that most would not consider true AI.

The definition could unintentionally hamper the development and deployment of beneficial AI technologies. The regulatory burden and legal risks could deter investment and innovation in areas like healthcare, environmental sustainability, and public safety, where AI has the potential to make a positive impact.

Why this?

One possibility is that the drafters intentionally aimed for a broad, flexible definition to future-proof the legislation against the rapid evolution of AI technologies. By focusing on functional characteristics rather than specific techniques, the Act could potentially stay relevant as new AI capabilities emerge.

However, it's also plausible that the lack of technical specificity stems from a lack of deep expertise in the nuanced landscape of AI among the legislative drafters. Crafting laws around such a complex, multifaceted domain is undeniably challenging, and simplification may have been seen as the pragmatic path forward.

Moving forward

The EU has the opportunity to refine the definition and provide clearer guidance to mitigate these risks. However, the potential for unintended consequences underscores the need for a more targeted and nuanced approach to defining AI systems in the regulatory context.

If there is a will… then hopefully, there is a way.

Note: The EU law is 88k words. That's a lot of words. I/we/it spent much of a day pulling it apart, trying to understand it and looking for 'Legitimate Interest' style loopholes. There are five significant ones that I/we/it identified, and we've done a dive into each. This is the final part of that series. It's all a bit miserable, I know; I'm a recovering goth. At Obsolete we're developing a framework to help businesses navigate this, which we'll be announcing soon. If you've read this far... you might want to get in touch.


Research: This was all Claude. Sorry GPT, you weren't required.

Narrative: Jon 60% Claude 40% after much discussion.



Obsolete.com | Work the Future

Charlie Barraclough

I make things happen

5 months ago

Lucia Savage this is a friend of mine - thought you would find it interesting reading!

Jeremy Swinfen Green

Digital strategy and governance consultant, trainer and writer

8 months ago

Thanks Jon. This is a really useful analysis pointing out the difficulties of knowing exactly what types of behaviours will be judged as legitimate. As you say, we will probably have to wait for a few court cases!
