The EU AI Act - "Systemic Risks"
The European Union's Artificial Intelligence Act (TA-9-2024-0138_EN) introduces the concept of "systemic risks" in relation to general-purpose AI models that have "high-impact capabilities" and could have significant negative effects on areas like public health, safety, fundamental rights, public security, or society as a whole.
The Act treats systemic risks as stemming from the broad societal impacts these powerful, versatile AI models may have due to their scale, technical capabilities, and potential reach. It lays out specific criteria, such as the quality and size of the training data, the number of users, the model's level of autonomy, and its training compute (with cumulative training compute above 10^25 floating-point operations triggering a presumption of "high-impact capabilities" under Article 51(2)), to assess whether a general-purpose model poses such systemic risks, which would trigger heightened obligations for providers.
While the Act's ambitions are admirable, aiming to safeguard public interests against the far-reaching impacts of AI, the broad and somewhat subjective criteria defining "systemic risks" open the door to several potential misapplications and abuses that warrant careful consideration.
Disproportionate Burdens on SMEs
The ambiguity surrounding the term could disproportionately affect small and medium-sized enterprises (SMEs), which lack the resources of their larger counterparts to argue their compliance case. This imbalance could stifle competition and innovation, suggesting a need for measures that level the playing field and ensure all entities, regardless of size, can contribute to and benefit from AI advancements.
Regulatory Arbitrage
The Act's expansive definition may inadvertently facilitate regulatory arbitrage. Companies could seek the most lenient interpretations across EU Member States, undermining the uniformity the AI Act aims to establish. This scenario highlights the need for a harmonised approach to interpreting "systemic risks" to prevent discrepancies that could dilute the Act's effectiveness.
Stifling Innovation
The subjective nature of "reasonably foreseeable negative effects" (part of the Act's definition of "systemic risk" in Article 3(65)) could lead startups and SMEs to self-censor or shy away from pioneering AI developments, fearing their innovations might be classified as posing "systemic risks". This risks dampening the entrepreneurial spirit crucial for technological breakthroughs, underscoring the importance of clear, supportive guidelines for innovators navigating the AI Act's provisions.
Technical Criteria Manipulation
Developers might tune their AI models to technically avoid "systemic risk" classification without genuinely mitigating the associated risks. This gaming of technical parameters to skirt regulation highlights the necessity for robust, transparent criteria that genuinely reflect the Act's intentions (the classification rules in Article 51 and the criteria in Annex XIII), ensuring AI technologies enhance rather than endanger societal well-being.
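To make the gaming incentive concrete, here is a minimal sketch, my illustration rather than anything from the Act, using the community rule of thumb that training a dense transformer costs roughly 6 × parameters × tokens FLOPs, measured against the 10^25 FLOP presumption in Article 51(2). The model sizes are hypothetical.

```python
# Rough back-of-envelope check against the AI Act's compute presumption
# (Article 51(2): cumulative training compute above 10^25 FLOPs presumes
# "high-impact capabilities"). The 6*N*D estimate is a community rule of
# thumb for dense transformers, not anything the Act prescribes.

SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold in Article 51(2)

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs as 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

def presumed_high_impact(n_params: float, n_tokens: float) -> bool:
    """True if the estimated compute trips the Act's presumption."""
    return training_flops(n_params, n_tokens) > SYSTEMIC_RISK_FLOPS

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = training_flops(70e9, 15e12)  # ~6.3e24 FLOPs, just under the line
print(f"{flops:.2e} FLOPs -> presumed systemic risk: "
      f"{presumed_high_impact(70e9, 15e12)}")
```

A provider sitting near the line could cap parameters or token count to land just under 10^25 FLOPs and dodge the presumption, even though the resulting model's capabilities, and whatever risks come with them, would be essentially unchanged. That is exactly the loophole this section worries about.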
Strategic Litigation and Competitive Challenges
The Act's subjective criteria could spur strategic litigation, with entities challenging "systemic risk" classifications to delay or undermine competitors. This potential for legal manipulation calls for a more objective, transparent classification process, mitigating the risk of the Act being used as a tool for anti-competitive practices.
Lobbying for Competitive Disadvantage
In an even more adversarial vein, companies might lobby for the classification of competitors' AI models as systemic risks, exploiting the subjective assessment criteria in Annex XIII for strategic advantage. This underscores the potential for regulatory measures to be hijacked for competitive battles, diverting the Act from its protective aims.
Sooo…
The introduction of "systemic risks" in the EU AI Act marks an important step towards addressing the challenges posed by AI. However, the potential for misapplication and abuse calls for refinements to ensure the Act's criteria are clear, objective, and uniformly applied.
Guess we’ll have to wait and see!
Note: The EU law is 88k words. That's a lot of words. I/we/it spent much of a day pulling it apart, trying to understand it and looking for 'Legitimate Interest' style loopholes in it. There are five significant ones that I/we/it identified, and I'm doing a dive into each. This is the second, and the most significant for startups and the really big folk, although it has potential for misinterpretation all over the place.
Research: Both Claude and GPT had goes at helping me extract all the relevant laws - both did OK, Claude was ultimately more useful.
Narrative: Jon 50% Claude 50% after much discussion.
Use Cases: Claude 40% GPT 40% Jon 20% - I just wrote the brief and curated the output
Obsolete.com | Work the Future