Special Edition: EU AI Act Provisional Agreement
Mark Hinkle
I publish a network of AI newsletters for business under The Artificially Intelligent Enterprise Network, and I run a B2B AI consultancy, Peripety Labs. I love dogs and Brazilian Jiu-Jitsu.
A deep dive on the first regulation on artificial intelligence and what it means for the AI Industry
[If you'd like The Artificially Intelligent Enterprise delivered to your email every week, you can subscribe via Substack, where you'll also get additional free content like my book, Marketing Machines: Harnessing Artificial Intelligence for Better Results and Storytelling, and future special offers.]
I like to keep a weekly newsletter cadence, but this news is noteworthy enough that I wanted to share my analysis right away. On Friday, December 8th, 2023, European Parliament and Council negotiators reached a provisional agreement on the Artificial Intelligence Act. I won't varnish my opinion on the topic: I am typically skeptical when a regulatory body tries to pass legislation about technology. Such legislation often fails to govern technology effectively because its guidelines are too broad and hard to enforce, and it cannot adapt to the fluid nature of evolving technology.
Summary of the EU Artificial Intelligence Act
The proposed legislation is the Artificial Intelligence Act, which the European Commission laid out. The main goals of the legislation are to ensure that AI systems placed on the EU market are safe and respect fundamental rights, to provide legal certainty that facilitates investment and innovation in AI, to strengthen governance and enforcement of existing law, and to facilitate a single market for lawful, safe, and trustworthy AI applications.
The legislation categorizes AI systems as high-risk or non-high-risk. High-risk systems pose significant risks to health, safety, or rights. These systems face stricter requirements around transparency, risk management, data quality, documentation, and human oversight.
Key requirements in the legislation include bans on applications deemed an unacceptable risk, such as biometric categorization using sensitive characteristics, untargeted scraping of facial images to build facial recognition databases, emotion recognition in the workplace and schools, social scoring, and AI that manipulates human behavior or exploits people's vulnerabilities. High-risk systems must undergo a mandatory fundamental rights impact assessment, and citizens gain the right to launch complaints and receive explanations about decisions based on high-risk AI systems.
The legislation also proposes narrow, safeguarded exemptions for law enforcement use of biometric identification, regulatory sandboxes and real-world testing to support innovation (particularly for SMEs), and fines ranging from €7.5 million or 1.5% of global turnover up to €35 million or 7% of turnover, depending on the infringement and the size of the company.
Open Source
The EU's AI Act has been a topic of concern for open source efforts, with experts warning that the written legislation could impose onerous requirements on developing open AI systems (open source, not OpenAI). There are fears that the Act might lead to a chilling effect on open source AI contributors, potentially concentrating power over the future of AI in large technology companies.
While the Act contains carve-outs for some categories of open source AI, such as those exclusively used for research and with controls to prevent misuse, experts argue that it could be difficult to prevent these projects from being integrated into commercial systems, where malicious actors might abuse them.
Some have emphasized that open source developers should not be subjected to the same regulatory burden as those developing commercial software. The concerns revolve around the potential impact of the AI Act on open-source AI contributors and the balance between innovation and accountability in the AI landscape.
Guardrails for general artificial intelligence systems
It has been agreed that general-purpose AI (GPAI) systems and their models must follow transparency requirements proposed by Parliament. These requirements include creating technical documentation, complying with EU copyright law, and releasing detailed summaries about the content used for training.
For GPAI models with high impact and systemic risk, Parliament negotiators have managed to secure more strict obligations. If these models meet certain criteria, they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the Commission, guarantee cybersecurity, and report on their energy efficiency.
Members of the European Parliament (MEPs) have also emphasized that, until harmonized EU standards are published, GPAIs with systemic risk may comply with the regulation by relying on codes of practice.
Governments Aren’t Typically Effective at Creating Technology Legislation
I am not against technology regulation, but I am skeptical of how well governments can regulate complex and evolving technology. Here are a couple of examples of technology regulation that falls flat.
The CAN-SPAM Act was enacted in 2003 by the U.S. Congress to regulate unsolicited commercial email. While the law has made most spam illegal and less attractive to spammers, it has been criticized as largely ineffective against malicious spammers. A study revealed that the Act had no observable impact on the volume of spam sent and did not significantly improve spammer compliance with its provisions.
However, the law I am even more critical of is the General Data Protection Regulation (GDPR), the law that makes us click through cookie warnings before we can enter a website. This comprehensive EU law, in effect since May 25, 2018, aims to protect individuals' privacy rights by regulating the use of personal data and is considered the world's toughest privacy and security law. However, its extensive and far-reaching requirements have posed compliance challenges for businesses, particularly small and medium-sized enterprises (SMEs), and some argue that the regulation has significantly burdened them and hindered their operations.
The EU AI Act TL;DR
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It's a big deal because it sets the bar for AI legislation in other jurisdictions. The EU is also one of the most important markets for IT, and its rules will likely not be in lockstep with the regulations other governments are sure to pass.