European Union’s Artificial Intelligence Act: It Is Finally Here (Part 1)
Suvabrata Sinha
Experienced CISO | Cybersecurity Strategy and Defensive Operations | ex-NXP Semi, Microsoft & Bank of America.
It has been a couple of years since the European Commission first published its draft regulation on “Harmonized Rules on Artificial Intelligence”. The document has been much debated in policy circles by lawmakers, policy wonks and sundry academics. However, with generative AI suddenly taking center stage and catching the public’s attention, the draft regulation has “gone viral”, and the EU, showing urgency, has just concluded its final rules on Artificial Intelligence.
This would be a good time to discuss, debate and understand the construct and broad thrust of this regulation.
This article, and the follow-up posts, will cover the broad aspects of this law, its positives and negatives (as I see them) and, finally, discuss whether India should follow with its own, analogous legislation on Artificial Intelligence.
So, what are the broad contours of this law?
Scale and Scope
The first principle is that this is a “horizontal” piece of legislation. It does not look at individual industries, sectors or areas of technological activity; its scope is the use of AI in the public domain, wherever that may be. Exclusions and limitations are specific and designed to be as narrow as possible (e.g. Section 19 on AI usage by law enforcement).
The second is that it is “extra-territorial” in scope, much like the GDPR. It aims to cover not only the EU member states but also AI systems developed and used outside the EU, to the extent that they affect natural persons in the EU. In fact, one (somewhat troubling) provision extends to AI systems “even when they are neither placed on the market, nor put into service, nor used in the Union…”. This creates many foreseeable complications that would require a separate blog post.
The third is that it takes a “human-centric” approach, putting potential harm to human beings at the center of the regulation. All provisions focus on preventing harm in high-risk use cases, and on providing enough clarity, visibility and tooling for people to make an informed choice in most other cases.
The final compromise responds to a number of concerns raised during the consultation process. It now excludes exclusively military and defense applications of AI, and does not affect individual member states’ jurisdiction over “national security”.
Risk-based approach
The regulation adopts a risk-based approach, categorizing AI systems into three levels of risk: (1) Unacceptable Risk, (2) High Risk, and (3) Low Risk. High-risk applications are subject to more stringent requirements.
Special provisions for “high-risk” AI systems
AI systems considered high-risk include those used in critical infrastructure, education and training, essential services, law enforcement, and biometric identification. These systems are subject to strict requirements covering conformity assessments, data quality, and human oversight of their creation and operation.
Descriptive/directive regulation of “prohibited practices”
The regulation lists a number of practices that are considered unacceptable and are prohibited outright. These include AI systems that manipulate individuals through subliminal techniques, exploit the vulnerabilities of specific groups, or use real-time remote biometric identification. The explanatory text to the final draft cites examples such as untargeted scraping of facial images from the internet or CCTV, emotion recognition (already quite sophisticated), and social scoring. How these will play out in real life and in commercial usage remains to be seen. There are some narrowly focused exceptions for law enforcement, which also need to be tested in practice.
Transparency and accountability
The regulation emphasizes transparency, disclosure and informed consent when users are interacting with an AI system. Providers of AI systems are expected to "maintain documentation" and "make it available" to authorities to demonstrate compliance.
Data governance
The draft emphasizes the importance of data used in AI systems. It requires that data used for training and testing AI models be of high quality and representative. It also addresses issues related to bias and fairness.
The regulation also lays down rules for “foundational models”, including transparency obligations.
Human oversight
High-risk AI systems must have human oversight to minimize the risk of harm. This includes the ability to intervene and override the AI system's decisions.
Penalties
In line with most recent EU regulations, there are significant fines for non-compliance. Fines can be as high as 30 million euros or 6% of total worldwide annual turnover, depending on the nature of the violation.
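To make the scale of these penalties concrete, here is a minimal illustrative sketch using the figures quoted above. It assumes a GDPR-style “whichever is higher” rule between the fixed cap and the turnover-based cap; the actual cap under the final text varies by category of violation, so treat this purely as an order-of-magnitude illustration.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative upper bound on a fine, using the article's figures:
    EUR 30 million, or 6% of total worldwide annual turnover.

    Assumption: the higher of the two amounts applies (GDPR-style);
    the real cap depends on the nature of the violation.
    """
    fixed_cap = 30_000_000
    turnover_cap = 0.06 * worldwide_annual_turnover_eur
    return max(fixed_cap, turnover_cap)

# A company with EUR 1bn turnover: the 6% prong dominates (EUR 60m).
print(max_fine_eur(1_000_000_000))
# A company with EUR 100m turnover: the fixed EUR 30m cap dominates.
print(max_fine_eur(100_000_000))
```

The point of the two-pronged structure is that the fixed amount bites for smaller firms, while the turnover percentage keeps the penalty material for the largest players.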
What's next?
This is “frontier” legislation, and a lot of clarity still needs to evolve. A two-year transition period has been envisaged; however, I would bet a pretty penny that developments in the field of AI will far outpace the bureaucrats’ and regulators’ ability to regulate them.
I also sense some pitfalls and ambiguity in the law as it stands. A commentary on "the good, the bad and the ugly" is coming up in Part 2 of this blog, so stay tuned.