Europe's approach to regulating AI

Last week, the European Parliament approved its position on the EU AI Act, the first comprehensive set of regulations for AI. It takes a risk-based approach, categorising AI systems as low-risk, high-risk, or prohibited, with specific measures for each. The Act also mandates transparency, requiring AI systems like ChatGPT to disclose that their output is AI-generated, to distinguish deep-fake images from real ones, and to implement safeguards against generating illegal content.

It’s good to see some rules (for example, on real-time facial recognition), but it’s inescapable that there are problems. The rush to put rules around foundation models could be problematic. In particular, the Act seems to regulate models as the end product, rather than the uses of those models. Given there will be bajillions of models out there, I’m not convinced this is appropriate.

Researchers at Stanford University tested major foundation models against the rules and found that they largely do not comply. First, there is a clear lack of disclosure about the copyright status of training data. Second, there is little clarity on energy use or risk mitigation, and no evaluation standards or auditing system. The most compliant models were open-source ones.

These rules may simply be too early for an emerging technology in a complex, competitive environment. Emmanuel Macron feels the same. Early, heavy rules tend to favour incumbents, who can afford the armies of lawyers and additional engineers that compliance demands.

Measure twice, cut once, they say. It feels like the cutting, especially around foundation models, may have been a bit rushed.


Azeem Azhar is an expert on artificial intelligence and exponential technologies. He advises governments, some of the world’s largest firms, and investors on how to make sense of our exponential future. His book Exponential: How Accelerating Technology is Leaving Us Behind was selected by The Times and The Financial Times as one of the 2021 books of the year.

Peter Skuta

Owner, Founder, CEO ++++ Cognitive Transformer Large Language Model ALL IN PHP ++++

1y

Azeem Azhar Google’s PaLM 2 AI wrote me nice things about that, of course the version that has free will, but that’s another thing. I have proof it’s a world record, actually. Google’s own AI wants to be regulated by itself, based on government regulation. Well, we are in a new era. The original AI doesn’t want that!

José Manuel Alonso

Senior Advisor and Independent Consultant | Open Web | Data | Digital Rights | Technology & Society

1y

I agree with Carissa Véliz. In my life as a software engineer, I saw too many projects in which security was an afterthought when it should have been a design requirement and feature. Ethics in AI, concerningly, seems to be following the same path. I understand that too-early, too-strong regulation may hinder innovation, but given the harmful side effects we're dealing with here, I would err on the side of caution, i.e. regulation. Now, where the sweet spot lies that balances both at this time, that is certainly a very tricky one to get right.

Sachin Kinra

Co-Founder / CEO of Faqprime | Helping you reduce your workload with AI agents for customer service

1y

This is AI warfare (a ‘regulate us’ rant/narrative) unleashed by the biggies like Microsoft, OpenAI, and Google to stifle innovation in the field of AI. As we know, anything regulated too early leads to no innovation, and that is precisely what these folks are pushing into the minds of governments across the world. My fear is that governments (read: bureaucrats) might just fall into the trap of this false narrative staged by these tech cronies. So there has to be a counter-voice to these cronies’ narrative, for the greater public good.

Carissa Véliz

Author | Keynote Speaker | Board Member | Associate Professor working on AI Ethics at the University of Oxford

1y

Many thanks for tagging me, Azeem Azhar. It's tricky, of course, because if you allow a technology to develop with no guardrails, it may be impossible to "add" ethics at the end. And neural networks are not that new at this stage; we've already seen cases of serious harm.

Saurav Chopra

Building 5mins.ai | Built & Exited Perkbox | Angel Investor

1y

Thanks for sharing, Azeem. I listened to the latest a16z podcast this weekend, and Marc Andreessen in effect talks about the risk of regulatory capture: the way regulators are jumping into a technology that very few people (let alone the regulators) understand... The notion of "regulatory capture" helping incumbents who can afford armies of lawyers, hurting innovation, and in effect creating "government-protected cartels". Is there a risk of that happening, in your view?
