Athens Roundtable on the AI Act
Dr. Sebastian Wieczorek
Vice President AI Technology | GenAI Specialist | Strategy Advisor | AI Ethics Expert | PhD in Informatics | Keynote Speaker | Lecturer
The first thing I noticed was the friendliness of all the discussions; it took me a while to find traces of the usual friction between proponents of use-case-specific and technology-specific AI regulation. This conflict nevertheless manifests itself in the AI Act, which started out risk-based on the one side but then also incorporated rules for "general AI". The friendly tone was surprising because the Athens Roundtable was attended by people with a wide range of backgrounds: members of parliament, government officials, NGO representatives, policy advisors, and corporate public affairs staff.
A recurring theme in many of the discussions was that regulators will face challenges in enforcing policies because they lack AI experts and have difficulty attracting and recruiting them. NIST was mentioned a few times as a great example of how successful policy enforcement can be structured. Looking at some of its activities, such as the Face Recognition Vendor Tests, I can see the point, but I believe the discussion missed a critical aspect: NIST can run such benchmarks only because it is able to collect sensitive data for validation.
I also learned that the necessity of AI regulation is frequently framed as "creating certainty for companies" as well as "enabling innovation". It is a comforting thought, but I feel it overlooks some of the disadvantages additional regulation may create. At some point, sandboxes were positioned as the vehicle for enabling European innovation, which indicates that there is indeed some skepticism about the overall positive effect of AI regulation on innovation. Sandboxing, however, was described as supporting the testing and validation of AI systems. To me it is unclear how this would help create large foundational models like GPT-3, where development already requires large data sets before any testing, or how it would treat highly interactive systems like recommenders, where user-generated feedback is part of the training.
There was also an interesting discussion on whether the demand for explainability in fact introduces bias, because it relies on human rationalization concepts and human language, both of which contain and reproduce bias and are therefore incapable of providing adequate information. Those in favor of this view concluded that explainability might create a false sense of safety, and that instead of enforcing it we should rather focus on concepts to safeguard human decision making, e.g., establishing process requirements and applying the four-eyes principle.
Reflecting on the discussions, my overall impression was that the EU is trying to set the direction for AI but is doing so mostly by hitting the brake. Setting the direction, however, is only possible when you are leading, and I think there is little evidence that this is the case right now, as large foundational models are developed elsewhere. To stay with the racing metaphor: it is of course important to have brakes that help you steer around obstacles, but you also need the capability to accelerate where the road ahead is clear. Maybe I am seeing it wrong, but I believe Europe needs to balance its AI regulation initiatives with massive public and private investment in order to set its own agenda.