Initial thoughts on new EU AI plans

Earlier this afternoon, the European Commission unveiled its latest plans on artificial intelligence. Presented by EVP Margrethe Vestager and Commissioner Thierry Breton, the package, now available at https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682, contains two key components: a proposal for a regulation laying down harmonised rules on AI, and an updated coordinated plan on AI.

As a senior European AI researcher and one of the founders and Chair of the Board of CLAIRE, the Confederation of Laboratories for AI Research in Europe (claire-ai.org), one of the largest AI research organisations world-wide, I have been looking forward to seeing these documents. Of course, they cannot be properly analysed or digested within a few hours, but here are my first impressions:

(1) The proposal for regulation is quite different from a broadly discussed, leaked version from earlier this year. Notably, there are now four levels of risk in the use of AI, which are treated very differently. This strikes me as a reasonable approach: higher-risk uses require a higher level of scrutiny, safeguarding and enforcement.

(2) The definition of AI, which is of key importance in the context of regulation (as well as that of major public investment), differs notably from those used earlier by the European Commission. Annex I of the proposed regulation defines AI as follows:

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimisation methods.

What's good about this is that it finally better reflects the richness and diversity of methods and techniques that comprise AI. It also, at least indirectly, acknowledges that not all AI techniques rely on large amounts of data, or on large amounts of computation (although many do). What's potentially problematic, though admittedly difficult to fix, is that the new definition can easily be read as including computational techniques usually considered outside the scope of AI; an extreme interpretation might even suggest that any computer program (being based on logic) falls under this definition. While I see a clear improvement over earlier Commission documents, it seems that more work is required on the definition of AI, especially as used in the context of a legal framework. I still believe that CLAIRE's proposed definition (as stated in https://claire-ai.org/wp-content/uploads/2020/06/ec-wp-response.pdf) is one of the best possible starting points for this:

AI [...] encompasses algorithms and systems that can replicate, support or surpass human perceptual, linguistic and reasoning processes; learn, draw conclusions and make predictions based on large or small quantities of data; replicate or enhance human perception; support humans in diagnosis, planning, scheduling, resource allocation and decision making; and cooperate physically and intellectually with humans and other AI systems.

What distinguishes AI approaches from other kinds of computation is that they exhibit key aspects of behaviour considered as intelligent in humans, and thus enable fundamentally new levels of automation and delegation.

(3) In the press briefing on the newly released documents, EVP Vestager stated that AI already brings many benefits to society, and that AI should be a force for progress in Europe. I couldn't agree more. She also emphasised the concern that, in light of the potential risks and downsides of using AI, especially in the public sector, uptake might be slow and important opportunities might be missed; I very much agree with that assessment. Clearly, regulation can help to build a certain level of trust, and there is a growing perception, also outside of Europe, that regulation of AI is needed, especially with respect to more controversial and riskier uses of AI technology.

(4) It was disappointing to see that, at the press conference, regulation was not just at the centre of the presentation and discussion, but that investment in AI research and innovation was hardly mentioned at all. At least on the surface, the new coordinated plan appears to be aimed in the right direction. However, the impression the Commission gave at today's announcement and press conference, to the public that funds its initiatives and activities, is that what mostly counts is getting AI regulation right, and that investment in excellence in AI research and innovation is of secondary importance for the success of "AI made in Europe" and of "AI for the benefit of Europe". This, I believe, would be a very dangerous and costly mistake. Even with the world's best AI regulation (a reasonable and perhaps quite attainable goal for Europe to pursue), the benefits of AI for citizens across Europe will be very limited indeed if we don't preserve and strengthen, swiftly and with determination, European excellence in AI research and innovation.

Further analysis of the new action plan is needed, but my first impression is that there is still insufficient realisation of the importance of creating and supporting critical mass in AI research and innovation within Europe, that key concepts, such as the prominently mentioned "lighthouse centre", remain vague and unconvincing, and that the Commission seems resigned to supporting a highly fragmented ecosystem through a smorgasbord of weakly coordinated mechanisms, mostly based on existing instruments.

Here, a major course correction seems needed, starting with the realisation that regulation and investment must go hand in hand, and that, while it might take years for regulation to take effect, we don't have that kind of time when it comes to the investment urgently required to ensure Europe's technological sovereignty and future competitiveness. It would not merely be reassuring, but increasingly mission-critical, to have the highest levels of the Commission and the member state governments emphasise the need for ambitious, carefully planned, yet swiftly executed investment in the future of "AI made in Europe".

As a Belgian proverb goes: "Celui qui arrive trop tard trouve la plaque retournée" (s/he who arrives too late finds the plate turned over). Right now, there is little reason to worry about arriving late on AI regulation, but a very significant risk of finding the plate of AI turned over by the time Europe truly comes to the table.


Addendum

Looking a little further into the documents released today, I am puzzled that the Commission still appears not to have realised the crucial role that AI techniques, notably from the area of automated reasoning, already play in cybersecurity, safety and robustness (see, e.g., the use of SAT/SMT solving for system, protocol and software verification, and of SMT and MIP solving for neural network verification). These are well-established areas of excellence in European AI (along with others, including machine learning, knowledge representation, multi-agent systems, vision, natural language processing and robotics) that can and should be key to the quality management not only of AI systems, but of IT systems and infrastructure in general. It's almost as if the Commission's AI experts have not yet realised that AI techniques are already crucially responsible for the correct and reliable functioning of the computer hardware and software we use today, especially in high-risk applications ...
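To make the reference to SAT solving a little more concrete, here is a minimal, purely illustrative DPLL-style satisfiability checker in Python. It is a sketch of mine, not code from the Commission documents or from any of the verification toolchains mentioned above; production verification relies on highly engineered solvers (e.g. MiniSat, CaDiCaL, Z3), and the formula below is a hypothetical example.

```python
def dpll(clauses, assignment=None):
    """Decide satisfiability of a CNF formula.

    `clauses` is a list of clauses; each clause is a set of non-zero
    integers, where n means variable n and -n its negation.
    Returns a satisfying assignment (dict: variable -> bool),
    or None if the formula is unsatisfiable.
    """
    if assignment is None:
        assignment = {}

    # Simplify: drop satisfied clauses, strip assigned literals.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        remaining = {l for l in clause if abs(l) not in assignment}
        if not remaining:
            return None  # empty clause: conflict on this branch
        simplified.append(remaining)

    if not simplified:
        return assignment  # all clauses satisfied

    # Unit propagation: a one-literal clause forces its assignment.
    for clause in simplified:
        if len(clause) == 1:
            lit = next(iter(clause))
            return dpll(simplified, {**assignment, abs(lit): lit > 0})

    # Branch: try both truth values for one unassigned variable.
    lit = next(iter(simplified[0]))
    for value in (True, False):
        result = dpll(simplified, {**assignment, abs(lit): value})
        if result is not None:
            return result
    return None


# Hypothetical example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
formula = [{1, 2}, {-1, 3}, {-2, -3}]
model = dpll(formula)
```

Verification tools encode a property of a circuit, protocol or program as such a formula, so that a satisfying assignment corresponds to a counterexample and unsatisfiability to a proof of correctness; the engineering that makes this scale to millions of clauses is precisely the kind of AI research excellence discussed above.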


Ferdi v. Engelen

Organiseren met Data

3y

Interesting read, Holger Hoos, and a good start for me before I dive into these plans myself.
