Stanford analysis suggests how EU should categorize foundation models.


In brief

The analysis published by Stanford University discusses the challenges and considerations in designing tiers for the governance of foundation models. The author cautions against using developer properties (e.g., company size) or standalone compute-based tiers for regulation (currently the approach taken by the EU Commission in its AI Act proposals), arguing that compute alone may not adequately capture the societal impact or risk of foundation models.

The analysis suggests that evaluations of foundation models are promising but currently immature. While evaluations have been essential for tracking AI progress, the author acknowledges the need for improvements in measuring societal impact. They propose demonstrated impact as an ideal basis for tiers but note the current lack of mechanisms to track the downstream use of foundation models. The author explores the potential use of public databases, specifically in the context of the EU AI Act, to track downstream impact and recommends requiring registration of foundation models alongside high-risk AI systems.

The author concludes by encouraging governments to consult with various stakeholders, including civil society organizations, academia, and industry, to effectively advance the public interest in developing tiered regulatory frameworks.


How does it play in Brussels?

The widespread use of ChatGPT has disrupted AI Act negotiations, prompting policymakers to address the challenges posed by these powerful models. A tiered approach, with horizontal obligations such as transparency for all foundation models, has been under consideration. The focus has been on defining the top tier, encompassing 'very capable' models like GPT-4, which may undergo ex-ante vetting and risk mitigation.

The European Parliament's June position on the AI Act requires that both high-risk AI systems and foundation models be registered in a public database. In recent weeks, however, France, Germany and Italy have spoken out against the tiered approach initially envisaged for foundation models, pushing back against any regulation beyond codes of conduct. This has been a red line for the European Parliament, which has cited the Stanford report to lend legitimacy to the regulation of foundation models and has adopted its recommendations on how to categorize them.


#AI #foundationmodels #EU

https://www.davidhubert.com/post/stanford-analysis-warns-against-eu-approach-to-categorizing-foundation-models

Joerg Wicik, MBA

Head of Digital Platforms and Fin-Innovation @Volkswagen | Digital-CFO aaS @KI-4-Mittelstand-Arbeitskreis | Strategist, AI-autonomous finance, document & ERP processes | Keynote Speaker

11 months ago

Great presentation. In my opinion we should consider the EU AI Act + EU Data Act + DSGVO + … as an integrated approach from a governance, risk, and efficiency enterprise perspective. All of these consolidated in the report of the EU AI Act conformity assessment, which is pretty challenging….
