The (Gen)AI Policy Juxtaposition - EU vs US Approach
In April 2021, the European Union proposed the EU AI Act, promoting it as "the first comprehensive regulation on AI by a major regulator anywhere." After a political agreement between the European Parliament and the Council in December 2023, the Act was formally adopted in May 2024.
The framework categorizes AI applications by perceived risk, applying distinct regulatory treatment to each category. Certain applications, such as manipulative 'voice-activated toys' and some 'facial recognition' use cases, are banned outright as "unacceptable risk." The remaining categories are High-Risk (e.g., critical infrastructure, biometric systems, education tools), Limited-Risk (e.g., chatbots, image manipulation), and Minimal-Risk (e.g., AI in video games, spam filters).
The High-Risk category is subject to strict requirements, including risk assessments, data-quality standards, documentation, and human oversight. The Limited-Risk category must meet transparency obligations, such as notifying users that they are interacting with AI. Minimal-Risk cases are simply encouraged to follow best practices.
The EU AI Act also establishes a regulatory structure, which includes the European AI Office, the European Artificial Intelligence Board, a scientific panel of independent experts, and an advisory forum.
The EU AI Act treats AI much the way investment education treats asset classes: it sorts applications into risk buckets, while perhaps overlooking opportunities to leverage AI for global competitiveness. The risk-based classification could be seen as restrictive, potentially oversimplifying the nuanced differences between AI applications.
It's important to note that this framework was proposed before the rapid advancements in generative AI (catalyzed by ChatGPT) that we're seeing today.
The EU AI policy approach tolerates, and even slightly encourages, AI but does not aim to fully unleash the technology to achieve global competitiveness.
The U.S. policy on AI, as indicated by the White House Memo released on October 24, 2024, and the executive order from October 30, 2023, is aimed at leveraging AI to strengthen both the U.S.'s technological leadership and national security.
The US AI policy takes a notably comprehensive view of AI. Let us start with last year's executive order by the Biden administration, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," which aims at "harnessing AI for justice, security, and opportunity for all".
Key policy areas outlined in the order include safety and security, innovation and competition, support for workers, equity and civil rights, consumer protection, privacy, federal government use of AI, and international leadership.
The order also establishes several key entities, including the White House AI Council and Chief AI Officers across federal agencies.
The October 2024 memo, titled "Memorandum on Advancing the United States' Leadership in Artificial Intelligence," reiterates the goal of positioning the U.S. as the global leader in safe, secure AI development.
The memo is quite elaborate, and I would strongly recommend reading the whole document.
I will discuss a few points from the memo, read together with last year's executive order, that I found to be interesting.
National Security
The impetus of the U.S. policy is national security rather than just technological advancement. The policy emphasizes identifying AI threats to national security and leveraging AI to bolster it. Section 4.2 of the executive order requires U.S.-based IaaS providers (e.g., AWS, Google Cloud) to report on large-scale AI model training runs that could enable cyber-attack capabilities. There's also a focus on protecting critical infrastructure from AI-enabled attacks through regular threat assessments.
The executive order also recognizes that AI models (especially GenAI models) should not enable non-experts to develop chemical, biological, radiological, or nuclear (CBRN) weapons.
U.S. Leadership in AI
The U.S. aims for global AI leadership, not just through domestic infrastructure but by attracting international AI talent. The policy includes streamlining visa processes and enhancing research opportunities at U.S. universities to foster an AI-competitive ecosystem.
The U.S. AI policy promotes a competitive ecosystem by discouraging monopolistic practices that could allow dominant firms to restrict competitors' access to critical resources, such as semiconductors, computing power, cloud storage, and data. Additionally, it emphasizes securing a steady supply of semiconductor microchips, which are vital for AI development, to ensure that the AI industry can continue to grow and innovate without bottlenecks.
Safe AI Deployment
To ensure safe AI deployment, the policy mandates testing environments, including red-teaming and testbeds, for developing trustworthy AI.
A notable focus is on 'dual-use foundation models' - versatile AI models that could be used for both beneficial and potentially harmful purposes. Entities developing these models must submit reports to the government.
The policy also addresses generative AI misuse, including child sexual abuse material, non-consensual intimate imagery, and other harmful content generation, and it touches upon the labeling and watermarking of GenAI outputs.
International Collaboration in AI Governance
The U.S. aims to establish a responsible global AI governance framework centered on safety, human rights, and democratic values. Collaborating with international partners, it seeks to lead in setting norms that balance innovation with security, reflecting a commitment to ethical AI leadership.
Addressing Broader Impacts
Acknowledging AI's economic impacts, the policy proposes supporting workers through job-retraining programs and safeguarding against risks such as excessive surveillance, health issues, and labor disruptions. It emphasizes the involvement of unions, educators, and employers in shaping AI's role in the workplace to ensure equitable access to its benefits.
Privacy and Consumer Protection
The executive order recognizes that AI could amplify existing privacy issues online. It highlights AI's capabilities in data re-identification and inference, raising potential privacy risks. To address this, the policy promotes Privacy Enhancing Technologies (PETs) like Homomorphic Encryption and Secure Multi-Party Computation to protect user data.
PS: The book "The Age of Decentralization" contains a detailed discussion of privacy-enhancing technologies, including homomorphic encryption, SMPC, and zero-knowledge proofs.
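To make the SMPC idea concrete, here is a minimal additive secret-sharing sketch in plain Python. This is an illustrative toy (the function names, the three-party setup, and the modulus choice are my assumptions, not from the executive order or any production library): each party holds only a random-looking share of a private value, yet the parties can jointly compute a sum without any one of them ever seeing the inputs.

```python
import secrets

# Modulus for the share arithmetic; a large prime keeps shares uniform.
# (Illustrative choice -- real SMPC protocols fix this per deployment.)
PRIME = 2**61 - 1

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % PRIME  # make the shares sum to `value`
    return shares + [last]

def reconstruct(shares: list[int]) -> int:
    """Combine all shares to recover the original value."""
    return sum(shares) % PRIME

def add_shared(a_shares: list[int], b_shares: list[int]) -> list[int]:
    """Each party adds its own two shares locally; no party sees a or b."""
    return [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]

# Two private inputs are split among three parties, summed share-wise,
# and only the final result is reconstructed.
a, b = 42, 100
sum_shares = add_shared(share(a), share(b))
print(reconstruct(sum_shares))  # 142
```

The key property on display: any single share is uniformly random and reveals nothing about the input, so the computation on shares protects the underlying data in exactly the sense the policy's PET discussion intends.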
So, how does the US Policy approach compare to the EU approach?
Well, as of this writing (especially with a new administration coming to the White House), the U.S. policy approach seems more comprehensive and treats the advancements in AI as an opportunity to strengthen U.S. leadership. The U.S. approach, especially with recent policy actions, strikes a balance between competitiveness, innovation, and safety. By positioning itself as a leader in AI governance, the U.S. aims to set international norms, promote responsible innovation, and protect civil rights, while also preparing the workforce for AI-driven changes in jobs and industries.
The EU policy approach, by contrast, reeks of protectionism and emphasizes strict regulation, especially for high-risk AI applications. While some may see this as a reflection of the EU's commitment to values like data protection and consumer rights, the EU may miss out on the innovations and opportunities that developments in AI are likely to bring. The regulatory approach may be borderline anachronistic.