California's AI Bill Veto: A Step Back or a Strategic Pause?
Felipe Oriá
Tech Policy & Regulation | Latam & Emerging Markets | Digital Platforms, Emerging Technologies, Web3 | ex-Uber | ex-Binance | Harvard MPP | PhD Cand.
Our newsletter is published twice a week to keep you at the forefront of digital regulation and technology policy:
Tuesdays: The "Bridging the Gap" edition offers an accessible, academic take on key digital regulation concepts, helping bridge the gap between theory and practice for those immersed in the world of tech policy.
Thursdays: Dive deeper with an in-depth analysis and my unique perspective on a significant development in tech regulation from the past week.
Subscribe and follow #DigitalRegulationSeries to stay informed and engaged with insights into the evolving world of technology regulation.
What you need to know about California's vetoed AI Bill
Three days ago, California's Governor, Gavin Newsom, vetoed the state's AI regulation bill. Why? The bill claimed to regulate only the most powerful AI models, those trained with large amounts of computing capacity, but its requirements could have had a much wider impact, including on open-source models. It would have required companies to put safety measures in place, such as shutdown capabilities, safety assessments, testing, and computer cluster policies, to prevent "critical harm." It also called for companies to submit reports and included penalties for those that failed to comply.
The broader context
California's AI bill attracted attention from the state's tech community because it aimed to regulate the most advanced and ambitious AI models, which can cost over $100 million to develop. In its original, stronger version, the bill tied compliance to computing power, which meant that as computing became cheaper and more powerful, more companies would eventually fall under the law. The bill was weakened as it advanced through the state legislature. More here and here.
The arguments for and against
Governor Newsom criticized the bill for focusing too narrowly on regulating large AI systems without addressing the broader risks and harms of the technology, suggesting a more nuanced approach. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it," he said in a statement.
Scott Wiener, the California State Senator who authored the bill, called the veto a "setback for everyone who believes in oversight of massive corporations." In defending the bill, he said that "the companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public."
What is Big Tech saying about the bill?
Meta and Microsoft, along with the Chamber of Progress (which represents major tech companies like Google, Apple, and Amazon), opposed the legislation, claiming it could hinder innovation and weaken the U.S. in the global AI competition. In a letter, OpenAI expressed its opposition, arguing that SB 1047 "could cause entrepreneurs and engineers to leave the state." In contrast, Elon Musk expressed support for the bill, citing concerns about the potential risks AI poses to the public. Here and here for more.
What comes next?
Now that Newsom has vetoed the bill, there has been talk of federal regulators taking the lead on regulating AI models, an outcome that tech giants would welcome. Future regulations might blend elements of both SB 1047 and the EU's AI Act, leading to a hybrid approach that combines targeted oversight with comprehensive risk-based categorization.
Potential impact in California
The high costs of GPUs, infrastructure, and salaries for in-demand AI experts drive up the expenses of training advanced models. Smaller companies may struggle to compete with larger firms that can absorb these costs, potentially leading to a consolidation in the AI industry where only well-funded organizations can afford to develop advanced models in California, home to over 65,000 AI jobs.
What is happening in other US states?
States like Colorado, Maryland, and Illinois have passed laws requiring the disclosure of AI-generated "deepfake" videos in political ads, banning facial recognition in hiring, and protecting consumers from discrimination by AI models. Recently, the EU adopted the AI Act, which takes a broader approach than SB 1047 by categorizing AI systems into risk levels ranging from unacceptable to minimal. As of today, there are no nationwide AI laws in the U.S. That is why California, home to 35 of the top 50 AI companies in the world, matters: it could set the direction for the country's approach to AI regulation.
What does this mean for Latam?
The vetoed California AI bill (SB 1047) has significant implications not only for the United States but also for countries around the world, including those in Latin America. Because California is a major hub for AI development, its regulatory decisions can influence global standards and practices. As a result of these developments, we can expect:
1. Setting a Precedent for Regulation: California's attempt to regulate AI is bound to cause spillovers in other regions, including Latin America. Countries like Brazil, Argentina, and Chile are discussing frameworks for AI governance. The California bill's focus on safety protocols and accountability may inspire similar legislative efforts in these countries, encouraging them to adopt regulations that address the risks associated with AI technologies.
2. Aligning with Global Standards: The California legislation was influenced by the European Union's AI Act, which categorizes AI systems based on risk levels. This suggests that as countries develop their own regulations (including in Latam, as discussed in this previous edition), they may look to both California and the EU for guidance. Such alignment could facilitate international business and cooperation, making it easier for Latin American companies to engage with firms in the U.S. and Europe.
3. Social Concerns at the Forefront: Latin America faces unique challenges related to technology, such as data privacy, algorithmic bias, and digital inequality. The discussions surrounding SB 1047 highlight these issues. As Latin American nations consider their own AI policies, they may prioritize protections against discrimination.
4. Innovation vs. Regulation Balance: The veto was a political statement as well as a show of force by tech companies. The debate in California reflects a broader tension between fostering innovation and ensuring safety in technology development. Latin American countries will need to navigate this balance carefully, without falling victim to foreign lobbies while also preserving a healthy business environment. By learning from California's experience—where tech companies expressed concerns about regulatory burdens—Latin American policymakers can aim to create environments that encourage innovation while still protecting public interests.
Share your thoughts below!
Subscribe to our newsletter and follow #DigitalRegulationSeries for more insights into tech and regulation!
#techpolicy #tech #technology #innovation #publicpolicy #regulation #digitalplatforms #law #politicalscience #governmentrelations #governmentaffairs #ai #california #SB1047