Proposed California AI bill (SB-1047)
If you've been paying attention lately, one of the hottest topics of debate in tech circles is California’s proposed Senate Bill 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.”
Let's briefly discuss what the bill contains:
The bill applies only to “covered models,” which, before January 1, 2027, are defined as either: an AI model trained using more than 10^26 integer or floating-point operations, with training costs exceeding $100 million; or a model created by fine-tuning a covered model using at least 3×10^25 operations, at a cost exceeding $10 million.
After January 1, 2027, the definition may change based on regulations set by the Government Operations Agency. These cost thresholds will be adjusted annually for inflation.
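Loosely, the pre-2027 definition reduces to two threshold checks. Here is a minimal illustrative sketch in Python using the commonly cited figures above; the statutory language is more detailed, and the helper name and parameters are hypothetical:

```python
# Illustrative only, not legal advice: a rough encoding of the commonly
# cited pre-2027 "covered model" thresholds. Names and parameters are
# hypothetical; the statute's actual language is more detailed.

TRAIN_FLOPS_THRESHOLD = 1e26            # training compute, operations
TRAIN_COST_THRESHOLD_USD = 100_000_000  # training cost
FINETUNE_FLOPS_THRESHOLD = 3e25         # fine-tuning compute, operations
FINETUNE_COST_THRESHOLD_USD = 10_000_000

def is_covered_model(train_flops: float,
                     train_cost_usd: float,
                     finetuned_from_covered: bool = False,
                     finetune_flops: float = 0.0,
                     finetune_cost_usd: float = 0.0) -> bool:
    """Rough check against the thresholds in effect before Jan 1, 2027."""
    original = (train_flops > TRAIN_FLOPS_THRESHOLD
                and train_cost_usd > TRAIN_COST_THRESHOLD_USD)
    finetune = (finetuned_from_covered
                and finetune_flops >= FINETUNE_FLOPS_THRESHOLD
                and finetune_cost_usd > FINETUNE_COST_THRESHOLD_USD)
    return original or finetune

# Example: a hypothetical 2e26-operation run costing $150M would be covered.
print(is_covered_model(train_flops=2e26, train_cost_usd=150_000_000))  # True
```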
The bill mandates that developers of covered AI models implement robust safety protocols, including full-shutdown capability and detailed safety and security measures. Developers must retain these protocols, along with annual audit reports, and make them available to the Attorney General for up to five years.
It also prohibits the use or commercial release of AI models that could cause significant harm and requires third-party audits starting January 1, 2026.
Additionally, it requires operators of computing clusters to adopt policies for assessing whether customers intend to use the cluster’s computing power to train covered models, and it protects whistleblowers who report compliance issues. The bill also establishes the Board of Frontier Models and a consortium to develop “CalCompute,” a public cloud computing cluster intended to support safe and ethical AI development. It takes effect upon budget appropriation and includes provisions limiting public access to certain information.
As California is the epicenter of AI development, the introduction of this bill has generated significant discussion.
Reactions are mixed. The AI industry argues that the proposed laws could negatively impact small businesses and stifle innovation, while others support the bill, seeing some protection as better than none.
The bill was introduced by State Senator Scott Wiener, a San Francisco Democrat. One criticism is that it doesn’t address AI-driven discrimination. A separate measure, Assembly Bill 2930, originally sought to outlaw discriminatory AI in sectors like housing, finance, insurance, and healthcare, but it was shelved after the Senate Appropriations Committee narrowed its focus to AI in employment.
Tech companies are already gearing up against AI regulation. Beyond big players like Google, Meta, Microsoft, and OpenAI (which hired its first lobbyist in Sacramento this spring), nearly 100 companies from various industries, including Blue Shield of California, the dating app Bumble, biotech firm Genentech, and pharmaceutical giant Pfizer, oppose these regulations.
Overall, the regulatory approach in the US is fragmented compared to the EU, where the AI Act has already been enacted. The EU AI Act categorizes AI systems into four risk levels: unacceptable, high, limited, and minimal, with stringent rules for high-risk applications like critical infrastructure or law enforcement. It requires risk assessments, transparency, and human oversight for high-risk systems, and it establishes the European AI Office for enforcement.
It will be interesting to see how AI regulations in the US evolve, as they may become the framework the rest of the world eventually follows.
The Age of Decentralization is now available for pre-order on Amazon globally.