November 2024 | This Month in Generative AI: California Legislators Have Been Busy


Image generated with DALL-E

By Hany Farid, Professor at UC Berkeley, CAI Advisor

As the power and potential of artificial intelligence grow, legislators around the world are crafting policies aimed at mitigating some of the perceived harms associated with this relatively new technology.

In October of 2023, President Biden signed a sweeping executive order titled the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This EO opens with the following statement:

Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.

Since then, many US government agencies have been busy implementing various aspects of Biden's far-reaching EO, including, for example, the establishment of the US Artificial Intelligence Safety Institute. At the same time, many states have grown impatient with the lack of meaningful federal legislation to protect consumers and institutions from the many harms that have already emerged.

To fill this gap, several states have begun to pass their own regulations. A whopping 38 AI-related pieces of legislation recently landed on California Governor Gavin Newsom's desk, addressing many of the issues enumerated in Biden's EO. I will briefly review some of the most notable of these bills.

Let's start with the bill closest to the efforts of the Content Authenticity Initiative (CAI). The California AI Transparency Act (SB-942) aims to increase transparency around AI-generated content (i.e., images, audio, and video) by requiring AI providers with over one million monthly users to implement several measures that disclose when content is created or modified by artificial intelligence.

A few key provisions include:

  1. Providers must offer users an option to include a visible disclosure in the generated content that clearly identifies it as AI-generated.
  2. Providers must also include imperceptible disclosures specifying the content's provenance (e.g., digital watermarks).
  3. Providers must offer a publicly accessible, no-cost tool that allows users to check whether content was created or altered by the provider's system.

Governor Newsom signed this bill, which will take effect on January 1, 2025.
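
To make the first two provisions concrete, below is a minimal Python sketch (assuming the Pillow imaging library) of what a visible disclosure and a machine-readable provenance disclosure might look like. This is a toy illustration only: the metadata key and provider string are hypothetical, and real provenance systems such as Content Credentials rely on cryptographically signed C2PA manifests and robust watermarks rather than an easily stripped text chunk.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def add_disclosures(in_path: str, out_path: str, provider: str) -> None:
    img = Image.open(in_path).convert("RGB")

    # Provision 1: a visible disclosure identifying the content as AI-generated.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill=(255, 255, 255))

    # Provision 2: a machine-readable disclosure. Here it is a PNG text chunk,
    # which is trivially stripped; production systems embed robust watermarks
    # in the pixels themselves and attach signed manifests instead.
    meta = PngInfo()
    meta.add_text("ai_provenance", f"provider={provider};tool=hypothetical-model")
    img.save(out_path, "PNG", pnginfo=meta)

def check_provenance(path: str) -> bool:
    # Provision 3: a no-cost check for the provider's disclosure.
    return "ai_provenance" in Image.open(path).info
```

The third provision would then amount to exposing something like check_provenance behind a free, public-facing tool.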

This bill is a good step toward bringing the vision of the CAI to life. Because online and social media platforms are where the vast majority of people consume content, it is also important that safeguards be built into these platforms to help prevent people from falling victim to deepfakes and deceptive AI content. Having provenance requirements not only for the generative-AI tools where content is created, but also for the platforms where it is published and ultimately consumed, will make a meaningful difference in our ability to trust what we see and hear online.

AB-1831 aims to address the troubling trend of AI-generated child sexual abuse material (CSAM). By criminalizing the production, distribution, and possession of AI-generated CSAM, this bill closes a loophole in California law, which previously did not cover AI-generated content. Governor Newsom signed this bill, which will take effect on January 1, 2025.

Consistent with recent FCC rulings, AB-2905 requires robocallers to notify consumers when AI-generated voice content is used and to obtain consent from consumers before disseminating pre-recorded messages that use artificial voices. This bill, like the federal FCC rulings, is likely motivated by robocalls in New Hampshire earlier this year that used an AI-generated voice of President Biden to urge voters not to vote in the NH primary. This law is part of a broader effort to curb the misuse of AI in political and commercial contexts (more on this below). Governor Newsom signed this bill, which will take effect on January 1, 2025.

Building on earlier California rules around deepfake-powered election disinformation, the Defending Democracy from Deepfake Deception Act of 2024 (AB-2655) aims to prevent the distribution of materially deceptive audio and visual deepfakes related to elections in California, especially near election periods. This bill requires large online platforms with over one million California users in the preceding 12 months to block or label deceptive content that could harm a candidate's reputation or mislead voters before and after elections.

Shortly after Governor Newsom signed this bill into law, a federal judge temporarily halted its enforcement, citing concerns over the law's impact on free speech. The court will now assess whether the restrictions, intended to protect electoral integrity, disproportionately infringe on constitutional rights.

With many of the California AI bills focused on specific harms from how AI is deployed, AB-2013 focuses on how AI systems are developed. This bill requires developers of generative AI systems or services (i.e., systems that generate text, audio, images, or video) to post detailed documentation on their websites about the datasets used to train these systems. This includes summaries of dataset sources, purposes, types of data, and data processing methods, as well as whether datasets contain copyrighted material, personal information, or synthetic data. Governor Newsom signed this bill into law, and it will take effect on January 1, 2026.
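
To give a sense of what such documentation might cover, here is a hypothetical sketch of a machine-readable dataset summary reflecting the categories AB-2013 enumerates. The field names and values are my own invention; the bill specifies what must be disclosed, not a format.

```python
import json

# Hypothetical training-data disclosure covering the categories AB-2013
# enumerates: sources, purpose, data types, processing, and whether the
# data includes copyrighted, personal, or synthetic material.
dataset_disclosure = {
    "dataset_name": "example-web-corpus",  # hypothetical dataset
    "sources": ["publicly crawled web pages", "licensed news archives"],
    "purpose": "pretraining a text-generation model",
    "data_types": ["text"],
    "processing": ["deduplication", "quality filtering"],
    "contains_copyrighted_material": True,
    "contains_personal_information": False,
    "contains_synthetic_data": False,
}

print(json.dumps(dataset_disclosure, indent=2))
```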

In response to concerns over harms from powerful AI models, California legislators introduced SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill would have required developers of powerful AI models to meet certain risk assessment, safety, security, and testing requirements prior to deployment. Perhaps most important, the bill applied only to models that cost more than US$100 million to train, and it addressed harms that exceed US$500 million in damages. SB 1047 was met with resistance from many in the technology and academic sectors and was vetoed by Governor Newsom.

While I might take issue with one state trying to regulate what is undoubtedly a national and arguably an international issue, it makes sense that California – home to the titans of tech – would lead. The drawback of this state-level approach is that it is likely to lead to a complicated patchwork of regulation. On the other hand, the threat of a patchwork of regulation may help jolt the US Congress into action to create a unified response to the opportunities and risks ahead of us.

Mark Loundy

Instructional Technology Specialist and Maker Educator

1 week

Why no CAI link on the image at the top of this story? It is significantly flawed and is obviously generated by AI.
