From California to New York: How the AI Regulation Debate is Evolving

Reviving the AI Regulation Debate: New York’s RAISE Act Aims to Set New Standards

The United States is entering a new chapter in artificial intelligence (AI) regulation, and New York is poised to take center stage. New York State Assemblymember Alex Bores is drafting the Responsible AI Safety and Education Act (RAISE Act) to address critical concerns about advanced AI systems. The initiative seeks to pick up where California’s vetoed SB 1047 left off, offering a more refined and focused approach to AI regulation.

In this newsletter, we’ll dive into what the RAISE Act entails, how it differs from previous attempts, and its potential implications for the AI industry.


Why AI Regulation Matters

AI is advancing at breakneck speed, with models becoming increasingly powerful. This growth, however, brings significant risks, from unintended consequences to outright misuse. While many tech leaders advocate for regulation in principle, they often resist specific legislation when it is proposed.

California’s SB 1047 sparked national debate in 2024 but was ultimately vetoed. Now, the RAISE Act is reigniting the conversation, aiming to address gaps in existing laws and mitigate catastrophic risks posed by advanced AI models.


What the RAISE Act Proposes

The RAISE Act focuses on regulating “frontier AI models” — those pushing the boundaries of current capabilities. Key provisions include:

  1. Mandatory Safety Plans: AI companies must develop comprehensive safety plans for their frontier models.
  2. Whistleblower Protections: Employees can report potential misuse of, or critical harms caused by, AI models without fear of retaliation, and the bill defines which “critical harms” qualify.
  3. Independent Audits: Third-party audits will verify compliance with safety plans. Non-compliance could result in fines or court-mandated halts to unsafe AI development.
  4. Attorney General Oversight: The New York attorney general would have the authority to enforce the law, investigate violations, and take legal action against non-compliant companies.


How the RAISE Act Differs from SB 1047

Bores has carefully considered the criticisms of SB 1047 to improve his proposal. Key differences include:

  • No New Regulatory Bodies: Unlike SB 1047, the RAISE Act doesn’t create a new government agency, instead leveraging existing structures.
  • No Public Cloud Requirement: The bill avoids mandates like SB 1047’s proposal for a public cloud computing cluster for public-good projects.
  • Streamlined Definitions: By eliminating vague terms like “advanced persistent threat,” the RAISE Act focuses on clear and specific risks.
  • Exclusion of a “Kill Switch”: SB 1047’s controversial requirement that companies be able to halt operations of rogue models has been dropped, avoiding conflicts with open-source AI developers.


Criticism and Concerns

While the RAISE Act addresses catastrophic risks, it doesn’t tackle issues like AI bias, job displacement, or environmental impact. Critics argue that focusing solely on extreme scenarios overlooks more immediate challenges.

Kate Brennan of the AI Now Institute emphasizes this gap:

“The focus on catastrophic harms overlooks real-world risks like surveillance, workplace injuries, and the environmental toll of AI systems.”

Still, Bores believes the bill is necessary to prepare for the future:

“We’re not talking about any model that exists right now. We’re talking about frontier models on the edge of what we can build and understand.”

The Industry’s Reaction

The tech industry’s response to AI regulation has been mixed. While companies claim to support oversight, they often push back against specific proposals. Lobbying efforts derailed SB 1047, and a similar fight is expected in New York.

Bores’ strategy of engaging stakeholders early may help mitigate opposition. Still, the industry’s track record suggests that any regulation — even light-touch — will face resistance.


Why New York?

New York’s robust economy and the presence of AI companies like OpenAI make it a logical leader in AI regulation. Partnering with California, which hosts top AI firms, could create a blueprint for national legislation.

Scott Wiener, the California senator behind SB 1047, supports these efforts:

“The bill triggered a conversation about whether we should just trust AI labs to make good decisions. Regulation is essential for such powerful technology.”

What’s Next for the RAISE Act?

The RAISE Act is still in draft form, with room for adjustments. Key challenges include:

  • Clarifying Definitions: The bill uses thresholds such as computational requirements (FLOPs) and training costs to define which models are covered, but these may evolve as technology advances (see the illustrative sketch after this list).
  • Navigating Lobbying Efforts: The industry’s opposition to SB 1047 suggests a tough road ahead.
  • Building Public Support: Educating the public on the importance of AI regulation will be critical for success.
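
To make the definitional challenge concrete, here is a minimal sketch of how a “covered model” test built on compute and cost thresholds might look. Everything in it is an assumption for illustration: the threshold values are borrowed from the figures debated around SB 1047, not from the RAISE Act’s draft text, and the names (TrainingRun, is_covered_frontier_model) are hypothetical.

```python
# Purely illustrative: a toy "covered model" check based on training compute
# and training cost. The threshold values below are assumptions borrowed from
# the figures debated around SB 1047 (roughly 1e26 training FLOPs and a
# $100M training cost); the RAISE Act's actual draft thresholds may differ
# and are expected to evolve as technology advances.

from dataclasses import dataclass


@dataclass
class TrainingRun:
    name: str
    training_flops: float      # total floating-point operations used in training
    training_cost_usd: float   # estimated cost of the training run in US dollars


# Hypothetical thresholds (assumptions, not statutory values).
FLOPS_THRESHOLD = 1e26
COST_THRESHOLD_USD = 100_000_000


def is_covered_frontier_model(run: TrainingRun) -> bool:
    """Treat a run as 'covered' if it crosses either illustrative threshold.

    Real proposals differ on whether compute, cost, or both must be exceeded;
    this sketch uses 'either' purely for demonstration.
    """
    return (run.training_flops >= FLOPS_THRESHOLD
            or run.training_cost_usd >= COST_THRESHOLD_USD)


if __name__ == "__main__":
    runs = [
        TrainingRun("small-research-model", 3e23, 2_000_000),
        TrainingRun("hypothetical-frontier-model", 2e26, 250_000_000),
    ]
    for run in runs:
        status = "covered" if is_covered_frontier_model(run) else "not covered"
        print(f"{run.name}: {status} under the illustrative thresholds")
```

Whatever constants end up in the statute face the same problem this toy makes obvious: hard-coded FLOPs and dollar figures age quickly as training efficiency improves, which is one reason the bill’s definitions are expected to keep evolving.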


Join the Conversation

The RAISE Act has the potential to shape the future of AI regulation in the US. As professionals and enthusiasts in the tech industry, we must engage with this evolving landscape.

Questions for You

  1. Do you think AI regulation should focus solely on catastrophic risks, or should it address broader issues like bias and job displacement?
  2. How can policymakers balance innovation with safety in AI development?
  3. Should states like New York and California lead the way, or is national legislation the only viable solution?

Let’s shape the conversation together. Share your thoughts in the comments below!

Join me and my incredible LinkedIn friends as we embark on a journey of innovation, AI, and EA, always keeping climate action at the forefront of our minds. Follow me for more exciting updates: https://lnkd.in/epE3SCni

#ArtificialIntelligence #AIFuture #TechRegulation #AIInnovation #ResponsibleAI #NewYorkTech #AICompliance #AIGovernance

Reference: MIT Technology Review


Manish Verma

Data Scientist | Sr Software Engineer | Public Figure (Meta Official, 5.7K+) | Ex Ministry of Finance, GOI | Cyber Security Certified (MeitY GOI & UK) | Microsoft | Google | IIT/IIM | IBM | Amazon Certified | Topmate Mentor | USA Certified

1 month ago

Great, impactful insights. It will be an utmost priority for industry leaders to engage with these cutting-edge technologies and stay at the forefront of their advancement. Thanks for sharing.


OK Boštjan Dolinšek

Marsha Jane Orr

Corporate Brand Marketing Placement and Sales KIDS ADVENTURE & SURVIVAL PACKS

1 month ago

It is so hard to stay abreast of AI legislation and the challenging environments that deploy AI, often without public safeguards. As Chandra R Pillai shares below, this might come at our peril. Tune in, even for seconds: 'Kate Brennan of the AI Now Institute emphasizes this gap: “The focus on catastrophic harms overlooks real-world risks like surveillance, workplace injuries, and the environmental toll of AI systems.” '

RAJA A

Founder & CEO of Connecting Profit

1 month ago

ChandraKumar R Pillai The debate on AI regulation is heating up across the U.S. as California vetoed a major AI safety bill, while New York is pushing forward with its own regulatory proposals. California's approach seeks to balance innovation and safety, while New York's initiatives, like requiring watermarks on AI-generated content, aim to protect users and ensure accountability. This evolving debate highlights the challenge of governing such rapidly advancing technology. Both states are actively shaping the future of AI regulation, and it’ll be interesting to see how this unfolds! #AIRegulation #Innovation #TechPolicy

Ryan Dsouza

Founder & Fractional Chief AI Officer building AI-First Engineering Products & Organisations | Passionate about the intersection of Art, Design & Technology | Fine Art Photographer

1 month ago

As AI innovation accelerates, clear, effective regulation will be critical in balancing growth with accountability.
