Weekly news context for March 14th, 2025

Everything important that happened this week in the cybers.

State AGs Target AI and Cybersecurity Compliance

Key quote: "State attorneys general have expanded their enforcement priorities beyond traditional data breaches to include algorithmic accountability. Their investigations now scrutinize how AI systems make decisions affecting consumers, focusing particularly on instances of bias and discrimination. This enforcement approach requires companies to implement specific technical and operational safeguards, including algorithmic impact assessments, bias testing protocols, and transparent documentation of AI decision-making processes. These requirements constitute concrete compliance obligations rather than theoretical concerns."

Why it matters:

State AGs are ramping up enforcement on cybersecurity and AI. California secured a settlement against Blackbaud for security failures, while Washington sued T-Mobile over a massive data breach. These officials now investigate algorithmic bias and AI decision-making, requiring companies to implement new controls. The legal risk is immediate: organizations face penalties across multiple states with differing requirements, making compliance both complex and necessary.

Garcia v. Character.ai - Defendants File Motions to Compel Arbitration and Dismiss Claims

Key quote: "I've been tracking the case of Garcia v. Character.ai because of the potential legal and regulatory outcomes. All defendants filed multiple motions on March 10, 2025, seeking to compel arbitration and dismiss the lawsuit. The case represents one of the first major legal challenges involving alleged harms caused by AI chatbots. This is an update of litigation in progress."

Why it matters:

Character.ai and its co-defendants filed motions this week to end the landmark wrongful death lawsuit brought by Megan Garcia following her son's death. They're pushing to compel private arbitration based on terms of service agreements and to dismiss the claims against individual executives for lack of personal jurisdiction. This case could redefine AI companies' responsibilities to minors and potentially trigger new regulatory frameworks for conversational AI systems.

Hedge Fund Compliance Failure Costs $90M

Key quote: "Between November 2021 and August 2023, unauthorized changes to 14 live-trading models created significant performance disparities. Some client funds overperformed by $400 million while others underperformed by $165 million, leading to uneven returns across accounts and inflated compensation for the responsible modeler. Two Sigma ultimately agreed to a $90 million penalty and repaid $165 million to affected clients."

Why it matters:

Two Sigma ignored its staff's repeated warnings about algorithmic security for four years. An employee exploited this vulnerability, making unauthorized changes to 14 trading models. The manipulation created a $565 million performance gap between client accounts: some gained $400 million while others lost $165 million. The SEC hit Two Sigma with a $90 million penalty on top of $165 million in client restitution.

Thanks, and have a great weekend! This newsletter is published every Friday I'm in the office.
