AI Regs on the Rise: CO Leads, EU Finalizes Law, & US APRA Shifts

Privacy Corner Newsletter: May 24, 2024

By Robert Bateman and Privado.ai

In this week’s Privacy Corner Newsletter:

  • Colorado passed one of the most ambitious laws regulating private-sector AI outside the EU (and the Council of the EU adopted the EU AI Act).
  • The ICO dropped its threat of enforcement action against Snap—but is now “speaking with” Microsoft about its new Recall feature.
  • There have been some important changes to the draft federal privacy bill, the American Privacy Rights Act (APRA).
  • What we’re reading: Recommended privacy content for the week.

An overview of Colorado's new AI law, one of the most significant outside the EU

Colorado’s General Assembly has passed SB 24-205, the Colorado Artificial Intelligence Act (CAIA), which regulates the use and development of “high-risk AI systems.”

  • Under the CAIA, developers and deployers of AI systems used in areas such as healthcare, education, and insurance will face new transparency and accountability requirements.
  • Developers of high-risk AI systems must produce detailed technical documentation and report certain AI incidents to the state Attorney General.
  • Deployers using high-risk AI systems must adopt an AI risk management framework and perform regular impact assessments, among other obligations.

Why are you writing about Colorado’s new AI law? The Council of the EU just adopted the AI Act…

Each announcement about the EU AI Act’s progress arrives with more fanfare than the last. But The Privacy Corner Newsletter has covered a few, so now it’s Colorado’s turn.

Colorado’s new AI law is similar to the EU’s in some respects, but it’s much shorter and simpler.

Like the EU AI Act, the CAIA focuses on “high-risk AI systems.” Colorado defines a high-risk AI system as an AI system that makes, or makes a “substantial contribution” to, a “consequential decision” in the following areas:

  • Education enrollment or opportunity
  • Employment
  • Financial or lending services
  • Essential government services
  • Healthcare
  • Housing
  • Insurance
  • Legal services

The law requires developers and deployers (users) of high-risk AI systems to take “reasonable care” to avoid “algorithmic discrimination” based on age, ethnicity, language proficiency, and certain other characteristics.

Both developers and deployers are presumed to have taken reasonable care to avoid algorithmic discrimination if they comply with their obligations under the CAIA (this mechanism is known as a “rebuttable presumption”).

What are developers’ obligations under the CAIA?

Developers of high-risk AI systems face many new obligations, mostly around providing technical documentation to the deployers using their products.

Here are some examples of the types of technical information developers must provide (a rough sketch of how such a record might be structured follows the list):

  • A statement of the known harmful or inappropriate uses of the system
  • High-level summaries of the training data
  • Information about the system’s limitations, purpose, and intended uses and benefits
  • How the system has been evaluated for bias
  • Any risk-mitigation measures implemented
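To make the shape of these disclosures concrete, here is a minimal sketch of how a developer might structure such a record internally. This is our own illustration, not statutory language; the CAIA prescribes the content of the documentation, not its format:

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskSystemDisclosure:
    """Hypothetical record of the CAIA documentation a developer of a
    high-risk AI system might hand to deployers. Class and field names
    are illustrative, not statutory language."""
    known_harmful_uses: list[str]   # known harmful or inappropriate uses of the system
    training_data_summary: str      # high-level summary of the training data
    purpose_and_limits: str         # limitations, purpose, intended uses and benefits
    bias_evaluation: str            # how the system has been evaluated for bias
    risk_mitigations: list[str] = field(default_factory=list)  # mitigation measures implemented
```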

Developers also have to notify the Colorado Attorney General about known or reasonably foreseeable risks of algorithmic discrimination.

What are deployers’ obligations?

Deployers of high-risk AI systems also have extensive obligations under the CAIA, including the following (see the sketch after this list):

  • Implementing a risk management policy, such as NIST’s AI Risk Management Framework or ISO/IEC 42001
  • Completing an impact assessment for high-risk AI systems at least annually
  • Providing transparency information to consumers
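The annual impact-assessment cadence is the kind of obligation that lends itself to simple compliance tooling. Here is a trivial, hypothetical helper to flag when a system’s next assessment is due; the CAIA specifies the obligation, not any implementation:

```python
from datetime import date, timedelta

def impact_assessment_due(last_assessed: date, today: date | None = None) -> bool:
    """Flag a high-risk AI system whose annual impact assessment is due.
    Hypothetical helper; the CAIA requires reassessment at least annually."""
    today = today or date.today()
    return today - last_assessed >= timedelta(days=365)

# Example: a system last assessed on 1 June 2026 is due again by 1 June 2027.
print(impact_assessment_due(date(2026, 6, 1), today=date(2027, 6, 2)))  # True
```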

The CAIA also cross-references the Colorado Privacy Act, which includes relevant obligations around “profiling with legal or similarly significant effects.”

If signed by the state’s governor, the CAIA will take effect from February 2026.

The ICO has dropped its case against Snap but is ‘making enquiries’ about Microsoft’s Recall

The UK Information Commissioner’s Office (ICO) says it will be speaking with Microsoft about its new Recall feature after deciding not to proceed with enforcement against Snap’s “My AI” chatbot.

  • In a preliminary enforcement notice last October, the ICO alleged that Snap had not properly assessed the data protection risks involved in its My AI chatbot feature, particularly with respect to children.
  • On Tuesday, the ICO said Snap had put appropriate mitigations in place and would not face enforcement.
  • The ICO now says it intends to “speak with” Microsoft about its Copilot+ Recall feature, which takes and stores regular screenshots of users’ active screens.

What happened with Snap?

Snap rolled out My AI, a GPT-based chatbot with “additional safety features,” to all Snapchat users last April. The ICO began an investigation last June and issued a preliminary enforcement notice in October.

The ICO did not publish the preliminary enforcement notice, but it reportedly alleged that Snap had not properly considered data protection risks associated with generative AI, particularly for users aged between 13 and 17.

This week, the ICO announced that it would not pursue enforcement against Snap because the company had taken “significant steps to carry out a more thorough review of the risks” and “implemented appropriate mitigations.”

What about Microsoft?

Microsoft announced Recall, a new AI feature for Copilot+ PCs (computers with special hardware capable of running resource-heavy AI applications).

Recall grabs an image of a user’s screen every few seconds and stores it in encrypted form on the device’s hard drive. The user can then scroll back “through time” and search for things they did in the past.

Microsoft says it won’t use the images stored by Recall for any other purposes and will only make them available to the relevant logged-in Windows user. Users can pause recording or exclude certain apps. And if the user happens to enjoy the Edge browser, Recall won’t screenshot their private browsing sessions.
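To make the mechanics concrete, here is a toy Python sketch of the capture-and-encrypt pattern described above, built on the mss screenshot library and Fernet symmetric encryption. It illustrates the general idea only; it has nothing to do with Microsoft’s actual on-device implementation:

```python
import time
from pathlib import Path

import mss
import mss.tools
from cryptography.fernet import Fernet  # pip install mss cryptography

key = Fernet.generate_key()  # a real product would protect this key in hardware
fernet = Fernet(key)
store = Path("snapshots")
store.mkdir(exist_ok=True)

with mss.mss() as sct:
    for _ in range(10):  # ten frames for the demo; Recall reportedly runs continuously
        shot = sct.grab(sct.monitors[1])             # capture the primary monitor
        png = mss.tools.to_png(shot.rgb, shot.size)  # raw pixels -> PNG bytes
        path = store / f"{int(time.time())}.png.enc"
        path.write_bytes(fernet.encrypt(png))        # persist ciphertext only
        time.sleep(5)                                # every few seconds
```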

What’s the privacy issue?

An app that snaps an image of your screen every few seconds has potentially huge privacy implications. However, based on Microsoft’s reassurances, it appears that all the storing and processing of the images will occur on the user’s device.

Nonetheless, data protection experts have pointed out some of the many possible risks involved in the product (this article from Carey Lening provides an excellent run-down of the issues), including that Recall will be activated by default on certain PCs.

Following a BBC report describing Recall as a “privacy nightmare”, the ICO said it was “making enquiries with Microsoft to understand the safeguards in place to protect user privacy.”

Later, on X, the ICO linked its inquiries into Recall with its investigation into Snap’s AI product, the latter of which it described as a “warning shot for industry.” Note, however, that as far as we know, Snap did not receive an actual warning, reprimand, or any other penalty.

Some important changes have been made to the American Privacy Rights Act draft

A new draft of the American Privacy Rights Act (APRA) was published ahead of a House Energy and Commerce subcommittee hearing on Thursday.

  • Last month saw the release of the first discussion draft of the APRA, a new federal privacy bill that would expand individual rights and business obligations across the US.
  • The new draft features important clarifications to the bill’s “data minimization” requirements, a change to the deadline for responding to privacy rights requests, and new rules for data brokers, among other amendments.
  • The draft does not appear to make any substantial changes to the APRA’s preemption provisions, which are a key sticking point for the bill.

What’s changed about the APRA?

Here are a few of the many changes present in this new draft of the APRA.

First, there are a couple of clarifications on data minimization.

The original draft provided a broad prohibition on processing covered data with two main exceptions, one of which was about providing “reasonably anticipated” communications to the individual. The new version clarifies that such communications do not include ads.

The new draft clears up an ambiguity about transferring sensitive covered data. It clarifies that even where a covered entity can rely on a permitted purpose to transfer sensitive covered data, the entity must also get consent from the individual.

There’s a new (and long) “privacy by design” section that requires covered entities to implement “policies, procedures, and practices” for identifying and mitigating privacy risks.

The deadline for responding to individual privacy requests has been cut from 45 days to 30 days (for large data holders and data brokers, the timeline remains at 15 days).

On data brokers, the new draft includes a “Delete My Data” request mechanism alongside the original draft’s “Do Not Collect” process. This change addresses a key criticism from the California Privacy Protection Agency’s (CPPA) letter to the bill’s sponsors last month.

On preemption of other laws—the contentious issue that could end up killing this bill—there appears to be little change, despite a slight broadening in how other federal laws take precedence over the APRA.

And for whatever reason, the Kids Online Safety Act (KOSA), a separate children’s online safety bill introduced last year, is now tacked onto the end of the APRA.

Because things weren’t complicated enough already.

What We’re Reading
