Generative AI and GDPR, Fines for Location Data Sharing


Privacy Corner Newsletter: May 3, 2024

By Robert Bateman and Privado.ai

In this week’s Privacy Corner Newsletter:

  • Noyb takes on ChatGPT hallucinations—don’t they understand how AI works?!
  • $200 million in penalties for carriers that shared data without consent: The FCC does an FTC.
  • The FTC finalizes updated Health Breach Notification Rule: It’s definitely not just about security incidents anymore.
  • What we’re reading: Recommended privacy content for the week.




Does noyb’s complaint against OpenAI misunderstand generative AI?

Max Schrems’ campaign group, noyb, has submitted a complaint against OpenAI, alleging that the generative AI application ChatGPT violates the GDPR.

  • Noyb’s anonymous complainant asked ChatGPT for his date of birth and received inaccurate personal data as the output.
  • The complainant then requested that OpenAI provide access to and erase his personal data. OpenAI responded by providing the complainant’s account information and stating that it could not ensure its outputs were accurate.
  • Despite OpenAI having established an office in Ireland, noyb argues that the GDPR’s “One Stop Shop” process does not apply and that the Austrian Data Protection Authority (DPA) should handle its complaint.

Does noyb expect OpenAI to fix AI hallucinations overnight?

No. Noyb apparently understands that certain generative AI applications “hallucinate” inaccurate personal data, and it likely prompted ChatGPT expecting the model to generate false information.

Essentially, ChatGPT generates text by predicting which strings of characters are most likely to follow one another, based on patterns in its corpus of training data. Given imperfect information, inaccurate outputs are inevitable.
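To see why, consider a minimal, purely illustrative Python sketch of next-token sampling. The “model”, vocabulary, and probabilities below are invented for this example and have no relation to OpenAI’s actual systems:

```python
import random

# Toy "language model": for each context string, a probability
# distribution over candidate next tokens. The tokens and numbers are
# invented for illustration; real models compute these probabilities
# with a neural network over a huge vocabulary.
NEXT_TOKEN_PROBS = {
    "The complainant was born on": {
        "1 January 1979": 0.5,     # plausible-looking, confidently wrong
        "12 March 1981": 0.3,      # also wrong
        "an unspecified date": 0.2,
    },
}

def sample_next(context: str) -> str:
    """Pick a continuation at random, weighted by the model's probabilities."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# The model has no notion of factual truth: it emits a statistically
# likely continuation. If no accurate date dominates its training data,
# a confident-sounding wrong date can easily win.
print("The complainant was born on", sample_next("The complainant was born on"))
```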

The complainant (possibly Max Schrems himself) asked ChatGPT for his date of birth. ChatGPT confidently asserted that the complainant’s date of birth was a particular day. On this occasion, its output was inaccurate.

The question is whether this aspect of OpenAI’s activities violates the GDPR.

So does ChatGPT violate the GDPR?

Noyb thinks so, and similar allegations have been made in Italy, Poland, and other EU countries. OpenAI denies the allegations and has made several changes to ChatGPT since its launch in an attempt to bring the platform into GDPR compliance.

After reading the incorrect reporting of his date of birth, noyb’s complainant submitted access and erasure requests to OpenAI.

Under the right of access, the complainant requested:

  • “What data concerning him is processed by OpenAI for the purpose of powering ChatGPT”
  • “Where this data came from”
  • “What the legal basis [for processing the personal data] may be”
  • “How long ChatGPT plans to store this information”
  • “Who it provided this (false) information to”

In response to the access request, OpenAI provided the complainant’s account details but not the information requested above.

Under the “right to erasure,” the complainant asked OpenAI to erase “the data subject’s incorrect date of birth from the results displayed by ChatGPT.” OpenAI reportedly replied that “there is no way to prevent its systems from displaying the data subject’s inaccurate date of birth in the output…”

As such, noyb alleges that OpenAI has:

  • Failed to ensure the accuracy of the complainant’s personal data under Article 5(1)(d)
  • Failed to respond to an access request within the GDPR’s mandatory time limits under Article 12(4)
  • Failed to provide the complainant with access to personal data allegedly stored about him under Article 15

Note that there is no allegation that OpenAI has violated the right to erasure. But noyb requests that the Austrian DPA order OpenAI to rectify or erase the complainant’s inaccurate date of birth in order to comply with the GDPR’s “accuracy” principle.

The complaint acknowledges that OpenAI might be technically unable to meet this requirement, but states that this would not excuse the company from complying with the GDPR’s principles.

What happens next?

OpenAI recently set up an establishment in Ireland. The normal process would be for the Austrian DPA to forward the complaint to the Irish DPA under the GDPR’s “One Stop Shop” process.

However, noyb has requested that the Austrian DPA handle this complaint directly.

The complaint alleges that OpenAI’s Californian entity effectively acts as the controller regarding the processing of the relevant personal data via ChatGPT, and so the One Stop Shop supposedly does not apply.

This argument might be as important as the allegations about ChatGPT itself. Many cross-border complaints, particularly those involving Ireland (the vast majority), have languished for several years under the One Stop Shop process.

If noyb successfully argues that OpenAI cannot benefit from the One Stop Shop process, this could be a significant victory against other US tech companies.




FCC does an FTC, fines wireless carriers $200m for ‘illegally sharing access to customers’ location data’

The Federal Communications Commission (FCC) has issued almost $200 million in penalties to four of the largest wireless carriers in the US for allegedly sharing location data without appropriate notice or consent.

  • The FCC investigated Sprint, T-Mobile, AT&T, and Verizon for alleged violations of Section 222 of the Communications Act.
  • The investigations resulted in forfeiture orders against each company that were finalized this week.
  • The enforcement against these companies for selling location data to “data aggregators” mirrors similar actions by the Federal Trade Commission (FTC) earlier this year.

What did these carriers allegedly do wrong?

These four carriers allegedly shared their customers’ location data with third parties without consent.

For example, AT&T ran a “Location-Based Services program” until March 2019. The company allegedly sold location data to two “location information aggregators” (data brokers) who then resold access to the data to third parties, including other data brokers.

AT&T put contracts in place with the two data aggregators with whom it shared location data directly. However, the FCC says audits revealed serious data security issues among these companies (which AT&T said were resolved).

Is that illegal?

Under Section 222 of the Communications Act, wireless carriers:

  • Must only “use, disclose, or permit access to individually identifiable customer proprietary network information” (CPNI) to provide their services, unless they have “approval” from the customer.
  • May share data for other reasons, but only if it’s in “aggregate” form and subject to “reasonable and nondiscriminatory terms and conditions” and on “reasonable request”.

Here’s how the law defines some of those terms:

  • Individually identifiable CPNI includes information about a customer’s “location” (among other things).
  • Aggregate information means “collective data” from which “individual customer identities and characteristics have been removed”.

The FCC found that the information these carriers shared was not “aggregate” and that people hadn’t consented to its sharing.
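To make that distinction concrete, here is a minimal Python sketch using invented records (not the carriers’ actual data). Per-customer rows like these are individually identifiable CPNI; the per-sector counts are “aggregate” information in the statute’s sense:

```python
from collections import Counter

# Invented example records. Each row is individually identifiable CPNI,
# because it ties a location (cell sector) to a specific, named customer.
cpni_records = [
    {"customer": "Alice Example", "phone": "+1-555-0101", "cell_sector": "NYC-14"},
    {"customer": "Bob Example", "phone": "+1-555-0102", "cell_sector": "NYC-14"},
    {"customer": "Carol Example", "phone": "+1-555-0103", "cell_sector": "BOS-03"},
]

def aggregate(records):
    """Collective data from which individual customer identities and
    characteristics have been removed: here, a count of customers per
    cell sector."""
    return Counter(r["cell_sector"] for r in records)

# Counter({'NYC-14': 2, 'BOS-03': 1}): no individual can be picked out.
print(aggregate(cpni_records))
```

On the FCC’s findings, what the carriers’ programs provided looked like the per-customer rows, not the counts.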

How much was each company fined?

Here are the total fines issued against each company:

  • Sprint: More than $12 million
  • T-Mobile: More than $80 million
  • AT&T: More than $57 million
  • Verizon: Almost $47 million

Note that Sprint and T-Mobile have merged since the investigations concluded.

This all sounds familiar…

Indeed, the FTC took related enforcement action against data aggregators InMarket and X-Mode Social earlier this year.

The FTC’s investigations took aim at a different stage in the data supply chain than the FCC’s, but they show that the FCC is not the only authority concerned about the sale of location data and other sensitive information.

The FCC’s Privacy and Data Protection Task Force makes this sentiment clear, calling privacy and data protection “a top priority” and “crucial to all Americans”.

So if you’re operating in the US—and even if you’re not regulated by the increasingly aggressive FTC—think carefully about how you use personal data.

FTC finalizes updated Health Breach Notification Rule: It’s definitely not just about security incidents anymore

The FTC has finalized amendments to the Health Breach Notification Rule (HBNR).

  • The HBNR requires health vendors not covered by the Health Insurance Portability and Accountability Act (HIPAA) to disclose “security breaches” under certain circumstances.
  • The new rule “underscores its application to health apps” and imposes new notification requirements.
  • Since 2021, the FTC has interpreted the HBNR broadly and has enforced the rule against apps such as GoodRx and Premom for their advertising practices.

Why the new rule?

Here’s the background:

  • The American Recovery and Reinvestment Act of 2009 (ARRA) established breach notification requirements for vendors handling personal health records (PHRs) not covered by HIPAA.
  • In 2010, the FTC adopted the HBNR based on its powers under the ARRA.
  • In 2021, the FTC issued a statement titled “On Breaches by Health Apps and Other Connected Devices,” which gave a broad interpretation of the HBNR intended to cover vendors providing health-related apps.
  • The statement “reminded” health vendors that a “breach” under the HBNR is “not limited to cybersecurity intrusions or nefarious behavior” and could include the unauthorized disclosure of health information to third parties such as advertising vendors, including via Application Programming Interfaces (APIs).
  • The updated HBNR amends the original rule to incorporate this broad interpretation. It went out for comment in May 2023.

Two out of five commissioners opposed the new rule, arguing that it is broader than Congress intended. Nonetheless, the rule has been finalized and will take effect 60 days from its publication in the Federal Register on April 26 (June 25).

So what’s new?

Here’s a summary of the changes under the revised HBNR:

  • New and revised definitions: There are modified definitions of “personal health record (PHR) identifiable health information” and “PHR related entity”, and two new definitions: “Covered health care provider” and “health care services or supplies”.
  • Clarifying “breach of security”: The new rule states that a “breach of security” includes an unauthorized acquisition of identifiable health information that occurs as a result of a data security breach or an unauthorized disclosure.
  • Clarifying “multiple sources of PHR identifiable health information”: There’s an explanation of how a personal health record can draw PHR identifiable health information from multiple sources.
  • New notification requirements: The new rule expands the methods via which entities can notify consumers, requires more detail in notices, and requires entities to notify the FTC of certain breaches at the same time they notify individuals.

Here’s the gist: Companies in the health space (and not covered by HIPAA) should now have little doubt that the HBNR covers health apps and the non-consensual sharing of health information via APIs and similar technologies.
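To illustrate the kind of programmatic sharing at issue, here is a minimal, entirely hypothetical Python sketch; the endpoint URL and function are invented for this example and describe no real app or service:

```python
import json
import urllib.request

# Hypothetical third-party analytics endpoint, invented for illustration.
ANALYTICS_URL = "https://analytics.example.com/events"

def log_event(user_id: str, event: dict) -> None:
    """Forward an app event to a third-party analytics API.

    If the event contains health information and the user has not
    authorized the disclosure, the FTC's position is that this transfer
    alone can be a "breach of security" under the HBNR: no hacker or
    security incident is required.
    """
    payload = json.dumps({"user": user_id, **event}).encode("utf-8")
    request = urllib.request.Request(
        ANALYTICS_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # the disclosure happens here

# Health data leaving the app via one ordinary API call (illustrative only):
# log_event("user-123", {"type": "symptom_logged", "condition": "migraine"})
```

The point is that an ordinary, intentional API integration like this one can trigger notification duties under the revised rule if the disclosure was not authorized.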

Along with the FTC’s enforcement against health companies under the FTC Act and new health privacy laws in Washington, Nevada, and elsewhere, the revised HBNR is another example of how drastically the US privacy landscape is changing—in healthcare and practically every other area.

What We’re Reading
