UK-US Data Bridge Unveiled, OpenAI Under GDPR Probe, Meta's Ad Algorithm Challenged
By Robert Bateman and Privado.ai
In this week’s Privacy Corner:
- The UK approves a “data bridge” for transfers of personal data to the US
- Poland’s DPA opens a GDPR investigation into OpenAI’s ChatGPT
- A California court rules that Meta’s ad-targeting tools can face discrimination claims
UK Approves ‘Data Bridge’ for US Data Transfers
The UK government has announced its approval of a new mechanism for transferring personal data from the UK to the US.
What exactly is a “data bridge”?
A “data bridge” is essentially the UK’s term for an “adequacy decision”—official approval of another country’s data protection standards.
If the UK and another country have a “data bridge”, UK organizations can export personal data to that country without needing transfer-specific contracts or technical safeguards.
So it’s like the EU-US Data Privacy Framework (DPF), but for the UK?
That’s exactly what the data bridge is. Here’s a quick timeline:
- March 2022: The EU and US announce an “agreement in principle” on a successor to Privacy Shield.
- October 2022: President Biden signs Executive Order 14086, implementing the US’s commitments on intelligence-agency access to personal data.
- July 2023: The European Commission adopts its adequacy decision for the DPF.
- July 2023: US businesses started self-certifying under the DPF and could opt into the UK and Swiss extensions.
- September 2023: The UK government approves the UK-US data bridge, which takes effect soon.
Soon? So the data bridge is not fully constructed yet?
The regulations that implement the UK’s US adequacy decision have been “laid before Parliament” and will become law on October 12.
From that date, UK organizations wishing to transfer personal data to a US business can simply check if the company is on the list of businesses certified under the UK extension to the DPF. If so, the transfer can go ahead. If not, another transfer mechanism will be required.
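For readers who like their compliance logic explicit, here is a minimal Python sketch of that check. Everything in it is illustrative: the function name is invented, and the certified_businesses set stands in for a manual lookup of the official DPF certification list (no official API is assumed to exist).

```python
# Illustrative sketch of the post-October-12 decision logic described above.
# "certified_businesses" stands in for a manual check of the official DPF
# certification list; no official API is assumed to exist.

def uk_us_transfer_mechanism(recipient: str, certified_businesses: set[str]) -> str:
    """Return the mechanism a UK exporter could rely on for a US transfer."""
    if recipient in certified_businesses:
        # Certified under the UK extension to the DPF: the data bridge applies,
        # with no transfer-specific contracts or technical safeguards needed.
        return "UK-US data bridge (UK extension to the DPF)"
    # Not certified: fall back to another safeguard, such as the ICO's
    # International Data Transfer Agreement (IDTA) or the UK Addendum to the EU SCCs.
    return "another transfer mechanism (e.g. IDTA or UK Addendum)"

certified = {"Example Corp Inc."}  # hypothetical certified business
print(uk_us_transfer_mechanism("Example Corp Inc.", certified))  # data bridge
print(uk_us_transfer_mechanism("Uncertified LLC", certified))    # fallback needed
```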
Is the UK piggybacking off the EU’s hard work?
Yes, you could say that. The European Commission negotiated the DPF, and the UK, having left the EU, had no representative at the Commission during the negotiations. As such, the UK can’t take much credit for the “UK-US data bridge”.
There’s nothing wrong with that, of course. The UK made an independent assessment of whether the DPF was “adequate” and—unsurprisingly—determined that it was.
But aren’t other data transfer mechanisms—such as standard contractual clauses (SCCs)—already less risky in the UK?
In contrast to its EU regulator colleagues, the UK Information Commissioner’s Office (ICO) interprets Chapter V of the UK GDPR as allowing a “risk-based approach”. In any case, the ICO has never enforced the UK GDPR’s data transfer rules.
As such, the announcement of US adequacy is arguably less of a big deal in the UK than it was in the EU.
Will the UK data bridge last?
As we explored in last week’s Privacy Corner, EU-based privacy fans are already trying to bring down the DPF.
But if the DPF ends up in the data transfer framework graveyard (alongside its predecessors, Safe Harbor and Privacy Shield), there’s no obvious reason the UK data bridge will need an adjacent plot.
Tortured funerary analogies aside, any challenge of the DPF at the Court of Justice of the European Union (CJEU) would be against the European Commission’s adequacy decision—not the DPF itself, and certainly not the UK’s adequacy decision.
Even if the EU’s adequacy decision gets junked, the DPF itself could continue. After all, Privacy Shield kept running in “zombie mode” after Schrems II.
Unless someone in the UK mounts a successful challenge to the US adequacy decision (which would arguably be much harder at the UK Supreme Court than the CJEU), the UK data bridge could survive even if the EU-US DPF does not.
Given its post-Brexit desire to get a “competitive edge” over the EU, perhaps that’s what the UK government is hoping for…
Poland Announces ChatGPT Investigation
The Polish Data Protection Authority (DPA) is investigating allegations that OpenAI’s ChatGPT violates the GDPR.
More legal problems for OpenAI?
Yes, this is not the first time OpenAI has come up against data protection and privacy complaints.
In the spring, the Italian DPA temporarily banned OpenAI from processing personal data about people in Italy. This led to ChatGPT going offline in the country for several weeks until OpenAI put compliance measures in place.
After a number of investigations were announced in other EU countries, including France and Spain, the European Data Protection Board (EDPB) announced that it would coordinate ChatGPT investigations via an EU-wide AI task force.
So what’s special about this complaint?
Many cases against OpenAI are… not strong. But the complainant here, researcher Lukasz Olejnik, at least knows his Regulation (EU) 2016/679 from his Directive 2002/58/EC.
Olejnik alleges that OpenAI has violated several provisions of the GDPR, including its transparency requirements and the rights of access and rectification.
Essentially, Olejnik asked ChatGPT about himself and found the bot’s outputs to be false.
He asked OpenAI to correct any inaccuracies that led to these incorrect outputs and to provide any information relating to him in the bot’s training set.
Olejnik alleges that OpenAI’s responses did not meet the GDPR’s requirements.
But isn’t that the nature of large language models (LLMs)?
Yes—LLMs do not simply regurgitate their training data.
In very simple terms: based on their training data and algorithm, LLMs predict which “tokens” (combinations of characters) are most likely to come next in a string of text, then string those predictions together as output.
An LLM’s output varies according to the availability of relevant training data, the user’s input, the preceding tokens as the output is generated, any fine-tuning performed manually by engineers, and the overall objective of “sounding like a human”.
Note that the LLM’s objective (or rather, the LLM’s developers’ objective) is not “providing accurate information”—it’s “sounding like a human”. Accuracy might be a byproduct of a well-functioning LLM, but even the best models frequently “hallucinate” inaccurate outputs.
Indeed, LLMs don’t always aim for the “most likely” next token in a sequence, as this can lead to robotic-sounding outputs. So, in a sense, inaccuracy is baked into the LLM’s design.
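To make that concrete, here is a toy Python sketch of “temperature” sampling, one common way language models deliberately pick tokens other than the most likely one. The probability distribution and names are invented for illustration; this is not OpenAI’s actual implementation.

```python
import random

# Toy illustration of next-token sampling as described above (not OpenAI's
# actual code). The "temperature" controls how often the model picks a token
# other than the most likely one: low = more deterministic, high = more varied.

def sample_next_token(token_probs: dict[str, float], temperature: float = 1.0) -> str:
    # Raising each probability to the power 1/temperature is equivalent to
    # dividing the model's logits by the temperature before the softmax.
    weights = {tok: p ** (1.0 / temperature) for tok, p in token_probs.items()}
    total = sum(weights.values())
    threshold = random.uniform(0.0, total)
    cumulative = 0.0
    for tok, weight in weights.items():
        cumulative += weight
        if threshold <= cumulative:
            return tok
    return max(token_probs, key=token_probs.get)  # fallback for rounding edge cases

# Hypothetical distribution for the token after "Lukasz Olejnik is a ..."
probs = {"researcher": 0.6, "lawyer": 0.25, "astronaut": 0.15}
print(sample_next_token(probs, temperature=0.5))  # almost always "researcher"
print(sample_next_token(probs, temperature=2.0))  # wrong guesses far more often
```

Even at sensible temperatures, the “wrong” tokens keep a non-zero chance of being sampled, which is one reason factual errors about a named individual are so hard to stamp out.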
So is the case hopeless?
A data protection problem does not disappear merely because it is hard (or impossible) to solve.
There might be GDPR compliance solutions that do not require OpenAI to pull ChatGPT from the EU market.
In the classic “right to be forgotten” CJEU case known as “Google v Spain”, Google was required to de-list certain search results upon receiving valid “right to erasure” requests under the GDPR’s predecessor, the Data Protection Directive.
Some observers—including the author of the Privacy Corner newsletter—were surprised that OpenAI’s previous GDPR compliance efforts were accepted by the Italian DPA after its above-mentioned ChatGPT investigation.
So, perhaps OpenAI can find a way to satisfy the EU’s high data protection standards—or at least achieve a compromise with the bloc’s DPAs.
Meta’s Ads Algorithm Can Face Legal Scrutiny, California Court Says
California’s First District Court of Appeal has ruled that Meta must face allegations that its advertising tools discriminate against users.
Is this a “net neutrality” thing?
No. This case turns on Section 230 of the Communications Decency Act, one of America’s most divisive and poorly understood laws.
Essentially, Section 230 protects certain online platforms from legal claims arising from content posted by those platforms’ users. So, for example, you can’t sue Meta because someone defamed you on Facebook. You’ll need to sue the Facebook user themselves.
This complaint argues that Meta enabled insurance companies to target ads at users based on their age and gender. The plaintiff alleges that such targeting violates California’s anti-discrimination law, the Unruh Civil Rights Act.
So, is Meta liable for such claims?
At an earlier stage in the proceedings, the court found that Meta was not liable for this conduct. The insurance companies posted the ad and instructed Meta to target it at certain types of people, so the insurance companies, not Meta, were liable.
However, in last week’s opinion, the court found that while third-party advertisers are liable as “information content providers”, Meta could be liable in that capacity, too.
By providing the tools that advertisers use to target users, based on information gathered about them on Facebook and via tracking tools such as the Meta Pixel, Facebook might at least partly contribute to the “content” appearing on its platform.
Another blow for Meta’s advertising ecosystem?
Losing Section 230 protection would be very bad for Meta.
Liability shields like Section 230—and similar provisions in other laws worldwide—are crucial for social networks, which could not operate if they were liable for every illegal comment made by their users.
But some privacy advocates might welcome an interpretation of Section 230 that restricts how Meta uses personal data—in this case, quite sensitive data—to target its users with ads.
What We’re Reading
Take a look at these three privacy-related reads published this week: