New 'Zero-Click' Spyware Infects Through Ads, Kids' Privacy Law Controversy, and EU-US Data Privacy Tensions Rise

By Robert Bateman and Privado.ai

In this week’s Privacy Corner:

  • A new type of spyware that infects devices via targeted ads.
  • California’s kids’ privacy law gets a kicking in court.
  • Pressure on the EU-US Data Privacy Framework (DPF) builds on three fronts.
  • What we’re reading in privacy this week.

‘Surveillance Capitalism’: Infected Online Ads Can Deliver New ‘Zero-Click’ Spyware

An investigation by the Israeli newspaper Haaretz has revealed new software that leverages ad networks to target and infect people’s devices.

  • The spyware, “Sherlock”, was reportedly developed by Israeli company Insanet and has been sold to at least one “nondemocratic country”.
  • Sherlock is capable of delivering infected ads via the real-time bidding (RTB) process, the system used to target ads at specific groups and demographics.
  • The spyware is “zero-click”, meaning that it can infect a device without any action required from the user.

How bad is this?

It looks pretty bad. This news arguably vindicates some of privacy advocates’ worst warnings about the intrusive nature of online advertising.

How is online advertising involved?

As readers will know, the internet is near-saturated with cookies and other trackers that collect information about people’s activities. Such information includes our browsing history, location, and inferences about who we are and what kind of stuff we might buy.

When the average internet user visits a website, a split-second bidding war takes place behind the scenes as advertisers compete to get their ad up on the page. The price they’ll pay depends on how relevant the ad is to the user’s characteristics and preferences.
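To make the bidding mechanics a little more concrete, here is a minimal, hypothetical sketch of how a winner might be picked for a single impression. The data structures and scoring below are invented for illustration; real exchanges run on standardized protocols such as OpenRTB and use far richer targeting signals.

```python
# Hypothetical sketch of how a real-time bidding (RTB) auction picks an ad.
# Names and scoring are illustrative only, not any real exchange's API.

from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    price: float      # amount offered for this single impression
    relevance: float  # 0-1: how well the ad matches the tracked user profile

def run_auction(bids: list[Bid]) -> Bid | None:
    """Rank bids by price weighted by relevance and return the winner.

    The more detailed the user profile, the higher the relevance scores,
    and therefore the more advertisers are willing to pay.
    """
    return max(bids, key=lambda b: b.price * b.relevance, default=None)

# One page load, two competing advertisers:
bids = [
    Bid("shoe_brand", price=0.80, relevance=0.9),  # matches the profile well
    Bid("car_brand", price=1.20, relevance=0.2),   # pays more but poor match
]
winner = run_auction(bids)
print(winner.advertiser if winner else "no ad served")  # -> shoe_brand
```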

Think about it: A vast network of tracking software collecting information about pretty much everyone on earth and enabling third parties to deliver content to their devices.

But can an advertiser actually target a specific person with an ad?

Ad giants like Google and Apple pseudonymize advertising data, associating people’s characteristics with an advertising ID that—supposedly—cannot be linked to an offline identity.
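As a rough illustration of what pseudonymized advertising data looks like (the field names and ID format below are invented for this sketch, not Google's or Apple's actual schema), the profile is keyed to a resettable random identifier rather than to a name or email address:

```python
# Illustrative only: an invented example of a pseudonymous advertising profile.
# Real advertising IDs (e.g. Apple's IDFA, Google's GAID) are random device
# identifiers; the point is that the profile carries no direct offline identity.
import uuid

advertising_id = str(uuid.uuid4())  # random ID, resettable by the user

ad_profile = {
    "advertising_id": advertising_id,  # stands in for the person
    "interests": ["fitness", "travel"],
    "coarse_location": "Berlin",
    # deliberately absent: name, email, phone number, account ID
}
```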

However, an industry known as “AdInt” (advertising intelligence) aims to leverage this commercially available data for intelligence purposes.

“In order to have real AdInt, a huge advertising infrastructure is required,” an anonymous industry source told Haaretz.

“You need to be connected somehow to the various ad systems in order to do what Apple and Google absolutely don’t want you to be capable of doing: to track people or use advertising profiles for infections.”

So how does this relate to this new spyware?

Sherlock, the new spyware revealed last week by Haaretz, appears to use ad networks in two malicious ways:

First, the software leverages online advertising’s targeting capabilities. An audience is compiled and barraged with ads, enabling further surveillance via AdInt techniques. That audience can include the Sherlock user’s individual targets.

Then, a malicious ad is inserted into the campaign, infecting any devices that display it. The Sherlock user can then spy on the people who own these devices.

Is this any worse than other types of spyware?

We’ve seen some particularly nasty strains of spyware emerge in the past few years, including Pegasus—a powerful surveillance tool typically delivered via Apple’s iMessage app.

Sherlock could be equally damaging, if not more so, assuming Haaretz has the details right.

Like Pegasus, Sherlock is a “zero-click” spyware—you don’t need to download an attachment or click a link to get infected.

Haaretz claims that because Sherlock infects via the “front door” (rather than exploiting device security vulnerabilities), “even the smartest and most advanced defenses of Apple, Google and Microsoft currently lack the capacity to block this sort of infection”.

The company behind Sherlock, Insanet, sells the software as a product rather than a service, with permission from the Israeli government. The terms of Insanet’s license have reportedly been restricted recently, though one “non-democracy” is reported to have already purchased the software.

For years, privacy advocates have been urging governments and regulators to curtail the risks associated with online advertising. Such risks can sometimes seem abstract and theoretical.

While we don’t yet know how much damage Sherlock will do, the existence of this software makes digital advertising risks seem much more concrete.

California Age-Appropriate Design Code Act Blocked Over Free Speech Concerns

A US district court in California has blocked the planned enforcement of the California Age-Appropriate Design Code Act (CAADCA) until a court case alleging that the law violates the First Amendment is resolved.

  • The CAADCA imposes privacy and content moderation obligations on online platforms. The law is closely modeled on the UK’s Children’s Code.
  • Industry group NetChoice argues that the CAADCA violates the First Amendment of the US Constitution, which protects the right to free expression.
  • A US District Court in California found that NetChoice has a good chance of succeeding in its claims. The CAADCA’s enforcement date of July 1, 2024, will therefore be pushed back until after the case has concluded.

What is the CAADCA?

The CAADCA is a California children’s privacy and online safety law. The law strongly resembles the UK’s Children’s Code, and some of the architects of the Children’s Code also helped draft the CAADCA.

OK, what’s the Children’s Code?

The Children’s Code is a code of practice developed by the Information Commissioner’s Office (ICO), the UK’s data protection regulator.

Originally called the “Age Appropriate Design Code”, the Children’s Code is effectively a guide to complying with the UK GDPR when developing products likely to be used by children.

As a code of practice, the Children’s Code was approved by the UK’s parliament and specifically references obligations under UK law.

But UK law doesn’t apply in California…

Exactly. That appears to be one of the issues here.

The UK’s legal system is very different from that of the US, and California has attempted to transplant a code derived from the GDPR into a jurisdiction where the GDPR does not directly apply.

Why is the code incompatible with US law?

NetChoice, the group that brought this case, argues that the CAADCA violates the First Amendment (free speech).

It’s important to note that the District Court did not rule that the CAADCA is unconstitutional; it found that NetChoice is likely to succeed in showing that it is.

What does the CAADCA require?

The CAADCA is targeted at businesses that provide “an online service, product, or feature likely to be accessed by children” and imposes obligations including the following:

  • Principles of data minimization and purpose limitation.
  • Data Protection Impact Assessments (DPIAs—normally called “data protection assessments” in other state privacy laws).
  • A prohibition on the use of “dark patterns”.
  • Restrictions on profiling children and selling personal data about children.
  • A requirement to provide child-friendly language in privacy notices.
  • A requirement to publish and honor content moderation rules.

These sorts of requirements are common among the comprehensive privacy laws now enacted across 12 US states.

The law also requires platforms either to estimate the age of their users and offer children strong privacy protections by default, or to offer strong privacy protections by default to all users.
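To illustrate that either/or choice, here is a minimal, hypothetical sketch. The setting names, age threshold handling, and function are ours for illustration, not language from the statute.

```python
# Hypothetical sketch of the CAADCA's either/or choice on privacy defaults.
# Setting names and the age-estimation flow are invented for illustration.

STRONG_DEFAULTS = {
    "personalized_ads": False,
    "precise_geolocation": False,
    "profile_visibility": "private",
}

RELAXED_DEFAULTS = {
    "personalized_ads": True,
    "precise_geolocation": True,
    "profile_visibility": "public",
}

def default_settings(estimates_age: bool, estimated_age: int | None = None) -> dict:
    """Return the privacy defaults for a new account.

    Option 1: the platform estimates users' ages and applies strong
    defaults to likely minors. Option 2: it skips age estimation and
    applies strong defaults to everyone.
    """
    if not estimates_age:
        return STRONG_DEFAULTS   # option 2: protect all users by default
    if estimated_age is not None and estimated_age < 18:
        return STRONG_DEFAULTS   # option 1: protect likely minors
    return RELAXED_DEFAULTS      # estimated adults keep relaxed defaults

# A platform that chooses not to estimate age must protect everyone:
print(default_settings(estimates_age=False))
```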

But what does a privacy law have to do with freedom of speech?

The court found freedom of speech issues with the CAADCA that might surprise readers outside the US.

For example, similar to other privacy laws, the CAADCA prohibits platforms from processing more personal data than necessary to provide services used by children (with some exceptions). The aim of this provision is to help prevent children from viewing harmful content.

The court found that this requirement “throws out the baby with the bathwater” as it could also restrict how platforms show “neutral or beneficial content” to children.

So not displaying certain content on a social media platform is a free speech issue?

Yes, the court—referencing various legal precedents—says that this restriction will likely infringe on platforms’ “commercial speech” rights.

Here’s another example: The CAADCA would require platforms to publish and enforce content moderation policies and warn children if parents are monitoring their activities on an app.

These obligations would force platforms to “affirmatively provide information to users” and to restrict the speech of their users, thus “regulating” speech.

The court agreed with NetChoice that “the State has no right to enforce obligations that would essentially press private companies into service as government censors”—which is what these requirements would ostensibly do.

What are the implications for other similar privacy laws?

The CAADCA is different from many other privacy laws in that it focuses on children. One persistent issue is that the law’s requirements seem too broad and could significantly impact adult users.

Other states like Florida and Utah have passed similar kids’ privacy laws—and Congress is considering several child-focused federal privacy bills, such as the Kids Online Safety Act (KOSA) and “COPPA 2.0”.

But the court also seems to have a more fundamental problem with the CAADCA’s obligations around risk assessment, dark patterns, and the sale of personal data.

As noted, such provisions are present—in some form—in every comprehensive state privacy law from California to Connecticut. So a finding that these obligations violate the First Amendment per se could put those other laws on shaky footing.

Pressure Building on EU-US Data Privacy Framework

The EU-US Data Privacy Framework (DPF) faces criticism on three fronts:

  • A French MP has lodged an application to have the decision annulled at the Court of Justice of the European Union (CJEU).
  • Politicians and regulators in Germany have voiced criticism of the decision.
  • Campaigner Max Schrems is reportedly planning to lodge a complaint about the DPF in the fall.

Let’s start with the French MP’s challenge.

Sure.

French MP Philippe Latombe lodged an application for annulment of the DPF last week under Article 263 of the Treaty on the Functioning of the European Union (TFEU).

Article 263 TFEU is a mechanism for challenging EU laws and other acts. However, private applicants can generally only use it against acts that are addressed to them or that concern them directly and individually.

Some organizations tried to use this mechanism to challenge the DPF’s predecessor, Privacy Shield, but failed. As such, Latombe might find that he lacks standing to use the procedure.

What’s happening in Germany?

There have been criticisms of the DPF from various parties in Germany over the past week.

Several German politicians expressed support for Latombe’s application to annul the decision, with Free Democratic Party (FDP) spokesperson Maximilian Funke-Kaiser telling Euractiv that the DPF did not protect personal data to EU standards.

A data protection authority (DPA) in the German state of Thuringia also publicly criticized the DPF and warned organizations that the scheme might not stand up to CJEU scrutiny.

And Max Schrems is back?

Privacy campaigner Max Schrems, whose legal cases invalidated both of the DPF’s predecessors, has long made clear that he intends to challenge the adequacy decision.

Euractiv reports that Schrems will likely submit a complaint about the DPF to the Austrian DPA—and that he could do so as early as October 10.

Given Schrems’ track record in torpedoing US data-transfer deals, his case might be the most effective challenge to the DPF.

What We’re Reading

Take a look at these three privacy-related reads published this week:
