Privacy noir

Lucid folks,

The U.S. continues to tilt towards a European approach to data protection despite, or perhaps because of, a dysfunctional Congress. This latest pull towards true GDPR North uses data minimization and the legitimate interest test as regulatory lodestones. Case in point: the FTC’s proposed settlement with Marriott and Starwood over their security lapses leans into data retention justification and collection self-restraint.

  • “The companies must implement a policy to retain personal information for only as long as is reasonably necessary to fulfill the purpose for which it was collected. The companies also must share the purpose behind collecting personal information and specific business need for retaining it… [including] if required by law, regulation, or court order or other legal obligation; or for other documented legitimate business needs except for marketing.”

If this provision sounds eerily familiar, it should. It echoes the first principles of proportionality and limitation also encoded into the CCPA (as amended by the CPRA), the Colorado Privacy Act Regulations, and an ever-growing patchwork of analogous comprehensive state privacy laws.
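To make the retention logic concrete, here is a minimal sketch of how a "retain only as long as reasonably necessary" rule might be operationalized in code. All names and fields are hypothetical illustrations, not taken from the FTC order or any statute:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical data model: each record carries its documented purpose,
# a retention window tied to that purpose, and a legal-hold flag for
# obligations under law, regulation, or court order.
@dataclass
class Record:
    purpose: str              # documented business need for collection
    collected_on: date
    retention_days: int       # how long the purpose reasonably requires
    legal_hold: bool = False  # required by law, regulation, or court order

def must_delete(rec: Record, today: date) -> bool:
    """True once the documented purpose has lapsed and no legal obligation remains."""
    if rec.legal_hold:
        return False
    return today > rec.collected_on + timedelta(days=rec.retention_days)

# Example: a reservation record kept 30 days past collection.
rec = Record(purpose="fulfil reservation", collected_on=date(2024, 9, 1), retention_days=30)
print(must_delete(rec, date(2024, 10, 15)))  # True: the retention window has elapsed
```

The key design point mirrors the settlement language: deletion is the default once the purpose-bound window closes, with legal obligations as the carve-out rather than open-ended "business needs."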

Moving on, in this XL issue:

  • What the FTC is telling AdTech
  • Tidings from the UK’s #DPPC24 conference
  • Agree/Reject button colors matter to Belgium

…and more.

From our bullpen to your screens,

Colin O'Malley & Lucid Privacy Group Team

With Alex Krylov (Editor/Lead Writer), Ross Webster (Writer, EU & UK), Raashee Gupta Erry (Writer, US & World), McKenzie Thomsen, CIPP/US (Writer, Law & Policy)


If this is the first time you're seeing our Privacy Bulletin in your feed, give it a read and let us know what you think. For more unvarnished insights, visit our Blog.

Your comments and subscriptions are welcome!


Blog: What FTC Sticks Are Telling Adtech

As Lucid’s principal Colin O’Malley writes, adtech companies, with their scaled data collection across third-party sites, often feel the hot gaze of privacy regulators around the world. In the US, they now have twin authorities pushing the policy envelope, with perhaps more on the way.

Tracking regulatory activity in the adtech space can sometimes feel disjointed, with a myopic focus on the particulars of each emerging case. In this piece, we focus on recent enforcement activity in the US and extract the main areas of focus across these cases to evaluate higher-level policy priorities.

The privacy cops can be cheap with the carrots, but they have plenty of sticks, and as many openings to drive their points home...

Continue reading


Takeaways From UK ICO's Data Protection Professional Conference

Lucid’s David Reeves had the pleasure of attending the recent UK ICO DPP Conference on 8 October. Amid the usual excellent speakers and useful workshops was the opportunity to join a Q&A with the Information Commissioner, John Edwards.

Priorities, in brief: Although understandably short on definitive commitments and predictions, the Q&A nevertheless provided a number of notable signposts.

  • The kids are not alright. Edwards confirmed that children’s online privacy and safety vis-à-vis the Children's Code and the Online Safety Act will form a large part of the ICO’s ongoing work and collaboration with Ofcom. Recall his keynote from February mentioning the role that audits, DPIAs and, yes, enforcement actions will play in advancing a key (and most controversial) component of the kids’ safety framework: age assurance.
  • Privacy and AI (of course). A major prong for the ICO, Gen AI has garnered consultative position papers, including on data scraping, and expressions of support for DPOs. “...For a DPO, it’s not introducing friction into [the push to market], but just making sure there’s a full risk assessment. And we will have tools to help with that…” While Edwards likely means software and guidelines à la the CNIL, we cannot exclude the ICO taking action to ensure DPOs have the resources and organizational access they need to do the work.
  • Cookies and adtech. The ICO’s intervention in Google’s Sandbox initiative and partnership with the CMA to hold Google to its commitments appears to be part of a broader strategy. For Edwards, it’s not just about publishers offering ‘Reject All’ buttons or fair choice with Consent or Pay business models. He is interested in exploring multiple “motivating factors” that incentivize “systemic good practice”, including the risk of public shaming.

UK GDPR reform: Efforts to amend the law post-Brexit are not dead, and the ICO expects “something” from the Government quite soon.

Zooming out: John Edwards' regulatory approach for 2025 reflects a measured balance between fostering innovation and upholding existing laws and guidance. The Commissioner appears focused on providing the tools and support data protection practitioners need to tackle increasingly interwoven tech challenges.

Bonus highlight: The session on cyber crime was particularly interesting. Representatives from IASME, the National Crime Agency, and the NCRCG gave opinions and tips on dealing with ransom attacks and other online crimes on the rise. The TL;DR? Never, ever pay. The panel was unanimous and robust in its evidence that once blackmailed is twice blackmailed.

-DR, AK


Podcast: Signal’s Stand Against the Big [AI] Tech Data Machine

Signal Foundation President Meredith Whittaker recently sat down with Kara Swisher to discuss privacy, power, and the future of tech. After leaving Google over its ethical compromises, Whittaker now leads Signal, one of the most secure messaging platforms on the market. As she tells Swisher, her mission is simple: build tech that protects people, not profits, refusing to collect even metadata.

Why it matters: As AI giants tighten their grip on our data, the conversation around privacy is no longer academic—it’s existential.

  • Signal’s business model stands in stark contrast to Silicon Valley’s data-fueled empires.
  • Echoing the views of FTC Chair Lina Khan, Whittaker positions Signal as the antidote to ‘surveillance capitalism’, where consumer data is the new fossil fuel and privacy is just collateral damage.

Between the lines: Whittaker’s journey from Google to Signal speaks volumes about the state of tech today. Google, once a place of open debate and lofty ideals, has transformed into a company driven by defense contracts, tracktech, and the relentless pursuit of profit. She left Big G in 2017 after helping to organize walkouts over ethical concerns and sexual misconduct, frustrated by the company’s shift toward a “don’t mention the evil” approach.

  • Trust game. Signal, with 70-100 million monthly users, isn’t playing the numbers game like WhatsApp or Facebook. That makes it the go-to platform for those who truly need privacy, from journalists to activists, and even governments.
  • Capability hype. The AI market, she critiques, is built on decades-old algorithms dressed up as something new, with much of the “innovation” still reliant on rampant data collection. And while consumer hype is at a fever pitch, Whittaker sees cracks in the facade—AI’s promises often don’t match its actual utility, something consumers and professionals should keep top of mind.
  • Power imbalances. Whittaker warns that AI development is monopolized by companies like Microsoft and Google, who control the data and compute infrastructure while being dependent on the same old user tracking and profiling business models.

Inflection point? Whittaker’s comments point to an inflection point that will likely be precipitated by antitrust action. As privacy and safety worries mount and the power of Big Tech comes under fire, there’s a growing appetite for the Bigs to be broken up, starting with Google, if only to oxygenate the startup market through fairer competition for talent and investment dollars.

Zooming out: The big question is whether regulators will have the courage to act, and whether the public will demand a new era of tech, one where privacy and ethics are baked into the business model, not tacked on as an afterthought. Whittaker’s vision for the future, with smaller, more ethical and data-minimalist tech ecosystems, could become a reality—but only if we recognize that the current model is unsustainable and harmful. She asks, “How do we build technology that is actually beneficial, actually rights preserving?” The answer may, at least in part, be creating a playing field where certain tech could remain sustainably non-profit while attracting top talent.

-AK


Other Happenings

  1. CFPB to Big Banks: Stop Playing Dirty with Our Data. The CFPB’s new Final Rule—following extensive rulemaking—forces banks and financial services to provide for data portability, and to stop hoarding and surreptitious, unrelated secondary uses of consumer financial data. The rule also bans “screen scraping,” where third parties use login credentials to pull data without ensuring accuracy. Most importantly, consumers will now have the power to shut off data access anytime or let it expire automatically after a year. Score one for privacy. New rules go into effect in 2026 for large institutions; 2028/2030 for SMBs. (See Jules Polonetsky’s excellent X thread, here.)
  2. X's Privacy Policy Shake-Up: Your Data's Going on a Field Trip! So, X is getting cozy with "collaborators"— a.k.a. third parties eager to train their AI models with your data. Effective November 15, 2024, users' data can now be shared with these partners, potentially paving the way for data licensing deals like the one between OpenAI and Reddit. But don’t worry; there’s a chance for users to opt out—though good luck finding that elusive setting… Meanwhile, X is cracking down on scrapers with hefty penalties for anyone peeking at over a million tweets a day—$15,000, to be exact. So, protect your tweets, folks; they’ve never been more valuable (or more vulnerable)!
  3. Australia OAIC Releases New Guides for AI Privacy Compliance. The Office of the Australian Information Commissioner (OAIC) has published two critical guides aimed at helping businesses navigate the complex intersection of AI and Australia’s federal privacy law. Guidance on privacy and the use of commercially available AI products emphasizes the selection of appropriate tools while meeting privacy obligations; Guidance on privacy and developing and training generative AI models underscores the need for AI trainers to assess whether personal information is involved—especially where data provenance is unclear.
  4. RTL Belgium Ordered to Add 'Reject All' Button, Fix Color Schemes. Belgium’s Data Protection Authority just dropped a GDPR hammer on a major regional news publisher. The issue? Not offering a ‘Reject All’ button in their consent interface. Moreover, the DPA agreed that having a brightly colored ‘Accept and Close’ button and comparatively drab other options was “misleading”, and thus also violated Articles 5(1)(a) and 6(1)(a) GDPR, and Article 10/2 Loi-cadre. We’ve said it before and we’ll continue to beat this drum: check your CMP configurations and UX treatments. ‘Preferred choice’ and other forms of nudging could be viewed as manipulative design. (See screenshots below.)
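The CMP-configuration advice above can even be partially automated. Below is a toy sketch of a lint check for a consent banner's first layer, flagging the two issues the Belgian DPA raised: no 'Reject All' option, and an accept button styled far more prominently than its alternatives. The field names (`action`, `style`) are hypothetical, not from any real CMP's API:

```python
# Toy lint check for a CMP first-layer configuration (hypothetical schema).
def lint_first_layer(buttons: list[dict]) -> list[str]:
    """Return a list of human-readable issues found in the banner config."""
    issues = []
    actions = {b["action"] for b in buttons}
    # Issue 1: the first layer must offer a 'Reject All' choice.
    if "reject_all" not in actions:
        issues.append("no 'Reject All' at the first layer")
    # Issue 2: accept and reject buttons should have equal visual prominence.
    accept = next((b for b in buttons if b["action"] == "accept_all"), None)
    reject = next((b for b in buttons if b["action"] == "reject_all"), None)
    if accept and reject and accept.get("style") != reject.get("style"):
        issues.append("accept/reject buttons are not styled with equal prominence")
    return issues

# A banner like the one the DPA sanctioned: bright accept, no reject option.
banner = [
    {"action": "accept_all", "style": "bright"},
    {"action": "settings", "style": "muted"},
]
print(lint_first_layer(banner))  # ["no 'Reject All' at the first layer"]
```

This is no substitute for legal review of UX treatments, but automated checks like it can catch configuration drift before a regulator does.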

-RGE, AK

Source: RTL Belgium

Lucid Resources
