Making Privacy Primary

One of the biggest challenges with modern privacy laws is the right to be forgotten. This right was first codified in the General Data Protection Regulation (GDPR) and later appeared in the California Consumer Privacy Act (CCPA). The United States does not have a federal omnibus security and privacy law; therefore, the Federal Trade Commission (FTC) has pursued privacy and data security violations under Section 5 of the FTC Act, which prohibits deceptive or unfair acts. [1] The FTC's broader mission is to protect American consumers from corporate harm and to prevent anti-competitive conduct.

One weapon in the FTC's arsenal is algorithmic disgorgement, a tool intended as a fencing mechanism: by intimidating would-be violators, it aims to prevent future unlawful conduct, such as developing a model with data obtained without proper consent. [2] Algorithmic disgorgement, also known as model destruction, requires a company to delete any models or algorithms built and trained with data for which the company failed to disclose its collection practices and capture informed consent. The FTC has enforced algorithmic disgorgement in three cases: Cambridge Analytica[3], Everalbum[4], and WW, formerly known as Weight Watchers International, Inc. (Weight Watchers), and its subsidiary Kurbo, Inc. (Kurbo)[5].

Each case required the named defendants to delete any models or algorithms built and trained with tainted data. On the surface, it makes sense that the FTC would require an entire model to be deleted: once data is used to train a model, its imprint remains in the model even if the underlying data is later deleted. Deleting the entire model, along with any data collected without proper consent, should in theory make the consumer whole. However, the FTC fails to consider the difficulty of unwinding a model, including the difficulty of identifying the provenance of its data.

The most recent algorithmic disgorgement case, against Kurbo and Weight Watchers (the "parties"), illustrates some of these issues. The parties violated the COPPA Rule through their collection practices regarding minors. In the settlement, the FTC required the parties to "destroy the algorithms or AI models it built using [the] personal information collected through its Kurbo healthy eating app from kids as young as 8 without parental permission" [6] and to delete the illegally obtained data.

The parties most likely incorporated "several different intersecting models and data sets" [7] that work together, and may even have spliced or blended the data into new datasets for testing and validation. Furthermore, data replication and distribution are standard practices, ensuring that a variety of stakeholders can access the data for different purposes. Suppose a company does not have an internal control in place to track the data's lineage and provenance. In that case, model destruction could be a weapon of mass destruction rather than an enforcement tool that protects privacy values.
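
To make the idea of such a control concrete, here is a minimal sketch of lineage tagging in Python. Everything in it is hypothetical; the names (`LineageTag`, `ingest`, `find_tainted`) and fields are invented for illustration and are not drawn from any FTC order or real system:

```python
"""A minimal, hypothetical sketch of lineage tagging; all names are invented."""
from dataclasses import dataclass
from datetime import datetime, timezone
import uuid


@dataclass
class LineageTag:
    """Provenance metadata attached to a record at the moment of collection."""
    record_id: str       # unique ID used to trace the record downstream
    source: str          # originating system, e.g. "mobile_app_signup"
    consent_basis: str   # e.g. "parental_consent", "terms_of_service"
    collected_at: datetime


@dataclass
class Record:
    payload: dict
    lineage: LineageTag


def ingest(payload: dict, source: str, consent_basis: str) -> Record:
    """Attach a lineage tag when data enters the ecosystem."""
    tag = LineageTag(
        record_id=str(uuid.uuid4()),
        source=source,
        consent_basis=consent_basis,
        collected_at=datetime.now(timezone.utc),
    )
    return Record(payload=payload, lineage=tag)


def find_tainted(records: list[Record], unlawful_sources: set[str]) -> list[Record]:
    """Locate every record whose provenance traces to an unlawful source,
    so it can be purged in response to a disgorgement order."""
    return [r for r in records if r.lineage.source in unlawful_sources]
```

The point is not the specific fields but that provenance is captured at ingestion, when it is cheap, rather than reconstructed after an enforcement order, when it may be impossible.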

The FTC's deployment of algorithmic disgorgement is an attempt to remedy privacy violations. Unfortunately, this enforcement tool fails to protect privacy values or make a consumer whole again. Furthermore, disgorgement has a disproportionate impact on small businesses, reinforcing an anti-competitive environment in which tech monopolies thrive. Imagine, for example, that a terrorist wants to launch a nuclear weapon against the United States; the attack is less likely to succeed than one directed at Haiti. The reason is that the United States has the resources to detect and intercept such an attack, whereas Haiti is a small island nation with minimal resources. Algorithmic disgorgement is analogous to a nuclear attack on American corporations: large companies will have the resources to detect, mitigate, comply, and survive; smaller companies, like Haiti, will most likely be annihilated.

Ari Ezra Waldman theorized in The New Privacy Law that we are entering the second wave of privacy law; the first wave "focused on notice, transparency, and choice" [8], whereas the second wave will focus on internal procedures [9]. The FTC's use of model destruction is a response to the first wave; however, it is predicated on second-wave protocols that have yet to be established. If the FTC requires companies to implement processes that "automatically attach lineage information to data they collect and used in building algorithmic systems" [10], then companies will be better equipped to protect privacy values, specifically the context-specific norms of information flows described by Helen Nissenbaum in Privacy in Context.

Privacy professionals have debated privacy values time and again; over time, however, a rough consensus has emerged that consumers hold seven privacy values that should be protected. Those privacy values are:

1. The right to be left alone[11]

2. The right to limit access to self[12]

3. The right to secrecy[13]

4. The right to control your personal information[14]

5. The right to protect your personhood and autonomy[15]

6. The right to intimate and confidential relationships[16]

7. The right to restrain government and commercial entities from exploiting personal information for gain[17]

Furthermore, Daniel J. Solove discusses how to conceptualize privacy and assess the value of privacy over time as privacy practices evolve. [18]

Moreover, suppose the FTC required companies to map their data flows. Companies would then understand which privacy values they need to protect for consumers. If an enforcement action results in algorithmic disgorgement, companies could verify to the FTC and the public that any data collected and ingested into their models illegally has been destroyed. This requirement would also create a mechanism of recourse for third parties that may have ingested tainted data through an API: they could test and validate their models and respond appropriately by removing any illegal data from their data ecosystems, thereby creating a pathway to make a consumer whole again.
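
A rough sketch of what that verification might look like follows, under two assumptions that are mine, not the FTC's: that each model retains a manifest of the lineage IDs of the records it was trained on, and that a disgorgement order enumerates the record IDs to be purged:

```python
"""Hypothetical audit sketch; assumes models keep a manifest of training record IDs."""


def audit_model(training_manifest: set[str], purged_record_ids: set[str]) -> set[str]:
    """Return the overlap between a model's training records and the records
    ordered destroyed. A non-empty result means the model carries the imprint
    of tainted data and must be retrained or destroyed."""
    return training_manifest & purged_record_ids


# A downstream third party that ingested data through an API can run the
# same check against the published list of purged record IDs.
manifest = {"rec-001", "rec-002", "rec-003"}
purged = {"rec-002", "rec-999"}

overlap = audit_model(manifest, purged)
if overlap:
    print(f"Model tainted by {sorted(overlap)}; retrain without these records.")
else:
    print("Model verified clean against the disgorgement order.")
```

An empty overlap is what would let a company, or a downstream API consumer, attest that its models carry no imprint of the unlawfully collected records.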

To ensure that consumer privacy values are protected, the FTC must introduce a framework that creates guardrails for corporations to comply with while also informing consumers of what companies are and are not allowed to do, enabling consumers, or even whistleblowers, to report violations of data collection integrity. In July 2022, a Jane Doe filed a lawsuit in the Northern District of California against Meta Platforms, Inc. (Meta), formerly known as Facebook, Inc. (Facebook), UCSF Medical Center, and Dignity Health[19] for their unlawful collection and sharing of health data without consent.

The complaint alleges that Meta obtained Jane Doe's data when, as a patient, she input her information into UCSF Medical Center's online patient portal. Jane Doe's information was shared with Meta because of UCSF Medical Center's use of the Meta Pixel[20]. UCSF Medical Center and Dignity Health knew, when embedding the Meta Pixel tool into their web platforms, that it would share sensitive information with Meta, as the tool is designed to support advertising.

Because Jane Doe entered her medical information into the patient portal and her "User Data, including sensitive medical information, [was] harvested by Meta through the Meta Pixel tool without her consent"[21], the parties involved violated contextual integrity[22]. Furthermore, because Meta unlawfully collected Jane Doe's information, she received advertisements specifically tailored to the data collected about her. Every privacy value was violated by the parties to this transaction.
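
Contextual integrity can be stated almost operationally: an information flow is characterized by its sender, recipient, data subject, information type, and transmission principle, and it is judged against the entrenched norms of the context where the data originated. The toy model below is a loose illustration of that idea, with every name invented for the example; it is not an implementation of Nissenbaum's framework:

```python
"""Toy model of contextual integrity; a loose illustration, not Nissenbaum's framework."""
from dataclasses import dataclass


@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    info_type: str
    transmission_principle: str  # e.g. "confidentiality", "informed_consent"


# The entrenched norm of the healthcare context: a patient's medical data
# flows from patient to provider under confidentiality, and nothing else.
HEALTHCARE_NORMS = {
    Flow("patient", "provider", "patient", "medical", "confidentiality"),
}


def violates_contextual_integrity(flow: Flow) -> bool:
    """A flow that matches no entrenched norm of its context breaches integrity."""
    return flow not in HEALTHCARE_NORMS


# The flow alleged in the complaint: the provider's portal sends the
# patient's medical data to an advertiser without consent.
pixel_flow = Flow("provider", "advertiser", "patient", "medical", "none")
print(violates_contextual_integrity(pixel_flow))  # True
```

On this toy account, the alleged portal-to-advertiser flow fails because both the recipient and the transmission principle depart from the healthcare norm.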

Still, the data that Meta harvested illegally was shared with other third parties for advertising, and it is possible that those third parties shared it with their own third parties, making the data lineage difficult to trace. In all fairness, this is a prima facie example of where algorithmic disgorgement would be an appropriate enforcement mechanism. However, UCSF Medical Center and Dignity Health are the data controllers, as the data was first fed into their website. The information was then shared with Meta, and Meta in turn shared the data with its third parties. UCSF Medical Center and Dignity Health would be the primary responsible parties, as the data's provenance is their website. They, not Meta or the third parties that ingested the illegally obtained data downstream, are obligated to obtain consent. So, against which party would the FTC bring the enforcement action?

Most observers might say that Meta should receive the enforcement action, as its technology enables the tainted data to be shared efficiently across multiple models and databases. However, obtaining proper consent before disclosure is UCSF Medical Center and Dignity Health's responsibility, not Meta's. Therefore, algorithmic disgorgement would not be beneficial, as UCSF Medical Center and Dignity Health used a third-party tool to expand their marketing. Additionally, to the public's knowledge, Jane Doe's information did not train models internally; it was shared only because UCSF Medical Center and Dignity Health used a third-party marketing tool to reach their ideal customers. It is unlikely that two medical institutions intentionally chose to violate HIPAA and other regulatory requirements governing the sharing of health information.

Consequently, if the FTC wants its signature enforcement tool to have a lasting impact, it must begin by creating a framework for data lineage. Under that framework, the FTC could extend its enforcement reach to companies like Meta that receive ill-gotten data when companies like UCSF Medical Center and Dignity Health are the primary source, thereby creating a pathway to help make a consumer whole again and forcing companies to take data integrity and privacy values seriously.


[1] “15 U.S. Code § 45 - Unfair Methods of Competition Unlawful; Prevention by Commission,” LII / Legal Information Institute, accessed August 14, 2022, https://www.law.cornell.edu/uscode/text/15/45.

[2] Avi Gesser, Paul Rubin, and Anna Gressel, “Model Destruction – The FTC’s Powerful New AI and Privacy Enforcement Tool | Compliance and Enforcement,” accessed August 13, 2022, https://wp.nyu.edu/compliance_enforcement/2022/03/30/model-destruction-the-ftcs-powerful-new-ai-and-privacy-enforcement-tool/.

[3] Federal Trade Commission, In the Matter of Cambridge Analytica, LLC, https://www.ftc.gov/legal-library/browse/cases-proceedings/182-3107-cambridge-analytica-llc-matter.

[4] Federal Trade Commission, In the Matter of Everalbum, Inc., https://www.ftc.gov/legal-library/browse/cases-proceedings/192-3172-everalbum-inc-matter.

[5] Federal Trade Commission, In the Matter of Weight Watchers/WW, https://www.ftc.gov/legal-library/browse/cases-proceedings/1923228-weight-watchersww.

[6] The Enterprise Team, “Put down the Algorithm, and Slowly Back Away,” Protocol, March 14, 2022, https://www.protocol.com/newsletters/protocol-enterprise/ftc-algorithmic-destruction-google-cloud.

[7] “FTC’s Order to Delete AI Algorithms Will Have Consequences - Protocol,” accessed August 13, 2022, https://www.protocol.com/enterprise/ftc-algorithm-data-model-ai.

[8] Ari Ezra Waldman, “The New Privacy Law,” UC Davis Law Review Online 55 (August 2021): 19–42, https://lawreview.law.ucdavis.edu/online/55/files/55-online-Waldman.pdf.

[9] Waldman.

[10] “FTC’s Order to Delete AI Algorithms Will Have Consequences - Protocol.”

[11] Introduced by Warren and Brandeis in their law review article The Right to Privacy published in 1890 in the Harvard Law Review. Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” Harvard Law Review 4, no. 5 (December 15, 1890): 193–220, https://doi.org/10.2307/1321160.

[12] Introduced by Sissela Bok in her book Secrets: On the Ethics of Concealment and Revelation; privacy is the condition of being protected from unwanted access by others, whether physical access, access to personal information, or attention. Sissela Bok, Secrets: On the Ethics of Concealment and Revelation (New York: Pantheon Books, 1982).

[13] In his book The Economics of Justice, Richard Posner introduces the concept that privacy is the right to “conceal information about themselves that others might use to their disadvantage.” Richard A. Posner, The Economics of Justice, 5th ed. (Cambridge, Massachusetts: Harvard University Press, 1983).

[14] Charles Fried introduces the concept that “privacy is not simply an absence of information about us in the minds of others; rather it is the control we have over information about ourselves.” Charles Fried, “Privacy,” Yale Law Journal 77, no. 3 (January 1968): 475–82.

[15] Jeffrey Reiman introduced the concept that people respect individual privacy barriers through social rituals. Jeffrey Reiman, “Privacy, Intimacy, and Personhood,” Philosophy & Public Affairs 6, no. 1 (1976): 26–44, https://www.jstor.org/stable/2265060.

[16] Professor Danielle Citron argues that sexual privacy should be protected as a unique form of privacy. Danielle Citron, “Sexual Privacy,” Yale Law Journal 128 (2019): 1870, https://www.yalelawjournal.org/pdf/Citron_q8ew5jjf.pdf.

[17] Westin introduced the concept that personal information should be defined as a property right. Alan Westin, “Privacy and Freedom,” Washington and Lee Law Review 25 (1968): 166.

[18] Daniel J. Solove, “Conceptualizing Privacy,” California Law Review 90, no. 4 (July 2002): 1087–1155, https://doi.org/10.2307/3481326.

[19] Jane Doe, individually and on behalf of all others similarly situated, v. Meta Platforms, Inc. f/k/a Facebook, Inc., UCSF Medical Center, and Dignity Health Medical Foundation, No. 3:22-cv-04293-AGT (N.D. Cal. 2022).


[20] Per Section 3 of the Summary of Allegations in the Complaint: “Meta Pixel is a snippet of code embedded on a third-party website that tracks a user’s activity as the user navigates through a website. Meta Pixel can track and log each page a user visits, what buttons they click, as well as specific information they input into the website.” (Jane Doe v. Meta Platforms, Inc., et al., 2022)

[21] Jane Doe, individually and on behalf of all others similarly situated, v. Meta Platforms, Inc. f/k/a Facebook, Inc., UCSF Medical Center, and Dignity Health Medical Foundation, No. 3:22-cv-04293-AGT (N.D. Cal. 2022).

[22] Nissenbaum describes multiple transmission principles (confidentiality, reciprocity, bidirectionality, desert, entitlement, and need) that aid in identifying whether contextual integrity is violated in a norm of information flow. (Nissenbaum, Privacy in Context, 2009, p. 145)
