Privacy impact assessments are not the proper tool for privacy by design
This blog was written in response to the numerous comments I received on a position I took, namely that a PIA is not an appropriate tool for privacy by design. In my post on LinkedIn, I took issue with a statement in the description of a forthcoming webinar, which read “For this reason, the Privacy Impact Assessment (PIA) has emerged as the lynchpin for successful privacy by design programs.” The curtness of my post left some people befuddled. In this post, I dive into the details of why I believe PIAs and privacy by design, while both useful for privacy, serve different purposes. A PIA is a wonderful tool for catching existing and potential privacy violations, whereas privacy by design forestalls them from ever becoming potentialities.
I’m going to break this post down into four distinct arguments: semantics, meaning, practice and embedded privacy model.
Semantics
First off, just to clarify, I am discussing government and private sector assessments generally labeled “Privacy Impact Assessment” and initialized as PIA. Exactly what form a PIA takes can vary by jurisdiction or organization, as there is no universally applied meaning. Because of this, an organization could undertake various efforts to design for privacy, and part of that effort may be labeled a PIA regardless of its content. I once had a client with a “Differential Privacy Policy” that had nothing to do with the common understanding of Differential Privacy and everything to do with generic encryption and abstraction standards to be applied to different types of data. Many organizations improperly call a privacy notice/statement a privacy policy. Calling something an apple doesn’t make it an apple. I have no doubt people may be using the term PIA loosely to refer to any type of assessment related to privacy. Those doing this may, in fact, be doing privacy by design and are only guilty of a misnomer.
I think this is the case with ISO and their use of PIA in ISO 29134:2017. The standard says “A PIA is more than a tool: it is a process that begins at the earliest possible stages of an initiative, when there are still opportunities to influence its outcome and thereby ensure privacy by design.” Similar to the ISO definition, the UK ICO’s timely LinkedIn post about DPIAs describes a DPIA as a process (one which assesses and mitigates risks, not just impacts) that runs alongside planning and development.
A DPIA should begin early in the life of a project, before you start your processing, and run alongside the planning and development process. It should include these steps (modeled as a simple checklist in the sketch after the list):
1. Identify need for DPIA
2. Describe the processing
3. Consider consultation
4. Assess necessity and proportionality
5. Identify and assess risks
6. Identify measures to mitigate risk
7. Sign off and record outcomes
8. Integrate outcomes into plan
9. Keep under review
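Purely as an illustration of how these steps work as an ongoing process rather than a one-off document, here is a minimal Python sketch. The checklist class and its method names are my own invention, not anything prescribed by the ICO:

```python
# Illustrative only: the ICO's nine DPIA steps modeled as an ordered checklist.
ICO_DPIA_STEPS = [
    "Identify need for DPIA",
    "Describe the processing",
    "Consider consultation",
    "Assess necessity and proportionality",
    "Identify and assess risks",
    "Identify measures to mitigate risk",
    "Sign off and record outcomes",
    "Integrate outcomes into plan",
    "Keep under review",
]

class DPIAChecklist:
    def __init__(self, project: str):
        self.project = project
        self.completed = set()  # step numbers marked done

    def complete(self, step: int) -> None:
        """Mark a step (1-9) as done."""
        if not 1 <= step <= len(ICO_DPIA_STEPS):
            raise ValueError("DPIA steps are numbered 1 through 9")
        self.completed.add(step)

    def outstanding(self) -> list:
        """Steps still to be done; step 9 ('Keep under review') never really ends."""
        return [name for i, name in enumerate(ICO_DPIA_STEPS, start=1)
                if i not in self.completed]

dpia = DPIAChecklist("new initiative")
dpia.complete(1)
print(dpia.outstanding()[:2])  # ['Describe the processing', 'Consider consultation']
```

The fact that step 9 leaves the checklist perpetually open is consistent with the ICO’s framing of a DPIA as a living process rather than a document signed once.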
What both ISO and the ICO say does accurately describe some aspects of privacy by design but, in my opinion, labeling those processes, or the documentation, as a (D)PIA owes more to the bandwagon effect than to a proper use of the term. It is done out of convenience and convention rather than precision. To understand why I say that, it’s necessary to dive into the term itself, which I do in the section below.
Meaning
A Privacy Impact Assessment is clearly an assessment, and a literal reading of the title implies that the assessment examines the impact (to someone or something) related to privacy. A PIA is not a risk assessment, because risk generally has two measures: likelihood and impact. In a risk assessment, the former measures how likely an event is to occur and the latter measures the gravity of that event’s effect. The risk of the moon hitting the earth is the combination of its likelihood (negligible in our lifetime) and the gravity (no pun intended) of its effect, namely destroying life as we know it. PIAs, as the name implies, cover only one side of the equation: the measure of the impact. PIAs originated from a desire to replicate the use of Environmental Impact Assessments, which were developed as a way of understanding the impact of planned developments (roads, dams, subdivisions, etc.).
Both privacy and environmental impact assessments look at only one side of the equation: the impact. The likelihood of the precipitating event is a foregone conclusion (i.e., 100% likely). If we build this dam, how much damage will there be to the environment? If we build this surveillance system, how badly will people be affected? Risk assessments include the measure of likelihood. How likely is California to build a new dam in the next decade, and what will the impact be? How likely is a vendor to sell the data we give them to a data broker, and what will the impact be?
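To make the distinction concrete, here is a minimal sketch of the arithmetic. The functions and the 0-to-10 impact scale are my own illustration, not drawn from any standard:

```python
# Illustrative only: classic two-factor risk versus impact-only assessment.

def risk_score(likelihood: float, impact: float) -> float:
    """A risk score combines how likely an event is with how severe it would be."""
    return likelihood * impact

def impact_score(impact: float) -> float:
    """An impact assessment treats the event as a foregone conclusion (likelihood = 1)."""
    return risk_score(likelihood=1.0, impact=impact)

# The moon striking the earth: catastrophic impact, negligible likelihood.
print(risk_score(likelihood=1e-12, impact=10.0))  # ~0.0: negligible risk
print(impact_score(impact=10.0))                  # 10.0: likelihood ignored entirely
```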
Many may be confused because impact can itself include a subsidiary risk consideration, namely the likelihood of different levels or types of impacts. In the case of the dam, there isn’t just the direct impact of the impoundment behind the dam and the diversion of water; there is also a “risk” that the dam breaks and causes damage. One of the impacts of the dam project could be the risk of it breaking and flooding downstream communities.
PIAs, and their environmental predecessors, also include considerations for mitigating harms, but this is reacting to a proposed design, not designing for privacy. An early definition, from the Australian Privacy Act of 1988, bears out this interpretation:
A privacy impact assessment is a written assessment of an activity or function that:
(a) identifies the impact that the activity or function might have on the privacy of individuals; and
(b) sets out recommendations for managing, minimising or eliminating that impact.
Managing, minimizing or eliminating the impact is not the same as designing products, services or business processes for privacy. You can manage the impact by buying insurance that pays out certain claims. You can minimize the impact by taking preventative measures which don’t eliminate the invasiveness of an activity but can reduce its toll. Consider a person spying on their spouse. The spy could do things to reduce the impact on the one being spied on: putting things back in exact order after rummaging through belongings, making cameras hard to find, using covert techniques that are hard to uncover, etc. They didn’t “design” their spying for privacy; they designed their spying to minimize impacts. They still violated their spouse’s privacy. This is the antithesis of proactive design (Cavoukian’s 1st Principle of Privacy by Design).
The 1st Principle states “Proactive not Reactive; Preventative not Remedial: The Privacy by Design (PbD) approach is characterized by proactive rather than reactive measures. It anticipates and prevents privacy invasive events before they happen. PbD does not wait for privacy risks to materialize, nor does it offer remedies for resolving privacy infractions once they have occurred — it aims to prevent them from occurring. In short, Privacy by Design comes before-the-fact, not after.”
Note the phrase “privacy risks to materialize.” Risk “materializes” when the event occurs, when it goes from a potentiality to a reality (the spy installs a camera), not when the impact is felt (i.e., the spouse learns about the camera). The point is, the very nature of Privacy “Impact” Assessments lends itself to sanctioning privacy invasive acts while “managing, minimising or eliminating” the impact.
A note on Data Protection Impact Assessments under GDPR
Under the European Union’s General Data Protection Regulation (GDPR), a Data Protection Impact Assessment (DPIA) is a specialized form of Privacy Impact Assessment. The legal requirements for a DPIA underscore what I’ve just described. The timing requirements of Article 35 (DPIA), when contrasted against Article 25 (Data Protection by Design and Default), highlight the nature of a (D)PIA versus the more proactive Data Protection/Privacy by Design.
Data protection by design and default must be done at the time of the determination of the means of processing (i.e., when you’re doing the design and deciding which processing activities to engage in) and at the time of the processing itself, whereas a DPIA need only be conducted prior to processing, not necessarily prior to the determination of the means of processing. While you could conduct a DPIA prior to, or during, system design, as suggested by the ICO, the controller’s only obligation is to do so before processing real personal data. In other words, you can build the system, but before you actually go live, you had better do an assessment.
Practice
Consistent with the meaning discussed above, in my experience PIAs tend to come after the point where many design decisions have been made. One almost universal question PIAs ask is what personal data/information is processed. Forward-thinking PIAs may phrase the question as “what personal data do you anticipate processing?” But by the point you already know, or anticipate, what information a system will process, the organization has already collapsed the design space and already made decisions. Privacy by design should start not with a set design to be tweaked and altered (Cavoukian’s admonition that you shouldn’t bolt on privacy but build it in) but without preconceptions, allowing for the weighing of different design options (not only along the dimension of privacy but also other non-functional requirements like safety, security, accessibility, usability, etc.).
Consider a company building micro-climate controllers to adjust temperatures in rooms based on who is in the room and their preferred climate. The design team’s response to a PIA question about what information it anticipates collecting might be:
Room-based airflows, humidity, temperature, thermal video, thermal signatures (of individuals), audio, voice patterns, inferred location of individuals (within the house), personal preference settings, historical information, credentials for administrators.
The PIA might go on to ask about security measures, access controls, and how individuals were informed, gave consent, or could exercise control over the system. But the PIA didn’t ask the fundamental question about design choices: were there other designs (for instance, using a touchpad rather than audio and voice patterns for interactions, or a physical token rather than thermal signatures to identify people) that could have been used instead and that posed lower risks of privacy invasion, such as surveillance by cohabitants? Those other design options are something a privacy by design approach should examine. Maybe there were justifications for the decisions made, but resolving tensions between competing values (privacy and usability, for instance) and optimizing the design is more than the rote application of a PIA to a proffered design. A PIA could be an evolving document, drafted, edited, rewritten and changing with the evolution of a design, but in reality the design of PIAs, and the business processes they inhabit, more often than not contemplate a single PIA conducted close to launch and presented to a privacy office for remedial efforts before a system goes live.
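To illustrate what weighing design options might look like, here is a hedged sketch. The candidate designs, dimensions and 1 (best) to 5 (worst) scores are invented for the example, not the output of any real assessment:

```python
# Illustrative design-space comparison for the micro-climate controller.
# A real privacy by design exercise would derive scores with stakeholders.
DESIGN_OPTIONS = {
    "thermal signatures + voice interaction": {"privacy": 5, "usability": 1, "cost": 3},
    "physical token + touchpad":              {"privacy": 2, "usability": 3, "cost": 2},
    "room-level presence sensing only":       {"privacy": 1, "usability": 4, "cost": 1},
}

def rank_by(dimension: str) -> list:
    """Order candidate designs along one dimension (lower score is better)."""
    return sorted(DESIGN_OPTIONS, key=lambda name: DESIGN_OPTIONS[name][dimension])

# A PIA typically scores only the proffered first design;
# privacy by design weighs all three before the design space collapses.
print(rank_by("privacy"))
print(rank_by("usability"))
```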
In my personal experience, PIAs tend to be post-hoc justifications for the collection and use of data, albeit with some minor tweaks to provide some minimal “protection” of the data. As discussed in the Meaning section above, the privacy invasive act is a foregone conclusion. “We need this data to do this.” “We need to share this data to do this.” The organization is going to proceed with X, whatever X is. Now let us add some notification to individuals in the fine print, throw some bones to consent, add in a little security theater and voila, privacy is assured [note the sarcasm in my voice].
Embedded Privacy Model
There are many normative models for privacy: Hartzog’s Pillars of Privacy, Westin’s States of Privacy, Solove’s Taxonomy of Privacy, Calo’s Subjective and Objective Harms, Citron’s Sexual Privacy, NIST’s Problematic Data Actions and more. There are also pseudo models, such as the well-known and oft-used Fair Information Practice Principles (FIPPs). I call the FIPPs a pseudo privacy model because the practice principles (transparency, choice, accountability, security, review and correction) are not so much a normative model for privacy as a normative model for fairness in the use of information. Laudable goals, but fairness is not equivalent to privacy.
PIAs invariably have an embedded privacy model or, even more typically, an embedded abbreviated model such as the FIPPs [see R. J. Cronk and S. S. Shapiro, "Quantitative Privacy Risk Analysis," 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 2021, pp. 340-350, doi: 10.1109/EuroSPW54576.2021.00043. Also free at Enterprivacy.com]. Metaphorically, the typical structure of a PIA cuts through a 3-dimensional risk universe with a 2-dimensional view. In other words, PIAs often ignore many privacy risks (e.g. dark patterns or manipulated decision making) in favor of a select few (e.g. secondary use, exclusion and insecurity). They ignore possible mitigation tactics (abstraction, perturbation or decentralization) in favor of a select few (encryption, access control, notice, and consent). They ignore whole segments of the ecosystem in which the design operates, often focusing only on risks posed by nefarious actors acting in contravention to the organization’s goals and objectives (e.g. hackers, cybercriminals, internal bad actors).
Additionally, PIAs are, by their nature, inconsiderate of wider design choices; they merely apply point solutions to point problems (too much data -> data minimization, insecure data -> access controls, mismatched expectations -> privacy notice), consistent with their embedded and limited privacy model. A privacy by design methodology will consider other designs in the design universe. And while privacy by design may not pick the most privacy-friendly option, as there are always other values to consider, such as profitability, usability, safety, etc., it provides a much more robust consideration of ways to reduce risk than a PIA could ever achieve with a specific design.
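The point-solution pattern is easy to caricature in code. The mappings below come from the examples above, while the fallback behavior is my own illustration:

```python
# Illustrative only: the embedded model of a typical PIA as a fixed lookup table.
PIA_POINT_SOLUTIONS = {
    "too much data": "data minimization",
    "insecure data": "access controls",
    "mismatched expectations": "privacy notice",
}

def pia_mitigation(problem: str) -> str:
    """A point solution: one canned answer per recognized problem, else nothing."""
    return PIA_POINT_SOLUTIONS.get(problem, "not in the embedded model")

# Risks outside the embedded model simply fall through the lookup.
print(pia_mitigation("insecure data"))               # access controls
print(pia_mitigation("manipulated decision making")) # not in the embedded model
```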
Privacy Impact Assessments are useful as an audit tool to catch risky business processes, but they are not useful as a tool for privacy design.
All of the above reasons (semantics, meaning, practice and an embedded privacy model) aren’t to say that organizations shouldn’t conduct PIAs. PIAs and DPIAs have their role. However, that role is more that of an audit tool. This is especially important in areas where a privacy design process does not exist in the organization or is limited in application. I find this most often in the creation of business processes that utilize existing systems. Whereas an organization may have great privacy by design processes when creating new service or product offerings, when a department develops an internal process (such as an analyst emailing a file to IT, IT removing redundant records and putting the file on a server for access by the analyst, who downloads it and uploads it to a service provider), there were likely no “privacy” considerations incorporated into the design of that business process. This is where a PIA could catch risky activities. Consistent with the discussion of DPIAs, the purpose of PIAs is to catch potential problems BEFORE processing begins or the system goes live and actually affects the privacy of individuals. A PIA remains, however, of limited use in the design of products, services or business processes with an eye towards privacy.
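Used this way, a PIA is essentially an audit pass over an already-defined data flow. A toy sketch of that audit, with step names from the example above and a single invented risk flag, might look like:

```python
# Toy audit of the internal process described above. The external_transfer
# flag is invented for illustration; a real PIA would check many more properties.
PROCESS = [
    ("analyst emails file to IT",           {"external_transfer": False}),
    ("IT removes redundant records",        {"external_transfer": False}),
    ("IT puts file on server",              {"external_transfer": False}),
    ("analyst uploads to service provider", {"external_transfer": True}),
]

def flag_risky_steps(process) -> list:
    """Return the steps a PIA-style review would flag before processing begins."""
    return [name for name, attrs in process if attrs["external_transfer"]]

print(flag_risky_steps(PROCESS))  # ['analyst uploads to service provider']
```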
R. Jason Cronk is the author of the just-released 2nd edition of Strategic Privacy by Design, an official textbook of the IAPP's CIPT course. He is also Chair of the newly formed Institute of Operational Privacy Design, a non-profit dedicated to standardizing the process of privacy by design in organizations.
P.S. I fully expect a raft of negative responses but, in line with my Social Media policy, don't expect further discussion on my part.