Privacy and Data Security Violations: What’s the Harm?

“It’s just a flesh wound.”

Monty Python and the Holy Grail

Suppose your personal data is lost, stolen, improperly disclosed, or improperly used. Are you harmed?

Suppose a company violates its privacy policy and improperly shares your data with another company. Does this cause a harm?

In most cases, courts say no. This is the case even when a company is acting negligently or recklessly. No harm, no foul.

Strong Arguments on Both Sides

Some argue that courts are ignoring serious harms caused when data is not properly protected and used.

Yet others view the harm as trivial or non-existent. For example, given the vast number of records compromised in data breaches, the odds that any one instance will result in identity theft or fraud are quite low.

And so much of our data isn’t very embarrassing or sensitive. For example, who really cares what brand of paper towel you prefer?

Most of the time, people don’t even read privacy policies, so what’s the harm if a company violates a privacy policy that a person didn’t even bother to read?

The Need for a Theory of Harm

Courts have struggled greatly with the issue of harms for data violations, and not much progress has been made. We desperately need a better understanding and approach to these harms.

I am going to explore the issue and explain why it is so difficult. Both theoretical and practical considerations are intertwined here, and there is tremendous incoherence in the law as well as fogginess in thinking about the issue of data harms.

I have a lot to say here and will tackle the issue in a series of posts. In this post, I will focus on how courts currently approach privacy/security harm.

The Existing Law of Data Harms

1. Data Breach Harms

Let’s start with data breach harms. Plaintiffs generally argue that they are injured by a data breach on at least three bases:

1. The exposure of their data has caused them emotional distress.

2. The exposure of their data has subjected them to an increased risk of harm from identity theft, fraud, or other injury.

3. The exposure of their data has resulted in their having to expend time and money to prevent future fraud, such as signing up for credit monitoring, contacting credit reporting agencies and placing fraud alerts on their accounts, and so on.

Courts have generally dismissed these arguments. In looking at the law, I see a general theme, which I will refer to as the “visceral and vested approach” to harm. Harms must be visceral – they must involve some dimension of palpable physical injury or financial loss. And harms must be vested – they must have already occurred.

For harms that involve emotional distress, courts are skeptical because it is too easy for people to claim that they suffered emotional distress. Such claims can be hard to prove or disprove, and these difficulties make courts very uneasy.

For the future risk of harm, courts generally want to see harm that has actually manifested rather than harm that is incubating. Suppose you’re exposed to a virus that silently waits in your bloodstream for 10 years and then suddenly might kill you. Most courts would send you away and tell you to come back after you’ve dropped dead, because then we would know for sure you’re injured. But then, sadly, the statute of limitations will have run out, so it’s too late to sue. Tough luck, the courts will say.

For harms that involve time and money spent to protect yourself, that’s on your own dime. If you want to collect damages for being harmed, then you must leave yourself exposed, wait until you’re harmed, and hope that it happens within the statute of limitations. For example, in In re Hannaford Bros. Data Security Breach Litigation (Maine Supreme Court, 2010), the court held that there was “no actual injury” from a data breach even when plaintiffs had to take efforts to protect themselves, because the law “does not recognize the expenditure of time or effort alone as a harm.”

Occasionally, a court recognizes a harm under one of the above theories, but for the most part, these cases are losers. One theory that has gained a bit of traction is that plaintiffs paid fees in reliance on promises of security that were then broken. But this is in line with the visceral and vested approach because it focuses on money spent. And many plaintiffs can’t prove that they read the privacy policy or relied on the often vague and general statements made in it.

2. Privacy Harms

Privacy harms differ from data breach harms in that privacy harms do not necessarily involve data that was compromised. Instead, they often involve the collection or use of data in ways that plaintiffs didn’t consent to or weren’t notified about.

The law of privacy harms is quite similar to that of data breach harms. Courts also follow the visceral and vested approach. For example, in In re Google, Inc. Cookie Placement Consumer Privacy Litigation (D. Delaware, Oct. 9, 2013), plaintiffs alleged that Google “’tricked’ their Apple Safari and/or Internet Explorer browsers into accepting cookies, which then allowed defendants to display targeted advertising.” The court held that the plaintiffs couldn’t prove a harm because they couldn’t demonstrate that Google interfered with their ability to “monetize” their personal data.

In another case involving Google, In re Google, Inc. Privacy Policy Litigation (N.D. Cal. Dec. 3, 2013), plaintiffs sued Google for consolidating information from various Google products and services under a single universal privacy policy. The plaintiffs claimed that Google began using and sharing their data in different ways than had been promised in the original privacy policies. The court held that the plaintiffs lacked standing because they failed to allege how Google’s “use of the information deprived the plaintiff of the information's economic value.”

In Clapper v. Amnesty International, 133 S. Ct. 1138 (2013), the U.S. Supreme Court held that plaintiffs failed to allege a legally cognizable injury when they challenged a provision of the law that permits the government to engage in surveillance of their communications. The plaintiffs claimed that there was an “objectively reasonable likelihood” that their communications would be monitored, and as a result, they had to take “costly and burdensome measures to protect the confidentiality of their international communications.” The Supreme Court concluded that the plaintiffs were speculating and that “allegations of possible future injury are not sufficient” to establish an injury. According to the Court, “fears of hypothetical future harm” cannot justify the countermeasures the plaintiffs took. “Enterprising” litigants could establish an injury “simply by making an expenditure based on a nonparanoid fear.”

There are some cases where courts find privacy harms, but they too are largely consistent with the visceral and vested approach. For example, in In re iPhone Application Litigation (N.D. Cal. Nov. 25, 2013), the plaintiffs alleged that Apple breached promises in its privacy policy to protect their personal data because its operating system readily facilitated the non-consensual collection and use of their data by apps. Judge Koh found that the plaintiffs had made sufficient allegations of harm because of their claim that “the unauthorized transmission of data from their iPhones taxed the phones' resources by draining the battery and using up storage space and bandwidth.” But the court then concluded that the plaintiffs failed to prove that they had read and relied upon the privacy policy.

But Wait . . . Courts Do Sometimes Recognize These Harms

So is it really true that harms must be visceral and vested? Not necessarily. In the most influential privacy law article ever written, Samuel Warren and Louis Brandeis’s The Right to Privacy, 4 Harv. L. Rev. 193 (1890), the authors spent a great deal of time discussing the nature of privacy harms. “[I]n very early times,” they contended, “the law gave a remedy only for physical interference with life and property.” Subsequently, the law expanded to recognize incorporeal injuries; “[f]rom the action of battery grew that of assault. Much later there came a qualified protection of the individual against offensive noises and odors, against dust and smoke, and excessive vibration. The law of nuisance was developed.”

In line with this trend, the law came to protect people’s reputations. Warren and Brandeis pointed out how the law originally protected only physical property but later expanded to cover intellectual property. They were paving the way for the legal recognition of remedies for privacy invasions, which often involve not a physical interference but an “injury to the feelings,” as they described it.

Since the Warren and Brandeis article, the law has come a long way in recognizing emotional distress injuries. Originally, the law didn’t protect emotional harm. But the law later developed an action for intentional infliction of emotional distress as well as for negligent infliction of emotional distress. Courts used to allow emotional distress damages only when accompanied by physical injury, but that rule has eased as the law has developed.

A number of privacy cases succeed, and they often do not follow the visceral and vested approach. The law recognizes harm in defamation cases, for example, and this harm is reputational in nature and in some cases does not involve physical or financial injury.

In many privacy tort cases, plaintiffs win when their nude photos are disseminated or when autopsy or death scene photos of their loved ones are disclosed. Courts don’t seem to question the harm here, even though it isn’t physical or financial. Cases involving embarrassing secrets can also succeed without proof of physical or financial injury.

There are also cases where courts provide plaintiffs with remedies when they are at risk of suffering future harm. For example, in Petriello v. Kalman, 576 A.2d 474 (Conn. 1990), a physician made an error that damaged the plaintiff’s intestines, leaving her with an 8% to 16% chance of suffering a future bowel obstruction. The court concluded that the plaintiff should be compensated for the increased risk of developing the bowel obstruction “to the extent that the future harm is likely to occur.” Courts have also begun allowing people to sue for medical malpractice that results in the loss of an “opportunity to obtain a better degree of recovery.” Lord v. Lovett, 770 A.2d 1103 (N.H. 2001).
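To make the loss-of-chance measure concrete, here is a minimal sketch of the proportional-damages arithmetic these cases describe. The dollar figure is purely hypothetical and appears nowhere in the opinion; only the 8% to 16% risk range comes from Petriello:

```python
# Hypothetical sketch of loss-of-chance (proportional) damages.
# Only the 8%-16% risk range comes from Petriello v. Kalman;
# the dollar amount below is invented for illustration.

full_damages = 100_000             # assumed damages if the bowel obstruction actually occurs
risk_low, risk_high = 0.08, 0.16   # increased risk found in Petriello

# Compensation "to the extent that the future harm is likely to occur"
# scales the full damages by the probability of that harm.
award_low = risk_low * full_damages    # 8,000
award_high = risk_high * full_damages  # 16,000

print(f"Proportional award range: ${award_low:,.0f} to ${award_high:,.0f}")
```

The point is simply that the award scales with the probability of the future harm rather than waiting, as the visceral and vested approach would, for the harm to materialize.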

Under these risk of future harm cases, damages can include those “directly resulting from the loss of a chance of achieving a more favorable outcome,” as well as damages “for the mental distress from the realization that the patient’s prospects of avoiding adverse past or future harm were tortiously destroyed or reduced,” and damages “for the medical costs of monitoring the condition in order to detect and respond to a recurrence or complications.” Joseph H. King, Jr., “Reduction of Likelihood” Reformulation and Other Retrofitting of the Loss-of-Chance Doctrine, 28 U. Mem. L. Rev. 491, 502 (1998).

In cases involving rights under the First Amendment to the U.S. Constitution, courts have sometimes recognized a harm when people are “chilled” from exercising rights such as free speech or free association. Courts have long been uneasy about recognizing a “chilling effect,” and the law wavers here a bit, but the concept is an accepted one.

What Accounts for these Differences?

Why do courts depart from the visceral and vested approach in some circumstances but not others?

With the photos involving nudity and death, or the revelation of deeply embarrassing secrets, judges can readily imagine the harm. It is harder to do so when various bits and pieces of more innocuous data are leaked or disclosed. With the medical cases, the harm is also much easier to understand.

Harms involving non-embarrassing data, however, are quite challenging to understand and also present some difficult practical issues. In my next post, I will explore why.

* * * *

Daniel J. Solove is the John Marshall Harlan Research Professor of Law at George Washington University Law School, the founder of TeachPrivacy, a privacy/data security training company, and a Senior Policy Advisor at Hogan Lovells. He is the author of 9 books including Understanding Privacy and more than 50 articles. The author thanks SafeGov for its support. Follow Professor Solove on Twitter @DanielSolove.

The views here are the personal views of Professor Solove and not those of any organization with which he is affiliated.

Image Credit: Created by Daniel Solove using clips from the Open Clip Art Library

Doug DePeppe, Esq.

Founder, eosedge Legal

It seems that the law, like a pendulum, swings into action only after palpable inequities ripen and a broader understanding of those inequities enters the mainstream. Then society becomes willing to accept a rebalancing enabled by the law. I suspect that the emergence of privacy data harvesting by sophisticated criminal enterprises, engaged in enterprise-level fraudulent activity that can undermine trust in markets, and indeed trust in the Internet itself, is so destabilizing that courts will begin to envision the harm in the manner Dan writes of.

Richard Beaumont

Product Manager, Inventor, PrivacyTech, SaaS, CIPP/E, CIPM.

Very informative as always, and I'm looking forward to the rest of the series. Related to this, there also appears to be a lack of consistency in defining privacy risks - what they are and how they arise - which perhaps plays into the problem of identifying future potential harms. In the medical case, the risk of future harm could be quantified - perhaps that is why it was successful? Also, when it comes to data use outside the terms of a privacy policy, where does breach of contract play a role? Does that change the need to prove harm as a result?

Joice Bass

Consumer Attorney

Great overview; am really looking forward to the next installment! From a practical perspective, data breaches certainly don't fall neatly into any of the existing common-law causes of action that are available to plaintiffs. Even if economic damages can be established, a business' unintentional loss of data--particularly if caused by unforeseen misconduct of 3rd parties--can hardly be equated to an intentional breach of its privacy policy/promise to consumers, or even a negligent misrepresentation of that policy.

Yisehak L.

Information Security

Is there a privacy harm associated with the monitoring of Internet usage in corporate environments, as long as employees are made aware and consent is obtained through acceptance of a policy? If so, what would be considered a violation of privacy when it comes to such monitoring?

If humans are to have liberty, then DATA must be regulated well, or we will face an alternative ID in the cloud ...
