Dark Patterns (Deceptive Design) in Data Protection


Learn how to identify and stop them from dictating your online choices

Dark patterns (or deceptive design), according to Harry Brignull - the designer who coined the term - are "tricks used in websites and apps that make you do things that you didn't mean to, like buying or signing up for something." Some common examples are websites or apps that make it almost impossible to cancel or delete an account, nearly invisible links to unsubscribe from newsletters you never requested, and insurance products that are surreptitiously added to your shopping cart. You can find more examples here, or tweet and expose your own findings using the hashtag #darkpattern (they might be retweeted by this account - it's worth checking out some of the outrageous examples there).

[Would you like to receive daily privacy and data protection insights? Follow me on Twitter and on LinkedIn]

One of the chapters of my ongoing Ph.D. in data protection, fairness, and UX design is about dark patterns in the context of data protection (you can download the full article I wrote on the topic here). I defined them as "user interface design choices that manipulate the data subject's decision-making process in a way detrimental to his or her privacy and beneficial to the service provider." In simple terms, they are deceptive design practices used by websites and apps to collect more (or more sensitive) personal data from you. They are everywhere, and you have most probably been encountering some form of dark pattern on a daily basis while navigating online. Below are two examples of practices I call dark patterns:

1- Screenshot from the TikTok sign up page:


In this example, you cannot tell whether the "Yes" and "No" buttons answer the "are you over 18" question or the "do you allow TikTok to show personalized ads" question. According to my taxonomy, this is a "mislead" type of dark pattern: it misleads users into accepting personalized ads, since the user is forced to say "yes" to confirm being over 18.
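To make the "mislead" mechanism concrete, here is a minimal, hypothetical sketch (the function names and data structures are my own illustration, not TikTok's actual code or anything from the article): bundling two unrelated questions behind one Yes/No control means consent to ads is never collected on its own.

```python
# Hypothetical illustration of the "mislead" dark pattern:
# one Yes/No control answers two unrelated questions at once.

def misleading_prompt(answer: str) -> dict:
    """One button answers both questions (the dark pattern):
    confirming age silently doubles as consent to personalized ads."""
    over_18 = answer.lower() == "yes"
    return {"over_18": over_18, "personalized_ads": over_18}

def fair_prompt(age_answer: str, ads_answer: str) -> dict:
    """Each question gets its own, unambiguous control."""
    return {
        "over_18": age_answer.lower() == "yes",
        "personalized_ads": ads_answer.lower() == "yes",
    }

# A user who is over 18 but does NOT want personalized ads:
bundled = misleading_prompt("yes")       # ad consent is forced along with age
separate = fair_prompt("yes", "no")      # ad consent stays a separate choice
print(bundled, separate)
```

In the bundled version there is no input the user can give that confirms their age without also "consenting" to ads, which is exactly why the interface qualifies as misleading.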

2- Screenshot from the website groopdealz.com:


Here, through manipulative language (read the underlined text under the email address field), the website pressures the user to enter his or her email and subscribe to the newsletter, so the dark pattern category is "pressure." To read more about the taxonomy and its categories and sub-categories, click here.

In the full article on the topic, I discuss dark patterns' mechanism of action and the behavioral biases they exploit (showing how they manage to manipulate us into doing something that was not our initial intention). I also present their legal status under the European General Data Protection Regulation (GDPR) - no, they are not explicitly illegal - and offer a taxonomy to help us understand what is and what is not a dark pattern. Lastly, I propose regulatory changes that could help us move forward and improve the protection offered to users.

My goal in talking about them in this newsletter is to raise awareness and to show that the design of websites and apps is neither inoffensive nor neutral. Design is a powerful tool to manipulate behavior, and sometimes - particularly because of behavioral biases - manipulative tricks are difficult to detect and avoid, especially online.

What you can do as an individual is try to be critical about your behavior online and why you are interacting with certain platforms in a certain way (what is your goal? What will you gain? What will the platform gain? Is it possible that you are being tricked into behaving in a way that is actually harmful to you?).

Online platforms that offer services we love - such as Facebook, Twitter, TikTok, Amazon, Netflix, Spotify, Tinder, and so on - are not "neutral." They are companies working for (a lot of) profit, and an important input for making that money is your personal data. Their foremost goal is not doing good for people and the world, but pleasing shareholders (hopefully while following the law). The law is rarely (perhaps never) at the same pace as the technologies it applies to - or the manipulative techniques that make them more profitable - so it is important that you are aware of what is happening and decide what is best for you.

There is a lot to unpack here, and I hope to talk more about the topic in future posts.


See you next week. All the best, Luiza Jarovsky
