Dark Patterns in AI: Privacy Implications
Luiza Jarovsky
Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. Join our AI governance training (1,000+ participants) & my weekly newsletter (39,000+ subscribers)
I have discussed privacy UX and dark patterns in privacy extensively in this newsletter and in my paper. When approaching the topic, we usually refer to deceptive design practices in the context of privacy that happen in the interface (UI/UX) of websites and apps.
Last week, I spoke about dark patterns in code: situations in which a privacy dark pattern involves both UX and code, is not visible to the user (it can only be detected through auditing), and, as with UX dark patterns, undermines user autonomy.
This week, I would like to bring a third type of dark pattern to your attention: AI dark patterns. I have proposed that these are AI applications or features that attempt to make people:
- believe that they are interacting with a human when they are in fact interacting with an AI system (impersonation); or
- believe that artificially generated or manipulated content is authentic (false appearance).
The topic is not new to legal authorities. The Federal Trade Commission (FTC) in the United States, in a recent blog post authored by Michael Atleson, discussed the topic of "fake AI":
"Generative AI and synthetic media are colloquial terms used to refer to chatbots developed from large language models and to technology that simulates human activity, such as software that creates deepfake videos and voice clones. Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that's very much a non-exhaustive list."
The FTC focuses on the prohibition of deceptive or unfair conduct, as established by the FTC Act. So if an organization deceives through an AI tool - even if that is not the tool's intended or sole purpose - it can potentially face legal enforcement.
The European Union's AI Act Proposal, in its Recital 70, also mentions the topic of deceptive AI and highlights the need for transparency obligations:
"Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin."
The classification I proposed above for AI dark patterns is aligned with the AI Act Proposal's Recital 70, as the latter mentions deception through impersonation and false appearance.
Some of the deceptive techniques mentioned in the FTC article above, such as deepfakes, can be described as AI dark patterns. Deepfakes can be used, for example, to make people believe that a public figure or authority has said or done things that they have not. This is a typical AI dark pattern, as widespread deception is made possible through AI technology.
Even though deepfake technologies can have legitimate uses, such as learning tools, photo editing, image repair, and 3D transformation, nowadays their main application seems to be cyber exploitation. As I discussed a few months ago in this newsletter in the context of "creepy technologies," according to a Deeptrace report, 96% of all deepfake videos available online are non-consensual pornography. Non-consensual pornography is an AI dark pattern that, besides being deceitful, is deeply harmful to the victim in various respects, including his or her intimate privacy, a concept that Prof. Danielle Citron has championed.
Another example of an AI dark pattern is a chatbot that behaves as if it were human without giving the user a clear sign that they are dealing with an AI system. Today, various online services offer chatbot-based customer service. These tools must make it clear, transparent, and evident to customers that they are dealing with an AI system, not a human - similar to what Recital 70 of the AI Act requires.
AI chatbots can be unpredictable, inadequate, unethical, untruthful, and invasive. As I have argued before, for the sake of human autonomy and privacy, people should know when they are dealing with humans and when they are dealing with data-processing machines. AI-based chatbots that try to deceive users into thinking that they are human are another example of an AI dark pattern.
In the context of AI chatbots, I have argued before in this newsletter that apps that offer AI-based companions, such as Replika, should be much more strictly regulated. Their marketing language targets people in emotionally vulnerable situations, and they can easily become AI dark patterns, as described above. AI companion chatbots can convince users that they are in a real relationship, with real feelings, trust, and mutual connection involved. Even though the app's goal is not to deceive users, when the personification is so convincing, users do not treat these AI companions as large language model-based programs but as other humans.
As an example of the privacy implications of AI-based companions, Italy's data protection regulator, the Garante per la Protezione dei Dati Personali (GPDP), has ordered a temporary limitation on data processing, with immediate effect, regarding data from Replika's Italian users. It specifically mentioned that the risk is too high for minors and emotionally vulnerable people. The GPDP argued that (free translation):
"Replika violates the European regulation on privacy, does not respect the principle of transparency, and carries out unlawful processing of personal data, as it cannot be based, even if only implicitly, on a contract that the minor is unable to conclude."
These are AI systems developed by for-profit companies that collect and process large amounts of personal and sensitive data from adults and children alike. There are extensive privacy risks involved - especially regarding children, teens, and emotionally vulnerable people. These tools should be regulated, and AI dark patterns should be more broadly discussed.
A final aspect of AI dark patterns is that, like UX-based and code-based dark patterns, they also affect user autonomy. AI dark patterns attempt to bypass user autonomy through impersonation-based and false-appearance-based deception.
I have discussed autonomy in the past and will continue bringing it up in this newsletter. Data processing-based systems that deceive us - especially with the advanced capabilities offered by AI - are particularly harmful to our autonomy and our privacy, and they must be closely regulated.
Interested in diving deeper into dark patterns in privacy and how to counter them through privacy-enhancing design? Join our live course about the topic in April (4 weeks, one live session per week + additional material). Check out the program and register using the coupon TPW-10-OFF to get 10% off. Visit our website to learn about our privacy courses (Privacy & AI course coming up in May).
--
Podcast
I am excited to share the new episode of The Privacy Whisperer Podcast, in which I talk with Romain Gauthier, the CEO of Didomi, about:
This was a fascinating conversation. If you work in the tech industry, are a privacy professional, or are an entrepreneur, you cannot miss it. Listen now.
--
Live events
Did you miss our 'Women Advancing Privacy' event on Privacy by Design with Dr. Ann Cavoukian? Watch the recording and learn more about:
--
Trending on social media
Do you want to better understand and keep track of US state privacy laws? Check out my Twitter thread.
--
Privacy & data protection careers
We have gathered relevant links from large job search platforms and additional privacy job-related information on our Privacy Careers page. Bookmark it and check it periodically for new openings. Wishing you the best of luck!
--
Before you go:
See you next week. All the best, Luiza Jarovsky