AI hype and the fake advocate phenomenon

Case study: chatbots with "personas" and their privacy implications

LinkedIn subscribers: unlock the full newsletter and additional benefits with a subscription plan.


Hi, Luiza Jarovsky here. Read about my work, invite me to speak, tell me what you've been working on, or just say hi here.


This week's newsletter is sponsored by MineOS:


The latest G2 summer reports are out, showing what MineOS has accomplished for its customers and the data privacy industry. MineOS received no less than 94 badges - the strongest momentum in all of its ranked categories - and was also named the overall leader in Data Privacy Management. Get the full G2 report to see how MineOS is leading a new age of data privacy.


Meta's intended change to consent

After five years of extensive litigation by noyb in the EU, Meta yesterday announced that, for people in the EU and a few other areas, they have the intention to change the legal basis that they use to process data for behavioral advertising from “legitimate interest” to “consent.”

Three important points here - and reading between the lines:

  • First, to learn more about the legal aspects of this case, read noyb's blog post and background information. [Last month, I had an 80-minute live conversation with Max Schrems (chairperson at noyb), and he explained some of what goes on behind the scenes in these cases; watch the recording.]
  • Second, in Meta's official announcement, they said, “we are announcing our intention to change the legal basis that we use to process certain data for behavioural advertising.” It is still unclear when this intention will be made concrete and how they are building their interpretation of when consent is applicable in the context of data collection for behavioral advertising. On that, Schrems commented: “So far they talk about 'highly personalized' or 'behavioural' ads and it's unclear what this means. The GDPR covered all types of personalization, also on things like your age, which is not a 'behaviour'. We will obviously continue litigation if Meta will not apply the law fully."
  • Third, Meta added: “we will share further information over the months ahead, because it will take time for us to continue to constructively engage with regulators to ensure that any proposed solution addresses regulatory obligations in the EU, including GDPR and the upcoming DMA.” It is not clear when this switch to consent will happen in practice or how it might be related to the Digital Markets Act (DMA).

Based on previous behavior, existing litigation, and text analysis, my take is that a) announcing one's intention (instead of announcing the change itself) and b) saying that “it will take time for us to continue to constructively engage with regulators to ensure that any proposed solution (…)” do not signal that a consent notice is coming anytime soon.

Fighting against deepfakes: immunizing images

Two days ago, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) announced Photoguard, an AI-based tool to protect against AI-based manipulation (such as deepfakes).

According to the MIT news article about the project, Photoguard is “a technique that uses perturbations - minuscule alterations in pixel values invisible to the human eye but detectable by computer models - that effectively disrupt the model’s ability to manipulate the image.” The researchers wrote in their paper that “the key idea is to immunize images so as to make them resistant to manipulation by these [large diffusion] models.”
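To make the idea more concrete, here is a minimal, hypothetical sketch in Python/PyTorch (not the researchers' code) of what such an immunization step could look like: a small, bounded perturbation is optimized so that the model's internal representation of the photo is pulled toward a decoy, degrading any edit built on top of it. The names `encoder` and `target_latent` are assumptions for illustration.

```python
# Illustrative sketch only - not the PhotoGuard implementation.
# `encoder` stands in for a diffusion model's image encoder and
# `target_latent` for a decoy latent (e.g. of a gray image); both are
# assumptions. `image` is a [1, 3, H, W] tensor with values in [0, 1].
import torch
import torch.nn.functional as F

def immunize(image, encoder, target_latent, eps=8/255, step=1/255, iters=40):
    """Find an imperceptible perturbation (L-inf budget `eps`) that pulls
    the encoder's latent toward the decoy, disrupting downstream edits."""
    x_adv = image.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(encoder(x_adv), target_latent)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()                    # move toward the decoy
            x_adv = torch.clamp(x_adv, image - eps, image + eps)  # stay imperceptible
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                  # keep pixels valid
        x_adv = x_adv.detach()
    return x_adv
```

The researchers also describe a stronger variant that optimizes through the generation process itself, but the intuition is the same: the "immunized" photo looks unchanged to people while resisting manipulation by the model.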

This 90-second video is a great visualization of how it works in practice.

In terms of technical capabilities, the proposed solution is fascinating. The big challenge (as with most tech solutions, perhaps) is implementation.

According to the team behind the tool, key players in the fight against deepfakes are the creators of photo editing models, and they think that there should be a coordinated effort between policymakers, model developers, and social media platforms. In their paper, they describe a “techno-policy” approach, which would also involve “developers providing APIs that allow the users and platforms to immunize their images against manipulation by the diffusion models the developers create.”

On deepfakes: although they have existed for many years, the current Generative AI wave has amplified the problem (as it is now cheaper and easier to create deepfakes), making it a priority on the global AI policy agenda.

I classify deepfakes as a type of dark pattern in AI in which an AI-based system generates realistic images, videos, sounds, or text that can make people believe they are real.

As I have written on various occasions in this newsletter, deepfakes are a serious problem, as they can cause various types of harm. They can, for example, infringe intimate privacy, the term Prof. Danielle Citron uses for violations such as non-consensual pornography, which represents a large percentage of the deepfakes circulating online today.

They can erode democracies, increase polarization, and promote hate when it is no longer possible to distinguish true from false, and when malicious actors use AI-generated content to promote clashes, disinformation, and hate campaigns.

They can even cause geopolitical issues. Recently, a fake image showed an explosion at the Pentagon, and it was shared by verified accounts on Twitter/X (where verification today does not mean authenticity). Depending on the context and the way the AI-generated material circulates, it could lead to devastating consequences.

Solutions such as Photoguard are interesting and welcome. The elephant in the room is definitely implementation: how to make platforms, AI developers, and other industry players take effective action towards a common social goal (which might make them lose some money).

AI hype and the fake advocate phenomenon

Still on the topic of deepfakes, I now use them to illustrate the fake advocate phenomenon:

I recently wrote that Adobe has announced that it will integrate Adobe Firefly, its generative AI tool, into its widely used Photoshop program.

Especially given Photoshop's massive popularity, one of the expected consequences is that “generative filling” a photo will become as common as “editing” it. Good photo editing will come to mean photo recreation: making the image as fascinating, impressive, and perhaps shocking as possible. If deepfakes still surprise us today - and the good ones go viral - soon they will be ubiquitous and will change our relationship with what is true and what is fake.

Therefore, Adobe's new integration is normalizing and further facilitating the creation of deepfakes.

In parallel to that, Adobe is also running a campaign against deepfakes, the Content Authenticity Initiative (CAI), joined by over 1,000 companies, including Microsoft. They declare: “we are a community of media and tech companies, NGOs, academics, and others working to promote adoption of an open industry standard for content authenticity and provenance.” The CAI works by authenticating the metadata attached to an image, which will show whether the image was AI-generated. But it can be easily bypassed, and it is not universal: it is only useful if the photo editing software is connected to it.
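To see why metadata-based provenance is easy to bypass, here is a toy sketch (hypothetical, not the actual CAI/C2PA specification): a signed manifest travels with the image file, and any check has to fall back to “no information” once the metadata is removed, for example by re-saving or screenshotting the image.

```python
# Toy illustration of metadata-based provenance - not the CAI/C2PA spec.
# A signed manifest accompanies the image; stripping the metadata leaves
# the verifier with "no provenance" rather than "fake".
import hashlib, hmac, json

SIGNING_KEY = b"publisher-secret"  # stand-in for a real signing certificate

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Attach a signed claim about how the image was produced."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "generator": generator})
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def check_provenance(image_bytes: bytes, manifest) -> str:
    """Verify the manifest, if one is present at all."""
    if manifest is None:
        return "no provenance metadata (stripped or never attached)"
    expected = hmac.new(SIGNING_KEY, manifest["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return "manifest signature invalid"
    claimed = json.loads(manifest["payload"])["sha256"]
    actual = hashlib.sha256(image_bytes).hexdigest()
    return "authentic manifest" if claimed == actual else "image altered after signing"
```

A real deployment signs with certificates and records edit history, but the structural weakness illustrated above is the same: the authenticity signal only exists if the editing tool chooses to attach it, and it disappears the moment the metadata is stripped.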

So the real question is: if you are trying to prevent harm, wouldn't it be better to only launch a potentially harmful product or feature when it has a sufficiently safe built-in mechanism to prevent misuse?

It looks like being a fake advocate is more profitable and fits well within the current zeitgeist.

We are living in a techno-social moment in which companies want to reap the financial benefits of potentially harmful tech, but they do not want to be seen as irresponsible or have their reputations tarnished.

The 2023 solution is to become a fake advocate: companies launch the potentially harmful tech and then, after public backlash or global regulatory efforts - and without changing or deactivating the potentially harmful product - they launch a parallel PR campaign or a movement urging the world to join them in combatting the harms. They could be more effective by building safer products, but nowadays, the PR and social media buzz alone seem to be enough.

I recently wrote about this topic in a viral Twitter thread about OpenAI's “please regulate us” PR campaign and in a newsletter article about OpenAI's unacceptable privacy-by-pressure practices.

Case study: chatbots with personas

This week, I discuss the ongoing trend (soon to be joined by Meta) of “chatbots with personas” and their privacy implications:


This is a free preview. Choose a paid subscription to access the case study every week and receive discounts on our training programs. Thank you!



