The DSA hurricane is here
Luiza Jarovsky
Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. Join our AI governance training (1,000+ participants) & my weekly newsletter (36,000+ subscribers)
Plus: the new privacy paradigm and what companies should know
LinkedIn subscribers: unlock the full newsletter and additional benefits with a subscription plan.
Hi, Luiza Jarovsky here. Read about my work, invite me to speak, tell me what you've been working on, or just say hi here.
This week's edition of The Privacy Whisperer is sponsored by Didomi:
Unravel the complexities of data privacy with Didomi’s upcoming webinar on September 21st at 5pm CEST. Hosted by Max Schrems and Romain Gauthier, this discussion will explore the latest changes and challenges of the industry, from EU-US data transfers to the intersection of privacy and marketing. Secure your spot!
AI copyright wars
As generative AI becomes widespread - and every tech company adds “generative fill” functionalities to its products - copyright issues are receiving more and more attention.
For those not familiar, the core questions are: given that AI models are trained using content from “the whole internet,” if I create a new piece of content using AI tools, can I own the copyright? How much human work is necessary for a creative work to deserve intellectual property protection? And should artists, creators, or anyone else whose content is used to train those models receive any sort of compensation?
We are not yet one year into this latest AI hype wave, and the lawsuits are already coming in, giving us a glimpse into the current state of copyright concerns.
For example, authors Sarah Silverman, Christopher Golden, and Richard Kadrey have sued OpenAI and Meta for copyright infringement allegedly committed through ChatGPT and LLaMA, respectively. In the OpenAI lawsuit, they argue that:
“The unlawful business practices described herein violate the UCL because they are unfair, immoral, unethical, oppressive, unscrupulous or injurious to consumers, because, among other reasons, Defendants used Plaintiffs’ protected works to train ChatGPT for Defendants’ own commercial profit without Plaintiffs’ and the Class’s authorization. Defendants further knowingly designed ChatGPT to output portions or summaries of Plaintiffs’ copyrighted works without attribution, and they unfairly profit from and take credit for developing a commercial product based on unattributed reproductions of those stolen writing and ideas.”
On the topic of AI and copyright, a few days ago, a US judge said that “human authorship is a bedrock requirement of copyright” and denied copyright to an AI-generated image.
Last week, Benedict Evans wrote an interesting article showing how unique the current issues involving AI and intellectual property are; you can read it here.
Just as the internet, music streaming platforms, and other technologies changed copyright forever, generative AI will probably do the same. We can expect more groundbreaking trends, controversies, and legal decisions in the coming months.
AI governance and geopolitics
There have been continuous discussions about the AI Act and how each country or region plans to regulate and govern AI. However, we are facing a new technological challenge that does not respect national borders, so we should also focus on coordinated global efforts to make sure the whole planet is on board.
I recently read a very interesting piece covering AI governance and geopolitics, written by Ian Bremmer and Mustafa Suleyman: “The AI Power Paradox: Can States Learn to Govern Artificial Intelligence - Before It’s Too Late?” Here is a quote:
“Like past technological waves, AI will pair extraordinary growth and opportunity with immense disruption and risk. But unlike previous waves, it will also initiate a seismic shift in the structure and balance of global power as it threatens the status of nation-states as the world’s primary geopolitical actors. Whether they admit it or not, AI’s creators are themselves geopolitical actors, and their sovereignty over AI further entrenches the emerging “technopolar” order—one in which technology companies wield the kind of power in their domains once reserved for nation-states.”
They discuss the size and breadth of the global AI governance challenge and propose an approach called “technoprudentialism,” which is aligned with other paradigms in international law that aim at identifying and mitigating risk from a global perspective.
It is unclear to me, however, if the proposed approach works in practice and how we can build a global framework in which:
a) tech companies work together with governments and are held accountable on the global political stage; and
b) countries mitigate risk collectively when there is so much competition, the economic stakes are so high, and AI development is concentrated in the hands of tech companies.
All sorts of alliances are being formed, but it is still unclear how AI governance and regulation will end up being consolidated.
This is a well-written piece, and anyone interested in diving deeper into geopolitical waters should read it.
The DSA hurricane is here
Last week, the Digital Services Act (DSA) became legally enforceable for very large online platforms (VLOPs) and very large online search engines (VLOSEs).
According to the European Commission's designation decision of April 25, 19 services fall into these two categories:
Very Large Online Platforms:
Alibaba AliExpress
Amazon Store
Apple AppStore
Booking.com
Facebook
Google Play
Google Maps
Google Shopping
Instagram
LinkedIn
Pinterest
Snapchat
TikTok
Twitter
Wikipedia
YouTube
Zalando
Very Large Online Search Engines:
Bing
Google Search
These companies will have to comply with a full set of obligations around transparency, protection of minors, content moderation, privacy, and more.
As an example, Article 34 of the DSA establishes that these companies will have to identify, analyze, and assess systemic risks stemming from their services - including algorithmic systems - such as:
- the dissemination of illegal content;
- negative effects on the exercise of fundamental rights;
- negative effects on civic discourse and electoral processes;
- negative effects in relation to gender-based violence and to the protection of public health and minors;
- serious negative consequences for people's physical and mental well-being.
Especially in the context of the rapid expansion of AI-based functionalities, the DSA is an important step towards more algorithmic transparency and a meaningful effort to help make the internet a safer and fairer place.
As happened with the GDPR (General Data Protection Regulation), we can expect these rules to trigger a global regulatory wave towards a more transparent, safer, and fairer internet.
Speaking of the GDPR, another important topic is how it intersects with the DSA. Dr. Gabriela Zanfir-Fortuna and Vasileios Rovilos from the Future of Privacy Forum have just published an article on the topic; check it out.
GDPR enforcement has been lagging behind, as Max Schrems made clear in our conversation in July. Let's hope that DSA enforcement will follow a different path, as EU Commissioner Thierry Breton's video suggests.
I am optimistic that the DSA is a positive step in making the internet a better place.
In the case study below, I talk more about the new privacy paradigm (which includes the arrival of the DSA and its impact on tech companies); do not miss it.
Case study: the new privacy paradigm
In this week's case study, I discuss the new privacy paradigm and what companies should know about it, especially in the context of strengthening their compliance efforts:
This is a free preview. Choose a paid subscription to access the case study every week and receive discounts on our training programs. Thank you!