Just released! "Generative AI, Democracy and Human Rights."

Ever heard of the "freedom of thought"? Well, now you have! It is a central topic of my latest work, in collaboration with the brilliant Aaron Shull of the Centre for International Governance Innovation (CIGI).

In this piece, we explore *freedom of thought*, a universal human right that has been under-appreciated and overlooked in debates around AI and democracy. Although we don't hear much about human rights in the United States, they underpin numerous international agreements and organizations, and play an important role in European Union and Canadian law, as well as in the laws of many other nations. We see human rights, and freedom of thought in particular, as important tools for shaping the digital information ecosystem in support of democracy.

Key points:

→ Disinformation campaigns aimed at undermining electoral integrity are expected to play an ever larger role in elections, owing to the increased availability of generative artificial intelligence (AI) tools that can produce high-quality synthetic text, audio, images and videos, and to their potential for targeted personalization.

→ As these campaigns become more sophisticated and manipulative, the foreseeable consequence is further erosion of trust in institutions and heightened disintegration of civic integrity, jeopardizing a host of human rights, including electoral rights and the right to freedom of thought.

→ These developments are occurring at a time when the companies that create the fabric of digital society should be investing heavily in, but are instead dismantling, the "integrity" or "trust and safety" teams that counter these threats.

→ Policy makers must hold AI companies liable for harms caused or facilitated by their products that could have been reasonably foreseen. They should act quickly to ban using AI to impersonate real people or organizations, and require the use of watermarking or other provenance tools so that people can differentiate between AI-generated and authentic content (a sketch of the provenance idea follows below).

Deepest thanks to Susie Alegre, Owen Doyle, Lynn Schellenberg and Jennifer Wilkes-Thiel for helping bring this project to fruition!

#AI #HumanRights #DigitalRights

ICSI - International Computer Science Institute | University of California, Berkeley | California Initiative for Technology and Democracy | University of California, Berkeley, Haas School of Business | Brennan Center for Justice | CITRIS and the Banatao Institute
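For readers curious what a "provenance tool" actually does, here is a toy Python sketch of the core idea: a publisher signs a small manifest describing how a piece of content was produced, and anyone can later check that the manifest matches the content and was not tampered with. This is a minimal illustration only; real schemes such as C2PA use public-key certificates rather than the shared-secret HMAC used here, and all names (PUBLISHER_KEY, the "generator" field) are hypothetical.

```python
import hashlib
import hmac
import json

# Stand-in for a publisher's signing key. Real provenance systems use
# public-key certificates so verifiers never hold the signing secret.
PUBLISHER_KEY = b"demo-signing-key"

def sign_manifest(content: bytes, generator: str) -> dict:
    """Attach a signed provenance manifest to a piece of content."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. "example-image-model" or "human/camera"
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the manifest matches the content and the signature is valid."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["sha256"] == hashlib.sha256(content).hexdigest()
    )

image = b"...raw image bytes..."
m = sign_manifest(image, generator="example-image-model")
print(verify_manifest(image, m))          # True: manifest is trustworthy
print(verify_manifest(image + b"x", m))   # False: content was altered
```

The policy point the sketch illustrates: once a signed manifest travels with the file, a platform or a viewer can distinguish AI-generated from authentic content without trusting the uploader's word.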
I don’t think I agree with holding AI companies legally liable for wrongs committed by users as a general rule. Ensuring this tech is widely available is the highest policy priority. The (frequently speculative) secondary safety risks are potentially relevant, but they need to take a back seat to realizing the tremendous economic and societal benefits of broad access. “Foreseeable” harms are not the same as responsibility for a proximate cause; that responsibility and liability belongs to the people who do the bad deeds with the tools.
I am debating what to do with GenAI in videos and films. Reading the law in California, it would be possible (but not required) for a content distributor (including social media) to automatically display a visible tag when AI-generated content is shown. A content producer could also do this voluntarily. This is probably not important for entertainment films, but a news piece or certain social media posts would be different. I can easily imagine AI being used to create hyper-targeted content at scale on social media to influence people into buying things or voting a certain way, a step up from a generic fake video. A visible tag would be similar to what exists in some countries, where photos or videos of models altered with Photoshop and used in advertising must display a warning. As I understand it, the current legal framework does not require the watermark to be visible. Is this where this is going?
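To make the visible-vs-invisible distinction concrete, here is a minimal sketch of the distributor-side decision, assuming the platform has already parsed a provenance manifest like the one sketched above. The field names ("generator") and the labeling rule are hypothetical; the point is that the invisible provenance record and the visible user-facing tag are separate layers, and a law can mandate one without the other.

```python
from typing import Optional

def visible_label(manifest: Optional[dict]) -> Optional[str]:
    """Decide what visible tag, if any, a distributor overlays on content.

    `manifest` is the (hypothetical) parsed provenance record attached to
    the file; None means no provenance data was found at all.
    """
    if manifest is None:
        # No provenance record: under the framework described above,
        # no visible warning is currently required in this case.
        return None
    generator = manifest.get("generator", "")
    if generator.startswith("human"):
        return None
    # Provenance marks the content as synthetic: surface that to viewers,
    # even though the underlying watermark/manifest itself is invisible.
    return "Label: AI-generated content"

print(visible_label({"generator": "example-image-model"}))  # labeled
print(visible_label({"generator": "human/camera"}))         # None
```

Under this framing, the question in the comment above amounts to whether regulation will eventually require the `return` in the synthetic branch, not just the existence of the manifest.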
It’s striking how disinformation kill chains increasingly rely on platforms that once promised privacy - messaging apps. Regulating public-facing content is one thing, but how realistic is it to preserve freedom of thought when key disinformation tactics move into encrypted, unobservable spaces?
I cannot wait to read this important document! Thank you so much for sharing.
Thanks for sharing this, David. From your policy conversations, do you have a sense of who might be open to "act quickly to ban using AI to impersonate real people or organizations, and require the use of watermarking or other provenance tools"? I would love to get a sense of who is already on board. Thanks much for this!
Tagging School of Responsible AI (SoRAI) to add this valuable content to our open access library.
Perfect timing - we are covering mis/disinformation in my AI class as it relates to the media. I'll have to add this to the additional materials for my students.
Important
Find the rest of the briefs in this series, entitled "Legitimate Influence or Unlawful Manipulation?", on the CIGI website here: https://www.cigionline.org/publications/publication-series/legitimate-influence-or-unlawful-manipulation/