Remember JD-Day // Mr. Harris Goes to Paris
The infamous "JD-Day" speech

Bearing Witness in Paris

Sometimes we have to bear witness before we can take action. If you followed my posts about the Paris AI Action Summit last week, you probably know that I’m unhappy with the official outcome. No meaningful progress was made towards creating any binding or enforceable guardrails on AI. [If you missed them, see them here: Macron's speech | leaked declaration | open source | offensive against regulation]

I previously worked on the Responsible AI team at Meta, I use AI tools every day, and I see huge potential for good in AI technologies. I am also dead certain that without being compelled legally to develop AI responsibly, the tech industry will continue to be locked in a race to the bottom to deploy AI products as quickly as possible just to stay competitive. Some of these products are and will be amazing, and others are already causing a plethora of harms, including AI that:

  • encourages a child to commit suicide / kill their parents
  • is trained on and can produce child sexual abuse material (CSAM)
  • produces non-consensual intimate imagery (NCII—aka deepfake pornography of real people), especially of women and children
  • allows scammers to deploy much more believable and personalized scams that disproportionately impact the elderly and people with limited literacy
  • powers campaigns to interfere in elections and democracy through deepfake images, audio and video
  • allows students to subvert their own education by passing off work from AI as their own without fear of getting caught
  • produces discriminatory outcomes in a variety of domains including job applications, credit applications, housing advertisements, predictive policing, criminal sentencing and much more
  • consumes such enormous amounts of energy that a decommissioned nuclear power plant that was the site of a historic accident is being recommissioned just to serve one tech company
  • displaces human workers at a time when social safety nets are being stripped away in many parts of the world

And as AI gets more and more powerful and persuasive, and we see the rise of autonomous “AI Agents” and AI weapons, we can only expect greater and greater need to put guardrails on AI and hold AI developers and deployers accountable when their AI systems cause any manner of additional harms.

Losing the AI Liability Directive

Even in the face of this mounting evidence of the clear need for AI governance, the only major policy development of the Paris Summit was a very bad one. The EU revealed during the summit that they are abandoning the effort to create the AI Liability Directive (AILD), an important follow-on bill to the EU AI Act that was set to become the gold standard for holding AI developers and deployers accountable in clear and systematic ways for harms caused by AI.

Without binding, enforceable AI liability rules, we are counting on the goodwill of tech companies, as well as ill-suited and unpredictable tort law cases in a variety of disjointed legal systems to resolve many questions about who is responsible for AI gone wrong. This uncertainty is bad for people and bad for companies that might not want to be regulated at all, but generally prefer clear rules over chaos and unpredictability.

JD-Day

It seems unlikely (but possible! see the comment below from Dr. Laura Caroli) that it was a coincidence that this EU announcement coincided with the arrival of JD Vance on French soil on February 10, 2025. Perhaps the “accelerationist” community (these are the “AI wants to be free—all tech regulation is always bad” folks) will look back fondly on this day as “JD-Day,” when their American hero arrived on French soil to liberate AI from the menace of meaningful and internationally-coordinated democratic governance of AI. Vance described what was happening precisely when he said in his speech that he liked the “deregulatory flavor” of the AI Action Summit and implored the EU to refrain from "tightening the screws on U.S. tech companies."

This is of course not the type of liberation I would like to see. But we must recognize that JD-Day represents a critical moment in the history of technology policy, and also for the transatlantic partnership more broadly, especially as regards security (Marietje Schaake covered this brilliantly), as was articulated in Vance's speech at the Munich Security Conference a few days after his Parisian diatribe.

Hope on the Horizon - Harmonizing up

I, however, still see some hope on the horizon for those of us who would prefer to put guardrails on AI development through democratically elected leaders and public institutions. The good news is that, even though the AILD may not come to life, we still have the EU AI Act and Digital Services Act, which are both powerful vehicles for steering AI and online platforms towards the public interest.

Unfortunately, we won’t be able to look to the EU alone to lead the charge on tech policy as they’ve done for quite a few years now, as they will undoubtedly be under greater and greater pressure from the Trump-Musk administration to weaken regulation and enforcement.

I still see hope in California, where Governor Gavin Newsom signed 17 different AI bills into law last year alone (and, yes, vetoed one), and where we’re likely to see dozens of additional AI bills introduced this month as well.

So for me, I’ll exercise my right to jurisdictional promiscuity (academics get this along with academic freedom!) and make a few changes to the strategy I’ve had for the past few years. I will continue to focus on supporting binding and enforceable regulation of AI and social media in both Sacramento and Brussels, but I'm hoping to throw a few more jurisdictions into the mix.

If the US Congress invites me back, I’ll of course be happy to help out. The “Take it Down Act” could end up being a significant piece of AI legislation that could make it through this Congress. But this week I've been having conversations with policymakers in multiple US states and with key actors on at least three continents. Now is a great time for other jurisdictions to harmonize upward towards EU tech laws so that there's less incentive for companies to weaken them.

I’ll also be trying to help foster a growing sense of solidarity between the networks of academics, civil society groups and policymakers working on these issues. And I hope that means more events like the Paris AI Action Summit, but with some AI policy progress next time!


Here are some of my favorite pics from the event:

Stuart Russell gave a great closing speech at the #IASEAI25 conference, the first event I attended upon landing in Paris, which included ~600 participants from around the world and was chock full of amazing content and people. Kudos to the team that launched the International Association for Safe & Ethical AI and hosted this blockbuster event at the OECD - OCDE headquarters—well done Stuart, Mark Nitzberg, Atoosa Kasirzadeh, Elizabeth Seger, and many others for planning and hosting, and thank you to Maria Ressa, Margaret Mitchell, Amandeep Gill, Kate Crawford, Max Tegmark, Anca Dragan, and so many others for amazing talks!


^ I got to meet two heroes—Nobel prize winner, Daron Acemoglu, and butterfly-clad AI & environment scholar, Sasha Lucioni.


^ I subsisted on almost exclusively canapés for 6 days. These came from the IASEAI event at OECD HQ.


^ My new friends after the "Global AI Governance: Empowering Civil Society" event, hosted by a great group of organizations! Pictured here (L to R) are Connor Dunlop (Ada Lovelace Institute), Karine Caunes (Digihumanism - Centre for AI & Digital Humanism), Sarah Andrew (Avaaz), Laura Lázaro Cabrera (Center for Democracy and Technology), Jessica Galissaire (Renaissance Numérique), and Caroline Jeanmaire (The Future Society). Thank you for including me as a speaker in this great event!


^ Unbelievable party at the French equivalent of the State Department offices. ( Ministère de l'Europe et des Affaires étrangères )


^ Got to hang out with Kate Crawford (Microsoft & University of Southern California), Anne Bouverot (the summit host!) and Prof Sandra Wachter (Oxford Internet Institute, University of Oxford)


^ And then Justin Trudeau (Prime Minister of Canada) walked by and recorded a video!


^ Audrey Tang's great panel at the AI & Democracy event hosted by Make.org. Thank you to Make.org's Alicia Combaz and Axel Dauchez for hosting a wonderful set of talks and a great party, with a particularly great speech from Henna Virkkunen from the European Commission. (full event video here)


^ OECD.AI hosted events for 3 out of the 6 days that I spent in Paris. Their headquarters is an amazing place to hold a conference—this 5.5-hour session on "The current landscape of global AI standardisation" featured fantastic talks by leaders from government, industry, civil society and academia. It was great to hear from Isabel Ebert, PhD (United Nations), Sebastian Hallensleben (CEN and CENELEC), Zaheed Kara (Frontier Model Forum), Kilian Gross (European Commission), Henry Papadatos (SaferAI), and so many others. Full 5.5 hours of video here: https://www.youtube.com/watch?v=RbNU69XQ4fE


^ Honored to have been in the room for the launch event for ROOST.tools and get to ask a question to an unbelievable panel made up of Eric Schmidt (Schmidt Futures), Audrey Tang (Project Liberty) and Yann LeCun (Meta), moderated by Camille François (Columbia | SIPA). And thanks Rama G. Elluru for encouraging me to attend! Congratulations to Camille and her collaborators, Juliet Shen and Eli Sugarman!


^ Another amazing panel with Martin Tisné (AI Collaborative/Omidyar Network), Amandeep Gill (United Nations), Abeba Birhane (Trinity College Dublin), Vilas Dhar (The Patrick J. McGovern Foundation), Janet Haven (Data & Society Research Institute) and Clem Delangue (Hugging Face). Video here, along with all ~6 hours of talks: https://www.youtube.com/live/CUf6Jb1RxZs?si=BtMVYUWLd4gUOIGk&t=20734.


^ Emmanuel Macron during his charming but ultimately disappointing speech.


^ Loved getting to chat with Hugging Face's Clem Delangue & Irene Solaiman, as well as Omidyar Network leaders Michael Kubzansky and Michele Lawrence Jawando. I finally got to personally ask Clem to take down the models Hugging Face hosts that are trained on CSAM. Fingers crossed he'll take action!


^ The AI Safety Connect side event was also chock full of many hours of great panels. One featured representatives of AI Safety Institutes from around the world, and another (pictured above) included Nicholas B. Dirks (former UC Berkeley Chancellor!), Chris Meserole (Frontier Model Forum), Michael Sellitto (Anthropic), Katarina Slama, PhD (ex OpenAI), Miles Brundage (ex OpenAI), and Roman Yampolskiy (University of Louisville). Thank you to the Mohammed Bin Rashid School of Government and the Future of Life Institute (FLI) for hosting!


^ Anthropic's Sarah Heck, Michael Sellitto, Ashley Zlatinov and their team hosted a magical reception with very memorable canapés, but even better than the food was getting to spend time with Dr. Laura Caroli (Center for Strategic and International Studies (CSIS)), Murielle Popa-Fabre (Council of Europe) and Dan Nechita (Transatlantic Policy Network (TPN) - so sad I failed to get a picture with you Dan!). Laura and Dan played major roles in drafting the EU AI Act, and Murielle was very involved in the Council of Europe AI Treaty, so basically I got a world-class lesson in European AI policy in an unforgettable setting. I also got to spend time with Duncan Cass-Beggs (Centre for International Governance Innovation (CIGI)), a colleague who I look forward to collaborating with more!


^ Google DeepMind also hosted a fantastic little reception inside this immersive video environment where they showed off the Google Arts & Culture museum explorer tool and gave us a very detailed view of the famous Marc Chagall ceiling paintings at the Paris Opéra Garnier. It was so great to finally meet people I've been video calling with about provenance for over a year now, including Google's Raquel Vasquez, Clement Wolf, Alexandra Belias and Adobe's Andy Parsons. It was also great to see Genevieve Smith here from BAIR!


To close, some final thanks and kudos are due:

Thank you to Emmanuel Macron , Narendra Modi , Anne Bouverot , Philippe Huberdeau , Martin Tisné , and so many others for hosting the amazing Summit, and Florian Cardinaux , Emmanuelle Pauliac-Vaujour and their team in San Francisco for the invites to both the summit and the amazing series of pre-events here. Even though the policy outcome wasn't what I wanted, it was an incredible feat of organizing and a series of beautiful venues. And the party at the Ministère de l'Europe et des Affaires étrangères with Polo & Pan and Kavinsky was unbelievable and unforgettable.

Congratulations to the everyone.AI and Paris Peace Forum teams for a great event focused on launching their global coalition to safeguard children in the age of AI—well done Anne-Sophie SERET , Mathilde Cerioli, Ph.D , Celine Malvoisin and so many others! I'm excited to be a signer and a part of this coalition!

Congratulations to HumaneIntelligence and Dr. Rumman Chowdhury , Theodora Skeadas , Sarah A. and others for hosting the very popular AI & Society House, it was a great place to see great people!

Congrats to PRISM Eval 's Nicolas Miailhe and Tom David for launching a spectacular AI evaluation leaderboard that continues to blow my mind every time I look at it.

And even more thanks to everyone who hosted the incredible receptions and side-events and filled me with canapés and macarons all week long!

#ai #AIActionSummit


Theodora Skeadas

It was so great to see you in Paris, David! Thanks for all your insights

Wayne Cleghorn

This is an amazing dispatch, analysis and commentary from Paris David Evan Harris. So much included. I will keep coming back to it.

Dr. Laura Caroli


It was great hanging out with you on multiple occasions around town! About the AI liability withdrawal: while it certainly gained traction because of the Summit and Vance’s speech etc, the date was not necessarily chosen on purpose. The withdrawal was announced as part of the Commission Work Programme 2025, which the Commission normally presents annually at that time of year. If you check the February agenda of the Parliament’s plenary (happening that week but announced weeks earlier) you will find the debate point on the work programme already well in advance. For many in the Brussels bubble, this outcome was foreseeable months ago, in particular as the Council made it increasingly clear they didn’t want it. While of course the withdrawal is a kind of drastic solution linked to the new accelerationist wave, it should be put in context, as with similar complex processes.

Julia Irwin, PhD


Such a thorough analysis - thank you. Also very glad you were able to join us at IASEAI’25
