Mr. Harris Goes to Washington (White House, EU, OpenAI...)

What do you do when you think some laws need to get made? Go to Washington. And so I did.

My whirlwind visit (60 hours!) was a blast—I met with policymakers at the White House and the European Union Delegation to the United States; gave a talk about AI & elections; met with experts and leaders at top DC think tanks; and even hosted a happy hour for fellow members of the Integrity Institute.

The trip was spurred by an invitation from OpenAI and the Bipartisan Policy Center to give a talk at a workshop they were co-hosting on AI and elections. The roughly three dozen participants hailed from leading civil society organizations, foundations, and academic institutions, each working on different aspects of this challenging topic. Since the gathering was held under the Chatham House Rule, I can't share much detail. Funnily enough, months before I knew I'd be in Washington, I had already been scheduled to give a Zoom talk on the same topic to the Michigan Institute for Data and AI in Society the day after the workshop—that talk is now on YouTube (link in comments). The workshop itself was absolutely fantastic—kudos especially to the OpenAI and BPC teams for hosting. The best part, though, was all the follow-ups with attendees. I've already got a new co-authored piece in the works with Larry Norden from the Brennan Center for Justice at New York University. I also got the surprise bonus of meeting John Sands of the Knight Foundation in person for the first time!

On Day 2, I met with officials at the White House Office of Science and Technology Policy to share my views on AI, elections, open source, misinformation, the recent Voluntary AI Commitments that they secured, and what I hoped might be included in future executive orders on AI. They were particularly interested in my take on the specific risks of open source AI, which I hope is an area that they'll be able to address through regulation—requiring that open source model developers transparently assess and mitigate the risks of their models before they are released to the public (which includes our enemies), and that they be held accountable for harms that their technologies facilitate. The risks I focused on are the generation of misinformation, the powering of influence operations designed to sway elections, and the ways that AI systems can amplify existing social inequities through bias. Regulating this may be a tall order, especially given the enormous lobbying power of industry, but at least I know I was heard, and I could see clearly that the people on the inside really understand the issues and are working hard on this. There was a nostalgia bonus too—the meeting took place in the same part of the White House complex where I worked as a Confidential Assistant at the Office of Management and Budget and an intern at the White House Council on Environmental Quality 23 years ago.

On Day 3, I met with the European Union Delegation to the United States, specifically with people who work on the EU's digital policy and antitrust efforts. We spoke at length about my new favorite law, the Digital Services Act, and how it will be enforced, as well as what I'm hoping they'll include in the EU AI Act, which is currently being negotiated. In particular, I'm hoping that the AI Act classifies general purpose AI models as high risk, due to the many possible ways they can be misused, and that these models be held to the same standards as the narrower "high risk" models demarcated in the 2021 draft of the AI Act, which was released before the explosive growth of generative AI. I also hope that they make specific provisions regarding the release of open source AI models, again holding model developers (not just deployers) responsible for the harms that their models facilitate, and requiring rigorous pre-release and pre-deployment risk assessment and mitigation. Ultimately, my hope is that the White House, the EU, Congress, individual states, and other countries can work together to make sure that the makers of AI systems are legally held accountable for assessing and mitigating the risks of their products. Just as is the case in so many other industries, from medicine to architecture, if they can't mitigate the risks, they should not be permitted to release their products. If we allow them to release risky products without liability, it means they're pushing the risks onto society, and that's exactly what policymakers should be stopping—before it's too late.

Between those meetings, I zipped around to a series of rapid rendezvous with an inspiring set of DC policy people, the likes of whom are sadly quite hard to find here in San Francisco. After many years of admiring her work from afar, I finally met Kat Duffy in her new office at the Council on Foreign Relations. I reconnected with Ivan Sigal of Global Voices, a long-time friend and collaborator. I visited Joshua A. Bell and saw his absolutely phenomenal new exhibit at the Smithsonian Institution National Museum of Natural History about the cellphone (see photo). I finally met Renee DiResta, a DC-based researcher at the Stanford Internet Observatory, whose writings I have devoured for years, and who generously showed me the incredible Gravelly Point plane-watching spot and walked me from there to security. I had nostalgic coffees/drinks/walks with my former Facebook/Meta colleagues, Miranda Bogen, Katie Harbath, Theodore Wilhite, Will N., and Eva Guidarini; old friends like Tom Glaisyer and Dan Shore; and new friends Anika Gupta, Naomi Nix, Todd O'Boyle, and Greg Johnson.

A final highlight (yes, I still love to throw a party!) was hosting a happy hour for members of the Integrity Institute and friends on the rooftop of the incredible Hotel Zena, which is "dedicated to female empowerment." And yes, they have a portrait of RBG made entirely out of tampons.

While I wasn't traveling to DC in any official capacity—I represented only myself in all of my meetings—I have to thank OpenAI and the Bipartisan Policy Center for inviting me to DC, as well as the organizations that have given me the time and space this year to build my expertise and share my views on these areas, especially the Knight Foundation, ICSI - International Computer Science Institute, the Centre for International Governance Innovation (CIGI), the Psychology of Technology Institute, the Integrity Institute, CITRIS and the Banatao Institute, and the University of California, Berkeley, Haas School of Business. Next stops are Dublin in October and Geneva next March. Perhaps I'll squeeze in Brussels or Sacramento somehow too.

Obligatory selfie at the West Wing Entrance
Josh Bell and me at the Smithsonian Cellphone exhibit—he curated it! Don't miss it!
The Integrity Institute + Friends Rooftop Happy Hour
Another happy hour shot—so great to see Katie & Teddy together again!
I stumbled onto a group of photographers shooting the moonrise over the Capitol.
Had to pay a visit to Abe, the Mall, the Washington Monument, and the Reflecting Pool, the last of which, I argue, is a brilliant pun. However, my friend Dan Shore, Professor of English at Georgetown, disagreed.
