Reflections on a big week in AI governance
[Header image: DALL-E expressionist interpretation of the world of computer chips and semiconductors]

Last week, President Biden released a comprehensive Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Vice President Harris made major policy announcements at the AI Safety Summit in London. And OMB issued draft implementation guidance. All of this coincided with the G7’s agreement on guiding principles for AI and a voluntary code of conduct, 28 nations and the EU issuing the Bletchley Declaration to coordinate in support of safe and trustworthy AI, and the UN's appointment of an advisory board on AI governance to provide initial recommendations by the end of 2023. I was privileged to co-lead the AI and Equity work at the White House until I departed earlier this year, and I’m enthusiastic about these new moves. Here are my major takeaways from the Administration’s actions:


1.) President Biden is taking a nuanced, rather than black-and-white, view of the issues surrounding AI. The new EO leans into the promise of AI, prioritizing research and charging agencies to use the technology to advance their mission, while taking seriously its perils, imposing safety- and rights-protecting guardrails at a time when it’s unclear when or whether Congress will act. The EO also rejects the binary posed between focusing on long-term existential threats and addressing more immediate harms in people's daily lives. It invokes the Defense Production Act to require red-teaming and reporting from companies making powerful foundation models that could be used to threaten national security; imposes monitoring and reporting requirements on cloud providers to detect malicious AI development by foreign actors; and requires new rules for labs to protect against the creation of biological weapons. At the same time, as Vice President Harris said powerfully in London, uses of AI that deny people healthcare, spread vile deepfakes, result in unjust incarceration, or destabilize democracy are equally "existential" for those affected. That's why the EO has a number of provisions tackling the proximate threats of bias, mis- and disinformation, and invasion of privacy. We need to address all of these challenges at once.


2.) The Administration continues to prioritize equity, civil rights, and social and economic well-being. The AI EO explicitly builds on the October 2022 AI Bill of Rights (shout out to Alondra Nelson, Suresh Venkatasubramanian, Sorelle Friedler, Ami Fields-Meyer, Clarence Wardell III, PhD, and Alex Pascal, among others), as well as Executive Order 14091, the February 2023 order that required agencies to advance equity in their design, development, acquisition, and use of AI. The new EO has a host of relevant taskings and encouragements: CEA and DOL reports on supporting workers who could be displaced and guidance about workers’ rights; a DOJ report on the use of AI in the criminal justice system; FHFA and CFPB efforts to combat discrimination in the housing and consumer finance markets; HUD/CFPB guidance on tenant screening systems; FTC action to guard against anti-competitive, monopoly behavior (DOJ will be important here too); an HHS strategy to regulate AI in drug development; and NSF research on privacy-enhancing technologies, to name a few. There is significant variation in the detail of these mandates. The criminal justice provisions are quite developed, calling out the use of automated technology in predictive policing, forensic analysis, and early release determinations (building on elements of the May 2022 Policing EO). Others are more general and will need to be fleshed out in implementation.


I want to call out two particularly important provisions, both of which double down on mandates in EO 14091, that should not escape notice: (1) the EO directs DOJ to coordinate the various agency civil rights offices in enforcement efforts against legal violations related to AI; and (2) the EO requires agencies to ensure those civil rights offices are consulted in agency development and use of AI in the administration of federal programs. The point is, existing civil rights law applies to AI. Look for more enforcement and other actions, potentially including clarifying regulations and guidance, from civil rights offices.


OMB’s draft guidance, too, is structured around protecting public safety and rights. It lays out minimum practices that must be followed before using new AI and on an ongoing basis thereafter, effective August 2024; existing uses that don't comply must stop. These minimum practices include impact assessments comparing benefits and risks, testing in real-world contexts, independent evaluations by staff not involved in the AI's development, risk mitigation through human oversight, and public notice. Rights-impacting AI uses (defined in the guidance) carry additional requirements, including proactively identifying and removing factors contributing to algorithmic discrimination, mitigating disparate impact, notifying people who are negatively impacted by the AI's use, and giving people an opt-out option where practicable.


3.) The AI EO is not just about AI risk mitigation; it’s also about leveraging AI to solve our most urgent societal challenges. For example, agencies are directed to support the development of personalized immune-response profiles for patients, improve health care data quality, improve veterans’ healthcare, accelerate permitting for clean energy projects, address climate change, and ensure more eligible Americans receive their public benefits. I think even more can be done in this vein, using AI to identify equity gaps and adjusting federal programs to reach Americans who have been left behind in today's economy.


4.) Last week’s commitments create a host of new opportunities for the public to shape agency policy. The AI EO repeatedly requires agencies to consult – with companies, civil rights groups, academics, and others – before proposing regulations or issuing guidance. Under the OMB draft guidance, agencies “must” consult affected groups before using AI and, when they hear negative feedback, “must” consider not deploying it. Researchers and advocates should dedicate resources to plugging into these efforts. Importantly, the philanthropic sector is stepping up to help them do so. Vice President Harris worked with 10 foundations to announce $200 million to support public interest efforts that promote responsible AI innovation and mitigate harms. They include The David and Lucile Packard Foundation, Democracy Fund, Ford Foundation, Heising-Simons Foundation, MacArthur Foundation, Kapor Foundation, Mozilla Foundation, Omidyar Network, Open Society Foundations, and Wallace Global Fund.


5.) It will be a major undertaking to implement this EO effectively. The White House clearly gets this. The President has centralized coordination in a new White House AI Council led by the WH Deputy Chief of Staff for Policy. OMB released its governance and risk management guidance on the EO’s heels. And NIST is launching a new AI Safety Institute, which will create many of the guidelines and tools called for in the EO, including on red-teaming, authentication and watermarking, privacy-preserving AI, and preventing algorithmic bias.


Staffing up across the interagency will be harder. Every agency has to appoint a Chief AI Officer; the DPA reporting requirements mean agencies will need technologists who understand the data and testing results reported by industry; and it will take untold numbers of new staff to leverage AI for the public good and impose the right guardrails to ensure safety and protect rights. The President's launch of an AI Talent Surge—relying on excepted service and direct hire authority, incentive pay, pooled hiring, and other flexibilities—shows the Administration intends to be nimble. But in a world where tech companies are also feeling the dearth of talent and are offering much higher salaries, the Administration will need more resources. Congress will need to help.


Congress will need to do much more than provide resources, of course. We cannot ensure the development of safe, human-centric AI or the protection of civil rights and civil liberties without congressional regulation. That said, the Administration’s actions are aggressive. Dozens and dozens of actions will follow from the President’s EO. Taking action now, and being responsive to the communities most at risk from AI’s potential harms, will put us in a better position to achieve durable support and regulation through legislation.

Jared Alessandroni

Founder, Startup CTO, AI Innovator, SXSW 2024 Finalist

1y

Thanks, Chiraag. Wondering about your take on the position that, much like SBF leading the charge to *regulate* crypto, the reason we see big players lobbying for more regulation in the AI space is that complex regulation favors incumbent tech leaders? I know your text didn't really get into that, but it's something that weighs on me as someone in the startup space, especially as we consider the very dangerous repercussions of failing to regulate what will soon be such a major economic driver.


Chiraag Bains this breakdown is really helpful for us in the lay AI community. I appreciate your work here.