Key Takeaways from the National Security Memorandum on AI

On October 24, 2024, the White House released a national security memorandum focused on advancing the United States’ leadership in fostering safe, secure, and trustworthy artificial intelligence (AI).

The memorandum outlines the Biden-Harris Administration’s vision for leading the world’s development of safe, secure, and trustworthy AI. It expands on themes from the Administration’s Executive Order and OMB memo on AI.

The memorandum is a lengthy document that issues directives to a wide range of agencies. Five parts of the memorandum are especially important for shaping the US’s AI strategy.

1: The Central Role of the US AI Safety Institute

The US AI Safety Institute (AISI) was founded in 2023 as part of the National Institute of Standards and Technology (NIST). In its strategic vision, AISI describes three goals: advancing the science of AI safety, articulating and disseminating the practices of AI safety, and supporting coordination around AI safety.

The national security memorandum gives AISI a critical role in evaluating AI-related national security risks. Even before the memorandum, it was clear that AISI would play an important role in developing and conducting model evaluations on frontier AI systems.

But the memorandum expands on AISI’s role: it clarifies that AISI will be the primary point of contact with industry. This role is especially important given that some of the most significant advances in frontier AI development are occurring within private companies. Several of these companies are preparing for systems with dual-use capabilities in domains like biological weapons and offensive cyber operations, as well as risks from autonomous action. If systems with dangerous capabilities are developed, the national security memorandum makes it clear that AISI will be in charge of coordinating with industry:

In the event that AISI or another agency determines that a dual-use foundation model’s capabilities could be used to harm public safety significantly, AISI shall serve as the primary point of contact through which the United States Government communicates such findings and any associated recommendations regarding risk mitigation to the developer of the model.
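
The memorandum does not spell out how such a determination would be made in practice. Purely as a hedged illustration of the reporting trigger it describes, the sketch below shows one way evaluation scores might be checked against risk thresholds before findings are routed to a developer; the risk domains, threshold values, and all names are hypothetical assumptions, not anything the memorandum prescribes.

```python
# Illustrative sketch only: the memorandum does not prescribe thresholds,
# score formats, or notification mechanics. Every name and value here is a
# hypothetical assumption.

from dataclasses import dataclass

# Hypothetical risk domains mirroring the memorandum's capability categories.
RISK_THRESHOLDS = {
    "cyber_offense": 0.7,
    "bio_chem_uplift": 0.5,
    "autonomous_malicious_action": 0.6,
}

@dataclass
class EvalResult:
    model_id: str
    developer: str
    scores: dict[str, float]  # domain -> normalized capability score in [0, 1]

def domains_to_flag(result: EvalResult) -> list[str]:
    """Return risk domains whose scores meet or exceed their thresholds,
    i.e. the findings that would be communicated to the developer."""
    return [
        domain
        for domain, threshold in RISK_THRESHOLDS.items()
        if result.scores.get(domain, 0.0) >= threshold
    ]

result = EvalResult(
    model_id="frontier-model-x",
    developer="example-lab",
    scores={"cyber_offense": 0.82, "bio_chem_uplift": 0.31},
)
flagged = domains_to_flag(result)
if flagged:
    print(f"Notify {result.developer} about {result.model_id}: {flagged}")
```

In a real process, the thresholds and domains would come from AISI’s evaluation science rather than fixed constants, but the basic shape (scores compared against risk criteria, with flagged findings routed to the developer) gives a sense of how such a communication channel could work.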

The memorandum also clarifies how AISI will coordinate with other national security agencies and senior national security experts. As one noteworthy example, AISI will send certain reports directly to the President’s National Security Advisor (formally, the Assistant to the President for National Security Affairs, or APNSA):

Within 270 days of the date of this memorandum, and at least annually thereafter, AISI shall submit to the President, through the APNSA, and provide to other interagency counterparts as appropriate, at minimum one report.

2: Risks from Autonomous Systems

Dual-use foundation models may possess a variety of capabilities that are relevant to national security and public safety. While much of the national security memorandum focuses on chemical, biological, radiological, and nuclear (CBRN) capabilities, the memorandum also highlights the potential for other kinds of risks.

Most notably, the memo highlights risks from autonomous systems (AI that can “autonomously carry out malicious behavior”) and risks from AI-automated research and development (AI that can “automate development or deployment of other models with such capabilities”). The memorandum instructs AISI to conduct voluntary testing of frontier AI models to assess these risks:

AISI shall pursue voluntary preliminary testing of at least two frontier AI models prior to their public deployment or release to evaluate capabilities that might pose a threat to national security. This testing shall assess models’ capabilities to aid offensive cyber operations, accelerate development of biological and/or chemical weapons, autonomously carry out malicious behavior, automate development and deployment of other models with such capabilities, and give rise to other risks identified by AISI. AISI shall share feedback with the APNSA, interagency counterparts as appropriate, and the respective model developers regarding the results of risks identified during such testing and any appropriate mitigations prior to deployment.

Evaluating risks from autonomous systems is a particularly novel and challenging scientific enterprise. Until recently, large language models have generally lacked agentic capabilities. As AI progresses, AI agents will become increasingly relevant.

The memo reflects this understanding by clarifying that AISI is tasked not only with evaluating “traditional risks” (like biological or cyber risks) but also with “novel risks” that are unique to agentic AI systems (like risks from autonomous systems and risks from AI R&D automation). This approach is also consistent with that of the UK AI Safety Institute, which already has a division focused on examining risks from autonomous systems.
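
To make this concrete, here is a minimal, hypothetical sketch of what an agentic capability evaluation loop might look like. It is not AISI’s or the UK institute’s actual methodology; the harness structure, the `model_step` interface, the task names, and the pass/fail scoring are all invented for illustration.

```python
# A toy agentic-evaluation harness. Purely illustrative: the interface,
# step budget, tasks, and scoring are hypothetical assumptions, not any
# safety institute's real methodology.

def run_agentic_eval(model_step, tasks, max_steps=20):
    """Score how often an agent completes multi-step tasks autonomously.

    `model_step` is assumed to be a callable (task, history) -> action,
    where returning the string "DONE" signals task completion.
    """
    completed = 0
    for task in tasks:
        history = []
        for _ in range(max_steps):
            action = model_step(task, history)
            history.append(action)
            if action == "DONE":
                completed += 1
                break
    return completed / len(tasks)

# Toy stand-in for a model under test: finishes any task after two steps.
def toy_model_step(task, history):
    return "DONE" if len(history) >= 2 else f"work on {task}"

success_rate = run_agentic_eval(toy_model_step, ["task_a", "task_b"])
print(f"Autonomous task completion rate: {success_rate:.0%}")
```

The hard scientific problems, of course, live in what this sketch abstracts away: designing tasks that actually proxy for dangerous autonomy, and deciding what completion rate constitutes a national security concern.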

3: Global AI Governance

The memorandum contains a section focused on global AI governance (a “globally beneficial international AI governance landscape”). The memo highlights that the US aims to “support and facilitate improvements to the safety, security, and trustworthiness of AI systems worldwide.”

To achieve this, the memo directs the Department of State to produce a “strategy for the advancement of international AI governance norms”:

The Department of State, in coordination with DOD, Commerce, DHS, the United States Mission to the United Nations (USUN), and the United States Agency for International Development (USAID), shall produce a strategy for the advancement of international AI governance norms in line with safe, secure, and trustworthy AI, and democratic values, including human rights, civil rights, civil liberties, and privacy. This strategy shall cover bilateral and multilateral engagement and relations with allies and partners. It shall also include guidance on engaging with competitors, and it shall outline an approach to working in international institutions such as the United Nations and the Group of 7 (G7), as well as technical organizations.

Notably, the memo not only mentions “allies and partners” but also “competitors.” It will be especially interesting to see how the strategy discusses potential engagement with competitors. It is no secret that the United States views artificial intelligence through the lens of great power competition with China: the White House has invested considerably into semiconductor export controls in an effort to prevent China from obtaining advanced AI hardware.

With this context in mind, it will be interesting to see if the United States and China are able to reach any agreements relating to the safe, secure, and trustworthy development of advanced AI. Even during the Cold War, the United States and the Soviet Union were able to negotiate arms control agreements like the SALT and START treaties, which were designed to mitigate the dangers of a nuclear arms race. Even at a time when the United States had strong reasons to doubt the sincerity of Soviet commitments, President Reagan’s “trust, but verify” maxim reflected confidence that the United States could monitor compliance with nuclear agreements.

As AI becomes more capable and its global security threats become more salient, will the United States and China be able to agree to certain standards or principles? And will such agreements have sufficiently strong verification methods to ensure that both sides can be assured that the other party is following through on its commitments? The “guidance on engaging with competitors” section of the international AI governance strategy might be an opportunity for experts to begin exploring these questions and incorporating them into a comprehensive global AI strategy.

4: Recruiting AI Talent

The memorandum recognizes that recruiting and retaining AI talent will be essential for achieving the United States’ AI-related strategic objectives. The memo directs many agencies to consider how they can use existing authorities to bring foreign AI talent to the United States:

On an ongoing basis, the Department of State, the Department of Defense (DOD), and the Department of Homeland Security (DHS) shall each use all available legal authorities to assist in attracting and rapidly bringing to the United States individuals with relevant technical expertise who would improve United States competitiveness in AI and related fields, such as semiconductor design and production.

Within 90 days of the date of this memorandum, the Assistant to the President for National Security Affairs (APNSA) shall convene appropriate executive departments and agencies (agencies) to explore actions for prioritizing and streamlining administrative processing operations for all visa applicants working with sensitive technologies. Doing so shall assist with streamlined processing of highly skilled applicants in AI and other critical and emerging technologies. This effort shall explore options for ensuring the adequate resourcing of such operations and narrowing the criteria that trigger secure advisory opinion requests for such applicants, as consistent with national security objectives.

It also directs a Talent Committee to consider how the federal government can improve its ability to hire and retain AI talent:

Within 90 days of the date of this memorandum, the Coordination Group described in subsection (b)(ii) of this section shall establish a National Security AI Executive Talent Committee (Talent Committee) composed of senior AI officials (or designees) from all agencies in the Coordination Group that wish to participate. The Talent Committee shall work to standardize, prioritize, and address AI talent needs and develop an updated set of Government-wide procedures for attracting, hiring, developing, and retaining AI and AI-enabling talent for national security purposes.

5: Expansive Definition of National Security

Finally, the national security memorandum offers a somewhat expansive view of national security. In addition to public safety threats (like those from AI-enabled biological weapons or risks from autonomous systems), the memorandum directs some agencies to pursue work focused on broader concepts like human rights and civil liberties.

For example, AISI is directed to develop and conduct model evaluations on these topics:

Voluntary unclassified safety testing shall also, as appropriate, address risks to human rights, civil rights, and civil liberties, such as those related to privacy, discrimination and bias, freedom of expression, and the safety of individuals and groups. Other agencies, as identified in subsection 3.3(f) of this section, shall establish enduring capabilities to perform complementary voluntary classified testing in appropriate areas of expertise.

The extent to which such activities should fall under a national security memorandum is somewhat controversial. This area is also more politically polarized than other aspects of public safety and national security.

Proponents of such language argue that AI has major implications for areas like discrimination, bias, and civil rights. AI systems could exhibit harmful biases (for example, in areas like hiring or medical diagnoses) that introduce novel harms to disadvantaged groups or exacerbate biases that may have already been present in society (and in AI training data).
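
What “evaluating bias” means operationally varies, but one common starting point is comparing a model’s decision rates across demographic groups. The sketch below computes one widely used metric, the demographic parity difference, on a toy hiring example; the data, group labels, and choice of metric are illustrative assumptions, not anything the memorandum specifies.

```python
# Toy illustration of one common fairness metric (demographic parity
# difference). The decisions and groups are invented; the memorandum does
# not prescribe any particular metric.

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rates between any two groups."""
    counts = {}  # group -> (positives, total)
    for decision, group in zip(decisions, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + decision, total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Toy data: 1 = hired, 0 = rejected, for applicants from two groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

Even this simple metric illustrates why the topic is contested: choosing which metric to enforce, and at what threshold, is itself a value-laden decision.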

Opponents of this broad approach argue that the federal government should not be trusted to objectively or impartially assess or regulate such areas. They highlight that the federal government could overreach, either by overregulating the AI sector or by defining concepts like “AI bias” in a way that conforms to left-wing ideology. For example, efforts to thwart AI-generated misinformation have been criticized for potentially enabling the government to censor legitimate political positions or control “what content Americans can see and share.”

Yet another factor involves limited resources: the more work that an agency is assigned, the more money it will need in order to perform that work effectively. AISI received $20M in 2024 (compared to about $130M allocated to the UK AI Safety Institute), and its funding situation moving forward is unclear. Will AISI have enough resources to cover both its (narrower) national security and public safety roles, as well as its (broader) roles relating to bias and human rights?

Time will tell how resources are allocated to AISI and how it ultimately prioritizes among its various areas of research. For now, the national security memorandum makes it clear that the current Administration envisions AISI taking on a broader concept of national security that includes concepts like human rights, discrimination, and bias.

Conclusion

There were several other noteworthy parts of the national security memorandum. For example, the National Security Agency’s AI Security Center (AISC) is tasked with developing and performing tests to evaluate models’ offensive cyber capabilities, the Department of Energy is tasked with developing tests for nuclear and radiological risks, and several agencies are directed to invest more resources into stimulating research on AI safety and trustworthiness. I encourage interested readers to view the full document.

Finally, the timing of the memo is worth noting: the memo came out less than two weeks before election day. It will undoubtedly be important to examine how the future Trump Administration or Harris Administration reacts. Will Trump largely overhaul the memorandum, or will he retain many of its core components? Will Harris completely follow the vision set by the memorandum, or will her Administration deviate substantially from the Biden Administration’s AI strategy?

Either way, the ideas in the memorandum could offer a helpful starting point for the next Administration as it develops its vision for AI and national security, and could set a standard for more protective AI governance worldwide.

Authored by: Akash Wasil, Editorial Writer - Fidutam

Edited by: Leher Gulati, Editorial Director - Fidutam
