The Trump administration is developing its AI Action Plan and put out a call for ideas. In response, several of us from the Center for a New American Security (CNAS) have shared 15 pages of ideas to promote and protect America's AI edge. I've highlighted a few below, but you can read the full list through the link.
• Fast-track secure AI infrastructure, including energy generation, transmission, and data centers. America should combine its advantage in AI chips with the ability to deploy them quickly at scale, without waiting years for antiquated permitting processes to resolve.
• Attract top AI talent from around the world. Specifically, the administration should add high-demand, shortage-prone AI jobs to the Department of Labor's Schedule A list, which makes it easier to hire top foreign talent. It can also clarify guidance and coordination for the O-1A and EB-1A visas for foreigners with "extraordinary ability" in AI. Finally, it should create and expand STEM visa and talent exchange programs with our closest security partners, like members of the Five Eyes alliance.
• Develop a comprehensive strategy to promote U.S. AI globally. We need to better balance efforts to restrict competitors like China from accessing American AI, epitomized by the AI Diffusion Rule, with an ambitious vision to promote it among close partners and "swing" states.
• Partner with industry to boost security at AI labs and data centers. As the capabilities of U.S. AI models grow, so will the incentive for sophisticated foreign adversaries to steal or sabotage them. For example, government and industry can partner to develop best practices for securing AI model weights.
• Establish mechanisms to detect and track real-world AI incidents. The government needs a systematic and secure way to track and learn from AI incidents as adoption accelerates. This mechanism would also help build an evidence base to tailor future policymaking.
• Promote foundational research to make frontier AI models more robust and reliable. The potential of AI adoption across the national security enterprise is great, but so is the bar for trust and reliability. Instead of being passive consumers of emerging AI capabilities, agencies like DARPA and IARPA should fund high-risk, high-reward projects to drive breakthroughs in AI robustness and reliability.
You can see the full rundown of all 17 recommendations from me and my CNAS colleagues Paul Scharre, Becca Wasser, Janet Egan, Josh Wallin, Bill Drexel, Caleb Withers, and Michael Depp here: https://lnkd.in/guztuYPz