Decoding the 'American' AI Framework: What You Need to Know

Decoding the 'American' AI Framework: What You Need to Know

AN AI FRAMEWORK?! FOR THE USA?!

I hope you're half as excited and intrigued as I am.

Oh, and welcome to TomTalks??!

In case this hasn't popped up on your radar yet, the Biden administration issued an executive order regarding artificial intelligence on October 30, 2023. It's what many of us have been clamoring for . . . at least in theory. So what is it in reality?

For starters, it's very "American." The order promises to help Americans know if content is AI-generated, provide Americans with necessary skills, support American workers, and protect Americans' rights.

In total, it uses the word "American" 14 times. It's not an obscene amount for such a long text, but just take that for what you will.

If you'd like more official reporting, here are some places to get started:

  1. Executive order in its entirety (via The White House)
  2. Condensed Fact Sheet (via The White House)
  3. Other sources of coverage in the mainstream media, take your pick:

NY Times CNN CNBC ABC Reuters Politico

Scientific American Ernst & Young Vox

Barack Obama The Verge MIT Technology Review


Disclaimer:

Insights for the overview below were gathered by ChatGPT before extensive review and editing on the part of yours truly. ?? (If you want ChatGPT to read a many-page document, make sure you've enabled one or more relevant beta features.) The devil is in the details, so ideally, refrain from making any important decisions without reading the whole report.

Off we go!

Overview

New rules for AI companies

  • AI systems must undergo robust, reliable, repeatable, and standardized evaluations before deployment.
  • AI systems must be tested for security risks, especially concerning biotechnology, cybersecurity, critical infrastructure, and national security.

Guardrails for development and distribution

  • AI must be safe, secure, and ethically developed.
  • AI should be developed in a manner that promotes responsible innovation, competition, and collaboration.
  • AI should support workers and not be deployed in ways that undermine rights or worsen job quality.
  • AI policies must advance equity and civil rights, ensuring they don't deepen discrimination or bias.
  • AI should protect consumers, especially in critical fields like healthcare, financial services, education, housing, law, and transportation.
  • Privacy and civil liberties must be protected as AI advances.

Implications for businesses using enterprise AI

  • Businesses must ensure AI systems are tested, understood, and have mitigated risks before deployment.
  • Companies must be transparent about when content is generated using AI.
  • Businesses should promote responsible innovation and tackle intellectual property questions.
  • Companies should ensure AI does not undermine worker rights or introduce new health and safety risks.

Implications for entrepreneurs

  • The government will promote a fair, open, and competitive ecosystem for AI, ensuring small developers and entrepreneurs can drive innovation.
  • There will be support for a marketplace that harnesses the benefits of AI to provide new opportunities for entrepreneurs.

Implications for creatives

  • While not explicitly mentioned, the emphasis on intellectual property questions and the responsible use of generative AI implies that creatives using AI in their work will need to be aware of new standards and best practices.

Implications for consumers

  • Consumers will benefit from labeling and content provenance mechanisms, helping them determine when content is AI-generated.

  • There will be increased consumer protections against fraud, bias, discrimination, and other harms from AI, especially in critical sectors.

Implications for geopolitics and socioeconomics

  • The U.S. aims to lead in global societal, economic, and technological progress in the era of AI.
  • The order recognizes the potential national security implications of AI, especially in areas like biotechnology, cybersecurity, and critical infrastructure. It emphasizes the need to address these concerns proactively.
  • Building and maintaining public trust in AI is a recurring theme in the order. The government aims to ensure that AI systems are transparent, accountable, and do not perpetuate or amplify biases, which is crucial for public acceptance and the successful integration of AI into various sectors.
  • The U.S. will engage with international allies and partners in developing a framework to manage AI’s risks and promote common approaches to shared challenges.
  • The order acknowledges the economic potential of AI and its ability to drive growth, innovation, and job creation. It underscores the importance of harnessing this potential while ensuring that AI does not undermine job quality or worker rights.
  • The government seeks to promote responsible AI safety and security principles with other nations, leading global conversations to ensure AI benefits the world without causing harm.


Implementing and enforcing: Who does what?

1. Federal Agencies:

  • The Office of Science and Technology Policy (OSTP): Tasked with coordinating efforts across the Federal Government to ensure the safe and secure development and use of AI. They are also responsible for engaging with the private sector, academia, and other nation-states.
  • The National Institute of Standards and Technology (NIST): Directed to work with the private sector and other stakeholders to develop guidelines and best practices that promote industry standards for the development and deployment of AI. They are also responsible for developing resources for generative AI and incorporating secure development practices for AI models.
  • The Department of Defense (DoD): Tasked with launching initiatives to create guidance and benchmarks for evaluating AI capabilities, especially those that could pose threats in areas like nuclear, biological, and chemical security.
  • The Department of Commerce: Along with the DoD, they are responsible for developing tools and testbeds for evaluating AI capabilities.

2. Private Sector:

  • AI Laboratories: They are expected to collaborate with the Federal Government to ensure the development of safe and secure AI. This includes sharing information about AI capabilities and vulnerabilities.
  • AI Developers and Companies: They are expected to adhere to the guidelines and best practices developed by entities like NIST. This includes ensuring AI systems undergo evaluations before deployment, testing for security risks, and developing effective labeling and content provenance mechanisms.

3. Academia and Civil Society:

  • They are expected to collaborate with the Federal Government and provide expertise and insights into the development and deployment of AI. This collaboration aims to ensure a comprehensive understanding of AI's potential risks and benefits.

4. Third-party Evaluators:

  • They play a crucial role in the AI red-teaming tests, ensuring that AI systems are safe and secure before deployment.

5. International Allies and Partners:

  • The U.S. will engage with international allies and partners to develop a common framework for managing AI risks. This collaboration aims to promote responsible AI safety and security principles globally.


Bottom line

  • This executive order emphasizes the importance of responsible AI development and its potential impacts on various sectors of society. It sets the stage for a more regulated and standardized approach to AI in the U.S., with implications for businesses, workers, consumers, and international relations.
  • The order emphasizes the importance of continued investment in AI research and development. This includes both basic and applied research to advance the state of the art and address the challenges posed by AI.


Still with me? Here's my personal spiel:

For the time being, there are a lot of unanswered questions, from what exactly The White House refers to by "AI systems" to whether or not there will be sufficient incentives or deterrents in place to really enforce the order. And crucially, will AI industry leaders, lobbyists, and the politicians and officials in charge of carrying out this order be trundling back and forth through revolving doors of the sort we see in pharma, biotech, and agriculture?

In short, will this order actually lead to AI being any more safe, secure, or trustworthy?

It's very hard, if not impossible, to say. I think of the order as a sort of first draft. Not that it wasn't thoroughly thought through and undoubtedly revised many times over already, but that the real feedback will come in the form of action from the industry, the public, and the international community. That's when a better picture will emerge of what this all will mean. At that point, of course, regulators will probably have already fallen well behind once again -- technology moves at the pace of innovation, legislation moves at the pace of consensus.

If you see any great or at least thought-provoking takes on the order, tag me or send them my way, because I'm eager to hear a diverse bunch of voices on this topic.

And as always, thanks for reading!


Oh yeah -- let's not forget a ?? Quote of the week:

"The law and technology are historically first adversaries, then collaborators."

- Lawrence Lessig, American legal scholar and activist


AI and its influence are growing exponentially, creating a world filled with deep opportunities and even deeper unknowns. TomTalks?? is a weekly exploration of the benefits, risks, and costs of AI adoption, featuring brief but crucial conversations with AI experts and global business leaders. Hosted by award-winning innovation expert Tom Popomaronis.

Want to be a guest on a video podcast for TomTalks? Use this form.

Pavel Uncuta

??Founder of AIBoost Marketing, Digital Marketing Strategist | Elevating Brands with Data-Driven SEO and Engaging Content??

4 个月

Exciting news on AI regulations! Looking forward to diving into the details in TomTalks. #AI #innovation ??

回复
Lawrence (Larry) Pixa

Foreign Affairs, Science & Technology Policy: Emerging Science & Tech | Strategy | Partnership Design | R&D Collaboration

1 年

Tom, this is a terrific synthesis and take on the EO. Many thanks for providing this. I second your statement (below) and will say you were being very diplomatic when you imply "probably;" regulators can only fall behind if they were current to begin with and most are woefully lacking digital/technical acumen to lead on technology, much less for AI, IMHO. "...regulators will probably have already fallen well behind once again -- technology moves at the pace of innovation, legislation moves at the pace of consensus."

回复
Praful Krishna

Product | AI | Leadership

1 年

IMO too early to say Tom. Lawyers will make a feast of it and could back with some real tactical next steps, or just a display banner the way it happened with GDPR :-) Exciting times ahead, in either case. Thanks for the tag, though.

回复
Ryan Turpin

Executive Ghostwriter | Articles | Speeches | Sustainability Reporting | CSRD & ESRS | External Communications

1 年

I agree Tom, this is just the beginning, and a superficial read doesn't show anything very odd or concerning. It's not the end all, be all, and instead it WOULD be concerning if we went many more months without something like this...

Thanks Tom Popomaronis for boiling down a complex and important subject/event and presenting it in an easy to follow manner. Keep up the good work….. My take is that no matter how detailed a law or mandate the government tries to put down; it will be hard to determine where normal techmology ends and AI begins and this grey area will keep getting greyer as time passes secondly, what is the penalty for an AI crime if proven it has occurred ? To me, this is a good beginning but lacks substantial context. But as you say, its a good start.

回复

要查看或添加评论,请登录

Tom Popomaronis的更多文章

社区洞察

其他会员也浏览了