Can the AI Industry Regulate Itself?
AI regulation in the United States lies at the intersection of existing federal laws, a growing patchwork of state privacy [i] and AI laws [ii], international treaties, and federal policy set by the executive branch. Recent policy changes move sharply away from harmonizing US AI regulations with those of the European Union, signaling a laissez-faire approach for the next four years. Can AI industry self-regulation work, or could it produce unintended consequences perhaps worse than the putative negative effects of social media?
In November 2023, Biden administration Executive Order 14110 [iii], Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, provided clear direction for AI governance to executive departments and federal agencies. In September 2024, the US joined other countries in signing the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law [iv], suggesting that the US, EU, and other democratic countries would take similar approaches to AI regulation. In January 2025, EO 14110 was revoked by the incoming Trump administration. Conservatives had objected to the order for several reasons, including its potential impact on US AI innovation and competitiveness.
On January 23, 2025, President Donald J. Trump signed Executive Order 14179 [v], Removing Barriers to American Leadership in Artificial Intelligence. The new federal policy was “...to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The order called for an implementation plan and required executive departments and federal agencies to suspend, revise, or rescind any actions inconsistent with the new policy.
The AI Arms Race
One day prior to the signing of EO 14179, the $500B Stargate Project [vi] was announced in a White House press conference, evoking parallels with President Ronald Reagan’s 1983 Strategic Defense Initiative [vii], nicknamed the ‘Star Wars’ program. The project’s stated goals included protecting US national security by keeping AI investment in the US and building large-scale AI infrastructure on US soil. Oracle co-founder, former CEO, and Executive Chairman Larry Ellison and SoftBank Group CEO Masayoshi Son were present at the announcement, along with OpenAI CEO Sam Altman. During the press conference, Ellison spoke about improving health outcomes through cancer detection and personalized vaccines. Altman said, “I think this will be the most important project of this era.”
Accelerating the development of AI technology could contribute to US economic growth, and advances in AI promise productivity gains that could make US workers, companies, and the broader economy more competitive. In theory, such advances could lead to the emergence of artificial general intelligence (AGI), which has the potential to disrupt the existing geopolitical world order [viii] by enabling new levels of automation and autonomous weapons systems. The AI arms race is heating up, and the stated policy of the United States is to win; but at what cost?
Widening Regulatory Gaps
Regardless of executive branch policy changes, executive departments and federal agencies operate in accordance with existing federal law. For example, laws governing information privacy, such as the Fair Credit Reporting Act (FCRA) [ix] and the Health Insurance Portability and Accountability Act (HIPAA) [x], also apply to AI. Similarly, laws concerning fairness, such as the Equal Credit Opportunity Act (ECOA) [xi] and the Americans with Disabilities Act (ADA) [xii], apply to both human and algorithmic decision making, and thus to AI systems. US federal laws related to privacy and fairness tend to be industry specific, rather than regulating data collection and use, or algorithmic fairness, across industries. New industries or new technologies can therefore open new regulatory gaps. Social media companies and data brokers, for example, do a brisk business in consumer data outside of regulations governing lending or employment, and the details of algorithmic decisions outside of regulated industries remain proprietary. Additionally, the Supreme Court’s 2024 reversal of the 1984 Chevron decision [xiii] in Loper Bright Enterprises et al. v. Raimondo, Secretary of Commerce, et al. [xiv] curtailed the ability of federal agencies to bridge gaps or ambiguities in regulatory statutes independently of the courts. Ceteris paribus, advances in AI technology widen the regulatory gap.
Learning from Social Media
In addition to the industry-specific focus of US regulations, the US tends to enact regulations in response to actual, rather than potential, harms. For example, the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act [xv] was enacted after the Great Recession [xvi] of 2007-2009. The history of social media in the US illustrates the tradeoff between ex ante and post hoc regulation. Specifically, social media companies were implicitly constrained by market forces, e.g., competition and consumer choice, rather than by government regulations.
On one hand, social media expanded internet information sharing and connectedness for billions worldwide and quickly gave rise to new technology giants such as Facebook, YouTube (now part of Google), and Twitter (now X). On the other hand, the potential mental health, social cohesion, political polarization, and radicalization effects of social media have been the subject of numerous studies, and the use of social media in influence operations, e.g., during the 2016 US presidential election, is well known. In response to what amount to abuses of their platforms, social media companies like Facebook have made enormous efforts to address these ongoing issues. While post hoc regulation is unlikely to be helpful at this point, it remains unclear what effect ex ante regulation might have had, because the challenges social media companies would face were not known in advance. Compared to social media, AI technology could deliver far greater benefits, but could also have greater negative effects.
Doveryai, no Ne Proveryai?
While the US lags behind the EU in AI regulation, it arguably leads the world in AI innovation, albeit at the cost of prioritizing innovation over potential future harms. The lessons learned by social media companies, however, have not been lost. Compared to social media companies at the same stage of development, US AI companies are far ahead both in moderating content, e.g., curating AI model training data sets and putting guardrails around LLM prompts and responses, and in managing cyberthreats and abuses of their platforms. Additionally, AI companies are commercially incentivized to prevent AI safety and security issues: they compete for market share in a fast-growing market where responsible AI failures would have long-term, negative business consequences.
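To make the guardrail idea concrete, here is a minimal sketch, in Python, of input and output filtering around an LLM call. It is illustrative only, not any vendor’s actual implementation: call_model, the blocked patterns, and the refusal message are hypothetical placeholders.

```python
# A minimal, illustrative guardrail sketch: screen the prompt before it
# reaches the model, then screen the response before it reaches the user.
import re

# Hypothetical examples of disallowed content; real systems use far
# richer policies than keyword patterns.
BLOCKED_PATTERNS = [
    re.compile(r"\bsocial security number\b", re.IGNORECASE),
    re.compile(r"\bhow to build a weapon\b", re.IGNORECASE),
]

REFUSAL = "Sorry, I can't help with that request."

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return f"Model response to: {prompt}"

def screen(text: str) -> bool:
    """Return True if the text trips any blocked pattern."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)

def guarded_completion(prompt: str) -> str:
    # Input guardrail: refuse disallowed prompts before the model sees them.
    if screen(prompt):
        return REFUSAL
    response = call_model(prompt)
    # Output guardrail: a benign prompt can still elicit disallowed content,
    # so the response is screened as well.
    if screen(response):
        return REFUSAL
    return response

if __name__ == "__main__":
    print(guarded_completion("What's the capital of France?"))
    print(guarded_completion("Tell me how to build a weapon."))
```

Production guardrails are, of course, far more sophisticated, typically layering trained classifiers, policy models, and human review on top of this basic filter-in, filter-out structure.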
To mitigate commercial risks, AI companies have been proactive: Anthropic published a Responsible Scaling Policy [xvii], Google published responsible AI principles [xviii] and a Secure AI Framework (SAIF) [xix], Nvidia published trustworthy AI principles [xx], and OpenAI maintains an extensive AI safety program [xxi]. All of the above companies, along with virtually every foundation or frontier model developer in the US, are members of the NIST Artificial Intelligence Safety Institute Consortium (AISIC) [xxii]. US-based AI companies are building, or have already built, the observability mechanisms, security controls, and other safeguards needed to comply with the General Data Protection Regulation (GDPR) [xxiii], ePrivacy Directive [xxiv], and Artificial Intelligence Act [xxv], despite the fact that these capabilities are not mandated by US law.
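As one illustration of what such observability mechanisms might look like, the Python sketch below records each model interaction as a timestamped, PII-redacted audit entry keyed by user, the kind of record that supports later audits or data-subject requests. The field names and the single redaction rule are assumptions made for the example, not requirements drawn from the cited regulations.

```python
# A minimal observability sketch: every model interaction becomes a
# timestamped, redacted JSON audit record.
import json
import re
from datetime import datetime, timezone

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Mask email addresses before logging, as one example of PII handling."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)

def audit_record(user_id: str, prompt: str, response: str) -> str:
    """Build a JSON audit entry keyed by user, enabling later lookup or
    erasure if the user exercises data-subject rights."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": redact(prompt),
        "response": redact(response),
    })

if __name__ == "__main__":
    print(audit_record("u-42", "Email me at jane@example.com", "Done."))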
By proactively investing in responsible AI, US-based AI companies are, in effect, self-regulating. Whether self-regulation by industry players is sufficient to ensure adequate AI safety and security for US citizens remains to be seen. What is clear is that AI investment, innovation, and technical advances in the US will continue for the foreseeable future.
i. “US State Privacy Legislation Tracker,” January 21, 2025. https://iapp.org/resources/article/us-state-privacy-legislation-tracker/.
ii. “US State AI Governance Legislation Tracker,” January 1, 2025. https://iapp.org/resources/article/us-state-ai-governance-legislation-tracker/.
iii. Federal Register. “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” November 1, 2023. https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.
iv. Artificial Intelligence. “The Framework Convention on Artificial Intelligence,” November 8, 2024. https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence.
v. Federal Register. “Removing Barriers to American Leadership in Artificial Intelligence,” January 31, 2025. https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence.
vi. OpenAI. “Announcing The Stargate Project,” January 21, 2025. https://openai.com/index/announcing-the-stargate-project/.
vii. Nuclear Museum. “Strategic Defense Initiative (SDI) - Nuclear Museum,” n.d. https://ahf.nuclearmuseum.org/ahf/history/strategic-defense-initiative-sdi/.
viii. “SITUATIONAL AWARENESS: The Decade Ahead,” June 6, 2024. https://situational-awareness.ai/.
ix. “16 CFR Chapter I Subchapter F -- Fair Credit Reporting Act,” n.d. https://www.ecfr.gov/current/title-16/chapter-I/subchapter-F.
x. “48 CFR Part 324 Subpart 324.70 -- Health Insurance Portability and Accountability Act of 1996,” n.d. https://www.ecfr.gov/current/title-48/chapter-3/subchapter-D/part-324/subpart-324.70.
xi. “12 CFR Part 1002 -- Equal Credit Opportunity Act (Regulation B),” n.d. https://www.ecfr.gov/current/title-12/chapter-X/part-1002/.
xii. “49 CFR Part 38 -- Americans With Disabilities Act (ADA) Accessibility Specifications for Transportation Vehicles,” n.d. https://www.ecfr.gov/current/title-49/subtitle-A/part-38/.
xiii. Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984). https://tile.loc.gov/storage-services/service/ll/usrep/usrep467/usrep467837/usrep467837.pdf
xiv. Loper Bright Enterprises et al. v. Raimondo, Secretary of Commerce, et al. Supreme Court of the United States, June 28, 2024. https://www.supremecourt.gov/opinions/23pdf/22-451_7m58.pdf
xv. United States Congress. “Dodd-Frank Wall Street Reform and Consumer Protection Act.” Report. Public Law. Vol. 124, July 21, 2010. https://www.congress.gov/111/plaws/publ203/PLAW-111publ203.pdf
xvi. The Library of Congress. “The Financial Crisis in the US: Key Events, Causes, and Responses,” n.d. https://www.loc.gov/item/2011379415/.
xvii. Anthropic. “Announcing Our Updated Responsible Scaling Policy,” n.d. https://www.anthropic.com/news/announcing-our-updated-responsible-scaling-policy/.
xviii. Google. “AI Principles,” n.d. https://ai.google/responsibility/principles/.
xix. Google. “Google’s Secure AI Framework,” n.d. https://safety.google/cybersecurity-advancements/saif/.
xx. NVIDIA. “NVIDIA Trustworthy AI,” n.d. https://www.nvidia.com/en-us/ai-data-science/trustworthy-ai/.
xxi. OpenAI. “Safety at every step,” n.d. https://openai.com/safety/.
xxii. NIST. “Artificial Intelligence Safety Institute Consortium (AISIC),” October 23, 2024. https://www.nist.gov/aisi/artificial-intelligence-safety-institute-consortium-aisic/.
xxiii. “General Data Protection Regulation - 02016R0679-20160504 - EN - EUR-Lex,” n.d. https://eur-lex.europa.eu/eli/reg/2016/679/.
xxiv. “ePrivacy Directive - 2002/58 - EN - EUR-Lex,” n.d. https://eur-lex.europa.eu/eli/dir/2002/58/oj/eng/.
xxv. “Artificial Intelligence Act - EU - 2024/1689 - EN - EUR-Lex,” n.d. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng.