Evolution of AI Regulations (Part-I): U.S.A.
"Hamlet's timeless question, 'To be or not to be,' has puzzled humanity for ages. Now, with the advent of AI, we're confronted with a new riddle: 'To regulate or not to regulate?' This conundrum has rekindled humanity's inquisitive spirit, triggering a flurry of debates, online seminars, committee meetings, research papers, and books, all striving to untangle the complex web of AI regulations.
As a pioneer in technological innovation and legislation, the USA is grappling with this question, seeking to understand and shape the evolution of AI regulation. Today, we will delve into the progression of AI regulations in the USA, a process still very much in flux. This article presents a comprehensive timeline of policy development and AI regulation in the United States from 2016 to 2023. It offers an examination of the nation's dynamic approach towards AI, from the establishment of interagency working groups and the formulation of national strategies to the enactment of specific legislation. The narrative unfolds a series of key milestones that mark the United States' journey in navigating the multifaceted challenges and opportunities presented by AI. By exploring the nation's past and present efforts to balance innovation with ethical, societal, and legal considerations, this article provides a valuable foundation for understanding the evolving landscape of AI policy-making. Note that civil liability arising from AI usage attaches in the context, field, and industry of use rather than simply as a result of AI usage itself; this article is therefore limited to rules, laws, and orders directly related to the development or deployment of AI, and gives a factual overview of the landscape of AI regulation.
The White House announced a series of workshops and an interagency working group on May 6, 2016, to learn more about the benefits and risks of artificial intelligence, acknowledging in its announcement the risks and complex policy challenges that AI poses. Accordingly, the National Science and Technology Council (NSTC) established a Subcommittee on Machine Learning and Artificial Intelligence, and Carnegie Mellon University hosted a workshop on safety and control. The resulting 2016 report followed a series of outreach programs that included five workshops and a request for information, to which 161 respondents replied. According to the report, many products incorporating AI are already subject to sector-specific regulations designed to protect the public from harm and to ensure fairness in economic competition.
The report advised, among other things, that:
1) Public and private institutions use AI and machine learning to benefit society;
2) Open training data and open data standards be promoted;
3) Diverse perspectives be incorporated in the development of AI technology;
4) Industry keep the government updated on the general progress of the AI industry;
5) Federal agencies using AI-based systems ensure their efficacy and fairness;
6) Ethics, privacy, security, and safety be integral parts of AI education curricula;
7) The field of AI safety engineering be explored;
8) Plans and strategies account for the impact of AI on cybersecurity; and
9) The Government complete the development of a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.
Based on the Subcommittee's report, the National Artificial Intelligence Research and Development Strategic Plan was published in October 2016. The plan recommended seven AI R&D strategies, three of which concerned the development of trustworthy AI: first, understand and address the ethical, legal, and societal implications of AI by improving fairness, transparency, and accountability by design, building ethical AI, and designing architectures for ethical AI; second, ensure the safety and security of AI systems by improving explainability and transparency, building trust, improving verification and validation, and securing systems against attacks; and third, measure and evaluate AI technologies through standards and benchmarks by developing a broad spectrum of AI standards, establishing AI technology benchmarks, increasing the availability of AI testbeds, and engaging the AI community in standards and benchmarks.
Alongside the National AI R&D Strategic Plan, a National Privacy Research Strategy was launched, emphasizing research on transparency in data collection, storage, and use, which was also considered relevant to data protection and privacy in AI.
In February 2018, the Information Technology Subcommittee of the House Oversight and Government Reform Committee held a series of hearings and reviewed multiple reports from leading AI experts. In September 2018, the Subcommittee published a report addressing four challenges posed by AI: workforce, privacy, bias, and malicious use. The report reiterated the findings of the 2016 report. A subcommittee of the National Science and Technology Council (NSTC) was formed to assess risks to public safety and to evolve regulations to better account for AI. The report also recommended that the National Institute of Standards and Technology (NIST) play a key role in the development of standards.
The White House hosted the Artificial Intelligence for American Industry Summit on May 10, 2018, where the Administration committed to removing barriers to innovation and eliminating burdensome regulations.
On August 13, 2018, the National Defense Authorization Act (NDAA) for Fiscal Year 2019 was signed into law. The Act directed the Secretary of Defense to establish a set of activities within the Department of Defense (DoD) to coordinate the Department's efforts to develop, mature, and transition artificial intelligence technologies into operational use. Among these was "governance and oversight of AI and ML policy," under which appropriate officials from across the Department would meet to develop and continuously improve AI policy. The Act created the National Security Commission on Artificial Intelligence. It also, for the first time, officially defined artificial intelligence as:
1) Any artificial system that performs tasks under varying and unpredictable conditions without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
2) An artificial system developed in computer software, physical hardware, or another context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
3) A machine designed to think or act like a human, such as cognitive architectures and neural networks.
4) A collection of techniques, including machine learning, used to approximate a cognitive task.
5) A rationally acting artificial system, such as an intelligent software agent or embodied robot, that achieves goals through perception, planning, reasoning, learning, communicating, decision-making, and acting.
On February 11, 2019, the President of the United States issued Executive Order 13859, "Maintaining American Leadership in Artificial Intelligence," which stated that the American AI Initiative is guided by five principles, two of which are to foster public trust and confidence in AI technologies and to protect civil liberties, privacy, and American values. The E.O. also set goals for developing technical standards to reduce vulnerabilities, and established priorities for innovation, public trust, and confidence.
The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update was published in June 2019, with strategic priorities for standards and regulation similar to those of its 2016 predecessor.
In response to Executive Order 13859, NIST published "A Plan for Federal Engagement in Developing Technical Standards and Related Tools" on August 9, 2019. The plan identified nine focus areas for AI standards: concepts and terminology; data and knowledge; human interactions; metrics; networking; performance testing and reporting methodology; safety; risk management; and trustworthiness. Trustworthiness standards, according to NIST, include guidance and requirements for security, reliability, objectivity, safety, resiliency, accuracy, and explainability. NIST emphasized that AI standards must be consistent with U.S. government policies and principles, societal and ethical concerns, governance, and privacy. The plan aided agencies in making decisions about AI standards.
The White House hosted the Summit on Artificial Intelligence in Government on September 9, 2019, where AI development was designated a priority of Administration policy.
On February 24, 2020, the Department of Defense (DoD) adopted its Ethical Principles for AI, covering five major areas: Responsible, Equitable, Traceable, Reliable, and Governable. In June 2020, the United States Intelligence Community issued the Artificial Intelligence Ethics Framework for the Intelligence Community to provide stakeholders with a reasonable approach for making decisions about how to procure, design, build, use, protect, consume, and manage AI and related data.
On September 14, 2020, the United States House of Representatives passed the AI in Government Act of 2020. The bill died at the end of the 116th Congress after failing to pass the United States Senate.
In accordance with E.O. 13859, "Maintaining American Leadership in Artificial Intelligence," the Director of the Office of Management and Budget issued a memorandum to the heads of executive departments and agencies on November 17, 2020, providing policy guidance on how to approach AI applications developed and deployed outside the Federal Government. The memorandum directed agencies to avoid regulatory or non-regulatory actions that could stifle AI innovation and growth, and to avoid a precautionary approach, instead using evidence-based regulation to address specific and identifiable risks. Furthermore, it directed that any agency considering regulatory or non-regulatory approaches to AI development and deployment conduct regulatory impact analysis, public consultation, risk assessment, and risk management.
On December 3, 2020, the President signed Executive Order 13960, "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government." This E.O. declared it the policy of the United States Government to promote innovation and the use of AI to improve government operations and services while fostering public trust, building confidence in AI, and remaining consistent with applicable law, and directed the Office of Management and Budget to issue a common policy on the use of AI in government. The E.O. also established principles for the use of AI in government.
On December 27, 2020, Section 5002 of the William M. (Mac) Thornberry National Defense Authorization Act (NDAA) for Fiscal Year 2021 (codified at 15 U.S.C. 9401) passed. It defined "artificial intelligence" as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Under this definition, artificial intelligence systems use machine- and human-based inputs to: (A) perceive real and virtual environments; (B) abstract such perceptions into models through automated analysis; and (C) use model inference to formulate options for information or action.
On January 1, 2021, the National Artificial Intelligence Initiative Act of 2020 (NAIIA) became law. It inserted Section 22, "Standards for Artificial Intelligence," into the National Institute of Standards and Technology Act, directing NIST to create a risk management framework for AI, best practices for data sharing, and best practices for documenting data sets.
In the summer of 2021, the Government Accountability Office (GAO) published an AI accountability framework to help managers maintain responsibility and accountability when implementing AI in government programs and operations. The framework is organized around four key principles: governance, data, performance, and monitoring. Each principle outlines essential practices for federal agencies and other organizations evaluating, selecting, and deploying artificial intelligence systems. The framework also includes procedures for auditors and third-party assessors.
In September 2021, the U.S. Department of Health and Human Services (HHS) released a "Trustworthy AI Playbook." The playbook is based on Executive Order 13960 and serves as a resource for organizations developing or deploying AI systems.
The White House Office of Science and Technology Policy published the "Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People" in October 2022. The Blueprint is a set of five principles and associated practices intended to guide the design, use, and deployment of automated systems so as to protect the rights of the American public in the age of artificial intelligence. The principles are as follows:
Principle 1: Safe and Effective Systems: Individuals have the right to be protected from unsafe or ineffective automated systems. This includes the right to be free from harm caused by automated systems, and the right to have confidence that automated systems will perform as intended.
Principle 2: Algorithmic Discrimination Protections: Individuals have the right to be free from discrimination by algorithms. This includes the right to be treated fairly and equally by automated systems, and the right to be free from bias in automated decision-making.
Principle 3: Data Privacy: Individuals have the right to have their personal data protected from unauthorized access, use, or disclosure. This includes the right to know what personal data is being collected, how it is being used, and with whom it is being shared.
Principle 4: Notice and Explanation: Individuals have the right to be notified when their personal data is being used by an automated system, and they have the right to understand how their personal data is being used and why.
Principle 5: Meaningful Human Alternatives: Individuals have the right to a meaningful human alternative to automated decision-making. This includes the right to have a human review of any automated decision that has a significant impact on them.
In accordance with the National Artificial Intelligence Initiative Act, NIST published the Artificial Intelligence Risk Management Framework in January 2023. The process began on July 29, 2021, when NIST issued a Request for Information (RFI) for the NIST AI Risk Management Framework (AI RMF), seeking input on the current state of AI risk management; its challenges and opportunities; the types of AI systems that require the most risk management; best practices for AI risk management; and the resources available to assist with it. NIST received over 100 responses, and the information gathered was used to create the first draft of the AI RMF, published in March 2022. The second draft, incorporating feedback from the public comment period, was released in August 2022, and the final version (AI RMF 1.0) followed in January 2023. According to NIST, the framework is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic. It is divided into two parts: Part I frames AI risks, identifies the intended audience, and outlines the characteristics of trustworthy AI systems, while Part II describes the framework's core functions for addressing AI risks in practice.
On April 11, 2023, the Department of Commerce's National Telecommunications and Information Administration (NTIA) requested public comment on AI accountability policy, posing 34 questions covering AI accountability objectives, the subjects of accountability, accountability inputs and transparency, barriers to effective accountability, and AI accountability policies.
As we come to a conclusion, it is evident that the development of AI regulation in the United States from 2016 to 2023 has been gradual but determined. The USA has made progress in establishing a strong framework for AI regulation over the past seven years as a result of a persistent commitment to strike a balance between the pursuit of technological advancement and ethical considerations, societal needs, and legal obligations. The landscape of AI regulation in the United States is a dynamic one, marked by the continuous interplay of innovative breakthroughs, evolving legislative frameworks, and fluctuating societal demands. This journey paints a picture of a nation wrestling with the multifaceted challenges posed by AI, yet striving to harness its opportunities for the greater good. The road ahead remains far from straightforward. As AI continues to permeate every facet of society, the complexity and urgency of the regulatory challenges are set to increase. The lessons from the United States' experience thus far will be crucial in informing this ongoing journey.
Future articles in this series will delve further into the specific legislative measures enacted by other countries and the EU and their impacts, the role of key stakeholders in shaping AI regulation, and the emerging trends and issues set to define the next chapter of the global AI regulatory discourse. Stay tuned for these upcoming pieces as we continue to navigate the evolving landscape of AI regulation together.