Key Sources for Keeping Abreast of AI Regulations
Karta Legal Training Materials

Given the pervasiveness of GenAI technology and AI generally, all lawyers, regardless of their practice area, should aim to understand the regulations around it at the local, state, federal, and international levels, to the extent applicable. Here is a short list of the more notable ones to read and follow, with links for further reading:

European Union's AI Act

The EU AI Act is a proposed regulation by the European Commission aimed at creating a comprehensive legal framework for artificial intelligence within the European Union. The primary goal of the Act is to ensure that AI technologies are safe, transparent, and respect fundamental rights. Here are the key points:

- Expected Enactment:

  • The Act is anticipated to be fully enacted in Q2 of 2024, with most provisions taking effect two years later (Q2 of 2026).

- Extraterritorial Effect:

  • The Act applies to the sale and use of AI systems in, or affecting individuals located in, the EU.

- Enforcement and Compliance:

  • National authorities in each EU member state will be responsible for enforcing the regulations.
  • Non-compliance can result in significant penalties, including fines of up to 6% of the offending company's global annual turnover.

- Risk-Based Approach:

  • The Act classifies AI systems into four risk categories: prohibited, high-risk, transparency risk, and general purpose.
  • Prohibited AI Systems: AI systems that pose unacceptable risks, such as those that manipulate human behavior to the detriment of individuals, are outright banned.
  • High-Risk AI Systems: These include AI applications that significantly impact safety, fundamental rights, and livelihoods (e.g., biometric identification, critical infrastructure). These systems are subject to strict regulatory requirements.
  • Transparency Risk AI Systems: AI systems that interact with humans, generate deepfakes, or are used for surveillance are required to disclose their nature and use.
  • General Purpose AI Systems: These systems can be used across a wide range of applications, necessitating broader regulatory oversight. (A purely illustrative sketch of tracking systems against these tiers appears at the end of this section.)

- Regulatory Requirements for High-Risk AI Systems:

  • Rigorous testing and risk management processes.
  • Clear documentation and traceability.
  • Human oversight and robust cybersecurity measures.

- Further Reading: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
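
The tiered structure above lends itself to a simple inventory exercise. The snippet below is a minimal, purely illustrative sketch, not drawn from the Act's text, of how a compliance team might tag its AI systems by tier and flag the ones carrying the heaviest obligations; the system names and tier assignments are hypothetical examples consistent with the categories described above.

```python
from enum import Enum

# The four risk tiers summarized above (labels follow this summary,
# not the Act's official terminology).
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "transparency risk"
    GENERAL_PURPOSE = "general purpose"

# Hypothetical inventory of a firm's AI systems mapped to tiers;
# every entry here is an invented example for illustration only.
inventory = {
    "behavior-manipulating recommendation engine": RiskTier.PROHIBITED,
    "biometric identification at building entrances": RiskTier.HIGH_RISK,
    "customer-facing chatbot": RiskTier.TRANSPARENCY,
    "general-purpose LLM used for internal drafting": RiskTier.GENERAL_PURPOSE,
}

# Prohibited and high-risk systems carry the heaviest consequences
# (outright bans, or strict testing, documentation, and oversight duties),
# so surface them first for legal review.
for system, tier in inventory.items():
    priority = "escalate to legal" if tier in (RiskTier.PROHIBITED, RiskTier.HIGH_RISK) else "monitor"
    print(f"{system}: {tier.value} -> {priority}")
```

The classification itself is ultimately a legal judgment; the point of the sketch is only that the Act's tiered structure maps naturally onto an inventory-and-triage workflow.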

Colorado AI Bill (SB 24-205)

The Colorado AI Bill (SB 24-205) introduces risk-based regulations for the use of artificial intelligence within the state, closely mirroring the EU AI Act. It aims to address algorithmic discrimination, explicitly excluding efforts to promote diversity, equity, and inclusion (DEI) from the definition of discrimination. The bill's scope is not limited to Colorado businesses, potentially setting a precedent for other states. If enacted, it will come into force in 2026 and likely require businesses to issue separate AI notices. The primary focus of the legislation is to mitigate the risks associated with AI while promoting transparency and accountability in its deployment. Here are some of the key points:

- Risk-Based Rules: Imposes risk-based rules on the use of AI within Colorado, similar to the EU AI Act.

- Broader Impact: Not limited to Colorado businesses, potentially setting a standard for other states.

- Anti-Discrimination Focus: Aims to address the risks of algorithmic discrimination, excluding DEI initiatives from the definition of discrimination.

- AI Notices: Likely requirement for separate AI notices.

- Further Reading: https://leg.colorado.gov/bills/sb24-205


Biden’s AI Executive Order

President Biden's Executive Order on artificial intelligence establishes new standards aimed at ensuring AI technologies are safe, secure, and trustworthy. The order directs federal agencies to take specific actions over a 12-month period, focusing on areas such as privacy, civil rights, and the prevention of algorithmic bias. It emphasizes the importance of transparency, accountability, and ethical AI development, and seeks to bolster public trust in AI technologies. This Executive Order is part of a broader effort to position the United States as a leader in responsible AI innovation. Here are the key points:

- Establishes New AI Standards: Sets guidelines to ensure AI technologies are safe, secure, and trustworthy.

- Government Actions: Directs federal agencies to implement specific actions within 12 months, focusing on privacy, civil rights, and preventing algorithmic bias.

- Emphasis on Transparency and Accountability: Aims to enhance public trust in AI by promoting ethical development and deployment practices.

- Further Reading: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

New York City Automated Employment Decision Tool Law (Local Law 144)

New York City's Automated Employment Decision Tool (AEDT) Law, Local Law 144, mandates that employers conduct independent bias audits of their AI-based employment decision tools to ensure fairness and prevent discrimination. This law aims to increase transparency and accountability in AI-driven hiring processes by requiring detailed documentation and public disclosure of audit results. It reflects a broader effort to address potential biases in automated systems and protect job applicants from unfair treatment. Employers must comply with these requirements to continue using such tools in their hiring practices. Key points are:

- Bias Audits: Requires employers to conduct independent bias audits of their AI employment tools (a simplified illustration of the kind of calculation these audits report appears below).

- Further Reading: New York City Adopts Final Regulations on Use of AI in Hiring and Promotion (Littler): https://www.littler.com/publication-press/publication/new-york-city-adopts-final-regulations-use-ai-hiring-and-promotion
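
For a sense of what such audits actually measure, the sketch below is a minimal illustration, not legal guidance and not the law's text, of the selection-rate and impact-ratio arithmetic that bias audits of this kind typically report. The demographic category names and applicant counts are invented for the example.

```python
# Minimal sketch of the selection-rate / impact-ratio math behind a
# bias audit of an automated employment decision tool. All numbers and
# category names below are hypothetical; a real audit must be performed
# by an independent auditor over the categories the regulations specify.

# Hypothetical counts: applicants screened by the tool and applicants
# it advanced, grouped by demographic category.
screened = {"Category A": 400, "Category B": 250, "Category C": 150}
advanced = {"Category A": 120, "Category B": 60, "Category C": 30}

# Selection rate = share of each category's applicants the tool advanced.
selection_rates = {
    group: advanced[group] / screened[group] for group in screened
}

# Impact ratio = each category's selection rate divided by the highest
# selection rate observed across categories.
highest_rate = max(selection_rates.values())
impact_ratios = {
    group: rate / highest_rate for group, rate in selection_rates.items()
}

for group in screened:
    print(
        f"{group}: selection rate {selection_rates[group]:.2f}, "
        f"impact ratio {impact_ratios[group]:.2f}"
    )
```

The sketch only shows the arithmetic at the core of the exercise; the audit itself, including the categories covered and the public summary of results, must follow the law's requirements.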

Other AI and Employment-Related Legislation

- California (SB 36, 2019), Colorado (SB 21-169, 2021), and Illinois (HB 0053, 2021): Enacted legislation to protect individuals from discrimination and ensure equitable design of AI systems.

- Various States (e.g., California, Colorado, Connecticut, and Delaware): Legislation ensuring compliance with AI system rules and standards, holding developers and deployers accountable.

- Further Reading: [Artificial Intelligence in the States: Emerging Legislation](https://www.csg.org/blog/2023/04/04/artificial-intelligence-in-the-states-emerging-legislation/) (The Council of State Governments)

By keeping abreast of these resources and legislative updates, lawyers can better navigate the regulatory landscape surrounding AI advancements, not only to advise their clients but also to ensure compliance with their own ethical and professional responsibilities.
