Key AI Developments in May 2024

Keeping up with developments in AI legislation, regulation, investigations, penalties, legal action, and incidents is no easy task, which is why we have created Tracked, a bite-size version of our global AI Tracker. Each month, Holistic AI’s policy team provides you with a roundup of the key responsible AI developments from the past month around the world to keep you up to date with the ever-evolving landscape.


Europe

1. EU AI Act approved by the Council of the EU

  • Over three years after its introduction in April 2021, the EU AI Act was approved by the Council of the EU on 21 May 2024.
  • This follows the European Parliament’s approval of the law on 13 March 2024 after the Coreper I unanimously endorsed the Act on 2 February 2024.
  • The next step is the publication of the final text in the Official Journal of the European Union, with the law entering into force 20 days after publication and marking the start of the grace period.
  • Provisions of the EU AI Act will apply gradually - prohibitions will apply after six months while the general application date is two years after the law enters into force (i.e., mid-2026).

2. Council of Europe adopts international treaty on AI

  • On 17 May 2024, the Council of Europe announced the adoption of the first international treaty on AI.
  • The Framework Convention will act as a guiding international framework that sets out principles and norms for artificial intelligence in line with human rights, democracy, and the rule of law.
  • The framework was shaped by various industry, academic, and policy stakeholders from across Europe, including Holistic AI.
  • The convention will be binding on the States that sign and ratify it, after which the framework will be translated into jurisdiction-specific actions, laws, and regulations by those countries.

3. European Data Protection Board publishes ChatGPT Taskforce report

  • On 24 May 2024, the European Data Protection Board (EDPB) published a report on the work undertaken by its ChatGPT Task Force.
  • The Task Force was established on 12 April 2023 as part of a dispute resolution decision under Article 65 of the GDPR to explore enforcement actions on the processing of personal data by ChatGPT since OpenAI did not have an establishment in the EU.
  • The EDPB plenary meeting on 16 January 2024 specified the mandate of the Task Force and required the publication of a report with interim findings.
  • A key preliminary position in the report is that OpenAI remains responsible for GDPR compliance and cannot pass this burden onto users, even by placing a clause in the terms and conditions of the chatbot.
  • Moreover, the fact that ChatGPT may not result in factually accurate outputs can complicate compliance with the principle of data accuracy under GDPR.
  • Concerningly, according to the report, OpenAI fails to meet transparency obligations and does not adequately anonymize personal data, allowing the system to output identifiable information about individuals.

4. MHRA launches AI Airlock to address challenges for regulating medical devices that use Artificial Intelligence

  • On 9 May 2024, the UK Medicines and Healthcare products Regulatory Agency (MHRA) launched AI Airlock, a regulatory sandbox for AI as a Medical Device (AIaMD).
  • The sandbox forms part of the MHRA’s strategic approach to AI that was set out in April, with the pilot seeking 4-6 virtual or real-world projects to test for regulatory issues when the devices are used in the NHS.
  • To do this, the MHRA will collaborate with the NHS AI Lab and the Department of Health and Social Care (DHSC) and will take into account evidence-based work from other, similarly focused bodies.

5. Schumacher family win legal case against German magazine over misleading AI-generated interview

  • In April 2023, German Magazine Die Aktuelle published an image of Formula 1 world champion Michael Schumacher on its front cover promoting an “exclusive interview”.
  • Having suffered a near-fatal brain injury in 2013, Schumacher has not been publicly seen since.
  • This remains the case: the quotes in the interview were in fact AI-generated, prompting legal action from the Schumacher family.
  • In May 2024, it was announced that this legal action was successful, with a reported €200,000 in compensation paid out.

US

6. NIST announces program to advance sociotechnical testing and evaluation for AI

  • On 28 May 2024, the National Institute of Standards and Technology (NIST) announced the launch of a new program for the testing, evaluation, validation and verification (TEVV) of AI.
  • Through the Assessing Risks and Impacts of AI (ARIA) program, a new set of methodologies and metrics to measure system safety in societal contexts will be developed.
  • These metrics will span three levels of evaluation: model testing, red teaming, and field testing.
  • The program is aimed at helping entities determine whether an AI system will be valid, reliable, safe, secure, private and fair once deployed and expands on the AI Risk Management Framework (AI RMF).

7. Colorado enacts law for AI consumer protections

  • On 17 May 2024, Colorado’s Governor Jared Polis signed SB24-205 Consumer Protections for Artificial Intelligence into law.
  • First introduced on 10 April 2024 and passed by Colorado’s General Assembly on 8 May 2024, the law introduces consumer protections for AI and comes into effect on 1 February 2026.
  • The Bill focuses specifically on the regulation of high-risk AI systems, or those used to make critical decisions about education, employment, financial services, government services, healthcare, housing, insurance, or legal services, and requires developers and deployers of these systems to take reasonable precautions to prevent algorithmic discrimination.
  • Key actions required of developers include providing information to facilitate impact assessments and publishing a publicly accessible statement on how the risks associated with the system are managed.
  • Key actions required of deployers include conducting an impact assessment and providing consumers opportunities to rectify any incorrect personal data that a high-risk system has processed.
  • The law moved through the legislative process particularly quickly, and Governor Polis’ signing statement shares some reservations about it; the US Chamber of Commerce had previously written to Polis calling for a veto, citing concerns about the lack of an adequate assessment of the law’s impact on businesses and consumers.

8. Bipartisan Senate AI working group publishes AI policy roadmap

  • On 15 May 2024, the Bipartisan Senate AI Working Group led by Majority Leader Chuck Schumer (NY) and Senators Mike Rounds (SD), Martin Heinrich (NM), and Todd Young (IN) published an AI policy roadmap.
  • The 31-page document Driving U.S. Innovation in Artificial Intelligence outlines key topics discussed during AI Insight Forums hosted by the working group, namely: Supporting U.S. Innovation in AI; AI and the Workforce; High Impact Uses of AI; Elections and Democracy; Privacy and Liability; Transparency, Explainability, Intellectual Property, and Copyright; Safeguarding Against AI Risks; and National Security.
  • Under each of these key topics, the roadmap sets out various actions and recommendations for funding to advance policy efforts, as well as the development of specific legislation to tackle key issues such as education and upskilling to facilitate participation in an AI-enabled economy.
  • The roadmap encourages collaboration from various stakeholders and experts to ensure that the potential of AI can be harnessed while risks are minimized, also calling for shared definitions for key AI terms.

9. Senate Cybersecurity Caucus introduces Secure Artificial Intelligence Act of 2024

  • On 1 May 2024, U.S. Senators Mark R. Warner (D-VA) and Thom Tillis (R-NC), co-chairs of the Senate Cybersecurity Caucus, announced the introduction of the Secure AI Act of 2024.
  • The law aims to improve information sharing between the federal government and private companies by updating cybersecurity reporting systems to better account for AI and creating a voluntary database to record AI-related cybersecurity incidents including near misses.
  • Specifically, the law would require NIST to update the National Vulnerability Database within 180 days to include AI security vulnerabilities and update the Common Vulnerabilities and Exposures Program to track voluntary reports.
  • A public database would be required to be created within a year to track AI incidents, standardize disclosure, and differentiate between security and safety incidents.
  • A multi-stakeholder process would also be initiated within 90 days to address supply chain risks in AI.

10. Department of Health and Human Services issues final rule on nondiscrimination in health programs and activities

  • On 6 May 2024, the Department of Health and Human Services (HHS) published its final rule on nondiscrimination in health services under section 1557 of the Affordable Care Act, which prohibits discrimination in certain health programs and activities.
  • While the rule applies to various programs and activities, it explicitly addresses the use of AI in patient care decision support tools; HHS previously invited comments on its earlier proposed rule, which resulted in the term “clinical algorithms” being replaced with “patient care decision support tools” in the final version.
  • Patient care decision support tools are any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities.
  • Predictive decision support interventions are a subset of patient care decision support tools and are defined as technologies that support decision-making based on algorithms or models that derive relationships from training data and then produce an output that results in prediction, classification, recommendation, evaluation, or analysis.
  • Under the rules, discrimination occurring through covered entities’ use of these tools is prohibited.

11. Text-to-speech company LOVO faces legal action over voice theft

  • Berkeley-based start-up LOVO is facing legal action in New York over its text-to-speech software, as plaintiffs Paul Lehrman and Linnea Sage allege that it is using individuals' voices without consent.
  • The voice actors were contacted by LOVO employees and were informed that their voices would be used solely for academic purposes but have been used in various AI-generated videos, including on a YouTube channel promoting videos about Russian military equipment.
  • The lawsuit claims violations of New York Civil Rights Law, deceptive practices, false advertising, unfair competition, unjust enrichment, and fraud as the company is allegedly profiting from these voice recordings without compensating the plaintiffs or disclosing the use of their voices for promotional purposes.

12. New Jersey Introduces Joint Resolution on Do Not Disturb Act for AI

  • On 10 May 2024, New Jersey introduced a Joint Resolution (AJR177) urging Congress and the President of the United States to enact the “Do Not Disturb Act” to combat robocalls and protect consumers.
  • American citizens endure 2.1 billion spam calls monthly, wasting 195 million hours yearly, and phone fraud is the second most common fraud method.
  • The Act would expand anti-robocall protections, regulate the use of AI for scams, and alleviate the cost of robocall-blocking technology for consumers.
  • This follows the FCC’s February ruling making AI-generated voices in robocalls illegal.

13. New York introduces law imposing liability for chatbot results that lead to financial loss

  • Introduced on 14 May 2024, S9381 seeks to create liability for misleading, incorrect, contradictory, or harmful information provided to a user by a chatbot that results in financial loss or other demonstrable harm.
  • The bill modifies the general business law to hold proprietors accountable for any misleading or harmful information their chatbots provide, covering businesses, organizations, and governmental entities with over twenty employees that utilize chatbots, excluding third-party developers.
  • Proprietors cannot evade liability by merely disclosing the chatbot's non-human nature and must provide clear, conspicuous, and explicit notice to users interacting with these AI systems.
  • The bill defines 'chatbot' as an AI system simulating human conversation and 'proprietor' as businesses with over twenty employees using chatbots.

14. Colorado passes law on Candidate Election Deepfake Disclosures

  • Colorado’s HB1147, which amends Colorado’s Revised Statutes regarding campaign finance regulations and deepfake content in candidate communications, was passed on 15 May and has been sent to the Governor.
  • Under the bill's provisions, civil penalties may be imposed for failure to include disclosure statements in communications regarding candidates, with varying penalties based on the severity and distribution of the violation.
  • The amendment also revises the process for filing campaign finance complaints.
  • A new article, Article 46, is added to Title 1 of the Revised Statutes, establishing rules, definitions, and enforcement mechanisms specific to deepfake content in candidate communications, aiming to ensure the transparency and integrity of election information.
  • The effective date is slated for 1 July 2024, applicable to communications distributed on or after that date.

15. FCC proposes AI disclosure requirements for political ads

  • On 22 May 2024, Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel proposed a rule mandating disclosure of AI-generated content in political ads on radio and TV.
  • The proposal seeks to enhance transparency by mandating on-air and written disclosures in political files when AI tools are used in candidate and issue ads and extend the disclosure requirements to broadcasters, cable operators, satellite TV and radio providers, and section 325(c) permittees.
  • The proposal aims to inform consumers about the use of AI in political ads, without prohibiting such content, to prevent deceptive information, such as deepfakes, from misleading voters.
  • The FCC would initiate a proceeding once a majority of Commissioners approve the proposal, leveraging authority granted by the Bipartisan Campaign Reform Act to protect the public from misleading programming.

Global

16. UN publishes taxonomy of generative AI human rights risks

  • The United Nations published a Taxonomy of Human Rights Risks Connected to Generative AI as a supplemental resource to B-Tech’s Foundational Paper on the Responsible Development and Deployment of Generative AI.
  • The Taxonomy outlines how generative AI poses risks to a spectrum of human rights, including privacy, equality, freedom of expression, and the right to work and gain a living.
  • Examples range from the dissemination of biased stereotypes to the erosion of public trust through the creation of false content, impacting the rights of the child and cultural enjoyment as well.
  • The taxonomy highlights that implementing robust safeguards and ethical guidelines is essential to mitigate these risks and ensure the protection of human rights across all domains affected by generative AI technologies.

17. UK announces AI Safety Institute office in San Francisco

  • On 20 May, UK Technology Secretary Michelle Donelan announced that the first overseas AI Safety Institute office is to open in San Francisco.
  • The Institute was established in November 2023 as an evolution of the UK’s Frontier AI Taskforce following the inaugural AI Safety Summit; its announcement also shared findings from the safety testing of five publicly available advanced AI models, which remain highly vulnerable to basic “jailbreaks”.
  • The US office is envisioned to strengthen international cooperation and facilitate the establishment of international standards on AI safety, which was to be discussed at the Seoul Summit.
  • The summit saw the signing of the Seoul Declaration for safe, innovative and inclusive AI to codify intentions to foster international cooperation and dialogue on AI.

18. UK and Canada announce science of AI safety partnership

  • On the same day as the announcement of the San Francisco AI Safety Institute office, the Department for Science, Innovation and Technology announced a partnership with Canada on AI safety.
  • As part of the agreement, the countries will share expertise to strengthen existing testing and evaluation work and jointly identify other priority areas for research collaboration.
  • The UK AI Safety Institute will also share priority access to the UK AI Research Resource with the Canadian AISI for their joint research, working toward a Memorandum of Understanding on AI safety collaboration.

19. Chile updates National AI policy and action plan after recommendations from UNESCO

  • On 2 May 2024, Chile introduced its updated National AI Policy and action plan following the recommendations of the Chilean AI Readiness Assessment Report prepared by UNESCO.
  • With a focus on AI governance and AI ethics, the Policy incorporated the results of the UNESCO Readiness Assessment Methodology (RAM), making Chile the first country in the world to implement the RAM.
  • Following this, on 7 May 2024, Chile introduced a bill on AI (16821-19) proposing a legal framework to promote the development, use, and adoption of AI while safeguarding fundamental rights using a risk-based approach that could be compared to the EU AI Act.

20. Reserve Bank of India (RBI) warns against credit algorithms

  • On 15 May 2024, Mr Swaminathan J, Deputy Governor of the Reserve Bank of India, warned against overreliance on credit algorithms in a speech at the Conference of Heads of Assurance of Non-Banking Financial Companies (NBFCs).
  • Stressing the importance of using high-quality data and criteria to build “rule-based credit engines”, Swaminathan highlighted the risk of oversights or inaccuracies in credit assessments driven by algorithms in dynamic market conditions.
  • NBFCs are encouraged to ensure there is suitable awareness of the capabilities and limitations of these models that is supplemented by continuous monitoring and validation of credit scoring models.

21. AI Chatbot used to mimic Bollywood actor restrained by Delhi High Court

  • The Delhi High Court has restrained entities from using actor Jackie Shroff's name and nicknames, voice, and images for commercial applications following an interim order issued on 15 May 2024.
  • As a result, Kamoto.AI’s unlicensed chatbot featuring the actor’s avatar was banned due to a violation of Shroff's personality and publicity rights, where the unauthorized exploitation caused financial and reputational harm.

Holistic AI Policy Updates

We recently hosted our monthly Policy Hour Webinar on US AI Policy. Holistic AI’s Nikitha Anand and Ella Shoup gave an overview of initiatives at the federal and state levels, covering key themes such as transparency and non-discrimination, as well as key use cases such as generative AI. Give it a watch here.


We’re excited to be hosting our first free in-person AI Policy Connect Event on Monday 3 June, 6:00 – 8:30 pm, in Brussels. We will be joined by experts from the EU Commission, EU Parliament, and IAPP for our panel 'EU AI Act: Hear from the Experts'.

Sign up to attend here.

Want to dive in?

Check out our blog for deeper insights on key AI developments around the world from our policy team.

Authored by Holistic AI’s Policy Team.

