Keeping up with developments in AI legislation, regulation, investigations, penalties, legal action, and incidents is no easy task, which is why we have created Tracked, a bite-size version of our global AI Tracker. Each month, Holistic AI’s policy team provides you with a roundup of the key responsible AI developments from the past month around the world to keep you up to date with the ever-evolving landscape.
Europe
1. CMA outlines growing concerns in markets for AI Foundation Models
- Earlier this month, the UK’s Competition and Markets Authority (CMA) noted “growing concerns” regarding foundation models in both its CEO’s speech and in an update to its initial report on AI Foundation Models (FMs).
- It highlights three interlinked risks to fair, open, and effective competition:
- Firms controlling critical inputs for developing FMs may restrict access to shield themselves from competition.
- Powerful incumbents could exploit their positions in consumer-facing markets to distort choice in foundation model services and restrict competition in deployment.
- Partnerships involving key players could exacerbate existing positions of market power through the value chain.
- The CMA is also currently performing a merger review of Microsoft’s partnership with OpenAI and a market investigation into cloud services.
- In light of these concerns, the agency is also considering which digital activities to prioritize for investigation under the Digital Markets, Competition and Consumers Bill.
2. German Federal Office for Information Security (BSI) releases first version of Generative AI Models Guide
- On 10 April, Germany’s Federal Cyber Security Authority published its guide on ‘Generative AI Models – Opportunities and Risks for Industry and Authorities’.
- The guide provides an overview of the opportunities and risks of Large Language Models (LLMs) and offers potential countermeasures to address the risks. It will be continually updated as new opportunities and risks emerge with the development of LLMs.
- It is specifically designed to serve as a basis for systematic risk analysis for companies and authorities considering integrating LLMs into their workflows.
3. EU Commission’s Group of Chief Scientific Advisors publishes Scientific Opinion (SO) on the integration of AI in science in the EU
- On 15 April 2024, the EU Scientific Advice Mechanism released its report on the “Successful and timely uptake of AI in science in the EU”.
- Focusing on how the EU can responsibly incorporate AI into scientific disciplines, the report offers recommendations on the benefits AI could bring to scientific productivity, as well as the benefits, incentives, and challenges AI-enabled research would bring to the European innovation ecosystem and society.
- The findings in the report are based on evidence from experts of the Science Advice for Policy by European Academies (SAPEA) consortium, literature reviews, and targeted workshops and interviews.
- SOs are published as part of the Scientific Advice Mechanism, which provides independent scientific evidence and policy recommendations to EU institutions.
4. European Commission opens proceedings against TikTok and Meta under the Digital Services Act (DSA)
- As of 22 April, the European Commission has launched a second formal investigation against TikTok under the DSA, focusing on the launch of TikTok Lite in France and Spain.
- The investigation aims to determine whether TikTok breached its DSA obligations by introducing the "Task and Reward Program" without first conducting a risk assessment and implementing effective risk mitigation measures. The program has raised concerns about potentially addictive effects, especially among children, given inadequate age verification measures.
- The absence of effective age verification mechanisms and the suspected addictive design of TikTok are already under investigation by the Commission in its first formal proceedings against TikTok.
- The following week, on 30 April, the Commission announced formal proceedings to assess whether Facebook and Instagram, owned by Meta, have breached the DSA through the company’s policies and practices relating to deceptive advertising and political content on the platforms.
- There are also concerns that the platforms’ Notice-and-Action mechanism for flagging illegal content, as well as their user redress and internal complaint mechanisms, do not comply with the DSA’s requirements.
US
5. Updates on Biden's AI Executive Order and new publications from NIST
- Following the 180-day update, the Department of Commerce announced new actions on 29 April towards the implementation of Biden's Executive Order (EO) on the Safe, Secure and Trustworthy Development of AI.
- As part of these efforts, the U.S. Patent and Trademark Office (USPTO) is requesting public comments on how AI could affect evaluations of ordinary skill in the art used to determine whether an invention is patentable under U.S. law. Responses are due 29 July 2024.
- NIST has also released four drafts for public comment (due 2 June 2024) on Mitigating the Risks of Generative AI, Reducing Threats to the Data Used to Train AI Systems, Reducing Synthetic Content Risks, and Global Engagement on AI Standards.
- NIST has also announced NIST GenAI, a new program to evaluate and measure generative AI technologies, as part of NIST's response to the EO.
6. Generative AI Copyright Disclosure Act of 2024
- On 9 April, HR7913, the Generative AI Copyright Disclosure Act of 2024, was introduced to the House of Representatives by Representative Adam Schiff.
- The Bill mandates that individuals altering or creating datasets for AI training provide a detailed summary of any copyrighted works used in the training of the model to the Register of Copyrights, including the URL of the dataset.
- This information should be provided no later than 30 days before the generative AI tool using such dataset is made available to consumers.
- The Register would be required to establish and maintain a publicly available online database that contains each notice filed.
7. Bipartisan Lawmakers Introduce the American Privacy Rights Act (APRA)
- Earlier this month, Senate Commerce Committee Chair Maria Cantwell (D-WA) and House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) released a discussion draft of a bipartisan proposal for the American Privacy Rights Act (APRA), which could result in the first comprehensive national data privacy framework in the United States.
- The proposal marks a breakthrough in the long-standing stalemate over developing a national online privacy standard, even as lawmakers and industry groups alike have regularly discussed the need for such a law.
- The proposal includes several consumer data privacy provisions, including limiting the types of consumer data companies can collect, retain, and use to what they need to operate their services, and would have significant implications for AI.
- In the case of conflict between federal and state law, APRA would take precedence over state privacy laws but would allow state regulation on more specific issues, such as health or financial data, civil rights, and consumer protection.
8. U.S. Agencies Release Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems
- The U.S. Equal Employment Opportunity Commission (EEOC) issued a joint statement this month with other US federal agencies affirming that their enforcement authorities apply to automated systems.
- The statement further clarifies that existing legal authorities (such as discrimination laws) will ensure that automated systems are developed and used in a manner consistent with federal laws.
- It references the fact that automated systems may contribute to unlawful discrimination due to problems with data, model opacity and access, and the design and use of such systems.
- The statement was signed by leaders from the EEOC, Consumer Financial Protection Bureau, Department of Justice, Federal Trade Commission, Department of Education, Department of Health and Human Services, Department of Homeland Security, Department of Housing and Urban Development, and Department of Labor.
9. EEOC weighs in on Workday lawsuit
- The EEOC further demonstrated its commitment to regulating automated employment decision tools in early April when it filed an amicus brief in San Francisco federal court arguing that Workday can qualify as an ‘employment agency,’ and is therefore subject to Title VII of the Civil Rights Act of 1964 and other laws.
- Workday had previously claimed that it is not an employer or employment agency as defined under the laws banning discrimination based on disability and age.
- The EEOC’s amicus brief rebuts that claim, arguing that the software company performs ‘precisely the same screening and referral functions’ as traditional employment agencies, and should therefore be subject to the relevant laws.
- Workday has pushed back on the EEOC’s brief, saying the Commission has no case.
10. Shein sued for copyright infringement
- Artist Alan Giana filed a class action lawsuit against the fast fashion company Shein for copyright infringement at the start of April, alleging that the company uses electronic monitoring and AI to identify popular designs and copy them for its own products.
- The lawsuit claims that Shein’s technology tracks consumer behavior online, especially over social media, to identify, monitor, and eventually steal the trends and designs it anticipates will be popular.
- Giana’s complaint also argues that the electronic monitoring system is particularly disenfranchising for creators who publish their designs online looking for greater exposure.
- Shein, a Chinese company with an online marketplace of 600,000 items, has been the subject of multiple lawsuits in recent years, most of which have alleged copyright infringement, though they have not included the allegation that the infringements are powered by an algorithm.
- As of 26 April 2024, Shein was also designated as a Very Large Online Platform (VLOP) under the Digital Services Act, meaning that the company will be under increased scrutiny.
11. U.S. AI Safety Institute Expands Leadership Team
- U.S. Secretary of Commerce Gina Raimondo announced on 16 April new members of the U.S. AI Safety Institute (AISI)’s executive leadership team, who will join AISI Director Elizabeth Kelly and Chief Technology Officer Elham Tabassi in the new institute, which sits within the National Institute of Standards and Technology (NIST).
- The additional team members include Paul Christiano as Head of AI Safety, Adam Russell as Chief Vision Officer, Mara Campbell as Acting Chief Operating Officer and Chief of Staff, Rob Reich as Senior Advisor, and Mark Latonero as Head of International Engagement.
- AISI aims to advance the science, practice, and adoption of AI safety across different risks, including those to national security, public safety, and individual rights. It will also support the responsibilities assigned to NIST under EO 14110.
- AISI was created following President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI in October 2023, which directed NIST to house the new Institute.
12. Florida advances several AI laws
- Several bills related to AI have passed through the Florida legislature, with some awaiting only the final signature from Governor Ron DeSantis (R), marking significant developments in the regulation of AI technology within the state.
- Senate Bill 287 addresses the regulation of autonomous vehicles and AI technologies, outlining provisions aimed at ensuring the safe and responsible deployment of AI systems in various sectors, including transportation and healthcare.
- HB 919 mandates the disclosure of AI usage in campaign ads and requires that the disclosure be prominently displayed across various types of media, including print, TV, internet, audio, and graphics. The law was approved by the Governor on 26 April.
- Both bills follow the passing of Senate Bill 1680 in March, which created the Government Technology Modernization Council and was approved by the Governor on 26 April.
13. Veto overridden and law on autonomous vehicles passed in Kentucky
- Despite Governor Andy Beshear's veto of HB7 in early April, Kentucky's legislature has pushed forward the bill allowing fully autonomous vehicles on state roads.
- Governor Beshear's concerns centered on the safety and security of such vehicles; he advocated for extensive testing with human drivers before fully autonomous vehicles are permitted.
- The veto was overridden with a 58-40 vote in the House and a 21-15 vote in the Senate on 12 April.
- The bill, effective from 31 July 2026, will take a phased approach, requiring a human driver behind the wheel during the initial two years.
- Law enforcement is mandated to create an interaction plan, to be submitted before fully autonomous vehicles can operate on Kentucky roads without a human driver.
14. Virginia passes law on AI use by public bodies
- On 8 April, Virginia Governor Glenn Youngkin signed Senate Bill 487, which instructs the Joint Commission on Technology and Science to examine the use of AI by public bodies in the state and establish a Commission on AI.
- The law, effective July 1, 2025, imposes regulations on the implementation of AI systems by public bodies.
- Public bodies are required to conduct initial and ongoing impact assessments of AI and submit annual reports to the Commission on how they are preventing unlawful discrimination or disparate impact on certain groups.
- If adverse effects are detected, the public body must cease using the system. Public bodies must also maintain an inventory of their AI systems.
15. Massachusetts Attorney General issues guidance to developers, suppliers, and users of AI
- Attorney General Andrea Joy Campbell issued
an advisory on 16 April aimed at guiding developers, suppliers, and users of AI systems regarding their obligations under Massachusetts state laws concerning consumer protection, anti-discrimination, and data security.
- The advisory emphasizes that existing state laws governing consumer protection, anti-discrimination, and data security are applicable to AI systems, reflecting their broad usage across various sectors.
- It serves as a notice that the Attorney General's office will enforce these laws in the event of a violation.
- The document also includes a non-exhaustive list of acts and practices that may be deemed unfair and deceptive under the Massachusetts Consumer Protection Act. These include falsely advertising AI system quality, misrepresenting system reliability, and failing to comply with relevant statutes and regulations.
16. National Telecommunications and Information Administration (NTIA) releases AI Accountability Policy Report
- At the end of March, the NTIA released its AI Accountability Policy Report following a public Request for Comment that garnered more than 1,400 submissions. The report includes policy recommendations to support safe, secure, and trustworthy AI innovation based on those comments.
- Among the recommendations is the need for the U.S. government to promote guidance, support, and regulations for AI systems; independent evaluations to verify the claims made about these systems; and consequences for imposing unacceptable risks or making unfounded claims.
- Standards also play an important role in the report, which advocates for the development of robust international standards, especially in areas including AI risk and performance, data quality, stakeholder participation, and internal governance controls.
- The NTIA also emphasizes that the sheer variety of sectors using ‘AI terminology’ presents a challenge in AI standards development.
Global
17. UK & United States announce partnership on science of AI safety
- The United States and the United Kingdom have announced a partnership aimed at enhancing the science of AI safety. The newly formed UK and US AI Safety Institutes will work in tandem, pooling resources and expertise to address the evolving challenges posed by artificial intelligence.
- The Memorandum of Understanding (MOU), signed by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo, signifies a joint commitment to develop tests for cutting-edge AI models.
- This initiative builds upon agreements established at the AI Safety Summit in November and aims to synchronize scientific approaches between the two countries, fostering close collaboration to accelerate the development of comprehensive evaluation frameworks for AI models, systems, and agents.
- As part of their commitment to AI safety, both governments pledged to extend similar partnerships with other nations, fostering a global network dedicated to mitigating AI-related risks and promoting shared approaches to safety.
18. UK and Republic of Korea announce AI Seoul Summit for May 2024
- Further building on the AI Safety Summit, the UK and the Republic of Korea are set to host the AI Seoul Summit on 21 and 22 May, marking renewed global efforts to ensure the safe development of AI.
- The summit will serve as a platform for leading AI nations to engage in constructive dialogue on AI safety, innovation, and inclusivity.
- Day one will feature a virtual leaders’ session co-chaired by Prime Minister Rishi Sunak and Republic of Korea President Yoon Suk Yeol. Global industry leaders will provide updates on their efforts to uphold AI safety commitments made at Bletchley Park.
- Day two will witness an in-person meeting of Digital Ministers co-hosted by UK Technology Secretary Michelle Donelan and Korean Minister of Science and ICT Lee Jong-Ho. This session will facilitate collaboration and knowledge-sharing among policymakers.
- Attendees will also explore ways to make AI technology more inclusive, ensuring that its benefits and opportunities are equitably distributed. Discussions will highlight the transformative impact of AI in various sectors, including healthcare, education, and environmental conservation.
19. European and American experts release new edition of the EU-U.S. Terminology and Taxonomy for AI
- The EU-U.S. Trade and Technology Council (TTC) announced on 5 April significant progress in their efforts to advance trustworthy AI and risk management, with an initial draft of AI terminologies and taxonomies.
- The document provides clarity on a range of terms and concepts, including ‘AI Lifecycle,’ ‘Measurement’, ‘Technical System Attributes,’ ‘Governance,’ and ‘Trustworthy.’
- The European Commission and the National Institute of Standards and Technology (NIST) solicited input from external experts between October and November 2023.
- The initiative between the two bodies aims to align approaches to AI risk management and foster cooperation in international standards bodies related to AI.
20. Five Eyes agencies release joint guidance on secure deployment of AI
- On 15 April, the U.S. National Security Agency’s Artificial Intelligence Security Center (NSA AISC), along with key international partners including the U.S. Cybersecurity & Infrastructure Security Agency (CISA), the FBI, and cybersecurity agencies from Australia, Canada, New Zealand, and the UK, published joint guidance on best practices for deploying and operating externally developed AI systems.
- The guidance focuses on improving confidentiality, integrity, and availability while addressing known vulnerabilities and mitigating risks associated with malicious activities.
- The joint initiative also offers methodologies and controls to protect, detect, and respond to threats targeting AI systems and associated data and services.
21. Competition Commission of India (CCI) begins market study to assess AI’s impacts on competition, efficiency, and innovation
- The CCI announced plans on 22 April to commission a comprehensive market study to analyze the impact of artificial intelligence (AI) on competition and local market dynamics.
- The study will investigate AI systems, market structures, and the roles of stakeholders within the AI value chain to identify emerging competition issues, opportunities, risks, and regulatory frameworks governing AI systems in India and globally.
- The Indian government has allocated significant funds for the IndiaAI Mission as part of its efforts to stimulate the country’s AI ecosystem.
- Prime Minister Narendra Modi has previously emphasized AI's role in achieving his administration’s economic objectives, including making India a $5 trillion economy by 2027-2028.
Holistic AI Policy Updates
We recently hosted our monthly Policy Hour webinar with Rashad Abelson, Technology Sector Lead at the OECD Centre for Responsible Business Conduct, to discuss Managing Risks with AI Governance. Give it a watch here.
Our next policy hour on Thursday 16 May 2024 at 9am PST/ 12pm EST/ 5pm BST will be on US AI Policy, where we will cover:
- Significant federal developments, including the NIST AI RMF, AI Executive Order, American Privacy Rights Act (APRA), and AI Disclosure Act.
- Already enacted state-level AI Laws and executive orders.
- The interplay of horizontal and vertical legislation at both federal and state levels.
In Brussels at the start of June?
Join us at 6pm on Monday 3rd June for our Holistic AI Policy Connect event where we will be joined by experts from the EU Commission, EU Parliament, and IAPP for our panel 'EU AI Act: Hear from the Experts'.
Want to dive in?
Check out our blog for deeper insights on key AI developments around the world from our policy team.
Authored by Holistic AI’s Policy Team.