Key AI Developments in July 2024

Keeping up with developments in AI legislation, regulation, investigations, penalties, legal action, and incidents is no easy task, which is why we have created Tracked, a bite-size version of our global AI Tracker. Each month, Holistic AI’s policy team provides you with a roundup of the key responsible AI developments from the past month around the world to keep you up to date with the ever-evolving landscape.


Europe

1. EU AI Act published in the Official Journal of the EU

  • On 12 July 2024, after a three-year process, the EU AI Act was published in the Official Journal of the EU.
  • This marked the start of the 20-day countdown until the law enters into force on 1 August 2024, after which the gradual implementation of the act will begin.
  • Provisions on prohibited systems will be enforced after six months – in February 2025 – and the Act will become generally applicable in August 2026.
  • Throughout this period, additional provisions will also come into effect, including for General Purpose AI; a quick sketch of the key dates follows.
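
For readers who want to sanity-check the milestones above, here is a minimal Python sketch, assuming month-level date arithmetic from the 1 August 2024 entry-into-force date cited in this roundup (the Act itself pins the exact days):

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

# Dates as reported above; illustrative only, not legal advice.
entry_into_force = date(2024, 8, 1)  # 20 days after the 12 July 2024 publication

prohibitions_apply = entry_into_force + relativedelta(months=6)
general_application = entry_into_force + relativedelta(months=24)

print(prohibitions_apply)   # 2025-02-01 -> February 2025
print(general_application)  # 2026-08-01 -> August 2026
```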



2. European Commission sends X preliminary findings over breach of the Digital Services Act

  • On 12 July, the European Commission sent its preliminary findings to X (formerly Twitter) as part of formal proceedings it launched in December 2023 under the Digital Services Act (DSA).
  • The Commission’s preliminary findings indicate that X has breached the DSA by misleading users with its "verified accounts," lacking transparency in advertising, and restricting researchers' access to public data.
  • X now has the opportunity to examine the findings and respond, after which the Commission will decide whether to adopt a non-compliance decision in the proceedings.

  • If the investigation ultimately finds X in breach of the DSA, X could face fines of up to 6% of its annual worldwide turnover and be required to implement measures to address these breaches, with enhanced supervision and potential penalty payments to ensure compliance (the scale of such a fine is sketched below).
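
To put that ceiling in perspective, here is a one-line, hedged calculation; the turnover figure is invented for illustration and is not X’s actual revenue:

```python
def max_dsa_fine(annual_worldwide_turnover: float) -> float:
    """Ceiling on a DSA fine: 6% of annual worldwide turnover."""
    return 0.06 * annual_worldwide_turnover

# Hypothetical annual worldwide turnover of $3 billion (illustrative only):
print(f"${max_dsa_fine(3_000_000_000):,.0f}")  # $180,000,000
```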


3. UK Court of Appeal rules that artificial neural networks are unpatentable as computer programs without a technical contribution

  • On 19 July, the UK Court of Appeal ruled that Emotional Perception AI Ltd’s artificial neural network (ANN) system falls under the definition of a computer program (whether implemented in hardware or software) and should be treated as such.
  • The ANN system recommends media files (such as music) based on emotional responses rather than the traditional genre classifications used by common search algorithms.
  • The Court found that the system’s subjective and cognitive effects (which result in better music recommendations) were not patentable since, under UK patent law (the Patents Act 1977), programs for computers are excluded from patentability unless they provide a technical contribution.
  • As a result of the ruling, the UK Intellectual Property Office (IPO) has updated its guidance on patent applications for ANNs.


4. European Commission investigates Meta under DMA

  • On 1 July 2024, the European Commission sent Meta its preliminary findings from an investigation into its “Pay or Consent” model under the Digital Markets Act (DMA).
  • The model forces users to choose between paying for an ad-free experience or consenting to personalized ads without offering an equivalent service that uses less personal data.
  • Since users are forced into a binary choice that does not offer a less personalised but equivalent version of Meta’s social networks, the Commission’s preliminary conclusion was that Meta is in breach of the DMA.
  • If confirmed, the Commission could impose fines of up to 10% of Meta’s global turnover, along with additional remedies for non-compliance with DMA requirements.


5. Irish Data Protection Commission (DPC) looking into training of X AI chatbot Grok

  • X is also receiving attention from the Irish Data Protection Commission over the use of user posts to train chatbot Grok.
  • Available to public users, Grok was originally trained on publicly available sources, but the company now intends to incorporate user interactions and posts to improve the model.
  • While users of the web-based app can opt out of their data being used, they are enrolled automatically, and their data may be shared with X’s partner company xAI unless they opt out.
  • The DPC has reached out to X and is awaiting further engagement.

US

6. NIST publishes three finalized documents on AI and draft guidance

  • Alongside the relaunch of its Open Source Platform for AI Safety Testing, on 26 July, the National Institute of Standards and Technology (NIST) announced the publication of three finalized documents on AI that were released for public comment in April, as well as draft guidance from the AI Safety Institute:
  • The AI RMF Generative AI Profile (NIST AI 600-1) – a cross-sectoral profile and companion resource for the AI Risk Management Framework (AI RMF 1.0) focusing on Generative AI (GAI), to help organizations manage AI risks in line with President Biden’s EO 14110. It adapts the AI RMF functions to the specific needs and risks associated with GAI, providing guidance on managing these risks throughout the AI lifecycle and across various sectors.
  • Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST SP 800-218A) – updates the Secure Software Development Framework (SSDF) Version 1.1 with specific practices for securing generative AI and dual-use foundation models, following Executive Order 14110. It is aimed at AI model producers, AI system producers, and AI system acquirers, offering detailed guidance for integrating security throughout the AI development lifecycle.
  • A Plan for Global Engagement on AI Standards (NIST AI 100-5) – a federal strategy mandated by President Biden’s Executive Order on AI that outlines strategies for global engagement in developing and implementing AI standards, focusing on scientific rigor, diverse stakeholder input, transparency, and international cooperation. It aims to create AI standards that are scientifically grounded, globally accessible, and adaptable to various sectors, ensuring they are informed by diverse global perspectives and address societal needs.
  • Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1, Initial Public Draft) – a guidance document for managing the misuse risks of dual-use foundation models, focusing on preventing harm related to their use in developing weapons, cyber attacks, deception, and illicit content, in line with the National AI Initiative Act and Executive Order 14110. It addresses both technical and social aspects of misuse risks, outlining best practices for identifying, measuring, managing, and governing these risks throughout the AI lifecycle, and emphasizes the role of various actors in these efforts.


7. Federal VET AI Act introduced

  • In July 2024, Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV) introduced the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act, which would direct NIST to develop voluntary guidelines and specifications for the third-party evaluation and verification of AI systems.

8. Manchin, Murkowski Introduce Bipartisan Legislation to Advance Department of Energy AI Research for Science, Security, and Technology

  • On 10 July 2024, Senators Joe Manchin (D-WV) and Lisa Murkowski (R-AK) announced the introduction of the “Department of Energy (DOE) AI Act”, aiming to enhance U.S. leadership in AI by utilizing the existing infrastructure and workforce at the DOE’s National Laboratories.
  • The legislation seeks to authorize the Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) initiative, creating AI research clusters and developing a comprehensive R&D program focusing on AI applications in science, energy, and national security.
  • The bill also establishes an AI risk evaluation and mitigation program to ensure the safety and security of AI technologies.
  • It directs the DOE to study the growth of computing data centers and to improve the federal permitting process using AI, and instructs FERC to expedite the interconnection queue process with advanced computing technologies.
  • The DOE supervises 17 National Laboratories and 35 user facilities while employing a workforce of over 70,000 scientists, engineers, and researchers.


9. United States Patent and Trademark Office (USPTO) releases updated guidance on AI patents

  • As part of Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, the USPTO published guidance this month on patent subject matter eligibility in the context of AI.
  • The guidance clarifies patent eligibility requirements: claims must fall within the statutory categories of processes, machines, manufactures, or compositions of matter, and must not be directed to abstract ideas, laws of nature, or natural phenomena.
  • It also provides examination guidance, with new examples to help examiners apply these criteria consistently to AI inventions.
  • The publication comes as the White House issued a 270-day update on the executive order and announced that Apple had signed onto the voluntary commitments towards safe, secure, and transparent AI.


10. Senators Introduce Legislation to Combat AI Deepfakes and Protect Content Creators

  • On 11 July 2024, Senators Maria Cantwell (D-WA), Marsha Blackburn (R-TN), and Martin Heinrich (D-NM) introduced the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act).
  • This bipartisan bill aims to increase transparency and combat the rise of harmful AI-generated content, particularly deepfakes, by establishing new federal guidelines for marking, authenticating, and detecting such content.
  • The COPIED Act has garnered support from various industry groups including the News/Media Alliance, National Newspaper Association, Rebuild Local News, NAB, SAG-AFTRA, Nashville Songwriters, Recording Academy, RIAA, Music Publishers, Artists, and Performers.
  • Senator Cantwell emphasized that the COPIED Act would provide transparency around AI-generated content and empower creators like journalists, artists, and musicians to control their content through a provenance and watermark process.
  • Meanwhile, Senator Blackburn highlighted the threat of deepfakes to individuals, especially in the creative community, and the necessity of the COPIED Act to defend against counterfeit content.


11. Court grants in part and denies in part First Amended Complaint in Mobley v. Workday lawsuit

  • The latest action in the ongoing Workday lawsuit over alleged racial, age, and disability discrimination saw Judge Rita Lin of the Northern District of California issue a mixed ruling on 12 July.
  • Workday’s motion to dismiss the federal claims under Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act of 1967, and the ADA Amendments Act of 2008 was denied.
  • The court concluded that the First Amended Complaint plausibly alleges Workday's liability as an agent in the hiring process, regardless of whether the agent is an automated system or a live human.
  • However, the court granted Workday's motion to dismiss the intentional discrimination claims based on race and age under Title VII, the ADEA, the ADA, and Section 1981 without leave to amend, but granted Mobley leave to amend his Fair Employment and Housing Act claim within 21 days.


12. New Hampshire act requiring disclosure of deceptive AI usage in political advertising passed

  • On 23 July, the New Hampshire state legislature passed HB1596, “Requiring Disclosure of Deceptive AI usage in political advertising”.
  • The law aims to address the proliferation of synthetic media and deceptive deep fakes, defined as digitally manipulated images, audio, or video that create false impressions of reality.
  • It prohibits the distribution of such deceptive deep fakes of candidates on the ballot within 90 days of an election unless they are accompanied by a clear disclosure stating that the content has been manipulated by AI.
  • This disclosure must be easily readable or clearly audible in the synthetic media, depending on its format.
  • There are exemptions for media used in bona fide news broadcasts or for satire and parody.
  • Additionally, candidates depicted in deceptive deep fakes may seek injunctive relief or damages.


13. FTC issues orders to eight companies offering surveillance pricing to understand the AI market

  • On 23 July, the FTC announced that it had issued orders to Mastercard, Revionics, Bloomreach, JPMorgan Chase, Task Software, PROS, Accenture, and McKinsey & Co., all of which offer surveillance pricing products and services that use consumers’ characteristics and behavior.
  • The orders aim to improve understanding of the opaque market of third parties that use advanced algorithms or AI with consumer data – including location, demographics, credit history, and browsing or shopping history – to categorize individuals and set personalized prices (a toy illustration follows the list below).
  • Specifically, the FTC is seeking to understand the potential impact of AI-driven surveillance pricing on privacy, competition, and consumer protection by requesting information on:

  1. The types of products and services being offered.
  2. Data collection and inputs.
  3. Customer and sales information.
  4. Impact on consumers and prices.
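
To make the mechanism under scrutiny concrete, below is a deliberately simplified, hypothetical sketch of algorithm-driven personalized pricing; every attribute and multiplier is invented for illustration and does not describe any of the companies named above:

```python
# Toy model of personalized ("surveillance") pricing.
# All attributes, segments, and multipliers are hypothetical.
BASE_PRICE = 100.00

def personalized_price(profile: dict) -> float:
    multiplier = 1.0
    if profile.get("recently_browsed_item"):
        multiplier += 0.10  # inferred urgency from browsing history
    if profile.get("affluent_location"):
        multiplier += 0.05  # location-based segmentation
    if profile.get("price_sensitive_history"):
        multiplier -= 0.15  # discount to retain bargain hunters
    return round(BASE_PRICE * multiplier, 2)

print(personalized_price({"recently_browsed_item": True, "affluent_location": True}))  # 115.0
print(personalized_price({"price_sensitive_history": True}))                           # 85.0
```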


14. Ready Player Me sued over AI-driven personalized avatar creation platform

  • On 16 July, Michael Crawley filed a lawsuit in an Illinois court against Ready Player Me, a personalized avatar platform that uses AI to create avatars from user-uploaded photos.
  • The platform uses AI to scan the depicted person’s facial geometry to create an avatar with their characteristics.
  • The lawsuit (No. 1:24-cv-05995) alleges that this violates Illinois’ Biometric Information Privacy Act (BIPA), as users’ personal data is collected and stored without written consent and neither the Terms of Service nor the Privacy Policy mentions the word “biometrics”.
  • The lawsuit is seeking damages of $5,000 per intentional or reckless violation or $1,000 per negligent violation, as illustrated below.
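
As a rough illustration of how BIPA’s statutory damages scale with violation counts, here is a minimal sketch; the violation counts are hypothetical, not figures from the complaint:

```python
def bipa_exposure(intentional_or_reckless: int, negligent: int) -> int:
    """Statutory damages as sought: $5,000 per intentional or reckless
    violation and $1,000 per negligent violation."""
    return 5_000 * intentional_or_reckless + 1_000 * negligent

# Hypothetical violation counts, purely illustrative:
print(f"${bipa_exposure(intentional_or_reckless=10, negligent=40):,}")  # $90,000
```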

Global

15. International joint statement on generative AI foundation models

  • On 23 July 2024, competition authorities from the UK, US, and EU published a joint statement on generative AI foundation models and AI products.
  • Although they operate in different jurisdictional contexts and have different legal powers, in their statement, the Competition Commissioner of the European Commission, UK Competition and Markets Authority, US Department of Justice, and US Federal Trade Commission outline three key principles to support competition and innovation:

  1. Fair dealing to avoid exclusionary tactics.
  2. Interoperability to support greater competition and innovation.
  3. Choice among a variety of products and business models.

  • The statement also lists a number of risks of generative AI, including firms restricting key inputs for AI foundation model development, digital market firms with existing market power extending that power to AI markets, a lack of choice for content creators, and partnerships that can undermine competition or steer market outcomes.

16. NATO releases updated AI strategy

  • NATO released an updated version of its 2021 AI strategy to account for recent advances in AI technologies, such as generative AI and AI-enabled information tools.
  • The strategy lists a number of the Alliance’s AI-related priorities, including advancing the implementation of NATO’s Principles of Responsible Use; increasing interoperability between AI systems across the Alliance; combining AI with other emerging and disruptive technologies; and expanding NATO’s AI ecosystem through closer cooperation with Allied industry and academia.
  • In a first, NATO also identifies ‘issues of concern’ for the Alliance and democracy, including AI-enabled disinformation, information operations, and gender-based violence.


17. Mandatory AI and Automated Decision Risk Reviews to Land in the Australian State of Queensland

  • Queensland state government projects will soon undergo mandatory internal assessments and external reviews to evaluate and mitigate risks associated with AI and automated decision-making (ADM), as part of a new framework being finalized by the Queensland Government Customer and Digital Group (QGCDG).
  • The AI governance policy and supporting AI risk assessment framework, developed by QGCDG, aim to ensure responsible use of AI across government projects, potentially incorporating the ISO/IEC 42001 AI management system standard and other industry standards.
  • Current AI projects, like the Department of Agriculture and Fisheries' drone-based weed detection and the assistive chatbot QChat, are examples of AI applications already subjected to scrutiny under existing assurance processes.
  • The new policy will outline the required level of assurance checks for AI and ADM projects, enhancing oversight and risk management across Queensland’s public sector initiatives.


18. Brazil’s Data Protection Watchdog Halts Meta’s Use of User Data for AI Training

  • On 2 July, Brazil's National Data Protection Authority (ANPD) announced its order for Meta to immediately cease using data from its platforms, such as Facebook and Instagram, for AI training, citing risks to users' rights.
  • The ANPD's concerns include Meta's insufficient legal basis for data processing, lack of transparency regarding privacy policy changes, excessive limitations on user rights, and inadequate safeguards for minors.
  • The privacy policy update by Meta, allowing the use of public posts, photos, and captions from its platforms for AI development, raised alarms among privacy advocates and regulators.
  • Meta faces daily fines of 50,000 reais ($8,820) if it does not comply within five days, while it claims its data use complies with Brazilian privacy laws and deems the decision a setback for innovation.
  • This action underscores the growing tension between AI advancement and privacy protection, with Brazil's stance seen as a significant win for user data control amidst global regulatory scrutiny of tech giants.


19. Tech Industry Releases Comprehensive Guide for Governing High-Risk AI Systems and Frontier AI Models

  • The global tech trade association ITI unveiled on 12 July a set of practices aimed at the safe and secure development and deployment of AI technology, in an effort to foster consumer trust.
  • ITI’s AI Accountability Framework delineates responsibilities across the AI ecosystem, outlining steps for AI developers, deployers, and integrators to manage high-risk AI uses, including frontier AI models.
  • The framework introduces the concept of auditability, promoting transparency by maintaining documentation of risk assessments.
  • ITI's Vice President of Policy, Courtney Lang, emphasized the framework's role in building consumer trust and serving as a guide for policymakers crafting AI governance approaches.
  • The framework outlines seven key practices for the AI ecosystem:

  1. Conducting continuous risk and impact assessments throughout the AI development lifecycle.
  2. Testing frontier models for flaws and vulnerabilities before release.
  3. Documenting and sharing information about AI systems within the value chain.
  4. Implementing explanation and disclosure practices for end-users.
  5. Using high-quality training data to mitigate biased outputs.
  6. Ensuring AI systems are secure-by-design.
  7. Appointing AI Risk Officers and training personnel interacting with AI systems.


20. Türkiye publishes National AI Strategy Action Plan for 2024-2025

  • On 24 July, Türkiye published its 2024-2025 National AI Action Plan, updating its previous 2021-2025 Action Plan in light of recent AI developments and the evolving needs of the country.
  • The Action Plan aims to develop advanced artificial intelligence technologies and create large Turkish language models and value-added products and services.
  • It also aims to enhance the R&D, innovation, and entrepreneurship ecosystem, improve access to high-performance computing infrastructures and data, and transform the workforce while increasing the number of expert human resources.
  • A support program will be implemented to encourage SMEs to use AI products and solutions resulting from domestic R&D efforts, and guidance will be issued to clarify the intellectual property rights of content created by AI and support the patenting of AI products.
  • An Impact Analysis Framework for AI Values and Principles will also be created, and a "Trustworthy Artificial Intelligence Stamp" will be established in line with a certification mechanism for the auditing and legal compliance of AI applications.


Holistic AI policy updates

The National Telecommunications and Information Administration published its Dual-Use Foundation Models with Widely Available Model Weights Report on 30 July following a request for public comments in February. Holistic AI is proud to have our comments cited multiple times in the report!


Last month, we launched the Holistic AI Tracker Feed and Expert Community – your go-to for updates on AI regulation, legislation, legal action, penalties and fines, and standards around the world. Bringing together expertise across domains such as policy, law, business psychology, computer science, and more, this month we published the first contributions from our expert community.

Create a free account here to keep up to date with the latest AI Governance developments.


Authored by Holistic AI’s Policy Team.


