Key AI Developments in August 2024

Keeping up with developments in AI legislation, regulation, investigations, penalties, legal action, and incidents is no easy task, which is why we have created Tracked, a bite-sized version of our global AI Tracker. Each month, Holistic AI’s policy team provides a roundup of the past month’s key responsible AI developments from around the world to keep you up to date with the ever-evolving landscape.

Create a free account for the Tracker Feed to keep up to date with the latest AI Governance developments.


Europe

1. UK Government to Disclose Its AI and Algorithmic Tools in Public Register

  • On 25 August 2024, the UK government confirmed it will publish the AI and algorithmic tools it uses in a public register.
  • The decision came in response to campaigners pushing for greater transparency over the government’s use of AI systems in the public sector, especially as the use of such tools across government is set to expand.
  • The main campaigner for the disclosure was the access-to-justice charity Public Law Project (PLP), which said it had identified 55 automated decision-making systems used by government departments.
  • The risks of using AI in the public sector have been well-documented. In August 2020, the Home Office agreed to stop using a computer algorithm to help sort visa applications after a legal challenge by the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove. The legal challenge claimed the algorithm perpetuated “entrenched racism and bias.”

2. Social Media Platform X to Pause Training Grok on Europeans’ Public Posts

  • On 8 August 2024, the Irish Data Protection Commission (DPC) announced that the social media platform X (formerly Twitter) agreed to stop using Europeans’ public posts to train its AI chatbot Grok.
  • Two days earlier, the DPC had filed a complaint against the platform at the Irish High Court over concerns that it was using users’ personal data to train its AI model Grok.
  • As a result, X will stop using data gathered between 1 May and 1 August of this year to train the chatbot. European agencies will continue to investigate X over potential violations of the GDPR.
  • X responded in a statement saying that the complaint is “unwarranted” and claimed it has been proactive in working with regulators. The statement noted that the company plans to challenge the complaint.
  • Grok, which was released in November 2023, is only available to X’s premium users at various paid tiers. It was trained on data from X’s public posts without users being notified or asked for their consent.

3. UK Competition and Markets Authority Probes Google’s Ties with Anthropic

  • This month, the UK’s Competition and Markets Authority (CMA), the country’s antitrust regulator, collected comments following an announcement at the end of July that it would investigate the partnership between AI developer Anthropic and Alphabet Inc., Google’s parent company.
  • The announcement came after Google invested in Anthropic over several rounds of funding, which may amount to the creation of a “relevant merger situation” under the UK’s Enterprise Act 2002, one that could be expected to result in a substantial lessening of competition within any market or markets in the UK for goods or services.
  • The investigation is one of many the CMA has announced over the past year targeting large tech companies. In April, it stated that it would investigate Microsoft’s investment in the French start-up Mistral AI; the probe was later dropped after the CMA concluded that the investment did not qualify under current merger regulations due to its size.

4. Dutch Copyright Enforcement Group Removes Dataset Used for AI Training

  • On 13 August 2024, the Dutch-based copyright enforcement group BREIN removed public access to a large language dataset that was being promoted for use in training AI models.
  • The dataset comprised information collected without permission from thousands of books and news sites, as well as Dutch-language subtitles pulled from a range of films and TV shows.
  • BREIN does not know whether or how widely the dataset may have already been used by AI companies.
  • The removal of the dataset comes after the EU AI Act came into force on 1 August 2024. The AI Act requires AI firms to disclose what datasets they have used to train their models.
  • Other copyright protection groups are taking similar actions; last year, the Danish Rights Alliance forced the take-down of a wide-ranging dataset known as “Books3.”

US

5. Telecom Company Lingo Telecom to Pay $1M Fine to Federal Communications Commission for Biden Deepfake

  • On 21 August 2024, the telecom company Lingo Telecom agreed to pay a $1 million fine for its role in the deepfake robocall sent before the New Hampshire Democratic primary.
  • The robocall used an AI-generated impersonation of President Joe Biden’s voice to tell voters not to vote in the January 2024 Democratic primary.
  • Lingo Telecom is a voice service provider and distributed the AI-generated robocalls through spoofed phone numbers.
  • Along with paying the fine, Lingo Telecom agreed to stricter oversight protocols in what federal authorities say is the first enforcement action taken against malicious deepfakes.

6. Federal Trade Commission Bans Fake Online Reviews

  • On 14 August 2024, the Federal Trade Commission (FTC) voted unanimously to ban marketers from creating and using fake reviews, including those generated by AI technology. The rule also prohibits businesses from buying positive or negative reviews.
  • The ban will go into effect in mid-October, 60 days after it is published in the Federal Register.
  • It also precludes marketers from exaggerating their own influence by paying for bots to inflate their follower count.
  • Companies that violate the rule will have to pay a fine for each fake review.
  • The new rule is designed to increase deterrence and strengthen FTC enforcement actions after the Supreme Court ruled in AMG Capital Management LLC v. FTC in 2021 that the FTC does not have the authority to seek equitable monetary relief in federal court under the FTC Act.

7. Representative J. French Hill Introduces House Bill on AI Regulatory Sandboxes

  • On 6 August 2024, House Representative J. French Hill (R-Arkansas-2) introduced HR 9309 “To provide for regulatory sandboxes that permit certain persons to experiment with artificial intelligence without expectation of enforcement actions.”
  • The bill was referred to the House Committee on Financial Services.
  • Rep. Hill has previously introduced the “Unleashing AI Innovation in Financial Services Act” in the same committee, which is designed to promote AI innovation in the financial services industry.
  • Regulatory sandboxes, where authorities engage firms to test innovative products or services that challenge existing legal frameworks, have been an increasingly popular mechanism for regulators looking to balance innovation and safety when it comes to AI.

8. Senate Bill Introduced to Bar Use of Adversarial AI by Federal Government

  • On 5 August 2024, Senators Marco Rubio (R-FL), Rick Scott (R-FL), and John Barrasso (R-WY) introduced the “AI Acquisitions Act,” which would prohibit the federal government and the private companies it contracts with from procuring or using “adversarial AI.”
  • Adversarial AI includes AI services and tools developed in China, Russia, North Korea, Iran, Syria, Venezuela, Cuba, and other countries of concern to US interests.
  • The Act would direct the Under Secretary of Commerce for Standards and Technology to collaborate with the Federal Acquisition Security Council to develop a list of such AI services and tools from those countries.
  • It would also give contractors doing business with the US government two years to discontinue using AI products and services on the list.
  • The bill is designed to prevent potential national security threats that may arise from using technology built in certain countries.

9. The Federal Aviation Administration Releases Roadmap for AI Safety Assurance

  • On 21 August 2024, the Federal Aviation Administration (FAA) released the first version of its “Roadmap for AI Safety Assurance,” which outlines the air safety regulator’s approach to safely integrating emerging AI technology in aviation.
  • The roadmap also makes recommendations for how the FAA can use AI to make the aviation industry safer and the core principles that will guide AI safety assurance methods.
  • Among the key actions are collaborating with government and industry, educating and training the FAA’s workforce on AI technology, and conducting ongoing research to evaluate the effectiveness of its safety assurance methods.
  • It also lists key expected milestones, such as fully autonomous commercial aircraft entering the market around 2050.
  • The FAA was obligated to develop the document under President Biden’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of AI, which was signed in October 2023.
  • The FAA consulted on the issue with its European counterparts, who published similar documents in 2020 and 2023.

10. Leaked Documents Reveal Nvidia Scraping ‘A Human Lifetime’ Worth of Videos Daily for AI Training

  • On 6 August 2024, the investigative media outlet 404 Media accused Nvidia of scraping millions of videos from various online sources to train its AI systems, including those for 3D world generation, self-driving cars, and digital avatars.
  • Nvidia is accused of scraping the equivalent of 426,320 hours of video per day—essentially a human lifetime's worth—to train its proprietary AI models.
  • Internal communications and anonymous sources reveal that Nvidia employees were instructed to download videos, raising ethical and legal concerns. The company’s executives have defended the practice, claiming compliance with copyright law and internal clearance for the data use.
  • The report highlights a broader trend of AI companies scraping online content amid ongoing legal and regulatory uncertainties surrounding AI training practices.

11. Illinois Enacts AI Legislation Addressing Digital Replicas

  • On 13 August 2024, Illinois Governor JB Pritzker signed into law HB 4875, which amends the existing Illinois Right of Publicity Act to prohibit the unauthorized use of AI-generated digital replicas.
  • The bill defines “digital replica” as a “newly created, electronic representation of the voice, image, or likeness of an actual individual created using a computer, algorithm, software, tool, AI, or other technology.” Unauthorized use generally refers to use without consent.
  • This prohibition applies to both commercial and non-commercial applications and also holds individuals or entities liable if they materially contribute to, induce, or facilitate a violation of the law by another party, knowing that the other party is in violation.
  • Organizations that have obtained consent from workers for the use of their name and likeness may need to review the language of those agreements to ensure compliance with the new law.
  • The law will go into effect on 1 January 2025.

12. AI Start-Ups Suno and Udio Push Back Against Record Label Lawsuits

  • On 1 August 2024, AI start-ups Suno and Udio pushed back against claims by various music publishers that their music-generating AI systems infringe copyright.
  • The start-ups claim that the use of copyrighted sound recordings to train their systems qualifies as fair use under US copyright law, and said the lawsuits were attempts to stifle independent competition.
  • Both start-ups develop AI systems that create music in response to user text prompts.
  • The music publishers Universal Music Group (UMG), Warner Music Group (WMG), and Sony Music sued the start-ups in June in federal court, alleging that they copied hundreds of songs from major artists to teach their systems to create music that will “directly compete with, cheapen, and ultimately drown out” human artists.
  • The legal dispute is the first over music generated by AI. Previous lawsuits on AI-generated content have primarily focused on text and have been brought by authors and news outlets, among others.

13. Illinois Human Rights Protection Expanded to Include AI Use in Hiring

  • On 9 August 2024, Illinois Governor JB Pritzker signed HB 3773 into law, which amends the Illinois Human Rights Act to include specific regulations on the use of AI in employment decisions.
  • The law specifically requires employers to be transparent with employees and job applicants about the use of AI in employment decisions, such as hiring, promotion, or termination.
  • HB 3773 also prohibits employers from using AI that “has the effect” of subjecting employees to discrimination based on a protected class with respect to, for example, recruitment, hiring, and promotions.
  • The expansion reflects a broader trend within state-level regulations that aim to protect consumers, users, and employees from certain risks related to AI.

14. California Passes Bill AB 2905 on the Use of Artificial Voices in Automatic Dialing-Announcing Devices

  • On 19 August 2024, the California state legislature passed AB 2905, which seeks to amend Section 2874 of the Public Utilities Code to require automatic dialing-announcing devices to inform recipients if the prerecorded message uses an artificial voice.
  • Currently, these devices must provide an unrecorded, natural voice announcement explaining the call's nature and the caller's identity before operating.
  • The bill would add a requirement for the announcement to inform recipients if the prerecorded message uses an artificial voice, including voices generated using artificial intelligence. Because violating this requirement would constitute a crime, the bill would impose a state-mandated local program.
  • The bill aims to provide transparency to individuals regarding the use of AI-generated voices in telecommunications applications.
  • Implementing the bill may prove difficult, as informing recipients of the use of artificial voices could be technically challenging, and businesses or organizations may also push back against the requirement.

15. US AI Safety Institute Signs Agreement with OpenAI and Anthropic for Formal Collaboration on Testing and Evaluation

  • On 29 August 2024, the US AI Safety Institute – which is housed in the Department of Commerce’s National Institute of Standards and Technology (NIST) – announced it had reached formal collaboration agreements with OpenAI and Anthropic.
  • Each agreement establishes the framework for the US AI Safety Institute to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.
  • The institute, along with the UK AI Safety Institute, will also provide feedback to both companies on potential safety improvements to their models.

Global

16. Hong Kong Monetary Authority Releases Consumer Protection Guidance for Generative AI

  • On 19 August 2024, the Hong Kong Monetary Authority (HKMA) released a set of guiding principles on the use of generative artificial intelligence.
  • The guidance stipulates that authorized institutions must ensure that their board and senior management are accountable for all GenAI-driven decisions and processes, with a clear definition of the scope for customer-facing applications and the establishment of responsible usage policies.
  • In addition, proper validation of GenAI models, especially during initial deployment, should include a "human-in-the-loop" approach to maintain fairness and prevent bias. Institutions must also ensure transparency with customers about GenAI usage, provide options for human intervention, and implement robust data privacy and protection measures in line with regulatory standards.
  • The guidance follows a range of different rules and documents issued by the HKMA this year, including a Generative AI Sandbox and data protection guidelines.

17. African Union Approves AI Strategy

  • On 9 August 2024, the Executive Council of the African Union (AU) approved the “Continental AI Strategy,” which promotes AI adoption in the public and private sectors among member states.
  • The strategy, which aims to harness AI for the continent’s development and the well-being of its people, outlines several key recommendations and focus areas.
  • The five focus areas are maximizing AI benefits, building capabilities for AI, minimizing AI risks, fostering African public and private sector investment in AI, and strengthening regional and international cooperation and partnerships.
  • The recommendations include establishing an integrated hardware and software environment designed for AI and machine learning workloads to facilitate data processing and deployment.
  • The strategy outlines a five-year implementation period from 2025 to 2030, during which the AU will first prioritize steps such as the establishment of governance frameworks, the development of AI strategies, and resource management. The second phase will focus on the practical implementation of critical projects and initiatives.
  • Only six African countries – Algeria, Benin, Egypt, Mauritius, Rwanda, and Senegal – have developed national AI strategies. Kenya, South Africa, and Uganda are taking a sectoral approach and integrating AI with other frameworks.

18. Australian Government Releases AI Policy for Responsible Government Use

  • On 16 August 2024, the Australian government’s Digital Transformation Agency introduced the ‘Policy for Responsible Use of AI in Government’, which aims to create a unified strategy for the government’s use of AI.
  • The policy comes into effect on 1 September 2024.
  • The three overall aims of the policy are embracing the benefits of AI; strengthening public trust through transparency, governance, and risk assurance; and adapting to AI over time.
  • Its recommended actions include staff training on AI fundamentals, the creation of publicly available documents on compliance with the policy, measures to monitor the effectiveness of deployed AI systems, and efforts to protect the public against negative impacts.
  • The principles laid out in the policy are protecting Australians from harm, ensuring that AI risk mitigation is proportionate and targeted, and ensuring that AI use is ethical, responsible, transparent, and explainable to the public.

19. Latin American Countries Adopt Expansive AI Declaration

  • On 10 August 2024, 17 Latin American countries adopted a Ministerial Declaration on AI that includes commitments to cooperate on developing and using AI in ethical, safe, inclusive, efficient, and dynamic ways, and to harness its potential to spur economic growth.
  • The declaration concluded the multi-day Ministerial Summit on AI in Cartagena, Colombia. Signatories include Argentina, Brazil, Chile, Colombia, Costa Rica, Cuba, Curaçao, the Dominican Republic, Ecuador, Guatemala, Guyana, Honduras, Panama, Paraguay, Peru, Suriname, and Uruguay.
  • A particular focus of the declaration was the integration of the Recommendation on the Ethics of Artificial Intelligence adopted by UNESCO in November 2021, which has also been promoted at the AI for Good Summit hosted by the International Telecommunication Union (ITU).
  • The declaration marks another step towards multilateral AI governance regimes, with other regional bodies continuing to adopt voluntary commitments this year.

20. Australia Criminalizes Non-Consensual Sharing of Sexually Explicit Deepfakes

  • On 21 August 2024, the Australian parliament passed the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, which adds new offenses to the Criminal Code Act 1995.
  • The amendment imposes criminal penalties on individuals who share sexually explicit material without consent, including material that is created using AI or other technology.
  • Offenses under the bill carry criminal penalties of up to six years’ imprisonment for sharing non-consensual sexually explicit deepfake material, with a further year of imprisonment if the offender also created the material.
  • Australia’s Attorney General Mark Dreyfus stated that deepfake sexually explicit material “overwhelmingly” affects women and girls and that the government considers it a form of abuse.
  • The amendment follows a similar law in England and Wales, where the Ministry of Justice announced in April of this year that the creation of sexually explicit deepfake images will be made a criminal offence.


Holistic AI policy updates

We are excited to share that Holistic AI’s Safeguard is now available on the Azure Marketplace!

Safeguard is our solution for governing, auditing, and monitoring Large Language Models (LLMs) to ensure their safety and security.

Find out more about Safeguard here.


Authored by Holistic AI’s Policy Team.

