AI Policy this week #029. AI Convention open for signatures; G20 agreement to set AI guidelines; open probe into Google.

A quick summary of news, reports and events discussing the present and future of AI and the governance framework around its development.

1. News

Council of Europe opens first-ever global treaty on AI for signature; the EU, US, UK, Israel and others sign. The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225) was opened for signature during a conference of Council of Europe Ministers of Justice in Vilnius. It is the first international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law. The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and the United Kingdom, as well as by Israel, the United States of America and the European Union. The treaty will enter into force on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it. Countries from all over the world will be eligible to join it and commit to complying with its provisions.

G20 nations agree to join efforts to fight disinformation and set AI guidelines. Group of 20 ministers agreed to join efforts to fight disinformation and to set an agenda on artificial intelligence, as their governments struggle against the speed, scale and reach of misinformation and hate speech. The ministers, who gathered in Maceió, the capital of the northeastern Brazilian state of Alagoas, emphasized in a statement the need for digital platforms to be transparent and to act “in line with relevant policies and applicable legal frameworks.” G20 representatives also agreed to establish guidelines for developing artificial intelligence, calling for “ethical, transparent, and accountable use of AI,” with human oversight and compliance with privacy and human rights laws. “We hope this will be referenced in the leaders’ declaration and that South Africa will continue the work,” said Renata Mielli, adviser to Brazil’s ministry of science, technology and innovation. The G20 Leaders’ Summit is scheduled for November in Rio de Janeiro.

China proposes new regulation on labelling AI-generated content. The Cyberspace Administration of China (CAC) has released a draft regulation that aims to standardise the labelling of AI-generated synthetic content in order to protect national security and public interests, Xinhua reported. Titled “Measures for Identifying AI-generated Synthetic Content,” the draft regulation is open for public feedback until Oct 14, 2024. Under the draft, internet information service providers must adhere to mandatory national standards when labelling such content. Providers offering functions such as downloading, copying or exporting AI-generated materials must ensure that explicit labels are embedded in the files. Platforms that distribute content are also required to regulate the spread of AI-generated materials.
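
To make the requirement of “explicit labels embedded in the files” concrete, here is a minimal illustrative sketch in Python using the Pillow library to write a provenance label into a PNG's metadata. The field names (AIGC, AIGC-Provider) are invented for this sketch; the draft defers the actual label format to China's mandatory national standards.

```python
# Illustrative only: embed a hypothetical "AI-generated" label in PNG metadata.
# The key/value names below are invented; the real label format would be set
# by the mandatory national standards the draft regulation refers to.
from PIL import Image, PngImagePlugin

def label_ai_generated_png(src_path: str, dst_path: str, provider: str) -> None:
    """Copy a PNG, embedding an explicit AI-generation label in its metadata."""
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("AIGC", "true")             # hypothetical flag: content is AI-generated
    meta.add_text("AIGC-Provider", provider)  # hypothetical provenance field
    img.save(dst_path, pnginfo=meta)

if __name__ == "__main__":
    label_ai_generated_png("output.png", "output_labeled.png", "ExampleAI")
    # The embedded label survives the round trip:
    print(Image.open("output_labeled.png").text)
    # -> {'AIGC': 'true', 'AIGC-Provider': 'ExampleAI'}
```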

UNESCO Launches Open Consultation on New Guidelines for AI Use in Judicial Systems. UNESCO, in collaboration with international experts, has developed draft Guidelines for AI Use in Courts and Tribunals. The guidelines, shaped by the UNESCO Recommendation on the Ethics of AI, aim to ensure AI technologies are integrated into judicial systems in a way that upholds justice, human rights and the rule of law. The consultation is open for input until September 25th.

Nine AI Bills Pass US House Science, Space and Technology Committee. The bills encompass a range of initiatives, including increased support for AI research and development and the promotion of AI education and workforce training programs:

  • H.R. 9197, the Small Business Artificial Intelligence Advancement Act – Favorably reported to the House by voice vote as amended. This bill would require the Director of the National Institute of Standards and Technology to develop resources to help small businesses utilize artificial intelligence.
  • H.R. 9194, the Nucleic Acid Screening for Biosecurity Act – Favorably reported to the House by voice vote. This bill would amend the Research and Development, Competition, and Innovation Act to support nucleic acid screening.
  • H.R. 9211, the LIFT AI Act – Favorably reported to the House by voice vote as amended. This bill is intended to improve educational efforts related to artificial intelligence literacy at the K through 12 level.
  • H.R. 9215, the Workforce for AI Trust Act – Favorably reported to the House by voice vote. This bill is intended to facilitate a workforce of trained experts to build trustworthy AI systems.
  • H.R. 9402, the NSF AI Education Act of 2024 – Favorably reported to the House by voice vote as amended. This bill would support National Science Foundation education and professional development relating to artificial intelligence.
  • H.R. 9403, the Expanding AI Voices Act – Favorably reported to the House by voice vote as amended. This bill would support a broad and diverse interdisciplinary research community for the advancement of AI and AI-powered innovation through partnerships and capacity building at certain institutions of higher education and other institutions to expand AI capacity in populations historically underrepresented in STEM.
  • H.R. 5077, the CREATE AI Act – Favorably reported to the House by voice vote as amended. This bill would establish the National Artificial Intelligence Research Resource (NAIRR).
  • H.R. 9497, the AI Advancement and Reliability Act – Favorably reported to the House by voice vote as amended. This bill would amend the National Artificial Intelligence Initiative Act of 2020 to establish a center on artificial intelligence to ensure continued United States leadership in research, development, and evaluation of the robustness, resilience, and safety of artificial intelligence systems.
  • H.R. 9466, the AI Development Practices Act – Favorably reported to the House by voice vote. This bill would direct the National Institute of Standards and Technology to catalog and evaluate emerging practices and norms for communicating certain characteristics of artificial intelligence systems, including relating to transparency, robustness, resilience, security, safety, and usability.

US targets advanced AI and cloud firms with new reporting proposal. The U.S. Commerce Department said it is proposing detailed reporting requirements for advanced artificial intelligence developers and cloud computing providers, to ensure the technologies are safe and can withstand cyberattacks. The proposal from the department's Bureau of Industry and Security would mandate reporting to the federal government about development activities of "frontier" AI models and computing clusters. It would also require reporting on cybersecurity measures, as well as outcomes from so-called red-teaming efforts, such as testing for dangerous capabilities, including the ability to assist in cyberattacks or to lower the barrier for non-experts to develop chemical, biological, radiological or nuclear weapons.
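
For a sense of what such reporting could cover, below is a purely hypothetical sketch of a report record reflecting the categories named in the proposal (development activity, computing clusters, cybersecurity measures, red-team outcomes). Every field name and value here is invented for illustration; the proposed rule, not this sketch, defines the actual reporting format.

```python
# Purely hypothetical: a record mirroring the reporting categories named in
# the BIS proposal. All field names and values are invented for illustration.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class FrontierModelReport:
    developer: str
    model_name: str
    training_compute_flop: float                # scale of the training run
    cluster_location: str                       # computing-cluster reporting
    cybersecurity_measures: list[str] = field(default_factory=list)
    red_team_findings: list[str] = field(default_factory=list)

report = FrontierModelReport(
    developer="ExampleAI Corp",                 # hypothetical developer
    model_name="example-frontier-1",
    training_compute_flop=1e26,
    cluster_location="US-East",
    cybersecurity_measures=["model weights encrypted at rest",
                            "two-person access control"],
    red_team_findings=["no meaningful uplift on CBRN-related tasks"],
)
print(json.dumps(asdict(report), indent=2))
```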

Ireland’s DPA opens probe into Google's AI compliance. Ireland’s Data Protection Commission said it has opened an inquiry into Google’s Pathways Language Model 2, also known as PaLM2. It’s part of wider efforts, including by other national watchdogs across the 27-nation bloc, to scrutinize how AI systems handle personal data. Google’s European headquarters are based in Dublin, so the Irish watchdog acts as the company’s lead regulator for the bloc’s privacy rulebook, known as the General Data Protection Regulation, or GDPR. The commission said it wants to know if Google has assessed whether PaLM2’s data processing would likely result in a “high risk to the rights and freedoms of individuals” in the EU.

Meta to push on with a plan to use UK Facebook and Instagram posts to train AI. Meta said it had “engaged positively” with the Information Commissioner’s Office (ICO) over the plan, after it paused similar proposals in June in the UK and EU. The ICO said it will monitor the experiment after Meta agreed to change its approach, including making it easier for users to opt out of allowing their posts to be processed for AI. Meta confirmed that, for UK users of Facebook and Instagram, it will resume plans to use publicly shared posts to train AI models. It will not use private messages or any content from users under 18. In a statement, Meta said: “This means that our generative AI models will reflect British culture, history and idiom, and that UK companies and institutions will be able to utilise the latest technology.”

Documentary producers release new ethical AI guidelines for film-makers. The Archival Producers Alliance (APA), a volunteer group of more than 300 documentary producers and researchers formed in response to concerns over the use of generative AI in nonfiction film, developed the guidelines over the course of a year, after publishing an open letter in the Hollywood Reporter demanding more guardrails for the industry. The guidelines, announced at the Camden Film Festival, are not intended to dismiss the possibilities of a technology that is already shaping all forms of visual storytelling, but “to reaffirm the journalistic values that the documentary community has long held”.


2. Reports, Briefs and Opinion Pieces:

“AI and the future of journalism: an issue brief for stakeholders” by Anya Schiffrin for UNESCO. “Developments in Artificial Intelligence (AI) and Generative AI are changing constantly. Governments, educators and the public struggle to keep up. Designing policies that will not get out of date seems almost impossible. Generative AI could transform (or even destroy) journalism as we know it, so the journalism community has been fully focused on many aspects of this phenomenon”.

“Safe and responsible AI in Australia. Proposals paper for introducing mandatory guardrails for AI in high-risk settings”, by the Australian Department of Industry, Science and Resources. “The guardrails in this paper set clear expectations from the Australian Government on how to use AI safely and responsibly when developing and deploying AI in Australia in high risk settings”. They aim to:

  • address risks and harms from AI
  • build public trust
  • provide businesses with greater regulatory certainty.

The proposal is now open to public consultation, closing at 5pm AEST on Friday 4 October 2024.


3. Events:

Dubai AI & Web3 Festival 2024 (Sep 11-12, Madinat Jumeirah, Dubai). At the festival, the Dubai Electronic Security Center (DESC) announced the launch of a policy specifically designed to oversee the adoption of AI in the region and to address security-related concerns surrounding AI. The Dubai AI Security Policy aims to bolster confidence in AI solutions and technologies, promote their growth and development, and mitigate electronic security risks.

AI Summit Budapest 2024 (Sep 10-11, Budapest, Hungary). Artificial intelligence plays a "key role" in development policy decisions, Public Administration and Regional Development Minister Tibor Navracsics said at the AI Summit. Navracsics pointed to AI’s rapid advance and said decision-makers needed to address now how to apply it in policy areas. He added that AI could support data-driven policymaking by collecting and compiling data and by improving targeted measures through more precise data management and measurable objectives.

That's all, see you next week!

Ariel Riera

Head of Research and Public Affairs Services at SmC+
