AI Policy this week #038. G20 Leaders’ declaration warns of AI risks; the US promotes talks on safety.

A quick summary of news, reports and events discussing the present and future of AI and the governance framework around its development.

1. News:

US gathers allies to talk AI safety. Hosted by the Biden administration, officials from a number of U.S. allies — among them Australia, Canada, Japan, Kenya, Singapore, the United Kingdom and the 27-nation European Union — began meeting Wednesday in San Francisco, the California city that is a commercial hub for AI development. Their agenda addresses topics such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse. It’s the first such meeting since world leaders agreed at an AI summit in South Korea in May to build a network of publicly backed safety institutes to advance research and testing of the technology.

NIST sets up new task force on AI and national security. The U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce, which brings together partners from across the U.S. Government to identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology. This announcement comes as the United States is set to host the first-ever convening of the International Network of AI Safety Institutes in San Francisco. The Taskforce will enable coordinated research and testing of advanced AI models across critical national security and public safety domains, such as radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, conventional military capabilities, and more.

US congressional commission proposes Manhattan Project-style AI funding plan. A US congressional commission proposed a “Manhattan Project-like” initiative to fund artificial intelligence (AI) development, part of a larger push to stay ahead of China’s technological advancements. The report, issued by the bipartisan U.S.-China Economic and Security Review Commission (USCC), recommended that Congress grant the executive branch “broad multiyear contracting authority” for leading AI, cloud, and data center companies. Congress was advised to “establish and fund a Manhattan Project-like program dedicated to racing and acquiring an Artificial General Intelligence (AGI) capability,” the report’s executive summary stated.

Fed’s Bowman Says Regulators Shouldn’t Rush to Contain AI. Speaking in Washington, Federal Reserve Governor Michelle Bowman warned that jumping into strict rules could backfire. “We need not rush to regulate,” she said. Her primary concern is that over-regulation could drive innovation out of the banking sector entirely, leaving valuable tools like AI on the sidelines. According to Bowman, AI has considerable potential in finance: it can make systems more efficient, crack down on fraud, and widen access to credit. The technology could also help central bankers by improving the reliability of economic data.

Rwanda’s new proposed media policy requires Gen-AI labeling. A stand-alone defamation law, as well as legislation regulating artificial intelligence (AI)-generated content, are among the expected outcomes if the proposed media policy is approved. Formulated by consultants in 2023, the policy received preliminary approval from a sector cabinet earlier this year. One provision requires digital content creators and distributors to label AI-generated content; a standard digital stamp label will be developed for this purpose. Daniel Sabiti, a senior journalist for KT Press, noted that the policy addresses crucial emerging issues such as AI and social media, which the old policy did not cover even though such technologies already have a big impact on the media today.

Indonesia Aligns AI Regulations with Global Standards. The Communication and Digital Affairs Ministry is reviewing several global standards related to artificial intelligence (AI) to support the development of regulations for the technology in Indonesia. "Indonesia is trying to adopt all the regulations that are developing (to gauge) which one is most appropriate according to the Indonesian context," Deputy Minister of Communication and Digital Affairs Nezar Patria remarked on the sidelines of the World Public Relations Forum (WPRF) in Bali on Thursday (Nov 21). He noted that the AI-specific regulations implemented in the US are applied vertically, in a manner quite similar to those implemented in China.

CIFAR leader expects CAISI to help inform AI policy in Canada and abroad. Elissa Strome, executive director of the Pan-Canadian AI Strategy at the Canadian Institute for Advanced Research (CIFAR), said she sees room for the Canadian Artificial Intelligence Safety Institute (CAISI) to inform not just knowledge of AI, but how to use and regulate it, both domestically and abroad. Lawyer Carole Piovesan, who specializes in AI, credited the federal government for drawing on Canada’s existing strengths and infrastructure and for focusing on understanding and mitigating some of the bigger risks associated with AI through CAISI. Launched last week by Canada’s Liberal government, CAISI has been tasked with studying some of the risks associated with “advanced or nefarious” AI systems and how to mitigate them, in collaboration with other countries around the world. The federal government committed CAD $50 million over five years to CAISI in Budget 2024 as part of a larger $2.4-billion AI package containing funding for AI computing and startups. It has also allocated $27 million to CIFAR, which already leads the Pan-Canadian AI Strategy, to administer CAISI’s research stream.


2. Reports, Briefs and Opinion Pieces:

“Insuring Emerging Risks from AI”, by researchers of the Oxford Martin AI Governance Initiative. “This report examines the implications of recent progress in artificial intelligence (AI) for liability regimes and insurance markets within the United States. We argue that the insurance industry faces both a potential decline in traditional markets like auto insurance and emerging growth opportunities in AI agent and cybersecurity coverage. (...) Key recommendations include implementing strict liability regimes for a subset of AI harms, mandating insurance coverage for certain AI applications, and expanding punitive damages to address catastrophic, uninsurable risks”.

“Semiconductors, AI, and the Gulf: Policy Considerations for the United States” by the Washington Institute for Near East Policy. Researchers Elizabeth Dent and Grant Rumley “describe how the current debate focuses on semiconductors, which are essential components for advanced computing and AI. They proceed to analyze how U.S. policymakers can navigate a range of options from permissive to restrictive when considering the export of semiconductors to the Middle East, especially Gulf countries”.

“India’s Advance on AI Regulation”, by Amlan Mohanty and Shatakratu Sahu of Carnegie India’s Technology and Society Program. “This paper provides a comprehensive analysis of AI regulation in India by examining perspectives across government, industry, and civil society stakeholders. It evaluates the current regulatory state and proposes a policy roadmap forward”.


3. Events:

G20 Leaders' Summit (Nov 18-19, Rio de Janeiro, Brazil). The final declaration of the G20 Summit of Heads of State, unveiled on November 19 in Rio de Janeiro, acknowledges the threats associated with artificial intelligence (AI) in the information space, calls for the regulation of the technology, and promotes measures to ensure transparency, accountability, human oversight, and the protection of copyright.

The ADIA Lab Symposium (Nov 19-21, Abu Dhabi, UAE). “How much, if at all, can we trust AI?” was one of the main topics debated by computer science experts. “It's not a fault of the technology, it's a fault of how people use it,” said Alex Pentland, director of MIT's Human Dynamics Laboratory and ADIA Lab advisory board member.

Thanks for reading, please share any comments and see you next week.

