Promoting innovation, mitigating risk: AI policy around the world

As the EU announces the world’s first AI Act and generative AI dominates the headlines, Russell Seekins reviews how policy is taking shape among leading economies.


Many jurisdictions, most notably the EU, have been considering the policy implications of AI development for several years. Most had favoured a ‘wait and see’ approach, alongside a statement of principles designed to guide developers and reassure the public.

It was the release of ChatGPT in November 2022 that galvanised the debate. Its power and accessibility caught the public imagination and adoption spiralled, reaching 100 million users within two months of launch.1 Discussions about AI regulation broadly, and AI safety in particular, came into sharper focus in light of growing public concern (see figure 1).


Figure 1. Feelings towards AI in the US. Source: Pew Research Center

Categorising risk

Dragoș Tudorache is the European Parliament’s co-rapporteur for the EU’s AI Act. In his opening remarks during a session at the IIC annual conference, he framed the EU’s approach to AI governance in terms of the ‘protection of society and citizens’. The aim, he said, was to categorise risk and allocate mitigation without hindering innovation (read the full article here).


The United States

The Biden administration has a stated aim of leading the development of AI policy, framed in terms of the United States’ strategic competition with China. The government has already limited China’s access to AI chips and the policy objective is to maintain technological leadership (read the full article here).


The United Kingdom

The UK government has a stated ‘pro-innovation’ approach to AI regulation. The framework is based on the identification of risks and ethical challenges, comparable to the EU’s. However, the focus is on the impacts of AI rather than on the technology itself. For example, a chatbot offering advice on fashion choices presents a very different level of risk from one involved in dispensing medicines (read the full article here).


China

China is broadly recognised as having two main goals for its AI policy. First, it is concerned with stability, especially political and social stability. This means that concerns for national security, public opinion and ‘social mobilisation’ take priority over individual rights. Second, it wants to be the major AI power in competition with the United States, so enabling trade and innovation is critical (read the full article here).


Asia

APEC has said that it is seeking to collaborate as much as possible. In November 2023 its business advisory council stated that international cooperation was necessary to avoid the adoption of conflicting approaches to governance, and that interoperability and consensus were preferable to creating a ‘noodle bowl of regulation and data policies’. The priority, it said, should be open trade and the free flow of data. It suggested that it was in the interest of APEC countries to participate in and contribute to efforts to create international frameworks (read the full article here).


The International Institute of Communications is a member organisation which exists to inform and thereby shape the global policy agenda for the ICT and digital ecosystem. View our membership options here and our current members here.

Keep up with the latest IIC news and event updates by following us on Instagram and Twitter. Or sign up to our mailings and monthly newsletter here.
