Promoting innovation, mitigating risk: AI policy around the world
International Institute of Communications (IIC)
As the EU announces the world’s first AI Act and generative AI dominates the headlines, Russell Seekins reviews how policy is taking shape among leading economies.
Many jurisdictions, most notably the EU, have been progressively considering the policy implications of AI development over recent years. Most favoured a ‘wait and see’ approach alongside a statement of principles designed to guide developers and reassure the public.
It was the release of ChatGPT in November 2022 that galvanised the debate. Its power and accessibility caught the public imagination and adoption spiralled, reaching 100 million users within two months of launch.1 Discussions about AI regulation broadly, and AI safety in particular, came into sharper focus in light of growing public concern (see figure 1).
Categorising risk
Dragoș Tudorache is the European parliament’s co-rapporteur for the EU’s AI Act. In his opening remarks during a session at the IIC annual conference, he framed the EU’s approach to AI governance in terms of the ‘protection of society and citizens’. The aim, he said, was to categorise risk and allocate mitigation without hindering innovation (read the full article here).
The United States
The Biden administration has a stated aim of leading the development of AI policy, framed in terms of the United States’ strategic competition with China. The government has already limited China’s access to AI chips and the policy objective is to maintain technological leadership (read the full article here).
The United Kingdom
The UK government has a stated ‘pro-innovation’ approach to AI regulation. The framework is based on the identification of risks and ethical challenges, comparable to the EU’s. However, the focus is on AI impacts rather than the underlying technology. For example, a chatbot offering advice on fashion choices presents a very different level of risk from one involved in dispensing medicines (read the full article here).
China
China is broadly recognised as having two main goals for its AI policy. First, it is concerned with stability, especially political and social stability. This means that concerns for national security, public opinion and ‘social mobilisation’ take priority over individual rights. Second, it wants to be the major AI power in competition with the United States, so enabling trade and innovation is critical (read the full article here).
Asia
APEC has said that it is seeking to collaborate as much as possible. In November 2023 its business advisory council stated that international cooperation was necessary to avoid the adoption of conflicting approaches to governance, and that interoperability and consensus were preferable to creating a ‘noodle bowl of regulation and data policies’. The priority, it said, should be open trade and the free flow of data. It suggested that it was in the interest of APEC countries to participate in and contribute to efforts to create international frameworks (read the full article here).