Is A Global AI Regulatory Framework Possible?
Global AI Regulatory Picture 2025

Artificial Intelligence (AI) is transforming industries, economies, and societies at an unprecedented pace. However, the absence of a unified global AI regulatory framework has led to fragmented approaches across major jurisdictions. The EU, China, the US, and the UK each follow distinct strategies in AI governance, creating compliance challenges for businesses operating across borders and exposing risks of regulatory arbitrage.

This article explores how these leading economies regulate AI, highlights key gaps, and outlines what a potential global AI regulatory framework could include.

AI Regulation Across Four Key Jurisdictions

The following table compares AI laws and policies in the EU, China, the US, and the UK, highlighting areas where alignment is lacking:

Table: Global AI Regulation Comparison 2025

This comparison highlights the diverse approaches to AI regulation across these jurisdictions, reflecting varying priorities and stages of legislative development. I have included excellent sources on global AI legislation below.

Notes:

  • Australia: As of April 2024, Australia does not have a comprehensive AI law. The government has released voluntary AI Ethics Principles (2019) and is considering a risk-based approach to regulate high-risk AI applications.
  • Canada: The proposed Artificial Intelligence and Data Act (AIDA) is part of Bill C-27, aiming to regulate AI at the federal level. Provincial legislatures have yet to introduce specific AI laws.

Regulatory Gaps and Challenges

No Unified Global AI Governance Body

  • Despite the global nature of AI, no international regulatory body oversees AI development, deployment, and risk management.
  • The EU AI Act sets strict rules for high-risk AI but is not enforceable globally.
  • China, the US, and the UK follow different governance models, causing inconsistencies.

Potential solution: Establishing a Global AI Oversight Council under G7, UN, or OECD leadership to set baseline AI regulatory principles.

Fragmented Technical Standards & Interoperability Issues

  • The EU (CEN/CENELEC), UK (BSI), US (NIST), and China (TC260) develop separate AI safety and risk standards, increasing compliance burdens.

Solution: Develop ISO-led global AI technical standards to ensure interoperability and alignment across markets.

No Universal AI Risk & Ethics Framework

  • The EU defines high-risk AI categories and China regulates public-facing generative AI, but the UK and US lack a uniform risk classification.
  • Ethical concerns, such as algorithmic bias, workplace surveillance, and AI decision-making transparency, vary by region.

Solution: A Global AI Risk Framework, similar to climate change impact assessments, could standardise risk classification.

Weak Cross-Border AI Accountability Mechanisms

  • AI systems operate across borders, yet there is no international enforcement mechanism to hold companies accountable.

Solution: A cross-jurisdictional AI audit & compliance mechanism would enhance oversight.

No Standardised Public AI Registration

  • The EU and China require registration for certain AI models, but the US and UK do not.

Solution: A global AI transparency database for high-risk AI systems could increase public accountability.

Lack of AI Workforce Readiness & Literacy Standards

  • Only the EU mandates corporate AI literacy, while China, the US, and the UK lack explicit requirements.
  • AI’s impact on jobs and skills demands global AI workforce training initiatives.

Solution: A UN or G20-led AI skills development framework could align education and workforce strategies.

The Path Towards a Global AI Regulatory Framework

Regulation is struggling to keep pace with AI’s rapid evolution, and while recent international efforts mark progress, they also highlight key challenges.

Global AI Governance is Gaining Momentum

The UN’s call for a global AI governance body signals recognition of AI’s cross-border risks, but enforcing uniform standards globally remains difficult. Countries have differing priorities: some focus on innovation, others on security and ethics.

The UK’s International AI Treaty

The UK’s new AI treaty, the first of its kind, underscores growing international consensus on responsible AI. However, it primarily focuses on research collaboration and safety rather than enforceable regulation.

US Push for AI Risk Assessments

The US is taking a more domestic approach, requiring risk assessments for AI in critical areas like national security and employment. However, without global alignment, these regulations may create compliance headaches for multinational companies.

AI’s Evolution Outpaces Policy-Making

AI models are improving at an exponential rate, while policy is inherently slow due to political, legal, and economic constraints. By the time regulations are drafted and implemented, the AI landscape has already shifted.

The Future: Adaptive Regulation?

The best path forward may be adaptive regulation frameworks that evolve alongside AI, rather than rigid rules that quickly become outdated. Sandboxing, real-time auditing, and AI-driven regulatory monitoring could help keep pace.

To address these gaps, a global AI governance framework should include:

  • A UN-backed Global AI Council to coordinate international AI policies.
  • Baseline AI safety and transparency standards, harmonised across jurisdictions.
  • A unified AI risk classification system, ensuring cross-border regulatory consistency.
  • A global AI ethics & accountability charter, preventing unethical AI applications.
  • A multilateral AI trade agreement, ensuring fair and ethical AI development.
  • AI workforce upskilling and literacy initiatives, preparing for AI-driven job transformations.

Regulation is catching up but remains reactive rather than proactive. The real test will be whether governments and international bodies can develop dynamic, enforceable, and globally harmonised regulations that balance innovation and safety.

Conclusion: A Call for Global AI Alignment

AI is borderless, but AI regulation is not. Without a global AI governance model, disparities in AI laws will create compliance complexity, stifle innovation, and introduce security risks. To build a future-proof framework, governments, industry leaders, and international organisations must collaborate, ensuring that not only large corporations and academic institutions but also small businesses, AI developers, and market regulators have a voice in shaping AI governance.

Regulation Expertise

Raymond Sun has spent two years developing his excellent Global AI Regulation Tracker here

OECD.AI National AI Policies & Strategies: this website provides a live repository of over 1,000 AI policy initiatives from 69 countries, territories, and the EU here

Oliver Patel, AIGP, CIPP/E, has shared a wealth of information about AI legislation; his post about the recent AI summit in France is here
