A Beginner’s Guide to Keeping Up to Date with AI Regulations
Plus 10 sources to help you navigate the world’s latest AI trends and developments.
Amid a technological revolution that will shape the future of humanity, discussions about regulations for artificial intelligence are increasing globally. While the opportunities AI offers are transformational, they also come with great risks.
Governments are racing to establish governance frameworks and place limitations on the development and use of AI systems. They do so to protect fundamental rights and support innovation, but also to position themselves as leaders in AI governance. AI, however, is a fast-evolving technology, and its rapid development creates challenges for regulators. As individual countries propose their own AI regulatory models, we are moving away from the ideal of global governance, a concept that, it is argued, would better reflect the global nature of AI.
Given the speed at which the AI landscape is changing, keeping track of developments and what they mean can be daunting. The differing governance proposals and their terminology can be confusing, and it is hard to know where to look for up-to-date news. This article briefly outlines the current AI governance situation in different countries and lists 10 sources useful for staying on top of these trends and developments.
European Union and AI
The EU’s Artificial Intelligence Act, more commonly referred to as the AI Act, was the first comprehensive AI legal framework to be proposed. Two years on, and after intensive rounds of amendments, the AI Act is entering its closing legislative stages and is expected to pass by the end of the year, with a two-year transition period to follow. While it is beyond the scope of this article to decode the AI Act, we set out its fundamental principles below.
The AI Act follows a risk-based approach, distinguishing between different risk categories: unacceptable, high, limited and minimal.
These categories have been established based on the level of risk an AI system poses to health, safety, and fundamental rights. For example, unacceptable-risk AI systems, such as those that exploit people’s vulnerabilities (based on race, socioeconomic background, disability, or age, for instance) or have the potential to manipulate people, will be strictly prohibited. The prohibition extends to systems for real-time biometric identification or facial recognition in public spaces. Minimal risk refers to AI applications that are already widely used today, like spam filters, AI-enabled video games, and inventory-management systems.
Higher-risk AI applications call for careful regulation, and stringent monitoring and disclosure requirements are imposed on systems in that category. High-risk systems must be registered in an EU public database and must comply with a comprehensive set of risk management, data governance, and cybersecurity standards. Other requirements include documentation and traceability, transparency, human oversight, and accuracy. By contrast, limited- and minimal-risk systems face only transparency requirements: users must know that they are interacting with an AI system, for example when talking to a chatbot or when cookies are in use.
The AI Act imposes the harshest penalties for non-compliance of any EU regulation. Failure to comply with the relevant provisions may result in fines as high as €40 million or 7% of a company’s global annual turnover.
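To put those figures in perspective, here is a minimal sketch in Python of how the maximum fine could be computed. It assumes, as under comparable EU regimes such as the GDPR, that the higher of the two amounts applies; the Act’s final text is the authoritative source for the actual rule.

```python
# Minimal sketch of the AI Act's maximum penalty calculation.
# Assumption: the applicable cap is the HIGHER of the fixed amount and the
# turnover-based amount (as under the GDPR); the article does not specify
# which of the two governs.

FIXED_CAP_EUR = 40_000_000  # €40 million
TURNOVER_SHARE = 0.07       # 7% of global annual turnover

def max_fine(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given global annual turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# Example: a company with €2 billion in global annual turnover could face
# a fine of up to €140 million, since 7% of turnover exceeds €40 million.
print(f"€{max_fine(2_000_000_000):,.0f}")  # €140,000,000
```

Under that assumption, the turnover-based figure only bites for companies whose global annual turnover exceeds roughly €571 million; below that threshold, the €40 million amount is the larger of the two.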
The EU is well on its way towards adopting the world’s first AI legislation, and while other countries are proposing their own regulations, it seems inevitable that the AI Act will impact AI governance internationally. To complement the AI Act, a voluntary code of conduct and a list of guiding principles in line with the Act are being developed by the G7. The code and principles aim to promote the safe, ethical, and transparent development and use of AI globally. This reminds us that while countries are diverging on their AI regulatory paths, it is acknowledged that some form of global governance or framework will be necessary to reflect the global nature of AI.
United Kingdom and AI
Until now, the UK has leaned towards a more self-regulatory model focusing on AI safety. A White Paper released in March of this year outlined the UK Government’s pro-innovation approach to AI regulation. The White Paper sets out five principles to guide AI development and use. Essentially, its objective is to strike a balance between fostering innovation and upholding public confidence in the development of trustworthy AI.
At this stage, rather than forming a statutory regulatory framework, these principles, which cover safety, transparency, fairness, accountability, and redress, are intended to be interpreted and implemented in their respective domains by individual sector-specific regulators such as the ICO and the MHRA.
The UK Government had made AI proposals pre-dating the White Paper, and over the last year the House of Commons Science, Innovation and Technology Committee has been conducting an inquiry to evaluate different approaches to AI governance. In an Interim Report published in August 2023, the Committee acknowledged the White Paper as an initial effort but made several recommendations to improve it. Significantly, the Committee expressed concern that the current approach risks the UK falling behind both the pace of AI development and its global counterparts, who are adopting formal regulations. It recommended that a tightly focused AI Bill would help establish the UK’s position as an AI governance leader.
Additionally, the UK hosted the first global AI Safety Summit in early November 2023. The Summit brought together international governments, AI companies, scholars, and civil society groups. Participants considered the existential threats that some policymakers say AI poses, and how these potential risks can be mitigated through international coordination and cooperation. Significantly, the Bletchley Declaration was signed by the 28 participating countries and is being heralded as a first-of-its-kind global agreement. While the Declaration did not set out any specific policy goals, it is considered a positive start to the development of AI that is safe, trustworthy, and human-centric. Future Safety Summits are already lined up to take place in South Korea and France over the next year, demonstrating the participants’ commitment to global transparency and accountability.
At the Summit, the UK also announced the establishment of an AI Safety Institute, intended to act as a global hub for researching the capabilities and risks of fast-moving AI developments. The Institute will partner with research bodies, including the Alan Turing Institute, and collaborate with AI companies and other nations to test and evaluate new AI technologies and ensure their safety before they are released to the public.
United States and AI
It is not expected that the United States will pass broad federal AI legislation any time soon. That said, the US has been actively exploring different options for AI governance. Considerations have included a blueprint for an AI Bill of Rights and numerous policies, such as the AI Risk Management Framework, the SAFE Innovation Framework, and voluntary codes of conduct. It was unclear which federal-level approach would be the most appropriate.
However, at the end of October 2023, the US Government released an Executive Order on ‘Safe, Secure and Trustworthy Artificial Intelligence’. The Order aims to balance unfettered development and innovation with strict oversight.
The Order does not have the permanence of legislation, meaning it could be reversed at any time by a future president. Across eight guiding principles, the Order builds on the previously considered regulations and the EU AI Act. Under the Order, developers of powerful AI systems will be required to share their safety test results with the US Government. Given its broad scope, the Order is likely to influence organisations across all sectors. It is meant to accommodate the fears and desires of many constituencies, from the tech experts wanting to develop AI systems to the civil rights advocates concerned about AI bias. While not self-executing, the Order provides a long-awaited roadmap for AI regulation in the US.
The slow pace of federal lawmaking has resulted in growing interest in AI governance at the state level. New York has recently released an AI Action Plan to ensure that agencies are better equipped to advance their AI efforts. Essentially, if established, it would assess the quality and function of AI systems before their deployment and would give citizens a place to raise complaints about AI and automated decision-making systems used by public agencies (functioning much like an ombudsman).
California has recently proposed multiple initiatives, including an AI Bill under which high-risk AI systems that use a certain amount of computing power will be subject to transparency requirements, with legal consequences for non-compliance. Beyond this, more states are looking to establish their own AI regulatory frameworks, and the result will be a patchwork of AI regulations that could cause trade frictions with countries that have more concrete AI legislation in place.
The US has also recently announced plans to establish an AI Safety Institute. It will function in a similar manner to the UK version and has the potential to broaden the scope of research into AI safety internationally. However, with both countries vying to be global AI leaders, the extent of their mutual collaboration remains to be seen.
***
10 Useful Sources to Keep Up to Date on AI Trends and Developments: