AI regulation in the UK: a promise of progress that falls short on accountability
The UK government’s recently published whitepaper, “A pro-innovation approach to AI regulation”, aims to establish a framework for the responsible use of artificial intelligence technologies across the country. It makes several promises to build up AI governance: it outlines key principles for the responsible use of AI and sets out a range of measures the government will take to ensure AI benefits society as a whole. But the question remains: is it enough?
One of the key challenges of regulating AI is balancing data privacy with innovation.
AI relies on data to function, and AI companies need access to large amounts of it to train their algorithms. Where does personal data privacy slot in? There are concerns that proper consent and practical safeguards may be non-existent, opening the door to abuses of how and why AI is used, particularly in hiring, education and law enforcement. Bias is also a factor: these systems reflect their training data and can consequently reproduce biases that exist in the real world.
AI regulation needs to take these issues into account and balance ways of tackling them, while still promoting growth and development in the sector.
The whitepaper argues for a pro-innovation approach to AI regulation, one that recognises the potential of AI to drive economic growth, create jobs and improve our quality of life. At the same time, it stresses that AI must be developed and used in a responsible and ethical manner.
It proposes a proportionate approach to regulation that focuses on the use of AI rather than the technology itself, with five principles to guide the responsible development and use of AI:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Notably, the UK government will avoid “heavy-handed legislation”, which it argues may stifle innovation. Instead, it opts for a more “adaptable” approach: the five principles will be issued on a non-statutory basis, and existing regulators will take the lead in applying them within their sectors. The government will then evaluate the framework’s effectiveness before considering a statutory duty on regulators.
The government also has key priorities for the next 12 months, including developing a regulatory sandbox, working with regulators on cross-sectoral principles, publishing an AI regulation roadmap, and issuing practical guidance and risk-assessment tools to organisations.
The aim is to build public trust, make innovation easier, and strengthen the UK’s position as a global leader in AI.
While the whitepaper gives some form of roadmap to what regulation may look like in the near future, the UK still lacks concrete policies and active movement in this area. The paper makes no firm commitment to legislation, creates no new targeted regulatory bodies to govern AI (instead tasking existing cross-sector bodies with the job), and leaves AI developers’ use of risk-assessment templates voluntary.
Yes, this whitepaper sets out a plan for future AI regulation in the UK, but it arguably falls short of what is needed - particularly in comparison to other proposals, such as the risk-based approach in the EU AI Act or the standards being developed in the US by NIST (the National Institute of Standards and Technology). AI is developing at a tremendous pace, and a consistent fear is that legislation fails to keep up with the technology - a critique already levelled at the proposed EU AI Act.
There is, of course, a delicate balance to strike in this area to ensure both safety and fairness without stifling innovation; but regulation and innovation do not necessarily have to be in opposition.
The UK is home to brilliant and knowledgeable think tanks, scientists, and universities that prioritise the study of AI and could contribute valuably to structuring a regulatory regime. Regulation sets common standards that enable responsible innovation, protects the interests of the public, and gives businesses a stable basis for planning and investment. By failing to keep up with the pace of development in the space, the UK risks falling behind in the AI race and stifling real, prosperous growth from companies at home.