A blueprint for UK data protection and AI regulatory policy

“Sustained economic growth is the only route to improving the prosperity of our country and the living standards of working people. That is why it is Labour’s first mission for government.” These bold words at the core of Labour’s 2024 manifesto leave little doubt about the new UK Government’s top priority: growing the economy. The question is how those statements will translate into action. More specifically, since few decisions will affect the prospects of economic growth more than those shaping the digital economy and technological innovation, the Government’s regulatory policy in these areas will be key. While geopolitical risk, the UK’s own economic constraints and its relationship with the EU are all significant factors, much of the new Government’s ambition and success now depends on its approach to new regulation, including in relation to data protection and AI.

The King’s Speech, which set out the Labour Government’s legislative programme for its first year, did not give much away about the concrete direction of travel on data protection reform and AI-specific regulation. However, the fact that the previous Government’s proposed Data Protection and Digital Information Bill appears to have been abandoned, and that the King made only a subtle reference to the aim of passing appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models, suggests that all is still to play for.

If it is true that data protection reform as previously envisaged is no longer on the cards, is it better for economic growth to leave the UK GDPR and e-privacy regime as it is, or to make changes that could contribute meaningfully to that growth? Clearly, this is a matter of balance: preserving the status quo provides stability – another key word in Labour’s manifesto – but the current law is not as good as it could be. So in a world where marginal gains can make all the difference between success and failure, the wise move will be to make subtle but meaningful tweaks to the law that do not put at risk the EU’s coveted adequacy determination for UK data protection law.

Change can deliver greater certainty and stability, and there is definitely room for change by clarifying the application of the various lawful grounds for processing, the boundaries of data subjects’ rights and the mechanisms to legitimise international transfers. This is also an opportunity to emphasise and articulate the risk-based approach of the GDPR so that the main weight of the obligations, particularly around accountability, processes and resources, falls on those who should truly bear the responsibility. Even the very concept of personal data should be reassessed to ensure that it is nuanced enough to capture the various degrees of identifiability, a question that is particularly relevant in the context of machine learning and AI model training.

Speaking of which, regulating to promote responsible AI is possibly the most effective way of injecting sustainability into digital innovation. AI regulation is an area where the UK Government will need to be bold and careful in equal measure. In a sense, the focus of AI regulation is a simple one: make it safe, make it fair and make it right. In other words, we cannot afford for AI to cause harm, to deliver an even more unequal or divided society, or to fail to do its job properly. Equally, overly prescriptive burdens would be in direct conflict with the economic growth objective. Once again, the risk-based approach should play a key role here – as it does in the EU AI Act – but while the UK can likewise aim to protect safety and rights, it can do so by replacing one-size-fits-all obligations with adaptable principles whose rigour is determined by the potential risks. In such a framework, an AI risk assessment would be the starting point for every provider and deployer, and the appropriate level of accountability would flow from that.

If there is an overarching principle for effective data protection and AI regulatory policy, it is that regulation should not be seen as a zero-sum game. It is not about innovation v. protection, or growth v. responsible practices. The UK needs to be guided by this principle and apply its vision for prosperity to regulate in a progressive, pragmatic and creative way.

This article was first published in Data Protection Leader in July 2024.

Kajol Patel

Partner Alliance Marketing Operations at Data Dynamics

2 months ago

I agree that a well-crafted regulatory framework can foster innovation while safeguarding privacy and ensuring ethical AI development. The UK Government has a unique opportunity to strike the right balance between these competing interests. By focusing on risk-based approaches, clarifying existing regulations, and promoting responsible AI practices, they can create a conducive environment for both economic prosperity and digital innovation.
