Tackling the AI regulatory challenge

ChatGPT has taken the world by storm by simply making artificial intelligence and its awesome power available to all. AI is certainly not new, but its daily presence is more palpable than ever before. To put it differently, if there was ever any doubt, AI is here to stay and possibly to change our lives. But since history has taught us to approach radical technological changes with caution and scepticism, policymakers and regulators around the world are rushing to provide a dose of wariness aimed at ensuring that the development and implementation of AI addresses its own risks. AI regulation is being drafted with a heightened degree of urgency, and a global patchwork of rules and laws governing AI is already making its way onto the statute books. This raises important policy, professional and compliance questions.

From a public policy perspective, the obvious challenge is how to regulate AI in a way that maximises its economic and societal promises and benefits, whilst eliminating the potential for harm, unfairness and inequality. One can debate how detailed or light-touch AI regulation should be, but there seems to be universal consensus that whatever framework is devised to regulate AI should be risk-based and, ideally, future-proof. In practice, this means that for AI regulation to succeed in achieving its objectives, it must be able to adapt to all types of situations and therefore rely on principles rather than prescriptive rules. It is also essential to be aware of the global dimension of AI and to approach this challenge in the most internationally collaborative way. As various legislative initiatives in this space take place around the world, global consistency must become a crucial reference point.

The emerging AI regulation is also creating a professional conundrum. Who will be best equipped to help navigate the strategic and operational challenges presented by the new legal framework, given its novelty and multi-disciplinary nature? The work opportunities for a new generation of AI regulatory specialists are obvious, but who is best placed to take a leading role in this area today? Looking at the issues at stake – fair data collection and usage, automated decision-making with life-changing consequences, risk management responsibilities – it seems clear that this is familiar territory for privacy and data protection professionals. So in the same way that our collective knowledge and judgment in relation to privacy and cybersecurity matters is necessary to reap the benefits of data while addressing the risks of misuse, those skills are likely to be put to the test in the context of AI. It is also not a coincidence that the new European AI regulatory framework is borrowing concepts and obligations from laws like the GDPR, as the methodology for dealing with the potential risks of AI is largely transferable.

Speaking of the emerging EU AI Act, which, in terms of compliance obligations, is at least as wide-ranging and ambitious as the GDPR, the time to pay attention to what is coming and what to do about it is now. Any organisation involved in the development or potential use of AI technology today would be wise to familiarise itself with the diverse but complementary requirements that form part of this developing framework. At the very least, organisations should be seeking to undertake an AI regulation impact assessment to determine the extent to which their systems are likely to be subject to the law and, if so, decide how best to prepare for it. As different AI laws make their appearance in different jurisdictions, devising and implementing a global AI regulation compliance programme covering issues such as data governance, transparency documentation and human oversight strategies will resemble a search for the holy grail.

AI may be a difficult issue to pin down – partly because of its underlying technological complexity, partly because its development is taking place in front of our eyes at breathtaking speed, and partly because the implications of its widespread adoption will be crucial for the future of humanity – but what is clear is that it is attracting huge regulatory attention on a global scale. That is not necessarily a bad thing, but for AI regulation to achieve its goals, we must be prepared to move fast, be creative and think globally whilst being as pragmatic as possible.

This article was first published in Data Protection Leader in January 2023.

Sakthi Thangavelu

Independent Consultant | AI Governance | GDPR, DPDPA Compliance | ISO 42001 AIMS Lead auditor | Data Privacy & AI GRC professional community Leader

1y

Thanks for sharing! With AI regulatory obligations coming in, the existing GRC teams in orgs need some upskilling (and tooling) too, as the issues go beyond security and privacy.
