Change is the only constant: How to avoid static regulation in the age of AI and other emerging technologies


by Wojtek Buczynski


Key Points:

- For regulation to be future-proof, it needs to be flexible, anticipatory, timely, proportionate, non-prescriptive, principles-based and, increasingly, outcomes-based.

- Soft laws can play a role, as they are usually regarded as de facto binding by the financial services industry.

- Both delegated acts and soft laws can “retrofit” existing regulations in the context of AI.


Abstract:

In this article we consider different approaches to the regulation of AI to ensure it remains future-proof and relevant in the age of continuous technological change.


Intro

It is April 2021. Boris Johnson is the UK Prime Minister, Covid vaccinations are being rolled out to the general population, and the most recent annual inflation figure released by the Office for National Statistics is 1%. OpenAI is a promising and generously funded (USD 1bn from Microsoft alone) AI startup, and the European Union releases the first draft of its eagerly anticipated AI regulation, called simply the AI Act. Its ambitious scope includes a risk-based hierarchy of AI use cases (from prohibited, through high-risk and limited-risk, to little-to-no risk), detailed governance provisions for the high-risk ones, and very strict penalties for non-compliance[1].

Fast forward to November 2023. The Prime Minister is Rishi Sunak, the high-profile summit on AI safety he organised and led has just wrapped up at the legendary Bletchley Park location, and inflation stands at 6.7%[2] (down from the 41-year high of 11.1% back in October 2022). Generative AI is on everyone’s lips and OpenAI’s ChatGPT is a household name, having reached 100 million active users in two months. The EU AI Act draft has been substantially expanded, and its maximum proposed penalties for non-compliance have increased even further[3]. The discussions around its newly introduced provisions on regulating generative AI are reaching fever pitch, at one point putting the entire act at risk[4],[5].


The different regulatory approaches

This article is not an in-depth analysis of the EU AI Act, its many amendments, or the many (heated) discussions that have accompanied it. The EU AI Act is merely an illustration of the extreme challenge of regulating a technology that is evolving at an unprecedented pace. The nearly three years it has spent in the EU legislative process to date is not a remarkably long time by EU standards, and the expectation is that it will be enacted in early 2024. Paradoxically, it is the very fact that it has not been enacted yet that has allowed the lawmakers to keep it current and relevant. However, at some point in the near future the EU AI Act will be finalised and then enacted. The Act will become “solidified” while the technology it regulates will likely continue to develop apace. The EU will be empowered to adopt delegated acts to amend the AI Act (e.g., edit the list of high-risk use cases), but delegated acts cannot be used to modify the essential elements of EU laws. It is highly likely that in the coming months or years (yet) another completely new and disruptive type of AI technology will emerge; one that nobody except researchers and entrepreneurs working at the very cutting edge of AI can even anticipate today. What – if anything – can lawmakers, regulators and businesses do when that inevitably happens?

Let’s address the elephant in the room first: if regulation is effectively futile, is there any point in having it? Why not just leave AI to the proverbial invisible hand of the market? Having learned from the global financial crisis of 2008 and the ongoing privacy and misinformation issues of many Big Tech platforms (chief among them social media), I think we can all agree that self-regulation just does not work[6].

If we all agree that some form of regulation is required after all, how can it be made future-proof? Technology neutrality is one of the tenets of modern financial regulation, and it may seem like an elegant way of addressing the future “unknown unknowns” of AI – but how can technology be regulated in a technology-neutral way? Some regulators (chief among them the EU) seem to have given this a lot of thought. One successful approach is to regulate broad and flexible concepts (e.g., “algorithms”) as opposed to specific technologies. Arguably, “algorithm” is the key word that has kept MIFID II relevant for nearly a decade now: because it can be interpreted quite openly and broadly, it gives the regulators flexibility to adapt their enforcement to current challenges. On the flipside, too broad a definition of “AI” (a specific technology) caused considerable controversy in the first draft of the EU AI Act.

Another approach (complementary, not alternative – having more than one regulatory approach at any one time is itself a new approach) is regulating the technology from the perspective of use cases and business areas (e.g., algorithmic trading). Risks and considerations in existing use cases and business areas can serve as a starting point for extrapolating the impacts of emerging technologies. Yet another approach (again complementary, not alternative) is to focus on outcomes, where AI and other emerging technologies can be tied to prudential regulations such as the Consumer Duty. The risk-based approach is one more (also complementary) option: it breaks use cases down into a number of tiers and regulates them according to their level of “riskiness” as perceived by the regulator.

Please note that these approaches are not only complementary but potentially overlapping (e.g., the risk-based approach could be considered a hybrid of the use case-based and outcomes-based approaches).
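To make the risk-tiered concept concrete, here is a minimal illustrative sketch in Python of how a four-tier hierarchy might map use cases to obligations. The tier names follow the EU AI Act draft’s hierarchy as described above; the specific use cases and one-line obligation summaries are simplified assumptions for illustration only, not the Act’s official lists.

```python
# Illustrative sketch only: a simplified mapping of hypothetical AI use
# cases to the EU AI Act's four risk tiers. The use cases and obligation
# summaries below are assumptions for illustration, not official lists.

from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "little-to-no risk"


# Hypothetical classification of example use cases (assumed, simplified).
USE_CASE_TIERS = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def obligations(tier: RiskTier) -> str:
    """Return a one-line, simplified summary of obligations per tier."""
    return {
        RiskTier.PROHIBITED: "may not be placed on the market",
        RiskTier.HIGH: "conformity assessment, governance, human oversight",
        RiskTier.LIMITED: "transparency obligations (e.g., disclose AI use)",
        RiskTier.MINIMAL: "no additional obligations",
    }[tier]


for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```

The point of the sketch is the structure, not the content: the regulator maintains the (amendable) use-case list, while the obligations attach to the tier rather than to any specific technology.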

In its recent report[7] The Alan Turing Institute notes the regulators’ ongoing shift from a “wait and see” to a “test and learn” approach. One of the vehicles facilitating this evolution is sandboxes, particularly regulatory sandboxes. The updated draft of the EU AI Act[8] (Art. 53.1d) defines them as a “controlled environment that fosters innovation and facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan agreed between the prospective providers and the establishing authority.” Sandboxes are not a new concept and are a generally well-regarded (if somewhat niche) part of the financial ecosystem, particularly on the FinTech startup side. If utilised properly, they can be a win-win for both the industry and the regulators, whereby both parties are engaged in the regulatory dialogue from day one, with a lesser risk of unwelcome surprises after deployment to the market. In this mode of collaboration the regulators actively encourage innovation (which is also important from reputational and optics perspectives) and stay abreast of the latest developments in AI, thus reducing the risk of encountering an unforeseen, challenging use case live on the market.

There is one tool in every regulator’s toolbox so common that we may not fully appreciate it anymore: soft laws. If it turns out that the proverbial unknown unknowns of the AI of the future are so disruptive that existing regulations do not capture them, then regulators can – and at short notice – issue non-binding guidance or best practices. While non-binding de iure, soft laws are usually regarded as de facto binding by the financial services industry and treated effectively on par with hard laws[9] (perhaps with some flexibility around implementation timelines).

One aspect of technology regulation that sometimes feels somewhat overlooked is the human one: the proverbial “human in the loop”. Article 14 of the EU AI Act is dedicated solely to human oversight, but it focuses on “minimising the risks to health, safety or fundamental rights”, so it concerns direct, literal oversight – the way an operator of heavy machinery ensures it does not cause injury. When it comes to regulatory oversight, senior management accountability is likely to be an important factor, whether via explicit accountability regimes like the UK’s Senior Managers & Certification Regime (SM&CR) or soft laws issued by the local regulators. When there is no ambiguity regarding accountability, and senior managers have clear incentives to actively comply with existing regulations (both hard and soft laws), compliance is likely to be more hands-on and proactive.


The road forward?

I see the road forward as threefold:

1. Regulation needs to be flexible, anticipatory[10], timely[11], proportionate, non-prescriptive, principles-based and increasingly outcomes-based (the last being a relatively new concept in financial services regulation). While this seems easier said than done, there is at least one example proving that, with sufficient thought and foresight, it can be accomplished. The EU’s Markets in Financial Instruments Directive II (MIFID II) will turn ten in 2024, and yet its provisions regarding algorithmic trading and investment decision-making remain relevant to this day. MIFID II does not even reference AI (it references algorithms), but because it is focused on applications and outcomes, those nearly ten-year-old provisions can be applied to AI today.

2. Individual industry regulators need to remain alert to developments in AI and issue relevant guidance when and as needed. While such guidance is usually soft law (i.e., non-binding), industry players (e.g., in financial services) tend to regard it as de facto obligatory and binding.

3. The third, hybrid approach is to use delegated acts to update existing regulations where feasible, and to use soft laws to reinterpret or “retrofit” existing regulations (e.g., product safety directives, data protection, equality etc.) in the context of AI.

Much of modern regulation (finance-specific like MIFID II and sector-neutral like GDPR) has long been principles-based, which replaced the more rigid rules-based approach of the past[12]. With AI regulation we are seeing a further evolution into outcomes- and risk-based (or “risk-tiered”) approaches. The successful “regulatory mix” for AI is likely to contain:

- General, industry-agnostic hard law regulations (e.g., the EU AI Act) complemented by sector-specific soft laws;

- A mix of different regulatory approaches, depending on which one(s) are best suited to address a specific use case.

The final, somewhat provocative thought is that perhaps a promising and as-yet-untapped resource for advice on future-proof regulation of AI and other emerging technologies could be… AI itself. Current Large Language Models (LLMs) such as GPT-4 are not fully capable of producing hallucination-free, complex legal texts that take into account ongoing real-world developments, but in the hands of a skilled prompt engineer they could probably offer useful suggestions at the level of individual principles or use cases. Bearing in mind the quantum leap between the natural language processing and generation (NLP/NLG) capabilities of AI systems of yesteryear (i.e., pre-generative AI) and present-day ones, it is entirely plausible that, for example, a “ChatGPT-7” would be perfectly capable of generating coherent text at a level of complexity matching, if not surpassing, the capabilities of the smartest human lawmakers – with perfect memory and recall, and the ability to avoid ambiguities and contradictions. In this scenario years of discussions and countless man-hours could be reduced to a single prompt: “Please generate a comprehensive draft for AI regulation that addresses both existing and foreseeable considerations in the field of artificial intelligence. Ensure that the regulation is flexible enough to accommodate and apply to unforeseeable considerations as technology evolves. Consider ethical, legal, and technical aspects, and strive for a balance between fostering innovation and safeguarding fundamental rights and societal well-being.”[13] Is AI suited to propose regulations to govern AI? We might find out sooner than we think.




[1] Up to EUR 30,000,000 or 6% of the offender’s revenue – whichever is greater (Article 71.3). By comparison, the maximum penalty under GDPR is EUR 20,000,000 or 4% – whichever is greater.

[2] Office for National Statistics (ONS), “CPI annual rate” (2023), https://www.ons.gov.uk/economy/inflationandpriceindices/timeseries/d7g7/mm23 .

[3] To EUR 40,000,000 or 7% of the offender’s revenue – whichever is greater (Article 71.3).

[4] Bertuzzi, L., “EU’s AI Act negotiations hit the brakes over foundation models“ (2023), Euractiv, https://www.euractiv.com/section/artificial-intelligence/news/eus-ai-act-negotiations-hit-the-brakes-over-foundation-models/ , accessed 15-Nov-2023.

[5] Uuk, R., “The EU AI Act Newsletter #40: Special Edition on Foundation Models & General Purpose AI” (2023), The EU AI Act Newsletter, https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-40-special , accessed 01-Dec-2023.

[6] As the former chair of the FCA and board member of the PRA Charles Randell elegantly put it in his LinkedIn post (https://www.dhirubhai.net/posts/activity-7123322291005853696-5K_h/ ): “Do you want protection from rip-offs when you rent a flat, pay for energy, buy a washing machine, or borrow money? If the answer is Yes, you probably want effective regulation.”

[7] The Alan Turing Institute, “The AI Revolution: Opportunities and Challenges for the Finance Sector” (2023), https://www.turing.ac.uk/sites/default/files/2023-09/full_publication_pdf_0.pdf , accessed 23-Nov-2023.

[8] The proposal for EU AI Act, Draft Compromise Amendments (2023), https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf , accessed 01-Sep-2023.

[9] Fiordelisi, F., Lattanzio, G., Mare, D. S., “How binding is supervisory guidance? Evidence from the European Calendar Provisioning” (2022), World Bank Group Policy research working paper, https://documents.worldbank.org/en/publication/documents-reports/documentdetail/099712205172222193/idu04756c4680f87d046860b6af0e7a34c058952 , accessed 10-Oct-2023.

[10] In 2017 Nesta, an influential UK innovation agency, introduced the concept of anticipatory regulation with a ten-item toolkit for governments and regulators in a blog post (Mulgan, G., “Anticipatory Regulation: 10 ways governments can better keep up with fast-changing industries” (2017), Nesta, https://www.nesta.org.uk/blog/anticipatory-regulation-10-ways-governments-can-better-keep-up-with-fast-changing-industries/ , accessed 01-Oct-2023). The post reads: “In the past, regulators assumed that they could ignore new developments until they reached a certain scale.” While not all the tools from Nesta’s toolkit may be applicable in the context of AI in financial services, the concept of anticipatory regulation is versatile.

[11] Robertson, A., Z., “Timing the regulatory tightrope” (2023), in Research Handbook on Law and Time edited by Frank Fagan & Saul Levmore, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4593670 , accessed 01-Nov-2023.

[12] See the landmark 2007 paper by Black et al. for an in-depth discussion on the gradual shift from rules- to principles-based regulation (Black, J., Hopper, M., Band, Ch., “Making a success of Principles-based regulation” (2007), Law and Financial Markets Review, https://www.lse.ac.uk/law/people/academic-staff/julia-black/Documents/black5.pdf , accessed 20-Nov-2023).

[13] Prompt generated by ChatGPT-3.5 in response to the original prompt: “Imagine I am a lawmaker. What prompt should I use to ask you to generate a draft of a universal AI regulation addressing existing as well as foreseeable considerations, and also flexible enough to accommodate and apply to unforeseeable considerations too?”.
