Don’t be too fast to regulate AI in Financial Services
AI is nothing new in Financial Services.
In his 2022 letter to shareholders, JPMorgan Chase CEO Jamie Dimon wrote that the bank was "investing more money (think hundreds of millions of dollars) each year on AI for very specific purposes". A 2022 report published by NVIDIA on the state of AI in financial services described AI use as “pervasive,” with over 75% of companies utilising at least one of the core accelerated computing use cases of machine learning, deep learning and high-performance computing.
But generative AI is set to change the landscape dramatically by complementing "traditional" AI.
The recent explosion of Generative AI and its potential applications in financial services have been met with both excitement and trepidation. One recent local example of new AI capabilities being scaled in financial services is Macquarie Bank’s application of an “AI-first approach” across its retail bank (Eyers, 2023).
With GPT-4, we have the first useful all-purpose Generative AI available to the public. While some see the potential for increased efficiency and productivity, others fear the risks and call for speedy regulation. Let's take a step back and examine the situation.
Technology has been a source of both economic progress and cultural anxiety throughout history. Three common themes - which have often appeared at times of flagging economic growth - have been: the fear that technological progress will cause widespread substitution of machines for labour, leading to technological unemployment and increased inequality; anxiety over the moral implications of technological progress for human welfare; and the concern that the era of major technological progress is behind us (Mokyr, Vickers & Ziebarth, 2015). In the current context, privacy concerns can be added to this list.
It is not unrealistic to suggest that the risks of Generative AI (including in financial services) are being inflated. You could argue it is, in part, the Luddites all over again - people fearing new technology and its potential impact on their livelihoods. The Luddites’ targets were machines, but they didn’t view technology itself as the problem. The problem was wage-cutting, speed-ups, excessive employment of apprentices, unemployment, and high prices (Mohammed, 2019). And just as the Luddites were wrong about the long-term impact of machines on employment, perhaps so too are those who fear AI. A key difference is that this time it is tech leaders and academics who are protesting. In the "Pause Giant AI Experiments: An Open Letter", they state that if researchers will not voluntarily pause their work on AI models more powerful than GPT-4, the letter’s benchmark for “giant” models, then “governments should step in”. Elon Musk may understand LLMs more deeply than most, yet he is calling for a pause while launching his own "TruthGPT". Perhaps we should question at least some of these motives.
There are groups with vested interests supporting speedy regulation of Generative AI. Some of these groups, such as big tech companies, stand to entrench their dominance if regulations are put in place too quickly. “Bootleggers and Baptists” is a theory that explains how regulations tend to come about with the support of both morally and economically interested parties (Yandle, 1983). Yandle used the example of early 20th century laws prohibiting the sale of alcohol to illustrate his point. Regulations were supported by two distinct groups: Baptists, whose religious beliefs led them to support limitations on the sale of alcohol, and bootleggers, whose potential to make high profits from illegal alcohol sales led them to support restrictive policies. The theory holds that regulatory schemes tend to emerge and endure with the support of coalitions of economic and moral interests that desire a common goal.
Generative AI has the potential to dramatically improve productivity in financial services. In IT, for example, the cost of everything done in front of a screen could fall dramatically. Without wanting to offend, walking across a software development floor can remind you of Vikings rowing longships. Why is software development still such a manual, slow activity? There is anecdotal evidence that many engineers in Financial Services use code generators such as GitHub's Copilot, Amazon's CodeWhisperer or simply ChatGPT with GPT-4, mainly for autocompletion of small code blocks. Some teams, however, use language models for more specialised code generation. For now at least, feeding relevant information into the model's context window appears more effective than customising (fine-tuning) the model itself: closed-source models currently outperform open-source models, and customisation demands considerable effort. GPT-4 is probably more capable than most Java programmers for only A$384 per year. That makes offshoring look very expensive. Similarly, an AI trained on Basel III and IV regulations could replace expensive consultants in validating code that implements the regulations.
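To make the "context window rather than customisation" point concrete, here is a minimal sketch of the pattern. Everything in it is illustrative: the function name, the regulation excerpt and the code snippet are invented for the example, and the actual call to a chat-completion endpoint is deliberately omitted, since providers and APIs vary.

```python
# Hypothetical sketch of in-context prompting: the relevant regulatory
# text is placed directly in the prompt (the model's context window),
# so a general-purpose LLM can review code against it without any
# fine-tuning. All names and text below are illustrative, not real
# Basel wording or production code.

def build_review_prompt(regulation_excerpt: str, code_snippet: str) -> str:
    """Assemble a single prompt combining the regulation and the code,
    ready to send to whichever chat-completion API the team uses."""
    return (
        "You are a reviewer checking code against banking regulations.\n\n"
        f"Regulation excerpt:\n{regulation_excerpt}\n\n"
        f"Code under review:\n{code_snippet}\n\n"
        "Does the code implement the excerpt correctly? Explain briefly."
    )

# Illustrative inputs.
excerpt = "Risk-weighted assets must be recalculated daily."
snippet = "def rwa():\n    return recalculate(frequency='daily')"

prompt = build_review_prompt(excerpt, snippet)
# `prompt` would then be sent to the model; the network call is omitted.
```

The design choice this illustrates is the one described above: no model weights are changed, so the approach works with the strongest closed-source models and requires no customisation effort beyond assembling the prompt.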
Overall, Generative AI could provide an almost unlimited supply of remarkably capable new talent just as Australia's birth rate plummets. Talent that, for now (and perhaps always?), needs supervising, of course. Talent that complements our human abilities.
While we don't yet fully (or perhaps even partially) understand how our brains work, the Pattern Recognition Theory of Mind argues that the brain's neocortex is a hierarchical system of pattern recognisers, and that the ability to recognise and predict patterns is a fundamental aspect of human intelligence - one underpinning language understanding, perception, and even our capacity for creativity and empathy (Kurzweil, 2012). Kurzweil suggests that humans, unlike machines, are not innately good or fast at logical reasoning. Machines, particularly Generative AI, excel at tasks that involve logic or processing large amounts of data and rules quickly and accurately. AI can complement human abilities.
Through the study of history and the use of quantitative indicators to measure cause-effect relationships, it can be argued that equal opportunity is the cornerstone of both prosperous and stable societies (Dalio, 2021). It fuels prosperity by tapping the talent pool of the widest possible demographic, thereby enhancing overall productivity. It fosters stability by mitigating conflicts arising from perceived injustices. Creating equal opportunities for learning and employment should be one of our key goals for society in Australia, especially when there is public concern (Asahi, 2021) and evidence of a widening gap between the haves and the have-nots (Richardson & Grundnoff, 2023; Davidson & Bradbury, 2022). Generative AI holds the potential to level the playing field, offering more individuals an equal chance to learn and work.
To effectively capitalise on the advantages offered by Generative AI, financial services companies may need to fundamentally reconsider how humans and machines interact within their organisations, as well as externally with their value chain partners and customers (Deloitte, 2019). Like Macquarie, financial services executives should consider deploying AI tools systematically across their organisations, encompassing every business process and function.
So before we rush to regulate Generative and other AI in financial services, let’s take a deep breath and consider the potential benefits. The risks can be managed and mitigated. Let's consider the vested interests at play. Let's not become a world leader in regulation, like the EU. Let’s embrace Generative AI and all it has to offer - increased efficiency, productivity, and access to amazing talent.
Forde Smith is Director of Financial Services consulting at Syncopate (dna.syncopate.com.au) and is a founder of Vannarho (www.vannarho.com). Ironically, he was in a student band called "The Luddites".
References: