AI in Banking and Finance - A Chief Risk Officer's Perspective
For those of you who are regular subscribers to my blog, you'll know that I always like to give a practical view of technology based on actual lived experiences, particularly when it comes to AI. One of the ways I like to do this is by interviewing field practitioners, in the hope that their lived experiences can help others in my network.
In this month's blog, I had the pleasure of interviewing Marlene Lenarduzzi, who is the Chief Risk Officer for EQ Bank in Canada. Marlene has been in the industry for over 20 years, having worked for large Canadian banks such as CIBC and BMO, and there is no better person to give a perspective on the risks associated with using AI and GenAI in Banking and Finance and how to address them. In a recent Gartner market survey, 67% of Banking CIOs placed high importance on helping business areas understand AI opportunities and risks. This is an activity I am frequently asked to support, so those of you in my LinkedIn network who are in Banking and Finance should hopefully find this article useful. Thanks to Marlene for taking the time to speak with me!
If you want to talk more to Marlene about this interview, you can find her on LinkedIn at Marlene Lenarduzzi | LinkedIn. I'm sure she'd love to hear from you!
Background - AI in Banking and Finance
The History of AI in Banking
AI has been used in banking for a long time. The need to process large volumes of financial transactions efficiently and quickly whilst looking for fraud and scams has meant banks were early adopters of this technology. Banks were also in the position to spend the money required to develop and support hundreds, if not thousands, of specialised AI models used to do everything from performing sentiment analysis and approving loans to forecasting top-line performance. The use of AI promises to increase with the advent of the Foundation Model, but so do the risks, as I will discuss in the next sections.
Foundation Models - Changing the Economics of AI for Banking but also increasing the Risks
For years, Artificial Intelligence (AI) required organisations like banks to build a specific AI model for each individual use case, whether it was machine vision, natural language processing, conversational agents or others. These models, trained on data specific to each use case, needed to be tuned, constantly monitored for performance, and updated regularly when performance degraded. Banks needed to pay for the people and tools necessary to build and manage these individual AI models. As a result, AI was only really viable for large banks and financial services institutions that could afford the required infrastructure investments.
Then, in 2017, researchers at Google (one of them based at the University of Toronto) published a research paper called "Attention Is All You Need". See here if you'd like to read this paper. Without going into detail, this paper fundamentally changed AI by giving organisations a computationally efficient way of building extremely large AI models. These extremely large models, trained on a wide variety of data, could be applied to many use cases, not just one. As a result, the concept of the "Foundation Model" was born.
The term "Foundation Model" was first coined by the Stanford AI team (see here) to refer to AI models that could be applied across a variety of use cases. These Foundation Models allow banks to adopt a build once, use many times approach to AI. This radically changes the economics of AI by making even lower volume use cases economical for even very small organisations. It also allows organisations like banks to use models built by other organisations (hyperscalers, other banks) reducing the minimum investments necessary and improving the economics of AI even further. As "Jevons Paradox" states, the more efficient a resource becomes, the more overall demand for that resource will increase (see: here).
Foundation models also enabled the more recent emergence of Generative AI in late 2022.
But purchasing models from other organisations carries inherent risks, which require new forms of AI governance.
Generative AI - a type of Foundation Model
Generative AI (GenAI) is the use of AI Foundation Models to generate content, whether text, images or voice. Large Language Models (LLMs) are a form of foundational, generative AI used to understand text inputs and generate a text or image response. I recently published a video on what Generative AI is, which you can watch here if you'd like further information and explanation.
ChatGPT, which was released for public use in November 2022 by OpenAI (see here) and which most people have now played with, is the best-known example of a Large Language Model. Able to respond fluently and in a humanlike way to many different types of questions and instructions, ChatGPT demonstrated the immense capability of these foundational GenAI models, and ever since, organisations have been racing to adopt this new technology because of the benefits it can provide.
The adoption of these foundational, generative AI solutions has been so fast that ChatGPT reached mass adoption more quickly than Facebook, mobile phones, and even the Internet (see here). This is why I have so many clients experimenting with this technology, and why many organisations are asking big tech companies like Microsoft, AWS, Google, and IBM to help them deploy foundational, generative AI solutions.
Generative AI Use Cases in Banking and Finance - the Relationship between Business Benefits and Technical Complexity
Broadly speaking, the use cases for GenAI can be broken into three categories: Content Generation, Content Retrieval and Decision Making. Content Generation is easy: ask ChatGPT to write a poem and it will do so in less than 10 seconds. Asking it to retrieve and generate useful content related to specific use cases within banking (such as contracts or policy documents) is more challenging. You have to collate that content, make sure it is accurate, and then use techniques called Fine-Tuning or Retrieval Augmented Generation (RAG) to ground your LLM in that content so that it responds correctly when asked a question (the sketch below shows the basic shape of this). Harder still is using the LLM to make decisions, which often requires complex integration, design and testing.
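To make the grounding step concrete, here is a minimal sketch of the RAG pattern in Python. The policy snippets and the keyword-overlap retriever are illustrative stand-ins of my own; a production pipeline would use an embedding model and a vector store, but the shape of the workflow is the same: retrieve the most relevant content, then constrain the LLM to answer from it.

```python
# Minimal RAG sketch: retrieve the policy passages most relevant to a
# question, then build a prompt that grounds the LLM's answer in them.
# The snippets and keyword-overlap retriever are toy stand-ins; real
# systems use embeddings and a vector store.

POLICY_SNIPPETS = [  # stand-ins for a bank's collated, verified content
    "Wire transfers over $10,000 require dual approval by two officers.",
    "Mortgage pre-approvals are valid for 120 days from the issue date.",
    "Dormant accounts are flagged after 24 months of inactivity.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:top_k]

def build_grounded_prompt(question: str, docs: list[str]) -> str:
    """Instruct the model to answer ONLY from the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )

# The resulting prompt is what you would send to your LLM of choice.
print(build_grounded_prompt("How long is a mortgage pre-approval valid?",
                            POLICY_SNIPPETS))
```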
Asking an LLM to route an email, and then to generate a useful draft response that a banking call centre agent can use, is a simple example of GenAI-based decision-making already in use at many banks. But even these use cases are not technically trivial, as integration with existing systems, design and testing are all required. The real value-added, more complex decisions I've seen people use LLMs for are activities like making asset investment recommendations or helping to identify fraudulent transactions. There is always a human in the loop, but the LLM makes recommendations and helps with decisions, making the human faster, more efficient, and more effective.
Speaking from experience, the more advanced decision-making scenarios can be even more technically complex and require more effort to get right. But once you get them right, they generate much more value than the more straightforward Content Generation and Retrieval scenarios. It turns out LLMs can actually be quite good at making decisions.
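To give a feel for the routing example above, here is a sketch of LLM-based email triage with a human-in-the-loop fallback, assuming an OpenAI-compatible API. The model name, the categories and the fallback rule are my own illustrative choices, not any particular bank's setup; as noted, the real engineering effort sits in the integration and testing around a step like this.

```python
# Illustrative sketch of LLM-assisted email routing with a human fallback.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the
# environment; the model and categories are hypothetical choices.
from openai import OpenAI

CATEGORIES = ["mortgages", "cards", "fraud", "complaints", "other"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def route_email(body: str) -> str:
    """Ask the model to pick exactly one category for an inbound email."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,        # keep the routing decision as stable as possible
        messages=[
            {"role": "system",
             "content": "Classify the customer email into exactly one of: "
                        + ", ".join(CATEGORIES)
                        + ". Reply with the category name only."},
            {"role": "user", "content": body},
        ],
    )
    label = resp.choices[0].message.content.strip().lower()
    # Human in the loop: anything the model can't place cleanly goes to a person.
    return label if label in CATEGORIES else "human_review"

print(route_email("My card was charged twice for the same purchase."))
```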
According to Gartner (see here), the highest-priority use cases for GenAI in Banking, based on business value and technical feasibility, currently include...
... but over time this list has been rapidly expanding for the banking industry.
The Increased Risks to Banks with Foundational Generative AI
As previously noted, the concepts of the Foundation Model and Generative AI promise to increase the use of AI in banking even further by changing the economics (reducing the costs) and broadening the number of use cases to which AI can be applied. But they also increase the risks. When banks built their own AI models, they controlled the whole supply chain of model development, which meant they could assure key model attributes such as accuracy, data quality, security and privacy.
But using Foundation Models provided by other organisations is akin to buying a house in an area where there are no building codes and no way to get an engineer to inspect and validate the quality of construction. This is because LLMs are essentially "black boxes": you don't know whether the house you are buying is built on poor foundations. Yet once you start using that house, you are responsible for all who enter it (in this case, the bank's customers). And if there are issues (say someone trips on your front doorstep and is hurt... to over-extend the analogy! :) ), it is you who is responsible to that person, not the original house builder.
In summary, then, banks take on significant new risk by using someone else's models. As a result, most analysts break the risk of using GenAI in banking down into the following 8 broad categories:
If you want to read a really good, detailed article on the risks associated with GenAI, the Australian Signals Directorate recently published one that gives businesses such as banks clear guidance on what they need to do to be secure (see: here).
Having set the stage regarding the use of AI and GenAI in banking and the associated risks, let's now hear a banker's view.
My interview with Marlene Lenarduzzi, CRO at EQ Bank...
Marlene, can you start by describing who EQ Bank is?
EQ Bank is a leading digital financial services company with $127 billion in combined assets under management and administration (as of October 31, 2024). We are Canada's seventh-largest bank by assets, and we offer banking services. We are known as Canada's Challenger Bank, and we have a clear mission to drive change in Canadian banking to enrich people's lives. We do this by leveraging technology to deliver exceptional personal and commercial banking experiences and services to nearly 700,000 customers and more than six million credit union members. Our digital EQ Bank platform (eqbank.ca) has earned praise from our customers, who have named Equitable one of Canada's top banks on the Forbes World's Best Banks list since 2021.
So, what is your role within the Bank?
I am the Chief Risk Officer or CRO of EQ Bank. As CRO, I lead a diverse team of risk professionals who tend to have a strong analytic background. Together, we develop and manage an integrated risk management framework to identify, measure, mitigate, manage, monitor, and report on the relevant risks that the bank faces.
More and more, the CRO role is a strategic function and plays a critical part in the organization's strategy development and implementation, thereby providing a source of competitive advantage.
How do you see EQ Bank using AI? Do you have any interesting examples that you can talk about?
AI is emerging and evolving quite quickly. At this point, our use of AI is focused on financial crime prevention, cyber security and tools that improve efficiency by automating routine tasks. There are several use cases in which we are exploring the use of AI in sentiment analysis, model development and testing, as well as detecting trends and anomalies.
What do you think the risks of AI are for EQ Bank and the Banking Sector overall? How do you see yourself addressing those risks in your role?
I view AI as a transverse risk, impacting both financial and non-financial risks. AI concepts in financial services have quite a long history. For example, AI and machine learning techniques in fraud detection have been used for over 20 years and are well documented. The use of AI in lending decisions is less established due to concerns about decision transparency. As well, there are risks stemming from using an AI model that is misspecified, leading to incorrect financial decisions.
Emergence into newer areas of finance and financial services, such as customer interactions, is still in the early stages, and we have moved prudently here.
The classic risk management framework of identification, assessment (i.e. analysis and quantification, sometimes referred to as measurement), mitigation, monitoring and reporting can form the basis of a robust AI governance framework, but we have to think of the impact of the risks more broadly, to include both financial and non-financial impacts. Examples of non-financial risks that should be closely managed include data privacy, consumer data protection and customer consent for data usage, impacts on employees (e.g. ensuring appropriate training and upskilling to support the transition to AI-assisted roles), reputational risks and cyber security. That's why the risks associated with AI are considered transverse risks.
Trust is paramount to the future of AI, and a responsible AI framework and governance structure is key to building and maintaining trust. Elements of a responsible AI framework that complement existing Risk Management Frameworks include:
Principles focused on trust and transparency
Trust factors include:
From a model development perspective, AI-generated data should be tagged so that it can be distinguished from 'raw' data; this will help reduce bias in testing. AI models, like all models, can deteriorate over time, so the principles of model monitoring and testing, with standards for recalibration, should be applied to AI models just as with any model we use in banking, and in risk management specifically. Many organizations have also established an AI ethics board to ensure transparency and ethical use of AI.
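To make Marlene's point about monitoring and recalibration concrete, here is a minimal sketch of one drift metric long used in banking model risk management, the Population Stability Index (PSI), which compares the score distribution a model was validated on with what it sees in production. The data and the thresholds in the comments are illustrative only, not EQ Bank's standards.

```python
# Minimal model-monitoring sketch using the Population Stability Index (PSI).
# PSI = sum over bins of (actual% - expected%) * ln(actual% / expected%).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the production ('actual') distribution to the validation
    ('expected') one; larger values mean more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # model scores at validation time
current = rng.normal(0.3, 1.1, 10_000)   # production scores, now drifted

# A common (illustrative) rule of thumb: < 0.1 stable, 0.1-0.25 watch,
# > 0.25 investigate and consider recalibration.
print(f"PSI = {psi(baseline, current):.3f}")
```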
What advice would you give other folks in the Banking and Finance Sector as they go through their own AI journeys?
It starts with understanding the problem you are trying to solve. Develop your use cases and determine whether AI is the appropriate tool for the job. Don't believe the hype and marketing glitz - have your quants in Model Validation test AI models to ensure the promises made are legitimate. If you decide AI is right for you, remember the adage - garbage in, garbage out. It starts with data - know your data. Lastly, you need to establish AI governance and risk management frameworks early, not after the fact.
Conclusions
There is no doubt that there are a lot of potential risks associated with using AI and GenAI in Banking and Finance. While the sector has been using AI for a number of years, the advent of Foundation Models and Generative AI promises to increase both the adoption of AI within banking and the risks that banks are exposed to. These risks are manageable once you understand what they are, and as Marlene has said, the first steps in any AI program are developing solid testing and evaluation regimes along with good AI Governance and Risk Management. On that last point, if you are interested in learning more about AI Governance, you can read one of my previous blog articles on the topic (see: here).
If you have questions about this topic or want further information or assistance, please contact me at [email protected]. You can also read more about GenAI at www.ai-savvy.com.au