Breaking down the new FCA Guide Update

CP 24/9 has suddenly caught everyone’s attention, and for all the right reasons. From cryptocurrency to artificial intelligence, public-private sharing and boundaryless compliance - the FCA’s latest updates to its Financial Crime Guide are packed with powerful stuff. For many of us it signals a direction long overdue for the British regulator.

Some of these points are clearly articulated, while for others you need to read between the lines. Not to worry, I will break it all down here.

Breaking the silos

There is no clearly articulated mandate around this point, but many of the changes across the document thematically point towards the FCA looking for more synergy in the fight against Financial Crime. We all know how effective it can be when a regulator pivots to such a view, and it is exciting to read some of the subtle nods in the document.

Well, some of the nods are not so subtle. The FCA clearly wants to encourage more Public-Private Partnerships (PPPs) for sharing data and best practices. It is not that this has not been happening; I have been fortunate enough to be part of a few. But all of those were initiated through individual endeavours. Having regulatory endorsement is altogether a different ball game.

But the effort to break the silos does not stop there. Many of us who have battled the intrinsic inefficiency of the organisational silos that exist around KYC/CDD, Screening and Transaction Monitoring are delighted to note in the paper that the FCA wants organisations to bring the KYC/CDD and Screening processes much closer together.

If you think about it, nothing is more obvious than this idea. Both processes are multi-billion dollar industries across the globe. Both are tremendously information-reliant. And the information they each rely on overlaps massively. When I am screening a customer against a list, am I leveraging the information I gathered during KYC to make the hits more accurate? For most organisations, unfortunately, the answer today is a resounding “no”.
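
To make the idea concrete, here is a minimal sketch (in Python) of what leveraging KYC data inside screening could look like: the fuzzy name match produced by the screening engine is kept or discarded depending on whether the KYC profile contradicts it. The field names, thresholds and scoring logic are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class KycProfile:
    name: str
    date_of_birth: date
    nationality: str

@dataclass
class ListHit:
    listed_name: str
    listed_dob: Optional[date]
    listed_nationality: Optional[str]
    name_score: float  # fuzzy name-match score from the screening engine, 0..1

def refine_hit(hit: ListHit, kyc: KycProfile) -> bool:
    """Keep the hit only if the KYC data does not clearly contradict it."""
    if hit.listed_dob and abs(hit.listed_dob.year - kyc.date_of_birth.year) > 2:
        return False  # dates of birth decades apart: almost certainly a different person
    if hit.listed_nationality and hit.listed_nationality != kyc.nationality:
        return hit.name_score > 0.95  # mismatched nationality: keep only near-exact names
    return hit.name_score > 0.85      # otherwise apply the normal threshold

customer = KycProfile("Maria Gonzales", date(1981, 4, 2), "ES")
hit = ListHit("Maria Gonzalez", date(1954, 7, 19), "MX", name_score=0.92)
print(refine_hit(hit, customer))  # False: the KYC profile rules this hit out
```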

While we are on the subject of merging processes, how about Transaction Monitoring? Here the paper is more subtle in its expectation. It asks organisations to feed the outcomes of Transaction Monitoring alerts into the overall risk assessment of clients. We can all agree that this in itself shifts the dial. Traditionally, risk assessment has been a static process that relies heavily on KYC/CDD data. In other words, we assess whether a client is high risk based on what they tell us, not how they act in reality.
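
As a rough illustration of what feeding alert outcomes into the risk assessment might look like, the sketch below blends a static KYC-based score with an uplift driven by how past alerts were disposed of. The weights and outcome categories are my own assumptions, not anything prescribed in the paper.

```python
def dynamic_risk_score(kyc_score: float, alert_outcomes: list) -> float:
    """kyc_score in [0, 1]; outcomes are 'false_positive', 'suspicious' or 'sar_filed'."""
    uplift = {"false_positive": 0.0, "suspicious": 0.10, "sar_filed": 0.30}
    behavioural = sum(uplift.get(outcome, 0.0) for outcome in alert_outcomes)
    return min(1.0, 0.6 * kyc_score + behavioural)

# A customer who looked low risk at onboarding but whose alerts tell another story:
print(dynamic_risk_score(0.2, ["false_positive", "suspicious", "sar_filed"]))  # 0.52
```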

Often, any attempt to change that approach within an organisation meets with anticipatory challenges around the regulators (in other words, people assuming that regulators would not want us to change it). That is where an explicit mention of this changed outlook will help break down barriers. We can expect more organisations to opt for a holistic risk assessment of clients rather than isolated transaction monitoring.

But that is where the paper, in my view, stops short of where it could have led us. It continues to accept the artificial barrier between Transaction Monitoring and risk assessment, merely stating that the outcome of one should feed the other. In reality, any risk assessment can be effective only if the KYC stream and the payments stream are combined into a single event stream.
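
A minimal sketch of that single-event-stream idea, assuming both streams are already sorted by time: KYC events and payment events are merged into one chronological stream so that a downstream risk model can react to either. The event shapes and values are illustrative only.

```python
import heapq
from datetime import datetime

# Both streams are assumed to be sorted by timestamp already.
kyc_events = [
    (datetime(2024, 3, 1), "kyc", {"event": "address_change", "new_country": "AE"}),
    (datetime(2024, 5, 10), "kyc", {"event": "periodic_review", "rating": "medium"}),
]
payment_events = [
    (datetime(2024, 3, 2), "payment", {"amount": 9800, "direction": "out", "country": "AE"}),
    (datetime(2024, 3, 3), "payment", {"amount": 9700, "direction": "out", "country": "AE"}),
]

# heapq.merge yields one chronologically ordered stream without loading everything
# into memory. Here the address change is immediately followed by two large outbound
# payments to the same country - a pattern neither stream shows on its own.
for timestamp, source, payload in heapq.merge(kyc_events, payment_events, key=lambda e: e[0]):
    print(timestamp.date(), source, payload)
```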

Another area where the paper adds a lot of emphasis is expanding risk assessment beyond AML/CFT, most pronouncedly into cryptocurrency, AB&C and proliferation risks. It does not clearly say, though, whether these should have their own independent processes or be joined up with AML for a more holistic view. For obvious reasons, cryptocurrency is a topic that I foresee finding itself at the crossroads of multiple processes - AML, Fraud, TF and Corruption. With a clear mandate to define risk appetite, it will make for an interesting way forward for Financial Institutions.

More Research, more AI

Increased encouragement for PPPs will definitely bring more research and innovation into the age-old Financial Crime processes, and the possibilities for business schools to collaborate with Financial Institutions are tremendous. But the impact of this change is not going to be felt only in academia. Elsewhere in the document, the FCA puts a spotlight on the importance of experimenting alongside tech providers to identify more advanced ways of monitoring risk.

Of course, caution has to be applied in that process, and the document emphasises that, too. But this is where experiments in a safe environment become paramount. It also means organisations will need such an environment, and the budget, to explore technology in a fail-fast mode. Personally, this is one of the highlights of the document, as the bruises in my career from trying to get budget for research would testify.

But at this point we must recognise where this guidance is coming from. It comes from the collective effort that has been put in by experts in the Financial Crime sector to upgrade from rules-based monitoring to more artificial-intelligence-driven, continuous monitoring of outcomes.

However, monitoring FinCrime risk has always had a problem that applies to the most sophisticated AI models just as much as to simple rules-based ones: the problem of choosing the correct focal entity level for monitoring. The focal entity defines the view at which transactional activities are summarised for monitoring and alerting. Traditional focal entities are payments, accounts or customers, which means that every month (or day, or week) the transactions related to these entities are bunched together into features, which are then fed into the model. If you get the focal entity wrong, all subsequent calculations are off, resulting in false positives.
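
A small, hypothetical example of why this matters: the same four transactions aggregated per account versus per customer produce very different features for the model. The data and field names are made up for illustration.

```python
import pandas as pd

txns = pd.DataFrame({
    "customer_id": ["C1", "C1", "C1", "C1"],
    "account_id":  ["A1", "A1", "A2", "A2"],
    "amount":      [5000, -5000, 5000, -5000],
})

# Account as focal entity: each account nets to zero and looks unremarkable.
print(txns.groupby("account_id")["amount"].agg(["sum", "count"]))

# Customer as focal entity: the same activity shows 20,000 of gross flow
# (10,000 in and 10,000 out) cycling through two accounts of one customer.
print(txns.groupby("customer_id")["amount"].apply(lambda s: s.abs().sum()))
```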

The paper also gives explicit emphasis to monitoring at the right focal entity level, and it mentions network analysis as a specific example on multiple occasions. Indeed, network or graph views provide a wider and arguably more effective lens through which to look at activities. There has been a considerable amount of work by both tech firms and banks to develop network analytics capabilities, and it is getting recognised in the paper.
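
For the curious, here is a hedged sketch of what the network view adds, using the open-source networkx library: three otherwise unrelated accounts all funnelling into one account, which a simple centrality measure surfaces immediately. This is an illustration of the technique, not the FCA’s prescribed method, and the accounts and amounts are invented.

```python
import networkx as nx

# Three unrelated-looking accounts all paying into one account, which then
# forwards the pooled funds onward.
G = nx.DiGraph()
edges = [
    ("acct_A", "acct_M", 9000), ("acct_B", "acct_M", 8500),
    ("acct_C", "acct_M", 9200), ("acct_M", "acct_X", 26000),
]
for src, dst, amount in edges:
    G.add_edge(src, dst, amount=amount)

# acct_M has by far the highest betweenness centrality - a funnel pattern
# that a per-account view of each individual leg would be unlikely to flag.
print(nx.betweenness_centrality(G))
```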

However, the problem of crime detection is essentially a spatio-temporal one. While network analytics is a great tool for capturing the spatial dimension of the problem, most artificial intelligence models continue to ignore the importance of time in modelling FinCrime risk. It offers absolutely zero insight to say that the ratio of credit to debit activities in a month was 1.1, because it does not tell us in which sequence those credits and debits happened. The paper does not explicitly call out this gap, perhaps because the issue has not garnered enough attention. Most early adopters of AI/ML technologies continue to expect great results without incorporating the temporal evolution of trends, and most of them, if not all, are disappointed.
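
Here is a tiny, made-up example of the point: two accounts with practically identical monthly credit/debit ratios, one showing classic pass-through behaviour and the other perfectly ordinary activity. The aggregate feature cannot tell them apart; only the sequence can.

```python
# Account 1 receives funds and pushes them straight back out on the same day;
# account 2 spreads ordinary activity across the month. Figures are illustrative.
acct_1 = [("01-Mar", 10000), ("01-Mar", -9900)]
acct_2 = [("03-Mar", 2500), ("10-Mar", -2400), ("17-Mar", 2500),
          ("25-Mar", -2550), ("28-Mar", 5000), ("30-Mar", -4950)]

def credit_debit_ratio(txns):
    credits = sum(amount for _, amount in txns if amount > 0)
    debits = -sum(amount for _, amount in txns if amount < 0)
    return round(credits / debits, 2)

# Both come out at roughly 1.01 - the aggregate ratio discards the ordering and
# the gaps between events, which is where the actual risk signal lives.
print(credit_debit_ratio(acct_1), credit_debit_ratio(acct_2))
```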

This leaves the senior managers who invested heavily in these technologies with somewhat limited room for action.

Senior management involvement

On the other hand, the paper once again places importance on senior managers being more involved in decision making on financial crime matters. This has always been challenging because of what I call the “tech barrier”. As an MLRO or Head of Financial Crime Risk, it is practically impossible to access your data and model assets directly. You need to lean on tech experts – IT, Data Science, Product Management – who will in turn write lines of code to retrieve the insight needed or make the changes that are mandated. The whole process slows things down and, at the same time, does not leave senior managers feeling in control.

The FCA’s updated guide makes it clear that it wants senior managers to be more involved, but it stops short of saying how. As we saw in the last section, the advent of sophisticated AI/ML solutions ironically makes the tech barrier even harder to scale. That happens essentially because most banks are trying to force-fit new solutions into old and weary ecosystems. It is a recipe for disaster and, in the short run, it increases cost and complexity.

The trick is to couple any investment in AI/ML as a decisioning tool with an equivalent focus on upgrading the ecosystem so that it can cope. Large Language Models are great tools that can come in handy in this process. Is it possible for a senior manager to interact directly with their models and get intelligent insights in real time? Is it possible for an MLRO to ask their risk assessment model whether certain typologies are being picked up in the alerts? Is it possible for the Compliance CDO to interrogate the data lake directly and assess the health of the data based on very custom questions?

The answer is “Yes!” to all of those questions. It is possible not just in theory or in the future, but right now, with the technology that is available. Is it going to be easy? Hell, no. Any application of Generative AI will have to go through fire to ensure it is safe and ethical. Multiple experiments and proofs of concept will need to be run. But that was exactly the situation with Machine Learning applications in Financial Crime around 2015.
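
To make that concrete, here is a conceptual sketch of the sort of interaction described above: a plain-English question from an MLRO is turned into a query over an alerts table and answered directly. The ask_llm function is a stand-in for whichever vetted model endpoint an institution actually uses, and the schema, data and generated SQL are all illustrative assumptions.

```python
import sqlite3

SCHEMA = "alerts(alert_id, customer_id, typology, outcome, created_at)"

def ask_llm(prompt: str) -> str:
    # Placeholder for a call to a vetted, access-controlled LLM endpoint that
    # translates the question into SQL. Hard-coded here so the sketch runs.
    return ("SELECT typology, COUNT(*) FROM alerts "
            "WHERE outcome = 'sar_filed' GROUP BY typology")

question = "Which typologies are actually showing up in our confirmed alerts?"
sql = ask_llm(f"Schema: {SCHEMA}\nWrite SQL to answer: {question}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts(alert_id, customer_id, typology, outcome, created_at)")
conn.executemany("INSERT INTO alerts VALUES (?, ?, ?, ?, ?)", [
    (1, "C1", "structuring", "sar_filed", "2024-05-01"),
    (2, "C2", "mule_activity", "false_positive", "2024-05-03"),
    (3, "C3", "structuring", "sar_filed", "2024-05-07"),
])
for typology, count in conn.execute(sql):
    print(typology, count)  # e.g. structuring 2
```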

As one of the very few people at that time who had started to experiment seriously with the applicability of machine learning models to assess the FinCrime risk of customers, I presented the (successful) results to my then manager, only to hear a prophecy that the regulators would never allow Machine Learning models to be used in the Financial Crime space. Now, in 2024, we find ourselves in a reality where the regulators are asking us to actively look for emerging technologies and challenge ourselves. That is what time does – it breaks all sorts of prophecies.

Comments

Kaushik Das
Lead Vice President, Financial Crime Analytics

@Swag, great summary. I additionally feel there is a need to emphasise building the data estate with this long-term objective, else the challenge will persist longer.

Swagatam Sen
Executive Director, FCC Tech Advisory | Strategic Advisor for Data and Technology | Talks about Innovation | Ex-HSBC

Kaushik Das Of course. Particularly in the context of breaking the silos, we need to start by breaking the data silos that exist. And that is where most organisations face a challenge in navigating. I hear all the time that it is a “day two” problem. We know very well that day two never comes.
