Bias, transparency and explainability. Main challenges for ‘artificial intelligence’ in financial services.

Artificial intelligence – or, to be more precise, automation based on natural language processing, machine learning and deep learning – is one of the most beneficial and promising technologies for almost every part of the economy. Advanced analytics, effective and precise prediction, robotics and fraud prevention are important areas that may (and will) be game changers for institutions that are open to innovation. We all know that deploying new solutions requires budget, infrastructure and other resources for effective implementation. And we cannot forget that other challenges, such as legal and regulatory issues, will emerge during the process.

Artificial intelligence, broadly understood, is not yet fully regulated; however, at the European Union level there is a debate on whether (and how) to 'adjust' legislation and regulation to this emerging technology. The debate is quite interesting, and its outcome is promising. Both the European Commission and the Council (together with the European Parliament) are proposing certain measures (soft law, high-risk AI regulation, EP resolutions) to find a proper solution. For now, however, AI systems operate in a state of uncertainty, as it is not clear how even the existing rules (liability, general requirements) apply to them.

Financial services, and banking in particular, are highly 'sensitive', given the personal data, money and systemic importance involved. A more tailored approach is therefore not only desirable but inevitable. The European Commission underlined this in its Digital Finance Strategy:

‘As a result, the Commission will invite the ESAs and the ECB to explore the possibility of developing regulatory and supervisory guidance on the use of AI applications in finance. This guidance should follow the upcoming proposal for a new regulatory framework for AI planned in 2021’

The challenges are here now, not tomorrow

Wider application of AI within the financial services sector will be a big challenge, on many levels, not only for financial institutions but also for supervisors (regulators). If we look at the proposal for a regulation on digital operational resilience (DORA), we will see that many of its requirements will also 'touch' AI systems. Add NIS2 and the accompanying cybersecurity threats and challenges, and things get even more interesting. Nor can we forget about product liability, which, according to the European Commission and the European Parliament, should 'fit' AI issues as well.

This is not the end. More issues emerge once we add the regulatory component, starting with the European Banking Authority and its two documents:

1.    the Report on Big Data and Advanced Analytics, and

2.    the draft Guidelines on loan origination and monitoring,

we get a picture of the potential challenges that financial institutions will have to face on their road to digital transformation. This, however, is just one slice of a much bigger cake. There is also Commission Delegated Regulation (EU) 2017/589, which imposes additional organizational requirements on investment firms engaged in algorithmic trading. Some supervisory authorities (including the Polish Financial Supervision Authority) have decided to 'add more' for robo-advisory providers.

More and more services use sophisticated algorithms to get better results (take a look at this paper by the Bank for International Settlements on the correlation between machine learning and better credit scoring), while not always ensuring a sufficient level of protection for customers and the institutions themselves. Why? Because not everything is clear.
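To make the credit-scoring point concrete, here is a minimal sketch of the kind of model the BIS paper discusses. Everything in it – the feature names, the toy data and the library choice – is an illustrative assumption, not a description of any real scoring system:

```python
# A toy credit-scoring model; all feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant features: [income_kEUR, debt_ratio, years_employed]
X = np.array([
    [45.0, 0.30, 4],
    [22.0, 0.65, 1],
    [78.0, 0.10, 9],
    [31.0, 0.55, 2],
    [55.0, 0.20, 6],
    [19.0, 0.70, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted (toy labels)

model = LogisticRegression().fit(X, y)

applicant = np.array([[40.0, 0.40, 3]])
print("probability of repayment:", model.predict_proba(applicant)[0, 1])
```

The problem described in this article starts exactly here: the model outputs a probability, but nothing in it, by itself, explains or justifies that number to the customer.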

Are we done yet?

Not yet. In the case of automation and (especially) profiling, the most important things are transparency, explainability and non-discrimination (i.e. freedom from algorithmic bias). Many layers of EU regulation require entities to apply such rules (requirements?) irrespective of the technology used and the service provided.

When it comes to AI, it is even more challenging, as soft law from the European Commission and other bodies and institutions clearly recommends deploying only TRUSTWORTHY AI. I do not want to elaborate on the many documents 'produced' by the EU (ENISA, data protection bodies, the EP and so on) and the Council of Europe, but one thing can be said: algorithms used for commercial and non-commercial purposes should enable:

1.    Review of the decision-making process – explainability.

2.    Identification of the data used for a particular decision and for training – traceability (a minimal sketch of both capabilities follows below).

If you are looking for more details, click here for the assessment list for trustworthy AI.
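Here is a minimal sketch of what these two capabilities can look like in practice, assuming a deliberately simple linear scoring model (all names, weights and the model-version label below are hypothetical; a production system would use dedicated explainability tooling such as SHAP, and tamper-evident audit storage):

```python
# Sketch of explainability (per-feature contributions) and traceability
# (a logged decision record). All weights and identifiers are hypothetical.
import datetime
import json

WEIGHTS = {"income_kEUR": 0.04, "debt_ratio": -2.1, "years_employed": 0.15}
INTERCEPT = -1.0

def score_with_explanation(features: dict) -> dict:
    # Explainability: show how much each input moved the final score.
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return {"score": INTERCEPT + sum(contributions.values()),
            "contributions": contributions}

def log_decision(applicant_id: str, features: dict, result: dict) -> str:
    # Traceability: record the exact inputs, model version and output,
    # so the decision can be reviewed (and challenged) later.
    record = {
        "applicant_id": applicant_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": "toy-linear-v1",  # hypothetical identifier
        "features": features,
        "decision": result,
    }
    return json.dumps(record)

features = {"income_kEUR": 40.0, "debt_ratio": 0.40, "years_employed": 3}
result = score_with_explanation(features)
print(log_decision("A-123", features, result))
```

Note that nothing here requires disclosing the model itself: the record captures what went in, which model version decided, and why, which is the substance of both requirements.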

In addition, an algorithm should not create a risk of bias or discrimination. Recently published guidelines by the Council of the European Union (among others) highlight the need for non-discriminatory operation of AI systems. This can be achieved – partially – by applying the above-mentioned elements of transparent AI and by constant monitoring, but it will not always work, as advanced algorithms can learn quickly and in surprising ways.
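One way such monitoring can start is with simple fairness metrics computed on a model's outputs. The sketch below uses the disparate impact ratio; the toy decisions and the 0.8 'four-fifths' threshold are illustrative assumptions (the threshold comes from US employment-testing practice and is not an EU legal standard):

```python
# Sketch of a basic bias check: compare approval rates between two groups.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    # Ratio of approval rates; values far below 1.0 suggest possible bias
    # against group A and call for human review of the model.
    return approval_rate(group_a) / approval_rate(group_b)

group_a = [1, 0, 0, 1, 0, 1]  # 1 = loan approved, 0 = rejected (toy data)
group_b = [1, 1, 1, 0, 1, 1]

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative 'four-fifths' rule of thumb
    print("flag for human review: possible discriminatory pattern")
```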

We could also add other elements (challenges) such as cybersecurity and data protection, but these will be the subject of the next article.

What should institutions do? And what about the authorities?

Ethics by default and by design. All principles of trustworthy AI should be part of every step – and especially the early steps – of the implementation of AI systems. Robust organizational and technical safeguards should also be in place. Ongoing monitoring will be inevitable; one simple monitoring check is sketched below.
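As a sketch of what such ongoing monitoring could include, the Population Stability Index (PSI) below flags when the live score distribution drifts away from what the model saw at training time. The binning scheme and the 0.2 alert threshold are conventional rules of thumb, not regulatory requirements:

```python
# Sketch of a drift check: PSI between training-time and live score samples.
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples."""
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / bins
    total = 0.0
    for i in range(bins):
        left = lo + i * step
        # make the last bin right-inclusive so the maximum score is counted
        right = hi + 1e-9 if i == bins - 1 else left + step
        e = sum(left <= x < right for x in expected) / len(expected)
        a = sum(left <= x < right for x in actual) / len(actual)
        e, a = max(e, 1e-6), max(a, 1e-6)  # avoid log of zero
        total += (a - e) * math.log(a / e)
    return total

train_scores = [0.20, 0.35, 0.40, 0.55, 0.60, 0.70, 0.75, 0.80]
live_scores = [0.20, 0.25, 0.25, 0.30, 0.35, 0.40, 0.50, 0.60]

value = psi(train_scores, live_scores)
print(f"PSI: {value:.3f}")  # values above ~0.2 commonly treated as drift
```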

The authorities' perspective is even more interesting. I bet that in the future the SREP (Supervisory Review and Evaluation Process) will also include audits of AI systems. This will require authorities to secure resources – not only budget but also people with expertise in new technologies, including their legal and regulatory aspects. It will be a big challenge for everyone, but – without a doubt – one with benefits for institutions, supervisors and customers.

All opinions expressed herein are solely mine.


Gabriela Bar

Gabriela Bar Law & AI, PhD in law, experienced expert in (and enthusiast of) new technologies and AI law, Women in AI, Forbes' list of 25 business lawyers, the TOP100 Women in AI in Poland.

Transparency, including explainability, is crucial. And meeting these requirements does not necessarily lead to the disclosure of trade secrets or IPR. The black box does not necessarily have to be opened, nor all the "secrets" of the algorithm revealed – the key "feature" of explainability is a better understanding of the scope of automated decision-making and the reasons behind a particular AI decision, to help challenge that decision and change future behaviour to potentially get your preferred result.

Mariusz O?ga

Strategic Transformation Executive | Driving Enterprise Value & Innovation | Expert in Digital Strategy & Corporate Governance | Financial Services Leader

The most important thing is that AI stays "Human". AI is not to be treated as a "black box" that just takes away the "tedious work" so we can move on to other, brighter things. Who controls the computers is important. And that might be a real stumbling block – convincing ourselves that we control it as fully as when we did the work ourselves in the past. Stanislaw Lem might have been right – it's us that are faulty, not the machines... dr Maciej Kawecki
