From Data Quality to Ethical AI: A CIO's Perspective on AI in Banking
Francesco Federico
Chief Marketing Officer @ S&P Global | Non Executive Director | Author @ Chronicles of Change
Welcome to Voices of Change, a series of in-depth conversations with corporate AI leaders aimed at bringing new perspectives and experiences to my regular newsletter, Chronicles of Change.
For this inaugural issue, I had the pleasure of discussing the impact of generative AI in financial services with Sergio, my first boss and mentor: a people-manager role model I owe a lot to.
A Conversation With: Sergio Novelli, CIO @ Agos
Sergio Novelli has an extensive cross-functional background encompassing digital transformation, product and platform strategy, CRM, digital marketing, and more, sharpened through his work across industries such as telecommunications, broadcasting, financial services, and retail. Since 2017, Sergio has been serving as the Chief Information Officer (CIO) at Agos, a consumer credit company within the Credit Agricole Group in Italy. In this role, he bridges the gap between business and IT, guiding the company's digital transformation journey. Sergio has always cultivated a curiosity for technological advancements, and for this reason he has been developing proof-of-concept initiatives on AI within Agos for the past year.
Sergio, can you discuss the role of leadership and organisational culture in fostering a data-centric AI strategy within a bank?
I believe that using AI in the banking and financial sector can revolutionize operations, enhance customer experiences, and drive innovation. The two elements you mention in your question are at the core of making a financial institution "AI Ready."
To manage expectations and provide vision and guidance for its implementation, senior leaders must understand AI's capabilities and limitations. They also need to adopt a "Smart Compliance" approach that fosters the adoption of AI while ensuring compliance with the numerous rules and regulations of the financial space.
Since AI performs better when fed with proper data, they need to encourage a culture where data is the key driver of decision-making. This brings me to the changes in organizational culture: data-driven decision making requires breaking down organizational silos, promoting cross-functional collaboration, and integrating data analytics into daily workflows. Additionally, employees should be encouraged to share ideas and experiment with AI. They should also be trained to understand how to correctly interact with AI systems (e.g., prompting AI agents). For example, Credit Agricole is very attentive to this subject, and we are currently running a worldwide contest to collect ideas that will be selected and implemented.
Finally, we shouldn't forget to address the ethical concerns surrounding AI, such as bias, transparency, and accountability. We must also explain the risks of using a technology that, for now, is not entirely transparent. Financial institutions must establish clear guidelines and frameworks to ensure AI is used responsibly and ethically.
How does your bank handle ethical and compliance issues related to data quality and AI training? Are there particular standards or frameworks you adhere to?
The potential for discrimination in generative AI is a critical concern, not just for Agos but for the entire group. We are therefore in active discussions with the parent company to define a framework within which to operate. Currently, our scoring algorithms use rule-based AI. We believe the EU's AI Act is a significant step in this direction, and it is well-timed, because defining rules and limits is the prerequisite for sustainable development over time. It also assures us that the experiments and investments we are making now can lead to valid solutions that will not be blocked or limited in the foreseeable future.
What steps does your team typically take to prepare and cleanse the data for AI training? How do you balance the need for automation with the need for manual oversight in this process?
Thanks to the latest multimodal generative AI models, e.g., Gemini 1.5, data preparation takes less and less time, even when different data sources have to be handled. For example, only a few months ago, to analyse the text of a conversation in our call centres we had to combine speech-to-text solutions with text-analysis solutions, building elaborate architectures. With multimodal LLMs, these separate steps are no longer necessary: starting directly from the recording of a conversation, we can perform text analysis and derive the KPIs we are interested in. Of course, since we are still at an embryonic stage, all our experiments keep a human in the loop to validate the results produced by generative AI.
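To make this concrete, here is a minimal sketch of the kind of single-step pipeline Sergio describes, assuming Google's google-generativeai Python SDK and a Gemini 1.5 model; the file name, prompt, and KPIs are illustrative placeholders, not Agos's actual implementation.

```python
# Minimal sketch: derive call-centre KPIs directly from an audio recording
# with a multimodal model, skipping a separate speech-to-text step.
# Assumes the google-generativeai SDK (pip install google-generativeai);
# the recording, prompt and KPI list are hypothetical examples.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

recording = genai.upload_file("call_recording.mp3")  # placeholder file
model = genai.GenerativeModel("gemini-1.5-pro")

prompt = (
    "Analyse this customer-service call and return, as JSON: "
    "the reason for the call, the customer sentiment (positive/neutral/negative), "
    "and whether the issue was resolved."
)

response = model.generate_content([recording, prompt])
print(response.text)  # a human reviewer would still validate this output
```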
Have there been any specific instances where data quality issues significantly impacted your AI projects? How were these challenges addressed?
We are currently conducting numerous tests on generative AI. What has emerged is that, as advanced as LLMs are, simply providing information to the AI is not enough to achieve good results. The quality of the input is crucial.
To illustrate this with two concrete examples: we created a RAG (Retrieval-Augmented Generation) system to manage all procedures related to credit granting, a key topic at Agos, and we tried using tickets resolved over the last year to generate documentation to support operators. In both cases, the performance of the AI, a mix of traditional and generative AI, did not lead to results good enough to move to the industrialization phase of the project.
In the first case, the manuals were not written with AI in mind, as there are many documents containing Excel tables, graphs, and process diagrams. While these facilitate research and understanding for our colleagues, they do not help with AI searches. In the second case, we realized that there is often a lack of uniformity in ticket management, which did not allow for unambiguous answers to the same problems.
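For readers less familiar with the technique, the retrieval half of such a system can be sketched in a few lines. This is an illustrative toy, not Agos's production setup: it assumes the sentence-transformers library and uses invented placeholder text for the procedure chunks.

```python
# Minimal retrieval step for a RAG assistant over procedure manuals.
# Assumes plain-text chunks; as the interview notes, source documents full of
# tables, charts and diagrams degrade this kind of retrieval.
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
import numpy as np

# Illustrative chunks of credit-granting procedures (placeholders).
chunks = [
    "Applicants must provide proof of income covering the last three months.",
    "Credit limits above 10,000 EUR require a second-level approval.",
    "Rejected applications can be re-evaluated after six months.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity)."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

question = "When can a rejected application be reviewed again?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then be sent to a generative model to produce the answer.
print(prompt)
```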
Therefore, the quality of input data has become one of the criteria we use when choosing which experiments to pursue. If the quality is not deemed sufficient, we do not proceed with the experiment.
Based on your experiences, what advice would you give to other CIOs or technology leaders facing similar challenges in ensuring data quality for AI initiatives?
As with any innovation initiative, for it to flourish it is critical to address genuine needs and to have strong buy-in from the business; otherwise, the risk is ending up with an "AI gimmick" that lacks real value for the organization. Given the nature of our industry, early engagement with legal and compliance teams is vital to confirm the “production” feasibility of the solution.
Specifically for AI, it is essential to establish robust data governance, to guarantee a clean dataset at the start and to maintain data quality and security over time. From this perspective, finding the right balance between automation and ongoing human oversight is fundamental to ensuring quality and addressing potential biases or errors.
Lastly, you should embrace continuous learning and innovation: the AI landscape is ever-evolving, so learning and engaging in discussions with peers and partners is key to identifying best practices and potential use cases.
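On the balance between automation and human oversight that Sergio highlights, a minimal sketch of automated data-quality gates that route doubtful records to a human reviewer might look like the following; the column names, rules, and data are hypothetical, not an actual Agos schema.

```python
# Minimal sketch of automated data-quality gates with a human review queue.
# Columns, rules and values are hypothetical illustrations.
import pandas as pd

applications = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "monthly_income_eur": [2500, -100, 3100, None],
    "requested_amount_eur": [8000, 5000, 5000, 12000],
})

issues = pd.DataFrame({
    "duplicate_customer": applications.duplicated("customer_id", keep=False),
    "missing_income": applications["monthly_income_eur"].isna(),
    "negative_income": applications["monthly_income_eur"] < 0,
})

clean = applications[~issues.any(axis=1)]        # safe to feed to the model
review_queue = applications[issues.any(axis=1)]  # routed to a human analyst
print(f"{len(clean)} clean rows, {len(review_queue)} flagged for manual review")
```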
Looking ahead, what trends or innovations do you anticipate will significantly influence how banks manage and utilise data for AI purposes?
I believe that in the future we will see a growing number of hybrid solutions combining traditional and generative AI. For some tasks it is crucial to use models that are transparent and interpretable, both to build trust with customers and regulators and to reduce banks' concerns about regulatory compliance.
One of the aspects that is emerging strongly in the financial landscape is the use of synthetic data. This is because synthetic data allows for testing without compromising sensitive customer information and facilitates improved model training, especially when dealing with limited real data, such as in fraud cases.
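As one concrete flavour of this, oversampling techniques such as SMOTE can synthesize additional fraud-like examples when real cases are scarce. The sketch below runs on simulated data and is a generic illustration of the idea, not a description of any specific bank's approach.

```python
# Minimal sketch: synthesize extra minority-class (fraud) examples with SMOTE.
# The dataset is simulated; SMOTE is just one of several synthetic-data methods.
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

# Simulated transactions: roughly 1% fraud, mimicking the scarcity of real cases.
X, y = make_classification(
    n_samples=5000, n_features=10, weights=[0.99, 0.01], random_state=0
)

X_balanced, y_balanced = SMOTE(random_state=0).fit_resample(X, y)
print("fraud cases before:", int(y.sum()), "after:", int(y_balanced.sum()))
```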
Finally, one area where AI could really help financial institutions is keeping up with the constant changes in rules and regulations, and in particular implementing and executing the numerous controls that companies operating in this space are required to perform.
Follow me
That's all for this week. To keep up with the latest in generative AI and its relevance to your digital transformation programs, follow me on LinkedIn or subscribe to this newsletter.
Disclaimer: The views and opinions expressed in Chronicles of Change and on my social media accounts are my own and do not necessarily reflect the official policy or position of S&P Global.