For banks, GenAI at scale must be responsible GenAI

Co-authored with Maria Nazareth, PwC US Partner

Generative AI (GenAI) offers banks incredible opportunities to increase efficiency, automate workflows, and hyper-personalise customer experiences. Forward-thinking financial institutions are already starting to unlock the value of this technology. But with great opportunity comes great responsibility.

For banks, scaling GenAI isn’t just about innovation; it’s about ensuring safety, compliance, and trust. To fully capitalise on GenAI’s benefits while mitigating its risks, robust governance structures and ethical considerations need to be implemented from day one.

In this blog, we’ll look at some of the key considerations for financial institutions when establishing GenAI governance frameworks to enable responsible, scalable, and effective implementation.

Innovating responsibly

There are useful guidelines already in the public domain. In the US, the National Institute of Standards and Technology (NIST) has published its AI Risk Management Framework. PwC has also published a responsible AI guide with key considerations for organisations.

Responsible AI foundations will help banks achieve several critical goals, including:

  • Building trust and credibility with users, stakeholders, and regulators by identifying and reducing bias in GenAI models, supporting the fair outcomes vital in sensitive industries such as finance, and by being transparent about how GenAI models generate their outputs.

  • Driving sustainable innovation through compliance with global regulations and standards that underpin adaptable systems able to meet evolving ethical, legal, and technical requirements.

  • Improving model performance and reliability through monitoring and auditing to drive continuous improvement and reduce risks.

  • Boosting user adoption and engagement by promoting the inclusivity and accessibility that lead to greater trust and faster adoption.

  • Enabling scalable growth by minimising reputational, operational, and compliance risks. Banks should also extend global reach and market opportunities by ensuring systems can adapt to different regions’ cultural, ethical, and legal expectations.
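The monitoring point above can be made concrete. One widely used drift check compares a feature’s live distribution against its training baseline via the population stability index (PSI); a minimal Python sketch follows (the bin count and the 0.25 alert threshold are common rules of thumb, not fixed standards):

```python
import math


def population_stability_index(expected, actual, bins=10):
    """Population stability index between a baseline sample ('expected',
    e.g. training data) and a live sample ('actual'). Values above ~0.25
    are commonly read as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against zero-width bins

    def bin_shares(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Smooth empty bins so the log ratio below stays defined.
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]

    e, a = bin_shares(expected), bin_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job might compute this daily per input feature and open a review ticket whenever the index crosses the agreed threshold.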

Taking accountability

Banks should prioritise transparency, fairness, and accountability, ensuring GenAI model outputs are trustworthy and consistent with ethical standards. Paying close attention to the data a model consumes is especially critical to mitigating bias.

Protecting data, intellectual property rights, and security is also fundamental. Just as important are the reputational and legal risks associated with the misuse of GenAI, including disinformation and ‘deepfakes’. There are already well-documented instances of financial fraud perpetrated using AI to impersonate key personnel. More will inevitably follow.

Balancing ESG goals with AI’s appetite for power

GenAI’s immense computing demands mean higher emissions. Balancing those energy requirements with ESG commitments and targets can raise some crucial governance challenges.

Risk and reward in balance

A governance framework must also support returns from banks’ substantial investments in GenAI. Creating a hub for GenAI activities, often known as an AI Factory, will reinforce and spread adoption of global standards and capabilities without having to reinvent the responsibility wheel each time.

Practical steps to responsible AI

So how should banks get started?

  • Define standards and centralise validation. GenAI regulation is constantly evolving, with standards and rules varying across jurisdictions. Constant horizon scanning is vital, as is determining the cost and process implications of selecting a specific regulatory standard. Centralising reliability and compliance testing will help facilitate alignment with both internal and external standards.

  • Encourage cross-functional collaboration. Teams tasked with developing AI solutions should be encouraged to use centralised AI capabilities so that they can achieve their goals while operating within a shared and consistent governance framework.

  • Operate compliantly across borders:

  • Data localisation: Assist GenAI systems in respecting local data residency laws by deploying data infrastructure regionally or using cloud solutions with localised data storage options.

  • Data minimisation: Limit the collection and transfer of personal data to only what is necessary for AI operations.

  • Preserve privacy: Implement federated learning, differential privacy, or data anonymisation to comply with data protection laws while enabling cross-border collaboration.

  • Streamline compliance and monitor continuously. Centralisation should also apply to data management to support the consistent application of data governance, and centralised compliance monitoring can use audits and assessments to identify risks early. Making common, standard tools available centrally also makes it easier for everyone to innovate responsibly.
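To illustrate one of the privacy-preserving techniques listed above, the sketch below releases only a noisy aggregate rather than raw records, in the spirit of differential privacy (the function names and parameter choices are illustrative, not a production mechanism):

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """One draw from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)


def private_mean(values, sensitivity: float, epsilon: float) -> float:
    """Mean of 'values' with Laplace noise calibrated so the release
    satisfies epsilon-differential privacy for the given sensitivity
    (how much one record can move the mean)."""
    true_mean = sum(values) / len(values)
    return true_mean + laplace_noise(sensitivity / epsilon)
```

A smaller epsilon means stronger privacy but noisier answers; cross-border teams can then share the noisy aggregate instead of moving personal data between regions.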

Engage all stakeholders

Clear communication with employees, customers, and regulators builds trust, and regular feedback loops help refine practices. Taking diverse perspectives into account will improve AI outcomes and build organisational confidence.

Develop the technical architecture

Robust technical infrastructure is crucial for AI governance. That requires centralised platforms for model development, testing, and monitoring that support version control, audit trails, and automated compliance checks. It’s also vital to integrate tools like model risk management software, data lineage trackers, and bias detection algorithms. These solutions should fit seamlessly into existing IT ecosystems to facilitate organisation-wide adoption.
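As a sketch of what version control and audit trails can mean in practice, the toy registry below (class and field names are hypothetical) fingerprints each model artefact with SHA-256 and appends every registration and approval to an append-only log:

```python
import hashlib
from datetime import datetime, timezone


class ModelRegistry:
    """Toy central registry: artefacts are fingerprinted by SHA-256 and
    every registration and approval is appended to an audit trail."""

    def __init__(self):
        self.models = {}     # (name, version) -> metadata
        self.audit_log = []  # append-only event list

    def _record(self, event, detail):
        self.audit_log.append({
            "event": event,
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def register(self, name, version, artefact: bytes, owner: str):
        digest = hashlib.sha256(artefact).hexdigest()
        self.models[(name, version)] = {
            "owner": owner, "sha256": digest, "approved": False,
        }
        self._record("register", {"name": name, "version": version, "sha256": digest})

    def approve(self, name, version, reviewer: str):
        self.models[(name, version)]["approved"] = True
        self._record("approve", {"name": name, "version": version, "reviewer": reviewer})
```

In a real platform the log would live in tamper-evident storage and approval would be gated on validation results, but the shape of the record is the same.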

Use and report the right metrics

Measuring AI governance effectiveness requires clear metrics to track model accuracy, fairness scores across demographics, time to regulatory compliance, and stakeholder satisfaction. KPIs should align with broader organisational goals and be reported to senior leadership.
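One of the fairness scores mentioned above can be as simple as a demographic parity gap: the largest difference in approval rates between groups. A deliberately simplified sketch (real programmes track several complementary metrics):

```python
from collections import defaultdict


def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs.
    Returns the approval rate per demographic group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}


def demographic_parity_gap(decisions) -> float:
    """Largest difference in approval rate between any two groups;
    0.0 means perfectly equal rates."""
    rates = list(approval_rates(decisions).values())
    return max(rates) - min(rates)
```

Reported as a KPI, the gap can be tracked per model release alongside accuracy, with a tolerance band agreed with compliance.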

A future-ready GenAI strategy rooted in responsibility and scalability

A future-ready GenAI strategy demands a responsible and scalable governance framework that empowers banks to innovate boldly, adopt consistently, and foster trust while meeting evolving regulatory expectations. It’s not just about efficiency; it’s about aligning AI with the organisation’s values. AI models are tools, and the choice to prioritise responsibility, fairness, and safety over raw predictive value defines their impact. All banks have a choice: shape the future responsibly or risk being shaped by it.

If you’d like to discuss how to develop and implement a responsible and scalable GenAI governance framework, please get in touch.

This content is for general information purposes only and should not be used as a substitute for consultation with professional advisors.

Noelle Silberbauer, CPA

Partner, PwC Digital Assurance and Transparency

2 weeks ago

Great article, thanks!

Łukasz Łochowski

Risk management geek that really enjoys what he is doing

3 weeks ago

Responsibility is not that far from trust. Trust always needs time. With the proper framework in place, we will have trust that will pave the way for AI in the future.

Daniel Müller

Partner - Financial Services

3 weeks ago

Thank you Sebastian and Maria for this summary. As many people say these days, "eventually humans will not be replaced by AI but by humans who are more effectively applying AI". Looking forward to reading more about the human factor needed to make the change happen. Specifically interested in what leadership can do to enable change at scale.

El Amine OURAIBA

PhD. IT Senior Leader | InsurTech Guidewire | AI & GenAI & LLMs & RAG & Prompt Engineering

1 month ago

EU AI Act

Very informative and thank you for the great insights

More articles by Sebastian Ahrens
