The surprising blind spot of financial institutions on Privacy-Enhancing Technologies

Imagine a world where regulated entities can collaborate on sensitive data to enhance risk management, detect fraud, or even train cutting-edge AI models—all without ever exposing or transferring that data. It sounds surreal, doesn't it?

This is the power of Privacy-Enhancing Technologies (PETs). These technologies allow for secure data analysis and usage across multiple entities without compromising privacy or breaching regulations.

Over the past few months, Tune Insight and I have dug deep into the adoption of PETs in financial services. We've interviewed bankers, regulatory bodies, subject-matter experts (notably in Anti-Money Laundering, AML), providers of point solutions to banks, and investors to get the lay of the land on the use of PETs in financial services.

We were intrigued to learn that the adoption of PETs is still in its infancy, despite the clear value these technologies can unlock in the financial sector.

This post sums up some of our findings and offers a glimpse of the opportunities ahead in the sector.


Understanding Privacy-Enhancing Technologies

To grasp the potential of PETs, consider a simple analogy:

Imagine several banks, each holding a piece of a puzzle but unwilling to share their pieces. PETs act like a master puzzle solver, assembling the full picture without the pieces ever leaving their owners' hands. The puzzle represents the data, and the master solver is the PET, allowing for comprehensive insights without exposing the raw data.

A closer look

In practice, Privacy-Enhancing Technologies encompass a range of methods designed to protect data privacy while enabling data analysis. Notable examples include:

  • Homomorphic Encryption (HE): First proposed in 1978, it allows computations to be performed on encrypted data without decrypting it first. While powerful, it's computationally intensive—often 1,000 to 10,000 times slower than computations on unencrypted data—which can make it impractical for large-scale applications without optimization.
  • Secure Multiparty Computation (SMPC), Secret Sharing-Based: Dating back to the late 1970s, it enables multiple parties to jointly compute a function over their inputs while keeping those inputs private. It distributes the computation among parties, ensuring that no single party can see the others' data. While more efficient than homomorphic encryption for certain computational tasks, secret sharing-based SMPC has higher communication complexity, and doesn’t scale well for more than 3-4 parties.
  • Zero-Knowledge Proofs (ZKP): Introduced in 1985, ZKPs allow one party (the prover) to prove to another party (the verifier) that a statement is true without revealing any information beyond the validity of the statement itself. This technique is particularly useful for authentication and verification processes where confidentiality is crucial.
  • Differential Privacy: Introduced in the mid-2000s, it provides a mathematical guarantee that the output of a computation doesn't compromise the privacy of any individual input. It adds controlled noise to the results, balancing privacy and accuracy.
  • Federated Learning: Introduced by Google in 2016, federated learning brings the model to the data instead of bringing the data to the model. Each institution trains a local version of a shared model on its own data, and only the model updates—not the data—are shared and aggregated to improve the global model. However, simple federated learning requires additional security measures to protect against threats like data leakage and model poisoning.
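To make the first item above concrete: the additive flavour of homomorphic encryption can be sketched with a toy Paillier cryptosystem, in which multiplying two ciphertexts yields an encryption of the sum of the two plaintexts. The tiny primes below are chosen for readability only; a real deployment would use keys of 2048 bits or more.

```python
import math
import random

# Toy Paillier keypair (insecure demo primes; real keys are ~2048-bit).
p, q = 17, 19
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard generator choice
lam = math.lcm(p - 1, q - 1)  # private key
mu = pow(lam, -1, n)          # decryption factor, valid for g = n + 1

def encrypt(m: int) -> int:
    """Encrypt a message m (0 <= m < n) with fresh randomness."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    """Recover the plaintext from a ciphertext."""
    u = pow(c, lam, n2)
    return (u - 1) // n * mu % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts.
a, b = 42, 100
c_sum = encrypt(a) * encrypt(b) % n2
print(decrypt(c_sum))  # 142
```

The sum is computed entirely on encrypted values; only the holder of the private key can read the result.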
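Secret sharing-based SMPC is even simpler to sketch. In the hypothetical three-bank example below, each bank splits its private value into random additive shares, so the joint total can be computed while no party ever sees another's input.

```python
import random

P = 2**61 - 1  # public prime modulus; all arithmetic is done mod P

def share(secret: int, parties: int) -> list:
    """Split a secret into random additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Three banks, each holding a private exposure figure (illustrative values).
exposures = [1_200_000, 850_000, 2_400_000]

# Each bank splits its value and sends one share to each participant;
# individually, a share reveals nothing about the underlying value.
all_shares = [share(v, 3) for v in exposures]

# Each participant locally sums the shares it received...
partial_sums = [sum(col) % P for col in zip(*all_shares)]

# ...and only these partial sums are published: their sum is the joint total.
total = sum(partial_sums) % P
print(total)  # 4450000
```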

By combining these technologies, organizations can create robust, secure, and efficient solutions tailored to solve large-scale data problems. For instance, federated learning can be enhanced with differential privacy to protect against data leakage, and secret-sharing-based SMPC can be combined with homomorphic encryption to optimize performance.
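As a minimal sketch of such a combination, the following hypothetical example federates a toy regression model across three banks and adds Laplace noise (the standard differential-privacy mechanism) to each model update before averaging. The datasets, learning rate, and noise scale are illustrative assumptions, not a production recipe.

```python
import math
import random

random.seed(0)

# Each bank's private dataset: noisy points near y = 2x + 1 (illustrative).
banks = [
    [(0.0, 1.0), (1.0, 3.0)],
    [(2.0, 5.1), (3.0, 6.9)],
    [(1.5, 4.0), (2.5, 6.0)],
]

def local_step(w, b, data, lr=0.05):
    """One least-squares gradient step on a single bank's local data."""
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    return w - lr * gw, b - lr * gb

def laplace(scale):
    """Laplace noise, used to mask individual contributions."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

w, b = 0.0, 0.0
for _ in range(500):
    # Each bank trains locally; only noisy model updates leave the institution.
    updates = [local_step(w, b, data) for data in banks]
    w = sum(uw + laplace(0.001) for uw, _ in updates) / len(banks)
    b = sum(ub + laplace(0.001) for _, ub in updates) / len(banks)

print(round(w, 2), round(b, 2))  # close to the true slope 2 and intercept 1
```

No raw data point ever leaves a bank; the server only sees noisy parameter updates, and the learned model is still close to the one trained on the pooled data.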


The underutilization of PETs in financial services: a missed opportunity

Financial services are driven by data, whether it's used for detecting fraud, assessing credit risk, or ensuring regulatory compliance. This data is also extremely siloed across jurisdictions, legal entities within groups, and across distinct institutions.

Yet, the adoption of PETs within this sector is surprisingly limited. As far back as five years ago, the financial sector was touted as one of the most promising fields for PETs by opinion leaders such as the WEF. Gartner predicted in 2022 that “by 2025, 60% of large organizations will use privacy-enhancing computation techniques to protect privacy in untrusted environments or for analytics purposes”.

And yet we are nowhere near that mark. Despite the immense value these technologies can unlock, we are surprised to see the financial services sector remain unresponsive to PETs and fall behind other sectors such as healthcare.

Use cases for PETs in financial services: the road not yet taken

Throughout our exploration of PETs in financial services, we've identified multiple areas ripe for unlocking value. Notably, these include:

1. Fraud detection and anti-money laundering (AML)

Financial crime plagues banks and financial institutions, whether through fraud ($32bn in estimated credit card fraud in 2022, according to the Nilson Report), AML costs ($54bn globally, 80% of which is personnel costs spent dealing with false positives and inefficiencies, according to Swift), or fines (e.g. the $2bn in fines in the 2018 Danske Bank case). Banks could significantly enhance the effectiveness and efficiency of their financial crime-fighting by pooling transaction data across institutions. However, concerns about data privacy and regulatory compliance prevent this from happening.

“Together we stand, divided we fall” - Mastercard on Privacy Enabling Technologies, February 2024

PETs like federated learning combined with HE, SMPC and differential privacy have the potential to enable this collaboration, allowing banks to jointly analyze data without sharing sensitive information. For instance, derivatives of the Leiden algorithm used for AML can be run on federated and secured data spaces.

2. Risk management

Aggregating data across multiple sources offers a more accurate picture of potential risks. Financial institutions already track their risk exposure in an aggregated view but have limited drill-down or analytical capabilities on a granular level—a limitation that PETs can address.

3. Training proprietary Large Language Models (LLMs)

The financial sector is increasingly exploring AI and machine learning, particularly in training proprietary LLMs on siloed datasets. PETs are crucial here, allowing institutions to train powerful models across distributed data sources in a privacy-preserving manner.

Techniques like federated learning combined with SMPC or differential privacy enhance model quality while safeguarding sensitive information. Tune Insight recently published a series of articles for technical audiences, covering the training of LLMs with private data and the responsible use of AI.

4. Refined underwriting capabilities

PETs can enhance underwriting processes by enabling secure collaboration within and among banks or insurers:

  • Credit underwriting: Banks can extract insights from collective credit histories and financial behaviors securely to improve credit scoring models. This collaboration can lead to more accurate assessments of creditworthiness without exposing sensitive customer data.
  • Collateral verification: PETs empower institutions to validate whether a piece of collateral has been pledged elsewhere without revealing transaction details. This prevents multiple loans from being secured by the same asset, reducing fraud and financial risk.
  • Insurance underwriting: Insurers can collectively leverage broader datasets to refine their underwriting models. By collaborating on claims data and risk factors through PETs, they can improve pricing accuracy and risk assessment while maintaining customer privacy.

5. Navigating conflicting regulatory requirements

Regulators demand ever-greater transparency and reporting. However, different regulators—such as the European Central Bank (ECB), FINMA in Switzerland, and BaFin in Germany—may have conflicting requirements.

For example, a multinational banking group might face a situation where one jurisdiction does not allow granular data to be used to meet the requirements of a regulator in another country. Swiss banking secrecy laws may conflict with ECB AML reporting obligations.

PETs, specifically zero-knowledge proofs combined with federated learning, enable banks established across multiple jurisdictions to comply with regulatory demands without breaching local data privacy laws. This allows banks to prove compliance or share necessary indicators without revealing the underlying sensitive data or its origin.

6. Reducing Friction in KYC Processes

Zero-knowledge proofs can significantly streamline Know Your Customer (KYC) processes and blacklist sharing.

They have the potential to enable institutions to verify identities and compliance statuses without disclosing personal information. This enhances identity management across the financial sector while respecting jurisdictional privacy laws.
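As an illustration of the underlying mechanism, a Schnorr-style identification protocol lets a prover demonstrate knowledge of a secret credential without disclosing it. The sketch below uses deliberately tiny, insecure parameters; real systems rely on elliptic-curve groups and non-interactive variants.

```python
import random

# Toy Schnorr identification protocol. The prover shows knowledge of a
# secret x with y = g^x mod p, without revealing x itself.
p, q, g = 23, 11, 2          # g has prime order q in the group mod p

x = 7                        # prover's secret (e.g. a credential)
y = pow(g, x, p)             # public key registered with the verifier

# 1. Prover commits to a random nonce.
k = random.randrange(1, q)
t = pow(g, k, p)

# 2. Verifier issues a random challenge.
c = random.randrange(1, q)

# 3. Prover responds; the response leaks nothing about x on its own.
s = (k + c * x) % q

# 4. Verifier checks the proof using only public values.
ok = pow(g, s, p) == t * pow(y, c, p) % p
print(ok)  # True
```

The verifier learns that the prover holds a valid credential, and nothing else; this is the property that makes ZKP-based KYC attestations possible.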

By securely sharing verification statuses, banks can reduce duplication of efforts, lower costs, and improve customer onboarding experiences. This can even be productized towards third party partners, with point of sale KYC for large value items in retail.


So why aren’t PETs widely used yet?

Several factors contribute to the late adoption of PETs in financial services:

1. Legacy systems and competing priorities

Most financial institutions operate on deeply entrenched legacy systems. Adopting new technologies can seem daunting or unreasonable, especially when resources are allocated to more immediate concerns.

2. Lack of awareness

Though often technology-savvy, a significant number of financial institutions are simply unaware of the existence or capabilities of PETs and their potential impact.

3. Perceived complexity and integration concerns

Integrating new technologies into existing systems - especially when touching client-identifying data - can be complex and seen as risky. The perceived difficulty of implementing PETs has kept some institutions from taking a first mover approach.

4. Tradeoffs in security and efficiency

Not all PETs are equally secure or computationally practical.

Performing operations on homomorphically encrypted data can be several orders of magnitude slower than on plaintext data. Similarly, while SMPC provides strong privacy guarantees, the computational and communication overhead can be significant, especially as the number of participants increases.

Unsecured federated learning is susceptible to attacks like data poisoning, where malicious updates can corrupt the global model. Additional security layers, such as secure aggregation and differential privacy, are needed to mitigate these risks.

PETs must be implemented in a coherent and intelligent way to securely unlock their potential. This can be daunting for an institution, but can easily be solved with the right technology provider.

5. Regulatory uncertainty

There is sometimes confusion about whether using PETs aligns with regulatory requirements, despite the fact that these technologies are designed to enhance compliance. This uncertainty hampers the adoption of PETs in banks.

For instance, in the case for AML and cross-entity collaboration, article 75 of the EU AML/CTF legislation (adopted in April 2024) effectively clarifies that client or transaction data may be exchanged post-suspicion, but not proactively.

Although federated learning does not involve raw data traveling across entities, the lack of an explicit legal framework applicable to federated learning has kept some legal teams concerned, despite its de-risking benefits on the overall business.

6. Limited willingness to collaborate and trust issues across institutions

Banks have historically shown little willingness to collaborate and have been distrustful of sharing information, especially under a centralised data-sharing model. Fear of stepping on the toes of antitrust authorities is also often mentioned.

We have, however, seen exceptions emerge, often nudged by regulatory bodies or through Public-Private Partnerships (PPPs), including notably:

  • The Netherlands with Transaction Monitoring Netherlands (TMNL): Five major Dutch banks launched a joint venture in 2021 to monitor transactions and collectively detect unusual patterns, albeit one stalled by the Dutch data privacy authority in 2024.
  • Spain with FrauDfense: With Banco Santander, BBVA and CaixaBank joining forces in 2023 specifically targeting fraud.
  • UAE with KYC data exchanges: In the United Arab Emirates, regulatory bodies have encouraged banks to collaborate on Know Your Customer (KYC) data exchanges to streamline processes and enhance compliance.

These examples show that collaboration is in fact possible, even without a centralised data-sharing model. We are confident that PETs and decentralized or federated approaches are crucial for the financial industry to successfully achieve cross-entity collaboration.


The way forward: modern solutions for modern challenges

The good news is that the barriers to adopting PETs are rapidly diminishing. Today, there are advanced technology platforms specifically designed to integrate securely with existing financial infrastructure. These solutions are built by expert teams who understand both the technology and the unique challenges faced by the financial sector.

Tune Insight is one such team—a leader in privacy-preserving data analytics. Their platform leverages all of these technologies in a computationally sensible way, deploying each where it is needed.

By intelligently combining PETs like SMPC, federated learning, homomorphic encryption, and differential privacy, the team provides efficient and secure solutions tailored to specific use cases, striking the right balance between optimized computation, enhanced security and scalability.

Whether it's for fraud detection, risk management, or the training of proprietary LLMs, Tune Insight offers a cutting-edge approach that can transform how financial institutions operate.

In conclusion

The financial industry has the chance to enter an era where data collaboration doesn't have to come at the expense of privacy or compliance. By leveraging Privacy-Enhancing Technologies, banks and financial institutions can finally unlock substantial opportunities for innovation, efficiency, and security.

The technology is ready—it's time for the industry to catch up.

Frederic Pont

Co-founder & COO at Tune Insight


It's a pleasure to work with you Alexandre Moreillon, and this article is a great summary of our discussions and findings! Switzerland, as one of the most innovative countries in the world and home to so many great financial institutions, is ideally positioned to lead the way and, as you wrote "enter an era where data collaboration doesn't have to come at the expense of privacy or compliance."
