Examining the Top Lapses in BaaS Banks’ Compliance
By: Tyler Brown
APRIL 9, 2024
It’s quite clear that Banking-as-a-Service (BaaS) banks need to think more carefully about risk and compliance as regulatory scrutiny of the space continues to ramp up. According to research by Klaros Group, 11 BaaS banks have received enforcement actions from their primary federal regulator, each citing at least two categories of deficiencies and often including failures related to third-party risk management.
As we’ve written, most of BaaS banks’ compliance lapses could just as well have occurred at any other bank subject to typical scrutiny. What distinguishes BaaS banks is that their issues largely stem from partnerships with fintechs — in other words, third-party risk issues — and point to leadership’s need to pay close attention to risk mitigation, regulatory compliance, and the infrastructure that supports third-party relationships. The four most common issues BaaS banks have faced in consent orders are:
1. Board governance. A well-managed, risk-conscious BaaS strategy — like a plan at any bank — depends on a board that communicates a clear vision for the bank’s future while being thoughtful about risk and diligent in its oversight of controls.
“…the Board […] shall effectively supervise all of the Bank’s compliance-related activities, consistent with the role and expertise commonly expected for directors of banks of comparable size and complexity …” — FDIC Consent Order, First Fed Bank
2. Third-party risk management and oversight. Regulators have made clear again and again that the scope and quality of risk management related to third parties must be appropriate for the bank’s size, risk profile, and the nature of the third-party relationships.
“The Third-Party Risk Management Program shall be commensurate with the level of risk and complexity of the Bank’s third-party relationships.” — OCC Consent Order, Blue Ridge Bank
3. Restrictions on business. Nearly all the BaaS banks ensnared by consent orders have faced restrictions on their businesses related to fintech partnerships. Those restrictions have capped their growth and interrupted the operations of a line of business.
“The Bank may not initiate, add a new product or service, or modify or expand an existing product or service in a way that is not consistent with the Board-approved Capital and Strategic Plans.” — OCC Consent Order, Vast Bank
4. BSA/AML. Fintech partners’ growth ambitions incentivize them to set a low bar for KYC, and their bank partners’ desire to appease them has led to increased risk of KYC failures. Those failures, along with lapses in transaction monitoring and other fraud concerns, have come back to bite sponsor banks.
“…the Bank shall develop and submit […] a comprehensive Bank Secrecy Act and anti-money laundering (“BSA/AML”) risk assessment program (“BSA/AML Risk Assessment Program”) …” — OCC Consent Order, B2 Bank
Acceptable risk management and compliance start at the board level. But heading off the compliance lapses common to BaaS banks may depend on expertise boards don’t have — particularly at the community and small regional banks that make up most BaaS banks. It’s therefore crucial that a BaaS bank’s board include expertise not only in areas like compliance, risk management, governance, and audit, but also in technology — and that it hire senior management that can execute.
BaaS has been a lightning rod for regulators, but that doesn’t mean bankers should be afraid of investing in that line of business. It means that they should consider their board’s expertise and leadership needs as they lay out a strategy, and that they continue to evaluate the controls they need to run the business cost-effectively without attracting regulators’ attention.
Cybersecurity Is Top of Mind While Banks Ponder AI
APRIL 11, 2024
By: Tyler Brown
Technology Strategy and Compliance
Bankers’ concerns about business risks are on the rise, and a top issue is cybersecurity. In fact, 39% of senior leaders at financial institutions (FIs) say their concern about cybersecurity risks has increased significantly over the past year, according to a Bank Director report, second only to interest rate risk. Compliance and regulatory issues, which tie back closely to cybersecurity, were also in the top five. Bankers need to be mindful of those risks as they refine their business and technology strategies, especially in the context of applications for artificial intelligence (AI).
The rise of AI is poised to pose a real threat to cybersecurity across industries, and banks are no exception. Issues that stem from generative AI in particular, notes a US Treasury report, can include lower barriers to entry for attackers, more sophisticated and automated attacks, and a shorter time for attackers to exploit a bank’s vulnerabilities. A bank’s AI algorithms can also be tampered with or attacked.
That raises the question of what banks can and should do about AI-driven cybersecurity risk. The question is complicated by the limits of bankers’ ability to understand AI systems and assess them in the context of technology and security risk management. Given the complexity of AI models and most banks’ dependence on vendors to implement them, bankers need to consider how they manage controls in two parts.
The first part is leveraging existing cybersecurity controls. The Treasury report recommends that banks use cybersecurity best practices to secure AI systems and “map their security controls to AI applications,” train employees frequently, patch vulnerabilities diligently, and pay close attention to the handling of data. Critically, this imperative applies even to banks that don’t run their own AI systems: they still need to strengthen security to ward off increasingly sophisticated AI-based attacks.
The second part is the integration of AI systems using existing enterprise risk management. As we’ve written, banks will most likely rely on vendors for AI systems, which increases third-party risk, made more complicated by the financial, legal, and security risks related to the handling of data. The successful use of AI depends on addressing risk management and control issues in four areas, according to the Treasury report:
Handling risk and compliance issues at the intersection of AI and cybersecurity depends in part on how banks talk about those topics internally, with technology partners, and with regulators. AI itself does not have specific frameworks for risk or compliance, notes the Treasury report. Instead of building new AI-specific frameworks, “regulators are focused on institutions’ enterprise risk frameworks” and how AI fits into existing practices for cybersecurity and anti-fraud.
First, instead of talking about AI in general, bankers should be talking about its specific applications and the risks those applications create for the bank. Second, they should be talking about how existing resources and risk management processes already account for potential problems, both in their own use of AI and from external threats.