This Week in GRC: When risk modelling gets real

Welcome to Issue 79 of This Week in GRC, MBK Search's weekly digest of the news and views in the world of governance, risk, and compliance.


This Week's Opening Bell

The prospect of a regional war in the Middle East grew even more acute this week, following Iran's missile attack on Israel.

There are enormous stakes at play, and risk professionals around the globe will be running through a litany of scenarios for the days and weeks ahead.

The possibility of an oil-price shock similar to that seen in the 1970s has been mooted, while the likelihood of Israel going after Iran's nuclear facilities has dwindled.

A paper by Darya Dolzikova and Matthew Savill, originally published in April by the Bulletin of the Atomic Scientists, says: "The only conventional weapon that could plausibly achieve this is the American GBU-57A/B massive ordnance penetrator, which – at over 12 tonnes and 6 metres long – can only be carried by large US bombers like the B-2 Spirit."

How the markets react, how oil reacts, and how the US reacts a month out from the presidential election will give GRC much to think about.


This Week's Issue

The one dissenting voice at the Fed is calling for a change in regulatory thresholds for community banks.

Why topical requirements could make new Internal Audit standards even more complex.

And why the EU's tough stance on AI could come back to haunt it.


This Week's GRC Headlines

SEC Enforcement Director Gurbir Grewal Steps Down

Gurbir Grewal, who oversaw a period of increased enforcement against Wall Street and the fast-growing cryptocurrency industry as the director of the U.S. Securities and Exchange Commission's enforcement division, is stepping down from his role.

Appointed in 2021, Grewal became known for advocating bigger fines against securities law violators and taking a tough stance on cryptocurrency, which he saw as a new form of an activity the SEC had long regulated.

Under Grewal's leadership, the SEC brought over 2,400 enforcement actions, imposed over $20 billion in fines and disgorgement, barred over 340 people from Wall Street, and awarded over $1 billion to whistleblowers.

One area of focus was Wall Street's record-keeping practices, with the SEC levying billions in fines against institutions for failing to oversee traders' use of off-channel communications methods.

Grewal also implemented SEC Chair Gary Gensler's position on regulating the cryptocurrency industry, arguing that U.S. securities laws can be applied to the sale and exchange of cryptocurrencies in many cases. This stance has drawn criticism from the industry and some lawmakers.

Grewal will depart the SEC on October 11, with the division's deputy director, Sanjay Wadhwa, set to serve as acting director.



Starling Bank Fined £29m by FCA for 'Shockingly Lax' Financial Crime Controls

The UK's Financial Conduct Authority (FCA) has fined Starling Bank £29 million for failings in its financial crime systems and controls, which the regulator said "failed to keep pace with its growth."

The FCA's review in 2021 identified serious concerns with the bank's anti-money laundering (AML) and financial sanctions framework. Despite the bank's AML Enhancement Plan and a voluntary requirement (VREQ) not to open new accounts for high or higher-risk customers, Starling Bank opened accounts for almost 50,000 such customers over the relevant period.

In January 2023, the bank identified failings in its sanctions screening controls, with only a fraction of the relevant sanctions lists being screened. A subsequent review revealed widespread systemic issues in its financial sanctions framework.

The FCA concluded that Starling Bank "failed to ensure that its screening of customers and payments was sufficient" to prevent a breach of financial sanctions. The bank's remediation programs and agreement to resolve the matters led to a 30% discount on the fine, reducing it from £40 million to £29 million.

The FCA highlighted the speed of the investigation, which took 14 months compared to an average of 42 months. This reflects the regulator's focus on speeding up outcomes in enforcement cases.


Former CQC Executives Charged with Fraud in Carbon-Credit Scheme

Two former executives of carbon-credit project developer CQC Impact Investors have been charged with manipulating data to fraudulently obtain carbon credits and deceive a backer into investing over $100 million in the firm.

According to federal prosecutors, former CEO Kenneth Newcombe and former head of carbon and sustainability accounting Tridip Goswami used falsified data to get an issuer of voluntary carbon credits to verify unachieved emissions reductions for a project CQC ran from 2021 through 2023.

Former COO Jason Steele, who pleaded guilty to his role in the scheme, is now actively cooperating with the government. The Justice Department, considering CQC's quick disclosure, cooperation, remediation efforts, and agreement to cancel or void fraudulently obtained carbon credits, declined to bring criminal charges against the firm.

The Commodity Futures Trading Commission and the Securities and Exchange Commission filed parallel civil actions, with the CFTC fining CQC $1 million in its first-ever enforcement action alleging fraud in the voluntary carbon-credit market.


This Week's GRC Hot Takes

  1. There are already 223 mandatory requirements in the new internal audit standards. Topical Requirements add even more, and Todd Davies argues that's a bad idea.
  2. AI regulation is a hot topic, and while the EU seems to be taking the most direct approach to keeping tech in check, Marc Beierschoder wonders if it will come back to haunt them.
  3. Speaking of which, Christian Klein, chief executive of SAP, said Europe would be at a "massive disadvantage" if testing AI models was easier in the US.
  4. "So when I read an article on financial crime in the public sector in Norway, which states that 'with just a few keystrokes,' criminals can 'siphon billions from the state treasury,' I can't help but worry about the attention this crime is—or isn't—getting," writes Marit Rødevand.


This Week's GRC Watch

Fed Governor Bowman calls for rethink of regulatory thresholds

After becoming the first Fed governor to dissent from a rate decision since 2005, Governor Michelle Bowman spoke this week about the impact of regulatory thresholds on community banks.

She made three points worthy of attention:

  1. Focusing on activities rather than asset size in assessing risk for appropriate supervision,
  2. Rethinking how we look at bank mergers in rural communities, and
  3. Ensuring someone on the Board of Governors has special supervisory responsibility and authority for community banks, in order to encourage de novo banks and protect the economy from the continued expansion of "Too Big to Fail" and "Too Small to Survive."

Watch the full speech here.


What MBK Search is Talking About

Will cross-Atlantic differences matter when regulating AI?

As AI continues to reshape financial services, regulators on both sides of the Atlantic are responding in distinct ways. For financial institutions operating across the U.S. and Europe, understanding these differences is key to staying compliant and managing risk. Kenneth Blanco, former COO for Financial Crimes at Citibank, has shared insights on how institutions should approach these divergent regulatory landscapes. Let's break down the five key things financial institutions need to know.

The EU’s Comprehensive and Cautious Approach

The EU AI Act, which entered into force on August 1, 2024 (with its obligations phasing in over the following years), sets a new regulatory benchmark. It categorizes AI systems by risk, with high-risk applications such as credit scoring and fraud detection facing strict oversight. According to Blanco, institutions must "prepare for the highest standards of transparency and data governance" to stay compliant. Non-compliance can result in fines of up to 7% of global turnover, making it one of the most stringent AI laws globally.

This risk-based framework is designed to prevent harmful outcomes from AI misuse, but it also means financial institutions must be more cautious when implementing high-risk AI. Human oversight, high-quality data, and detailed reporting are now non-negotiable for any institution using AI in a way that could impact customers’ financial health.

The U.S.: A More Flexible, Sectoral Approach

Unlike the EU, the U.S. has taken a less centralized approach to AI regulation. There’s no single law governing AI across sectors, but guidelines like the Blueprint for an AI Bill of Rights, introduced in 2022, are setting a non-binding framework. Regulatory bodies like the Securities and Exchange Commission (SEC) and the Federal Trade Commission (FTC) are developing sector-specific rules.

Blanco suggests that U.S. financial institutions will find more room for innovation but warns that “without clear regulations, you’ll need to stay alert for future legal developments”. The lack of uniform regulation gives businesses flexibility, but institutions should also prepare for potential changes as the U.S. government advances AI-specific legislation.

Navigating Data Privacy Across Borders

Data privacy is a critical issue on both sides of the Atlantic, but the EU and U.S. take very different approaches. The EU’s GDPR remains the gold standard, with strict data usage, consent, and minimization rules. The AI Act builds on this foundation by adding layers of compliance for AI systems handling personal data. Financial institutions must comply with both the GDPR and the AI Act’s additional rules for high-risk AI systems.

In contrast, the U.S. lacks a comprehensive national privacy law, relying on state-level regulations like California’s CCPA and sectoral rules like the Gramm-Leach-Bliley Act (GLBA) for financial services. This fragmented system offers more leeway for AI systems processing personal data, but it can also create compliance headaches for institutions operating across multiple states.

Explainability and Transparency: A Must in the EU

The AI Act emphasizes transparency, particularly for high-risk systems. AI models used in finance, such as those for credit scoring, must be explainable—financial institutions need to provide clear reasons for each decision made by their AI systems. Blanco pointed out that “transparency is the key to earning regulators’ trust”.

In the U.S., explainability is less formalized, though it is gaining attention. Financial institutions are still expected to prevent AI bias and demonstrate fairness, but they have more flexibility in implementing these safeguards. However, this could change as AI regulation in the U.S. evolves.

The Future: Moving Toward Global Standards?

Although the U.S. and EU are currently diverging in their regulatory approaches, a convergence of standards could be on the horizon. Global cooperation in areas like data sharing, cybersecurity, and AI accountability is already underway, driven by increasing pressure for uniform AI governance across borders.

Blanco emphasized that “institutions should start preparing now for potential harmonization in global AI standards”. Investing in flexible, adaptable governance frameworks will help institutions comply with regulations no matter how they evolve.



At MBK Search, we help firms find world-class talent to build champion teams across regulated markets. Let's start building — visit our website to find out how. www.mbksearch.com