This Week in GRC: An FDA for AI?

Welcome to Issue 78 of This Week in GRC, MBK Search's weekly digest of the news and views in the world of governance, risk, and compliance.


This Week's Opening Bell

"The most glaring gap in our current approach to AI safety is the lack of mandatory, independent and rigorous testing to prevent AI from doing harm," writes Anja Manuel, executive director of the Aspen Strategy Group.

While AI technologies offer significant benefits across various industries, they also carry serious national security risks. Manuel points out that it is entirely feasible for AI to be used to create biological or chemical weapons and to facilitate cyberattacks.

But is a single global regulator the answer?

The UK government has taken commendable steps by hosting the Bletchley Park AI Safety Summit, establishing an AI Safety Institute, and screening leading large language models. Other countries are beginning to emulate this approach.

But even if a consensus can be reached, it won't be a quick one.


This Week's Issue

It's time for CCOs to put real plans in place for AI, says the DOJ

Why effective risk management means avoiding the "risk twat" trap

And how a jailed CEO rocked Sweden's banking sector


This Week's GRC Headlines

CCOs need to evaluate AI - DOJ

The Department of Justice has updated its guidance for prosecutors on evaluating corporate compliance programs, adding artificial intelligence (AI) to the list of areas compliance officers must address.

The revised guidance, announced by Nicole Argentieri, head of the DOJ's criminal division, emphasizes the importance of constantly assessing risks, learning from compliance failures, and adapting compliance programs accordingly. It pushes for quality over simply "ticking the boxes."

Compliance officers are asked to balance leveraging AI's potential benefits with awareness of the risks it poses to their company's business and compliance program. Prosecutors will inquire about companies' approaches to governance and risk assessment regarding the use of AI.

The guidance also highlights the importance of using internal data to strengthen compliance programs, suggesting that prosecutors may view companies negatively if there is a disparity between the use of data and technology in compliance compared to other business areas.

Additionally, the revised guidance incorporates new questions about promoting a speak-up culture and dealing with employees who report misconduct internally, tied to the DOJ's recently launched whistleblower reward program. Prosecutors will scrutinize companies' anti-retaliation policies and treatment of employees who report misconduct.


SEC Fines 12 Firms $88 Million for Record-Keeping Violations

The Securities and Exchange Commission (SEC) has charged 12 firms, including broker-dealers, investment advisers, and one dually registered entity, with widespread and longstanding failures to maintain and preserve electronic communications as federal securities laws require.

The firms admitted to the facts in their respective orders and agreed to pay combined civil penalties of approximately $88.3 million. This action is part of the SEC's ongoing crackdown on financial firms' compliance with record-keeping rules, which has resulted in charges against 60 firms and over $1.7 billion in fines since 2021.

The SEC's investigations uncovered the longstanding use of unapproved communication methods, known as off-channel communications, by 11 of the 12 firms. Stifel, Nicolaus & Co., Invesco Distributors, and Invesco Advisers will each pay $35 million in penalties, while Canadian Imperial Bank of Commerce's CIBC World Markets and CIBC Private Wealth Advisors agreed to pay a $12 million penalty.

The firms were ordered to cease and desist from future violations and censured. Ten of them agreed to retain compliance consultants to review their policies and procedures related to preserving electronic communications.

Notably, Qatalyst Partners won't pay a penalty due to its self-reporting, cooperation, and demonstrated efforts at compliance, highlighting factors the SEC considers when determining penalties in such cases.


Binance CEO Prioritizes Compliance Investment to Move Past Mistakes

Binance, the world's largest cryptocurrency exchange, has invested heavily in its compliance programs over the past year. According to CEO Richard Teng, the company spent approximately $213 million in 2023, a 35% increase from 2022.

Speaking at an event in Singapore, Teng emphasized that the company's financial ability to invest in compliance, including increasing staff and expanding artificial intelligence, will be a competitive advantage as compliance costs continue to rise.

In November 2023, Binance pleaded guilty to violating U.S. anti-money-laundering laws and sanctions, agreeing to pay $4.3 billion in fines. Teng, who took over as CEO following the guilty plea, views the heavy penalty as part of the company's journey to maturity and institutionalization.

Binance is working closely with two independent corporate compliance monitors appointed as part of the criminal settlement and is seeking more regulatory licenses worldwide, having obtained approvals in Indonesia, India, and Thailand earlier this year.

Teng has instituted a new seven-person board of directors and engages in global discussions with law enforcement and policymakers to navigate the varying crypto regulations. He hopes that other players in the industry will learn from Binance's lessons and work together to keep out bad actors.


This Week's GRC Hot Takes

1) "Effective risk and assurance relies on not being a risk twat," writes Risk Director Stefan Gershater.

2) Slightly concerning read that says none of the UK's 25 legal and accounting industry bodies "fully" meet AML standards.

3) "Keep in mind: we're in an age where the average life of a company on the S&P 500 is 18 years, down from 50+. There are consequences for deploying antiquated management techniques," writes Josh Oliveira.

4) The always thoughtful Matt Kelly shares his thoughts on the Society of Corporate Compliance & Ethics conference in Dallas this week.


This Week's GRC Podcast


How a jailed CEO rocked Sweden's banking sector

Former Swedbank chief executive Birgitte Bonnesen is facing 15 months in prison after being convicted of spreading misleading information about the bank's money laundering problems in Estonia.

The conviction ties into one of Europe’s biggest money-laundering scandals where both Swedbank and Denmark’s Danske Bank were alleged to have allowed Russian oligarchs and criminals to move money through their Baltic branches and into the western financial system.

So, with an incredibly senior figure from Sweden's oldest bank now potentially facing prison time, is the crisis averted?

The brilliant team at The Laundry take a deeper look.

Listen to the episode here


What MBK Search is Talking About

Will cross-Atlantic differences matter when regulating AI?

As AI continues to reshape financial services, regulators on both sides of the Atlantic are responding in distinct ways. For financial institutions operating across the U.S. and Europe, understanding these differences is key to staying compliant and managing risk. Kenneth Blanco, former COO for Financial Crimes at Citibank, has shared insights on how institutions should approach these divergent regulatory landscapes. Let's break down the five key things financial institutions need to know.

The EU’s Comprehensive and Cautious Approach

The EU AI Act, which entered into force on August 1, 2024 (with its obligations phasing in over the following years), sets a new regulatory benchmark. It categorizes AI systems by risk, with high-risk applications such as credit scoring and fraud detection facing strict oversight. According to Blanco, institutions must "prepare for the highest standards of transparency and data governance" to stay compliant. Non-compliance can result in fines of up to 7% of global turnover, making it one of the most stringent AI laws globally.

This risk-based framework is designed to prevent harmful outcomes from AI misuse, but it also means financial institutions must be more cautious when implementing high-risk AI. Human oversight, high-quality data, and detailed reporting are now non-negotiable for any institution using AI in a way that could impact customers’ financial health.

The U.S.: A More Flexible, Sectoral Approach

Unlike the EU, the U.S. has taken a less centralized approach to AI regulation. There’s no single law governing AI across sectors, but guidelines like the Blueprint for an AI Bill of Rights, introduced in 2022, are setting a non-binding framework. Regulatory bodies like the Securities and Exchange Commission (SEC) and the Federal Trade Commission (FTC) are developing sector-specific rules.

Blanco suggests that U.S. financial institutions will find more room for innovation but warns that “without clear regulations, you’ll need to stay alert for future legal developments”. The lack of uniform regulation gives businesses flexibility, but institutions should also prepare for potential changes as the U.S. government advances AI-specific legislation.

Navigating Data Privacy Across Borders

Data privacy is a critical issue on both sides of the Atlantic, but the EU and U.S. take very different approaches. The EU’s GDPR remains the gold standard, with strict data usage, consent, and minimization rules. The AI Act builds on this foundation by adding layers of compliance for AI systems handling personal data. Financial institutions must comply with both the GDPR and the AI Act’s additional rules for high-risk AI systems.

In contrast, the U.S. lacks a comprehensive national privacy law, relying on state-level regulations like California’s CCPA and sectoral rules like the Gramm-Leach-Bliley Act (GLBA) for financial services. This fragmented system offers more leeway for AI systems processing personal data, but it can also create compliance headaches for institutions operating across multiple states.

Explainability and Transparency: A Must in the EU

The AI Act emphasizes transparency, particularly for high-risk systems. AI models used in finance, such as those for credit scoring, must be explainable—financial institutions need to provide clear reasons for each decision made by their AI systems. Blanco pointed out that “transparency is the key to earning regulators’ trust”.

In the U.S., explainability is less formalized, though it is gaining attention. Financial institutions are still expected to prevent AI bias and demonstrate fairness, but they have more flexibility in implementing these safeguards. However, this could change as AI regulation in the U.S. evolves.

The Future: Moving Toward Global Standards?

Although the U.S. and EU are currently diverging in their regulatory approaches, a convergence of standards could be on the horizon. Global cooperation in areas like data sharing, cybersecurity, and AI accountability is already underway, driven by increasing pressure for uniform AI governance across borders.

Blanco emphasized that “institutions should start preparing now for potential harmonization in global AI standards”. Investing in flexible, adaptable governance frameworks will help institutions comply with regulations no matter how they evolve.


At MBK Search, we help firms find world-class talent to build champion teams across regulated markets. Let's start building — visit our website to find out how. www.mbksearch.com

