Artificial Intelligence, Risks and Governance: Lessons for Tech from the Finance Industry

**Opinions are my own and do not reflect those of my employer**

A few months ago, the leaders of five dominant social media companies (Meta, X, TikTok, Snap, and Discord) appeared before the U.S. Senate Judiciary Committee to answer lawmakers' questions about how their platforms are working to protect children online. The hearing was the latest stage in the government's effort to put regulations in place that create new safeguards for children's and young people's online experience. There have been repeated calls for safety, privacy, and competition regulation, including the 2021 Senate testimony of Meta whistleblower Frances Haugen, in which she criticized the social media giant for prioritizing profits over the well-being of its users. What stood out from the recent hearing, however, was the misalignment, to put it mildly, between the lawmakers' and the Tech CEOs' views of what regulation in this space should encompass. At the center of some of the heated exchanges between the lawmakers and the Tech leaders were the statistics on the extent to which young social media users have been exposed to inappropriate content.

Models, broadly defined as mathematical methods used to produce quantitative predictions (a definition inferred from SR 11-7, the Federal Reserve's Guidance on Model Risk Management in financial services), are at the core of decision-making in big Tech. Because of their enormous scale, these companies use models to automate billions of daily decisions: which apps to display, what to show in the news feed, and which content to flag as harmful. Hence, a central pillar of examining how content is mediated on social media platforms is probing the objectives and inner workings of the models deployed for this purpose.
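As a toy illustration only (the scoring rule, threshold, and names below are invented for exposition, not any platform's actual system), the following sketch shows what a "model" in this sense looks like in code: a quantitative score feeding an automated decision, where the threshold quietly encodes a business and policy choice worth probing.

```python
# A toy illustration of the SR 11-7 sense of "model": a quantitative estimate
# (here, a made-up harm score for a post) feeding an automated decision at scale.
# The scoring rule and threshold are invented; a real system would use a trained classifier.
def harm_score(post_text: str) -> float:
    """Stand-in for a trained classifier: returns a probability-like score."""
    flagged_terms = {"scam", "violence", "self-harm"}
    hits = sum(term in post_text.lower() for term in flagged_terms)
    return min(1.0, 0.2 * hits)

def moderation_decision(post_text: str, threshold: float = 0.4) -> str:
    """The automated decision layer: the threshold encodes a business/policy choice."""
    return "flag_for_review" if harm_score(post_text) >= threshold else "show"

print(moderation_decision("totally normal post"))          # show
print(moderation_decision("this scam promotes violence"))  # flag_for_review
```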

Generative AI & risks of exponential progress:

Harris and Raskin, in their AI Dilemma opinion piece, consider the kind of models that big Tech social media companies have deployed over the past decade to be the ‘First Contact’ moment with AI. In their view, regulatory efforts so far have not been able to fix the misalignment between the business models of the Tech companies deploying AI and the downstream societal effects. They call the recent advent of Generative AI (GenAI) and the Large Language Models powering the new generation of AI chatbots (ChatGPT, Gemini, etc.) the ‘Second Contact’ moment, and they warn against making similar mistakes. More concretely, they warn about AI companies racing to deploy their GenAI-based products to the public to achieve market dominance as fast as possible, instead of testing them safely over time.

There are currently critical conceptual gaps in defining the scope of legal and ethical accountability in AI technology. The recent appointment of a director for the newly formed U.S. Artificial Intelligence Safety Institute (USAISI) and the formation of a new AI safety consortium within USAISI are moves toward filling those gaps.

With ongoing efforts to create an AI technology regulation framework, we observe that an important pillar of regulation in this space is the regulator's ability to conduct thorough end-to-end algorithmic audits and oversight. This note emphasizes the need for a procedural approach to algorithmic audit as a logistical component of AI regulation. Our proposal in some aspects mirrors an already existing construct in the financial industry: Model Risk Management (MRM). As the AI industry grapples with the challenges of ensuring safety, the lessons learned from managing model risk in the financial sector may prove useful.

Tensions at the core of AI self-regulation:

Should the evaluation of AI technologies for risk be left to the Tech companies themselves? Large Tech companies are considerably more advanced than academia, and certainly than the regulatory bodies, in developing and advancing AI technology. While the expertise and the critical mass of talent needed for such an undertaking already exist in Tech companies, the main question concerns the efficacy of the approach, given the tension between business profit objectives and the resulting pressure to push the cutting edge of AI technology in order to build the dominant product. In her 2021 testimony, Haugen stated: “…but, [Facebook's] own research is showing that content that is hateful, that is divisive, that is polarizing, it is easier to inspire people to anger than it is to other emotions”.

Recent dramatic events at the leadership echelons of OpenAI have been interpreted by some observers as a triumph of Silicon Valley AI accelerationists, those who seek swift productionization of cutting-edge advances in AI (see the WSJ article ‘Accelerationists’ Come Out Ahead With Sam Altman’s Return to OpenAI). While such power struggles are not unheard of within the Silicon Valley Tech environment, what may be unique about the OpenAI story is the ideological drivers of the clashing groups. In their opinion piece, Lostri et al. describe the camps as those driven primarily by science and those driven primarily by bringing products to market. In their view, the tensions within OpenAI between “boosters,” who want to accelerate the deployment of powerful AI systems, and “doomers,” who worry about the systems’ existential dangers, had been brewing long before the flare-up in public.

They also emphasize that the events at OpenAI point to an important reality: self-regulation (i.e., leaving it to the AI company to arrive at its own estimate of risk and profit in developing and deploying cutting-edge AI products), even when enforced through belt-and-suspenders corporate governance structures, may fail: "The power struggle within the company and the ultimate failure of the nonprofit board to maintain control of an increasingly commercialization-minded company is a reminder that, if society wants to slow down the rollout of this potentially epochal technology, it will have to do it the old-fashioned way: through top-down government regulation."

The activist approach is not sufficient:

While Tech firms have recognized and expressed the need for increased oversight, their proposals are unlikely to work. Consider the recent development in which Elon Musk-owned X released the source code for its recommendation algorithm on GitHub. It could be argued that this increases transparency in the company's use of AI: by allowing researchers and interested parties to examine the code, it lets them gain a better understanding of how X's recommender system works.

While the code may provide insight into the inner workings of X's recommender system, it would not necessarily reveal the reasoning behind particular recommendations. For example, the code may show that a tweet was recommended because it was popular or shared keywords with a user's previous interests, but it would not explain why X believed this tweet would be relevant or interesting to the user.

Moreover, code is just one component of a recommender system. The algorithms and models used by the system rely on vast amounts of user data, including browsing history, search queries, and engagement with content. Without access to this data, researchers would not be able to fully understand how the system makes recommendations or what biases may be present.

In addition, AI systems constantly evolve, based on user behavior and feedback. While releasing the code for the current version of X's recommender system may be helpful, it would not necessarily provide insight into how the system will be updated in the future.

On the other side of the table, Mark Zuckerberg’s proposal to regulate harmful content, election integrity, privacy, and data portability falls short because, as Haugen's testimony demonstrated, it leaves all of the AI models used by the company out of scope; recall that AI is the conduit for implementing a Tech company's business objectives at the user level, for potentially billions of users.

The fundamental flaw in these proposals is that they would create a framework to tell Tech companies what to do without creating structures for external and internal subject matter experts to critique the companies' practices in a way that is fully transparent to regulators. Policymakers who try to address this by asking Tech companies to share more information with external researchers are oversimplifying complex privacy and security problems, problems that could be avoided by establishing internal functions responsible for regulatory compliance.

In her 2021 testimony, Frances Haugen's ideas for reducing harm focus on reflecting inward at Meta: “we need more employees to come forward through legitimate channels like the SEC or congress to make sure that the public has the information they need to have technology be human-centric, not computer-centric."

While there is a place for whistleblowers and for the public forum, this approach can only address the issue reactively. Following the same approach as financial institutions, deploying a formal process, fully visible to the regulator at any time, for internal, independent validation of all models before production would provide a more effective and holistic solution.

Model Risk Management in finance:

Going back to the 2008 financial crisis, a central villain to arise from the world of financial engineering wizardry was the mortgage-backed security (MBS). Financial institutions created and sold bundles of mortgages as safe and profitable securities, which proved entirely misleading: it turned out that these products contained significantly more subprime loans, with their higher risk of default, than advertised. As housing prices rose excessively, these products became highly profitable, but when the U.S. real estate market finally crashed, the subprime-infested MBS became nearly worthless and brought several financial institutions to their knees.

A sophisticated but poorly scrutinized modeling approach was among the many institutional and regulatory failings underpinning the crisis. The models in question made the invalid assumption that housing prices would not fall simultaneously in different regions of the country, and when they did, few were prepared to deal with the consequences. Inadequate oversight of complex models was by no means the only failure leading to the financial crisis, but it was one of the enabling factors that allowed financial institutions to manipulate the system to pursue profits at the expense of societal well-being (is this starting to sound familiar?). In retrospect, it became clear that some of the largest financial institutions optimized for short-term profits without considering the downstream impacts on their customers. All of this was, and is, set against a backdrop of little to no regulatory oversight of the testing of the models involved in these decision-making processes.
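To make the flawed assumption concrete, here is a minimal, hypothetical Monte Carlo sketch (not any institution's actual model; the region count, default probability, and correlation are invented for illustration). It compares the tail of portfolio losses when regional downturns are independent versus when a common nationwide factor correlates them: the average loss is the same, but the extreme scenarios become far worse.

```python
# A minimal sketch of why the independence assumption mattered: losses across
# regions look benign when regional downturns are independent, but the tail
# fattens dramatically once a common factor (a nationwide housing decline)
# correlates them. All parameters below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_regions, p_default, rho, n_sims = 50, 0.05, 0.4, 100_000
threshold = norm.ppf(p_default)  # a region "defaults" when its latent factor falls below this

def simulate_losses(correlation: float) -> np.ndarray:
    """Fraction of regions defaulting per scenario, one-factor Gaussian model."""
    common = rng.standard_normal((n_sims, 1))        # nationwide shock
    idio = rng.standard_normal((n_sims, n_regions))  # region-specific shocks
    latent = np.sqrt(correlation) * common + np.sqrt(1 - correlation) * idio
    return (latent < threshold).mean(axis=1)

independent = simulate_losses(0.0)
correlated = simulate_losses(rho)
print(f"99th-percentile loss, independent regions: {np.quantile(independent, 0.99):.1%}")
print(f"99th-percentile loss, correlated regions:  {np.quantile(correlated, 0.99):.1%}")
```

The point is not the specific numbers but the shape of the failure: a model validated only against the independent case looks fine right up until the common factor materializes.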

After the postmortem analysis of the financial crisis by lawmakers and regulatory bodies in the U.S., the Federal Reserve published the Supervisory Guidance on Model Risk Management (SR 11-7) in April 2011. This document provided an early definition of model risk that subsequently became standard in the industry: “The use of models invariably presents model risk, which is the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports.” SR 11-7 explicitly addresses incorrect model outputs, taking account of all errors at any point from design through implementation. It also requires that decision makers understand the limitations of a model and avoid using it in ways inconsistent with its original intent.

There are two critical pieces to Model Risk Management (MRM) in financial institutions:

  • the three lines of defense, and
  • principle-based regulations enforced by a government body.

The three lines of defense strategy increases oversight and accountability across an organization. Applied to models, the three lines are model development, model validation, and internal audit. The validation team is responsible for scrutinizing the development team's work, while internal audit checks both to ensure they have fulfilled their respective responsibilities. The benefit of this strategy is not only having three sets of eyes on each model but also empowering independent teams across the organization to scrutinize each model's inner workings. The further removed you are from the business, the easier it becomes to evaluate any decision objectively.
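A minimal sketch of how this separation of duties could be encoded as a release gate. This is a hypothetical illustration (the class names, team labels, and sign-off fields are assumptions, not any bank's or regulator's actual schema): a model cannot reach production until each of the three lines has independently approved it.

```python
# Hypothetical three-lines-of-defense release gate; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SignOff:
    team: str        # "model_development", "model_validation", or "internal_audit"
    reviewer: str
    approved: bool
    notes: str = ""

@dataclass
class ModelRelease:
    model_name: str
    sign_offs: list[SignOff] = field(default_factory=list)

    REQUIRED_LINES = ("model_development", "model_validation", "internal_audit")

    def ready_for_production(self) -> bool:
        """All three lines must approve; no single team can wave its own work through."""
        approvals = {s.team for s in self.sign_offs if s.approved}
        return all(line in approvals for line in self.REQUIRED_LINES)

release = ModelRelease("feed_ranker_v7")
release.sign_offs += [
    SignOff("model_development", "dev_lead", True),
    SignOff("model_validation", "independent_validator", True),
]
print(release.ready_for_production())  # False: internal audit has not signed off yet
```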

Critically, MRM in financial institutions does not stop there. The Federal Reserve and the Office of the Comptroller of the Currency (OCC) have issued guidance describing the model risk management principles by which banks must abide. Models must be inventoried, validated, and monitored across their life cycle. Any problems discovered by the validation team must be transparent to the board. The regulators do not tell the banks how to achieve this, leaving the banks free to run their own business, but importantly, the banks are subject to scrutiny from the regulators to confirm the job is done correctly. Banks face heavy scrutiny, and heavier penalties, for failing to demonstrate compliance.
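As a hedged sketch of what "inventoried, validated, and monitored across the life cycle" could look like in practice, the snippet below models a toy inventory entry with a life-cycle stage and open validation findings surfaced in a board-style report. The stages, field names, and example models are assumptions for illustration, not a format prescribed by the Fed or the OCC.

```python
# Toy model inventory with life-cycle stages and findings surfaced for the board.
# All identifiers below are invented for illustration.
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    DEVELOPMENT = "development"
    VALIDATED = "validated"
    PRODUCTION = "production"
    RETIRED = "retired"

@dataclass
class InventoryEntry:
    model_id: str
    owner: str
    intended_use: str                 # SR 11-7 stresses use consistent with original intent
    stage: Stage = Stage.DEVELOPMENT
    open_findings: list[str] = field(default_factory=list)

    def board_report_line(self) -> str:
        """Validation findings stay visible to the board rather than buried in a team."""
        status = "OK" if not self.open_findings else f"{len(self.open_findings)} open finding(s)"
        return f"{self.model_id} [{self.stage.value}] owner={self.owner}: {status}"

inventory = [
    InventoryEntry("harmful_content_classifier", "trust_safety", "flag policy-violating posts",
                   Stage.PRODUCTION, ["drift on non-English content"]),
    InventoryEntry("feed_ranker_v7", "growth", "rank home-feed items", Stage.VALIDATED),
]
for entry in inventory:
    print(entry.board_report_line())
```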

Conclusion:

Since the financial crisis, we have observed financial institutions become more responsible as a result of regulation while continuing to innovate and remain profitable. By applying similar principles, such as the three lines of defense, independent internal reviewers, and a principles-based regulatory framework, Big Tech can begin to regain the trust of its users and the public. But as with anything, there is no silver bullet that will entirely transform Big Tech into better corporate citizens. Even the most stringent regulatory frameworks cannot prevent all adverse outcomes (I note as an example the stress caused in the regional banking industry in 2023 as a result of certain risk management and regulatory oversight failings at SVB). Nonetheless, pursuing better risk management of AI models in Big Tech will improve products for customers and, through regulatory oversight, provide space for public discourse, increased accountability, and a better chance of catching harmful models early via a more proactive risk measurement system.
