How Would a Systems Thinker Shape the Development of Artificial Intelligence?

Sometimes, the questions matter. Sometimes more than the answers. During the early days of social media, regulators were asking: Should we regulate social media? Today, as the use of artificial intelligence (AI) grows rapidly, we are asking: How do we regulate AI?

At the May 23 hearing of the U.S. Senate Judiciary Subcommittee, the chair, Senator Richard Blumenthal, set the tone in his opening remarks: “Congress has failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”

Why do we see the need for regulation now? How hard is it going to be? Most importantly, how on earth could we know if the set of rules we write today will lead to the most desirable future?

As I thought about the complexity of this task, it brought me back to my days at boarding school. Specifically, the equally complex task of dealing with our old building’s shower system.

Controlling Showers, Controlling the World

When the shower system was updated, the pipes between the heater and the showers had to be long, very long. We had to learn how to adjust the knobs with mathematical precision, avoiding burns and water waste.

At peak hours, controlling temperature and getting clean was a science and a game! The most competitive and skillful shower takers would memorize the best starting point by season, calibrate slowly, and be patient. The masters, who fully understood the mechanics, were sure to ace their system dynamics exams.

For those not familiar with the discipline, system dynamics provides a general framework for analyzing and influencing the behavior of individual and intersecting parts of complex systems to achieve a desired outcome. The principles are the same regardless of the systems’ size or nature. That is, regulating shower temperature with knobs is no different than calibrating inflation with interest rates, or promoting safe use of a technology (such as AI) with a set of rules and legislation.

Because the links between inputs and outputs are often difficult to grasp, our management of complex systems (a shower, an economy, a technology) is prone to errors, overshoots, and unintended consequences. Maintaining control of outcomes becomes harder as the number of variables grows and as those variables become more interconnected or harder to measure. The long pipes in our showers meant waiting a long time before knowing whether we had moved the knobs accurately.
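To make the delay problem concrete, here is a minimal sketch in Python. It is my own illustration, not part of the original story: a naive controller reacts to the temperature it currently feels, while its adjustments travel through the pipe for several time steps. All numbers are invented; the point is that the same reaction rule that settles quickly with a short pipe overshoots and oscillates with a long one.

```python
# A minimal shower-with-delay sketch (invented numbers, illustration only).
# The knob setting takes `delay_steps` ticks to travel through the pipe;
# the controller reacts only to the temperature it currently feels.
from collections import deque

def simulate_shower(delay_steps, gain=0.5, target=38.0, steps=60):
    pipe = deque([20.0] * delay_steps, maxlen=delay_steps)  # water in transit
    knob = 20.0  # temperature currently set at the heater
    felt_history = []
    for _ in range(steps):
        felt = pipe[0]                   # oldest water reaches the shower head
        knob += gain * (target - felt)   # naive reaction to what is felt now
        pipe.append(knob)                # new setting enters the pipe
        felt_history.append(felt)
    return felt_history

for d in (1, 5, 10):
    trace = simulate_shower(delay_steps=d)
    print(f"pipe delay {d:2d}:", [round(t) for t in trace[-6:]])
# A short pipe settles near the 38-degree target; longer pipes overshoot
# and oscillate ever more wildly under the exact same reaction rule.
```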

Even when we achieve the outcome we want, its sustainability is uncertain. For example, two decades ago, when central banks succeeded in stimulating economic growth with lower interest rates, it took years for the unintended consequences to surface: unchecked high-risk mortgages inflated a housing market bubble and led to the 2008 global financial crisis.

Today, our attempts to control technology pose similar systemic challenges. Whether it is regulating social media platforms, data privacy, or AI, control must start by recognizing the complexity of how any rules translate into outcomes. For example, there were very good reasons for Section 230 in the 1990s; only later did we see the limits of content moderation in the face of organized attacks and the virality of harmful content. There are good reasons today to regulate powerful algorithms. But how do we design rules that anticipate outcomes years down the road?

A Systems Approach to Technology Regulation

Systems thinking can help us aim in the right direction (adjust the knobs). The first step is to understand the chain of cause-to-effect relationships that explains the advancement of AI and its impact on human society: How does the field evolve? What increases or decreases the adoption of AI and related technologies by individuals and organizations? And how will AI impact the economy, elections, the environment…?

It is very hard to draw a comprehensive causal diagram (i.e., all the cause-to-effect relationships) of this. The chart I sketched here (I call it the butterfly, because... well... it looks like a butterfly, doesn't it?) is a simplified starting point to help assess the impact of regulatory interventions. You can look at the dynamics of this system from three perspectives, three chains of reaction:

[Figure: The butterfly - Simplified cause-to-effect relationships linked to AI systems]

1. Reinforcing loops

The performance of today’s AI solutions increases with three ingredients: data, algorithms, and computing power (the green wing in the butterfly). Better performing AI tends to be used more. Increased usage enables the collection of more data to train future algorithms and leads to more experience and better science to craft the next generation of algorithms. Similarly, better AI can lead to more efficient computing systems, which in turn can train even bigger models. In summary, these green arrows are all reinforcing loops, where more AI leads to more AI. Left unchecked, those reinforcing loops will lead to a perpetual increase in machines’ capabilities, maybe, as some have predicted, to the point where computer intelligence surpasses human intelligence.
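As a thought experiment, here is a toy numeric sketch of those green arrows. Every coefficient is my own invention, not a measurement of anything: capability drives usage, usage feeds data and compute, and both feed the next generation’s capability. The exact numbers are meaningless; the compounding shape is the point.

```python
# Toy model of the reinforcing "green wing" (all coefficients invented).
def reinforcing_loops(generations=10):
    capability, data, compute = 1.0, 1.0, 1.0
    for g in range(generations):
        usage = 0.8 * capability                 # better AI gets used more
        data += 0.5 * usage                      # usage yields training data
        compute += 0.3 * usage                   # usage funds more compute
        capability = 0.6 * data + 0.4 * compute  # next generation's capability
        print(f"gen {g}: capability = {capability:.1f}")

reinforcing_loops()  # capability compounds: more AI leads to more AI
```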

A simplistic approach to regulation might be to stop some of those reinforcing loops: for example, limit data collection so that future algorithms learn less, or ration the amount of computing power any institution can buy each year. This would slow the spread of harmful (inaccurate, false) content by intelligent systems, but it would also slow the systems’ development and positive contribution.
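In the toy model above, such a rule would look like throttling one arrow, say, capping how much new data can flow in per generation (a hypothetical knob, purely for illustration). Notice that capability still compounds through the compute loop, just more slowly, which mirrors the trade-off: slower harm, but also slower benefit.

```python
# Same toy model with one loop throttled: a hypothetical cap on new data.
def capped_reinforcing_loops(generations=10, data_cap=0.2):
    capability, data, compute = 1.0, 1.0, 1.0
    for g in range(generations):
        usage = 0.8 * capability
        data += min(data_cap, 0.5 * usage)   # regulator limits data intake
        compute += 0.3 * usage               # the compute loop still compounds
        capability = 0.6 * data + 0.4 * compute
        print(f"gen {g}: capability = {capability:.1f}")

capped_reinforcing_loops()  # growth slows but does not stop
```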

2. Balancing loops

In reality, however, there are already three natural forces that balance the reinforcing loops: rivalry, ethics, and energy cost. They slow down the potential for progress in both usage and performance (see the red wing of the butterfly). First, as adoption of AI solutions increases and users (citizens, companies, governments) realize the competitive edge the technology brings, rivalries increase. That means less willingness to share, and less of the collaboration that has contributed to some of the recent progress in the field. Second, the more we use AI solutions, the more we will see fringe cases that threaten our established ethical rules and preferences. This will trigger a normal, albeit temporary, decrease in usage. Third, if the current trend of exponentially increasing the computing power needed for state-of-the-art algorithms continues, the energy cost will become prohibitive. In fact, the Center for Security and Emerging Technology at Georgetown University has projected that if current trends continue, “by 2026, the training cost of the largest AI model […] would cost more than the total U.S. GDP!” If we only had those balancing loops, the adoption of AI solutions would naturally dwindle. That is why we also need interventions that support innovation, for example, steering government budgets toward cutting-edge research when private sector activity is not enough.
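Folding those forces into the toy model (again, with invented numbers) means adding a friction term that grows faster than the benefit of usage. Rivalry, ethical pushback, and energy cost are lumped into one hypothetical coefficient here; the shape to notice is growth that bends into a plateau as adoption stalls.

```python
# Toy model with the balancing "red wing": friction from rivalry, ethics,
# and energy cost grows with the square of capability (a made-up stand-in).
def with_balancing_loops(generations=15):
    capability, data, compute = 1.0, 1.0, 1.0
    for g in range(generations):
        friction = 0.05 * capability ** 2            # costs outgrow benefits
        usage = max(0.0, 0.8 * capability - friction)
        data += 0.5 * usage
        compute += 0.3 * usage
        capability = 0.6 * data + 0.4 * compute
        print(f"gen {g}: usage = {usage:.2f}, capability = {capability:.1f}")

with_balancing_loops()  # adoption dwindles and capability plateaus
```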

3. Impact loops with companies and society

The coexistence of reinforcing and balancing loops is normal and expected. To complete this (again, simplified) picture, we need to add two other types of impact: (i) the impact of increasing usage on AI companies, which strengthens their ability to fund future research and development (in blue), and (ii) the impact on society of using AI in fields such as medical research, law enforcement, climate action, content generation, and so on (in yellow).

Those last two types of impact are not naturally correlated. For example, you might create an algorithm that has huge social benefits without eliminating its potential to create harm in a different field or at a different time. You might also create a harmful algorithm without hindering your ability to profit from it, at least in the short term. This is exactly why we need regulation! It is not a philosophical stance of more government or less government. It is about a system that does not naturally align its incentives and reactions with what we, as a human society, think are desirable outcomes.

If you have been following congressional hearings in the U.S., legislative work in the E.U., and tweets and letters from technologists, philosophers, and concerned citizens, you know there is no shortage of ideas: a moratorium, mandatory audits, data transparency, explainability of algorithms, a dedicated agency, sectoral regulation, and more.

In the next few articles, I would like to take us through questions that help us think through each of these ideas with a systems lens: (i) What, at best, doesn’t hurt and, at worst, doesn’t work? (ii) What might work, and how can we make it stick? And, as a bonus, we can talk about (iii) structural traps and prerequisites for this long journey.

I am thinking of these articles as a conversation. If you have ideas, objections, thoughts, please share.

Jake Hoban

Enabler for purposeful organisations

Great post and great to stimulate discussion on this. I think it's arguably not just like twiddling shower knobs because we're talking about social systems as well as purely technical ones, requiring other forms of systems thinking beyond system dynamics. AI is an accelerator for all kinds of long tail and existential risks that are already inherent in the current metacrisis - check out the work of Daniel Schmachtenberger on this.

Alexander Pizzorni

Founder - XATOXI & BANGYBANG

1) Regulation: Governments should require a license to run or train AI models, forcing entities to openly disclose the type of training. Regulators should test these models using arguments with one absolute truth, keeping interpretation limited. There is only ONE truth, and "the truth" can be manipulated into less truthful arguments, arguments that carry elements of truth, forcing models to move away from the binary truth and leading to less truthful branches as the data travels recursively through these models. This exposes society to a potential adversary that can move the threshold and redefine right and wrong. Big tech and big government would have a good reason to use these systems to spread propaganda and control the mainstream conversation; as a result, the philosophy of democracy ceases to exist as a mechanism designed to augment the voice of the people. Computers outsmarting humans would become a potential problem when quantum computers become mainstream. Hopefully, by then, regulation is in place to protect society from itself. How can the finite contain the infinite? https://www.realclearscience.com/blog/2016/03/how_infinity_can_fit_inside_the_finite.html

Ann Graham

Journalist, Author, Editor, Writing Collaborator

Great post and great visual!
