Intelligence for Governing Intelligence


Sir Geoff Mulgan posted Why the world needs a Global AI Observatory on Monday, 26 June.


What even is such a thing, you might wonder?


He says it would draw on collaborations between people at MIT, Oxford, the Collective Intelligence Project, Metagov and the Cooperative AI Foundation. He hopes that it might serve the growing need for scientifically informed knowledge about AI. He wisely believes that it is "impossible to sensibly regulate what governments don’t understand. GAIO could help fill this gap."


GAIO is a proposal for what some Americans might crassly call a "head shed." On its face, more smart people working on hard problems sounds like a good thing. Maybe. Maybe not. Apart from whatever its intellectual virtues and bona fides might be, the most important question about such a proposal is: would it be effective?


Spending money, especially public money, on a new institution that may have little to no impact on regulators (and, as the piece mentions, there is, at present, no globally recognized regulatory body to impact), let alone on the behavior of commercial research and development, sounds like it might be both expensive and ineffective.


Mulgan offers up the Intergovernmental Panel on Climate Change (IPCC) as a model, and this is where he really starts to lose me. Using facts and information to create meaningful action on policy and on the global economic drivers of carbon emissions has been a dismal failure. This is the fault of neither climate scientists nor the IPCC per se, but they have become part of a system of inaction that hardly inspires confidence, let alone recommends them as a model.


In fact, the failure to act effectively on a global scale has its most important exemplar in the steadily mounting impacts of climate change. We may argue about whether the trajectory of that change is already irreversible and existentially catastrophic (whether the world is going to end), but there seems little room for serious scientific disagreement that these changes have had, and will continue to have, catastrophic impacts. Let's call the history of failed global efforts to change the direction of a warming planet the negative example.


By contrast, the global response to the COVID-19 pandemic saw the relatively swift action of all levels of government, coordination and agreement among virtually all the world's countries, and the near universal adoption of the core pillars of a common playbook. To be fair, this was messy as hell, and very far from disciplined, consistent and orderly, but, nonetheless, the effort was successfully adaptive. One of the things I think is most worthy of our attention is that relatively little new institutional infrastructure was built to deal with COVID. There simply wasn't time. Urgency and the credible imminence of the pandemic's existential threat created a tangible sense of danger sufficient to move the world to action. There has only ever been one other historical precedent for collective action of such scope: global war. Let's call the global COVID-19 response the positive example.


The short course for those wading into the challenges of global technology governance is available in Nassim Nicholas Taleb's two important books: The Black Swan: The Impact of the Highly Improbable (2007) and Antifragile: Things That Gain From Disorder (2012). The first suggests that we face the greatest threat from things we don't see coming because we discount them as too improbable. The second is about the design of systems that actually get stronger the more stress they are put under.


What kind of approach to governance might be equally capable of foresight and of becoming stronger the more it is tested? This is, I think, a useful framing question for our AI governance dilemmas.


Jennifer Pahlka offers us an important clue in her new book, Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better. Pahlka calls out a failure in government that is the product of a divide between people who think and people who act. She borrows a description from the British policy-making universe: that of "intellectuals and mechanicals." The core of her case for making better policy and delivering more effective government services is that we need to bridge this divide. But we need to do better still: we need to find ways to bring into conversation and productive tension not only governments, scientists and technologists, corporates, and transnational gatherings and bodies (as Mulgan describes the NGO+ sector), but also citizens. And all this on a global scale.


How might this become possible?


In responding to OpenAI's call for proposals for designs for Democratic Inputs into AI, Alex Ryan and I, along with a small but mighty team, have proposed an answer. Not a solution to the problem entire, to be sure, but an experiment suggested by having to work within some of the devilish constraints of the proposal. To put it simply, the forcing function of the proposal was that the prototype had to model a scalable process. The vision of scale was, implicitly, global.


Our answer is that we need to build a new kind of intelligence. We called it a "constitutive intelligence."


As we think about our AI future, we all wonder: who will watch the watchers? Ironically, in figuring out how to govern AI, we'll need to build a system of intelligence that is equal to the task of regulating or governing the intelligent systems we are currently building. The problems we face cannot be solved simply by making more facts available (though having more and better ones is great, for sure). There are no facts about the future. We need a new and powerful capability for reasoning about complexity that is simultaneously ethical and technical in nature. Thinking about AI and its consequences, we must first face how early we still are in the game.


Talking with a friend about this yesterday, he used the example of the advent of the car and the paved motorways that would eventually become the federal Interstate highway system, an effort that began in 1921. When it did, no one had any capacity for imagining traffic or the myriad other downstream effects of the broad adoption of the automobile. Now we are entering an era in which the autonomy of vehicles will once again bring a new cascade of unknown unknowns. But in the analogy to AI, it is still 1908 (the year Ford's Model T went into mass production) and we still haven't even imagined the need for highways.


Think about one of the most daunting computing challenges we have so far faced in trying to reckon with complex systems: the weather. Weather is, in principle, a completely deterministic system. That is, its moving parts are all material and measurable. Given enough sensors and compute power, therefore, it might be possible to flawlessly predict the weather. It turns out, however, that the problem of predicting weather is intractable: no matter how good our models, how much compute power, and how many sensors we have to measure system change down to the smallest scale, the way that small changes propagate means that micro-disturbances can become hurricanes. We know this phenomenon as the Butterfly Effect. For our purposes, the so-what of this example is that, despite all our science, technology, money and effort, the weather can only be reliably predicted for a few days at a time.
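
You can see this in a few lines of code. Here is a minimal sketch in Python of Edward Lorenz's famous toy model of atmospheric convection, the very system that gave the Butterfly Effect its name. The helper function and the one-part-in-a-billion nudge are my own illustrative choices, a crude demonstration rather than anyone's production forecasting code. Two runs that begin almost identically end up nowhere near each other:

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude Euler step of Lorenz's 1963 convection equations,
    # using his classic parameter values.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two trajectories, identical except for a one-part-in-a-billion nudge to x.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}  gap in x: {abs(a[0] - b[0]):.9f}")

By the end of the run, the gap has grown from a billionth to roughly the full width of the system. Better instruments and bigger computers extend the forecast horizon only a little at a time, which is exactly why the weather stays predictable for days rather than months.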


That's the dilemma even for complex systems whose dynamics are, in principle, foreseeable.


We can already sense how difficult it will be to foresee the effects of technology as dynamic as AI, made only the more so as it is connected to a system of pervasive and nearly ubiquitous networked computing infrastructure.


My friend Robin Uchida and I, inspired by the French economist Frédéric Bastiat (and many others), used to ask our consulting clients: "What's more important, the problem you see or the problem you don't see?" Bastiat deserves to be quoted at length on the subject:


“In the economic sphere an act, a habit, an institution, a law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.


There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen.


Yet this difference is tremendous; for it almost always happens that when the immediate consequence is favorable, the later consequences are disastrous, and vice versa. Whence it follows that the bad economist pursues a small present good that will be followed by a great evil to come, while the good economist pursues a great good to come, at the risk of a small present evil.”


Our capacity to foresee AI's effects will be challenged by our need to understand and deal with emergent technology whose "causes" are multiple and effects will be shaped by, among other things, the very tools we use to reckon with them. We are entangled in these looping effects and only our collective intelligences can help us now. Our problem is not primarily one of the availability of knowledge, but of how to devise a means for bringing together the powers of hybrid intelligence and effective collective action.


To paraphrase Karl Marx, by way of Ray Bradbury: up to now we have only analyzed the coming technology of AI. The point is not to predict the future, but to develop the capacity to change it.


Thx to Alex Ryan, Charles Finley, Martin Ryan, Dr Jennifer Boger, Robin Uchida, Mark Abbott, P.Eng., Richard Merrick, Brian E A "Beam" Maue, PhD, and Rahmin Sarabi for your thoughtful conversations with me.

Comment from Brian E A "Beam" Maue, PhD:

I appreciate the highways and automobiles explorations, Michael Anton Dila. To borrow from complexity theory, if you look in a dictionary at the word "automobile" and then page over to the word "road," nowhere within those two component parts will you find one of their emergent effects: "traffic jam." Similarly, an atom of oxygen is not wet, and neither is an atom of hydrogen. However, when combined in a certain way, H2O emerges with a property of "wetness." We need the reflections and visions of our creative, collaborative minds to look at "A.I." and play with the possible forms of what might emerge. Thanks for helping break trail on this journey!
