Safeguarding Us From AI: Prepare and Act Systematically
Ian Mitroff
Senior Research Affiliate, Center for Catastrophic Risk Management, UC Berkeley
I’m publishing this series of articles to share and discuss my ruminations on coping with a troubled and messy world. You can subscribe to never miss an article.
With each so-called advance, it’s become increasingly apparent that AI poses grave Existential Threats to Humankind. The oft-stated goal of replicating, if not exceeding, the complete workings of the Human Mind and Society says it all. In short, “Anything Humans can do, AI can do better!”
While it’s a welcome sign that over a thousand workers and researchers in the field have recently acknowledged that this poses a serious Threat, and have therefore proposed that work in AI be paused, it’s far from enough[i].
We need to accept that the crisis has been made worse by the fact that Tech companies cannot be trusted to live up to their so-called promises to protect us. Thus, The New York Times reported that several of the top companies have disbanded their Ethics Units, thereby putting profits before the safety and well-being of us all[ii].
The Myers-Briggs Type Indicator, or MBTI, is one of the best ways I know of illustrating in a systematic fashion what needs to be done to manage AI for our benefit. The four major Personality Types provide valuable insight into the markedly different ways in which people approach any issue or problem.
The four Types are Sensing-Thinking or ST; Intuitive-Thinking or NT; Intuitive-Feeling or NF; and Sensing-Feeling or SF.
The ST approach is fundamentally rooted in detailed, specific processes and mechanisms to control and thereby prevent dangerous outcomes. With regard to this, European countries have taken the lead in passing Laws regulating AI. While I generally support such an approach for the U.S. as well, I believe that it needs to be supplemented by means of hiring outside Crisis Auditors, and thereby rigorous programs in Crisis Management. In brief, while necessary, Regulations are not enough.
Over the course of their lifetimes, from their inception to their eventual retirement, all Technologies need to be monitored for their intended, and especially their unintended dangerous and harmful, consequences. NT and NF play vital roles by constructing Worst-Case Scenarios and by helping to set up internal Crisis Units that cannot be disbanded or weakened.
The fundamental jobs of both NT and NF are Thinking the Unthinkable. Where NT helps to accomplish this mainly by using Technical Experts, NF sweeps in a broader array of outside experts in Ethics, Law, Psychology, and the Social Sciences. As such, their first job is to learn to work together as Inter and Trans-Disciplinary Groups.
Thus, as has already been done, NT would assemble as many experts in Tech as possible to oversee particular Technologies. It would encourage them to look as far and as broadly as possible, and to focus on as many new and novel ways as possible of foreseeing unintended consequences.
Like NT, NF is interested in the Big Picture. But it’s especially concerned with all of the Stakeholders, intended and unintended, who will be affected by a Technology. Where ST focuses on the gains in efficiency and the money that is saved by the introduction of new Technologies, NF is concerned with those whose lives are disrupted by the loss of jobs. Indeed, who’s looking out for them? Should the developers of new Technologies be required to help provide new opportunities for those affected? I believe that they should.
Finally, SF plays a most crucial role. If NF is concerned with Stakeholders in general, SF is concerned with specific individuals. Why shouldn’t Technologists take it upon themselves to know a few particular persons who will be affected both positively and negatively by their Technologies? What’s it like to feel their joys and sorrows? How are their lives impacted?
While these do not, of course, exhaust all of the ways of safeguarding us, I hope that they illustrate the variety of approaches that are needed. And of course, they need to work closely together in as integrated a fashion as necessary. That’s the only way in which they will be Systemic.
Finally, I would be remiss if I didn’t at least mention that a big part of planning is how all the above will be resisted. That’s a topic for a whole blog in itself.
[i] Cade Metz, “He’s Not Worried, But He Knows You Are,” The New York Times, Sunday, April 2, 2023, pp. BU1 and BU4.
[ii] Nico Grant and Karen Weise, “A.I. Frenzy Leads Tech Giants To Take Risks on Ethics Rules,” The New York Times, Saturday, April 8, 2023, pp. A1 and A11.
Ian I. Mitroff is credited as being one of the principal founders of the modern field of Crisis Management. He has a BS, an MS, and a PhD in Engineering and the Philosophy of Social Systems Science from UC Berkeley. He is Professor Emeritus of the Marshall School of Business and the Annenberg School of Communication at USC. Currently, he is a Senior Research Affiliate in the Center for Catastrophic Risk Management, UC Berkeley. He is a Fellow of the American Psychological Association, the American Association for the Advancement of Science, and the American Academy of Management. He has published 41 books. His latest is The Socially Responsible Organization: Lessons from Covid, Springer, New York, 2022.
Image by Gerd Altmann from Pixabay