AI: The Risks are Real, Serious, and Here
Ian Mitroff
Senior Research Affiliate, Center for Catastrophic Risk Management, UC Berkeley
I’m publishing this series of articles to share and discuss my ruminations on coping with a troubled and messy world. You can subscribe to never miss an article.
Even though we’ve heard it before, one cannot overemphasize the dangers inherent in AI. Indeed, they’re already here[i].
While the ability to process Natural Language is a boon to writers in summarizing long, ponderous texts, AI can also produce reams of biased, untruthful, unwanted, and toxic information. And it does so with what seems like complete confidence, thus adding to the deception and thereby posing greater Risks.
Further, while it seems like the stuff of Science Fiction, the fear that AI will take over and subjugate Humans is not without foundation.
For instance, it’s estimated that 80 percent of the US workforce could have at least 10 percent of their work affected, and that 19 percent might have at least 50 percent of their work affected.
One thing on which all agree is that Disinformation is no longer a mere possibility. In addition, a consensus is emerging that some form of Regulation is needed.
Thus, Lina M. Khan, Chair of the Federal Trade Commission, has written about the need for it[ii]. While she’s mainly concerned that AI companies could stifle Free-Market Competition, thus harming consumers, she recognizes many of the dangers outlined above. Nonetheless, her primary concern remains the protection of consumers.
True, over 1,000 AI researchers have called for a six-month moratorium on AI until the Risks are better known. The original letter has since gathered some 27,000 signatures. The chief question of course is, “What does ‘better known’ entail and who determines it?”
As I’ve written repeatedly, I don’t trust Technologists to make such determinations on their own. It’s like asking Epidemiologists to forecast the Economic Risks due to Pandemics. They’re not equipped either by training or inclination to do so.
The problems of the world will not be addressed by any single Discipline acting by itself. The fact that many of the Tech companies have disbanded their internal Ethics Review Panels is cause for further alarm. It only adds to our inability to foresee potential Risks.
[i] Cade Metz, “If Some Dangers Posed by A.I. Are Already Here, Then What Lies Ahead?”, The New York Times, May 8, 2023, p. B5.
[ii] Lina M. Khan, “The U.S. Needs to Regulate A.I.,” The New York Times, May 6, 2023, p. A21.
Ian I. Mitroff is credited as being one of the principal founders of the modern field of Crisis Management. He has a BS, MS, and a PhD in Engineering and the Philosophy of Social Systems Science from UC Berkeley. He is Professor Emeritus from the Marshall School of Business and the Annenberg School of Communication at USC. Currently, he is a Senior Research Affiliate in the Center for Catastrophic Risk Management, UC Berkeley. He is a Fellow of the American Psychological Association, the American Association for the Advancement of Science, and the American Academy of Management. He has published 41 books. His most recent are Techlash: The Future of the Socially Responsible Tech Organization, Springer, New York, 2020, and The Psychodynamics of Enlightened Leadership: Coping with Chaos, co-authored with Ralph H. Kilmann, Springer, New York, 2021. His latest is The Socially Responsible Organization: Lessons from Covid-19, Springer, New York, 2022.
?????? & ?????????????? ???? ???? ???????????????????????????????? ????????????????. I am an expert at driving brand growth and visibility through personal branding, thought leadership, company brand building and PR.
1 年Excellent point, Ian Mitroff! Your thoughts on the limits of expertise and the need for interdisciplinary collaboration are absolutely spot on. Keep sharing!