AI is Everyone's Problem
Gregory P Nichols, MPH, CPH, CMQOE, ASP
Keeping people safe and well in a time of unprecedented technology development
Earlier this week, Sam Altman, CEO of OpenAI, the company behind ChatGPT, testified before Congress about the need to regulate AI. While I agree with him, I think the conversation tends to focus on either how we stop students from using AI to cheat on their homework or how we stop Terminator from happening. Yes, these are things to worry about, but the problem is so much bigger.
Also this week, former Google CEO Eric Schmidt gave an interview to CNN in which he discussed what he considers the real threats from AI, especially the ways it can be used as a true weapon of war. His two scenarios were an AI executing a coordinated cyber attack on an entire country all at once, and someone asking an AI program to develop a biological pathway to kill one million people. I think this is where we really need to start focusing, and in fact this was the premise of an article I wrote in 2018, "The Future of Destruction: Artificial Intelligence."
In the article, I discuss three ways AI can be used as a weapon of mass destruction, an argument now bolstered by Schmidt's comments as well.
Since AI is built on programming languages and chips, we often categorize it as a cybersecurity or technology problem, but it's WAY bigger than that. It's a national security problem, a homeland defense and security problem, a public health problem, a political science problem, and so forth. This is everybody's problem!
I think we tend to be naive in the United States, and in Western democracies in general, about the development of new technologies and their potential threats. We tend to think that since "WE" would never do that, others probably wouldn't either, and we know that isn't true. When I talk about technology risk, an example I often use is the airplane. The Wright Brothers flew at Kitty Hawk in 1903 and perfected the first arguably true airplane in 1905. Just a decade or so later, airplanes were used with brutal force as weapons of war in World War I. Roughly 40 years after Kitty Hawk, the Enola Gay became the first aircraft to drop an atomic weapon in war, with horrifying effects.
My point is that humans will find a way to weaponize anything, given the time and resources to do so. AI is no exception. We need to be ready, because AI isn't just a computer science problem...it's EVERYONE'S problem.
You can watch a snippet of Eric Schmidt's interview here: https://www.cnn.com/videos/business/2023/05/17/ai-fears-eric-schmidt-former-google-executive-sot-sidner-pt-vpx.cnn
And you can read my article from 2018 here: https://hdiac.org/articles/the-future-of-destruction-artificial-intelligence/