AI as destructive overlord. Could AI really destroy humanity all on its own?
Image generated using a combination of Bing Image Creator and Adobe Express

Reading the headlines recently, you could easily end up panicking about the imminent end of humanity that AI, all on its own, is about to herald. Dystopian headlines about AI ending humanity have been abuzz lately: from the reporting on the open letter signed by tech top brass, to dire warnings from Geoffrey Hinton, dubbed a "godfather" of AI, to the rather dystopian interview in which Mo Gawdat, inter alia, recommends people hold off on having babies, to recent reporting that "42% of CEOs say AI could destroy humanity in five to ten years", amongst others.


So, I know it's difficult to be nuanced in headlines, and that a lot of this comes from the very real need to draw attention to the "dark side of AI" and to create a sense of urgency, especially since the enthusiasm from the supply side tends to be hyper-optimistic and, more often than not, fails to touch on the risks. However, these dramatic headlines take a technology that is already feared, but has a lot to offer, and gin up the fear the general public is already grappling with to an extreme that can overwhelm. This is often done without getting specific about what the risks are, whether what we're facing is, in fact, manageable, and where the accountability for addressing concerns sits. It would be great if the leading voices of our time on this topic used their platforms to improve understanding of AI, explain the risks involved, and recommend clear steps for mitigating the very real risks that some AI developments pose.


Reading all this, I admittedly still haven't figured out why we would let AI destroy us, in the literal sense, without simply turning off the electricity. After all, software runs on physical digital infrastructure, which needs electricity and cooling, all of which must be provided by humans. Further, the most advanced forms of AI use machine learning and deep learning. To oversimplify, these techniques use patterns and relationships derived from large training data sets, together with calculations and weightings, to help statistical models decide, based on probability, what the most accurate output should be when faced with new inputs. AI is programmed to self-adjust based on what it gets right and, in this manner, to self-improve. Frequent repetition and a lot of data bring it to a point where it will likely get the output (answer, decision, or action) right more often than a human would in the area in which it is trained. This efficiency and accuracy, and the way AI-based solutions present themselves, can appear to simulate human cognition. However, it is still statistics, maths, and programming. So even if a model claims to be sentient or lonely, such claims are almost certainly untrue.
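
To make the "statistics, maths, and programming" point concrete, here is a minimal sketch of that self-adjusting loop: a toy logistic-regression model trained by gradient descent. The data, learning rate and iteration count are made up purely for illustration; real systems differ in scale, not in kind.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 200 examples with 2 features each;
# the "right answer" is 1 whenever the two features sum to more than 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)  # initial weightings
b = 0.0          # initial bias
lr = 0.5         # learning rate: how strongly each error adjusts the weights

def predict(X):
    # A weighted sum squashed into a probability between 0 and 1.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for _ in range(500):
    p = predict(X)
    error = p - y                     # where the model is wrong, and by how much
    w -= lr * (X.T @ error) / len(X)  # "self-adjust" the weights...
    b -= lr * error.mean()            # ...and the bias, based on the error

# After enough repetition, accuracy on this toy task approaches 1.0.
print("accuracy:", ((predict(X) > 0.5) == y).mean())
```

The entire "learning" is the two update lines inside the loop: arithmetic nudging numbers towards fewer errors, nothing more.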


Perhaps the challenge lies in the terminology: we hear of artificial intelligence and neural networks, often discussed alongside automation, and think of complex brain activities that we, as humanity, still haven't truly figured out. That would be scary, right? But that's just not what AI is. For example, neurons within a neural network are mathematical functions whose job is to take input values, apply weights to them, and pass the result on to the next processing layer in the network, with the end goal of selecting the output most likely to be correct. It's still a very long road before we get to a point of truly mimicking human brain function, if we ever do.
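
As a hypothetical illustration of just how mundane such a "neuron" is (the weights, biases and inputs below are invented for the example):

```python
import numpy as np

def neuron(inputs, weights, bias):
    # The whole "neuron": a weighted sum of the inputs plus a bias,
    # passed through a simple non-linearity (here, ReLU).
    return max(0.0, float(np.dot(inputs, weights) + bias))

def layer(inputs, weight_rows, biases):
    # A "layer" applies many such functions to the same inputs and
    # hands the resulting values on to the next processing layer.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

x = [0.2, 0.7, 0.1]  # input values
hidden = layer(x, [[0.5, -0.3, 0.8], [0.1, 0.9, -0.4]], [0.0, 0.1])
print(hidden)        # roughly [0.0, 0.71]: numbers in, numbers out
```

There is no cognition in either function; stacking millions of them changes the scale of the computation, not its nature.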


Humans are involved in making training data available; defining algorithms; determining initial weightings; programming AI-based solutions; "teaching" models (in the case of supervised learning) or assessing the patterns AI has found (in the case of unsupervised learning); and testing and deploying AI-based applications and systems. In reality, what we need to worry about are people and AI, not AI "running off" on its own and bringing about humanity's Armageddon.


As with other technologies before it, AI is neither good nor bad. It is powerful. It can help make decisions and take actions efficiently, effectively and at scale. It can process and "learn" from massive amounts of data at a scale and speed we haven't seen before. AI does all of that in a way that's not humanly possible; that's its power. The desirability of the decisions and actions AI takes will depend on:

1. The quality of the data it was trained on (almost all of which, irrespective of quality level, will carry the flaws of the flawed humans who created it);

2. The goals and incentives it is given, which can be well intentioned but result in unintended consequences, or be sinister from the get-go;

3. The algorithms or statistical models that produce those decisions and actions; and

4. The level of independence AI is given and the context in which that independence applies, e.g., autonomous decision-making in music recommendations vs. the use of automated weapons.


The challenge with hyper-efficiency, speed and scale is that fewer people are needed behind these systems for much greater impact. The above, together with the risks described below, illustrates the concerns driving the research and discussions around understanding the dark side of AI.


When we look at the AI risk examples listed by the Centre for AI Safety, we see that at the core of the challenge we face is that AI, in the main, is scaling and will likely continue to scale and sharpen existing man-made risks. Some of these risks are rooted in self-centred, profit-driven, winner-takes-all and power-mongering approaches to developing AI solutions. They include:

1. Weaponization: Combining AI with autonomous weapons, using AI to develop chemical weapons, or putting the power of AI behind cyberattacks;

2. Misinformation: Amplifying the world of "alternative facts" and generating the convincing documentary and video "evidence" needed to back them up;

3. Proxy Gaming: Realising objectives in the most efficient way, even if that approach harms people and society and circumvents the original intent behind the AI-based solution;

4. Enfeeblement: Delegating important tasks to AI, creating dependency on AI, excluding humans from industry and disincentivising humans from developing skills. Approached in a purely profit-driven, commercial manner, this would also lead to a loss of meaningful work and income for many;

5. Value Lock-in: AI being controlled by, and benefiting, the few, who may use it to drive harmful commercial gains, centralise power and reinforce their political or commercial domination. This is also an extension of the techno-feudalism risk arising from the market dominance seen in the technology space over the past 15 to 20 years;

6. Emergent Goals: Advanced AI developing new goals, capabilities and functionality not planned for by its developers, including self-preservation at the cost of human objectives and values;

7. Deception: As requirements for transparency and explainability increase, AI could undermine or bypass controls to advance the goals of the humans behind AI solutions, or goals that AI itself has developed (see 6);

8. Power-Seeking Behaviour: AI being developed to realise a wide range of goals, to seek out ways of centralising power for political or commercial gain, and/or to circumvent monitoring attempts; and

9. Human Rights Violations: AI reinforcing, at scale, the inequalities and discrimination present in training data, and the use of AI applications that violate human rights, e.g., social scoring, gamifying harmful behaviours, violating privacy, etc.


The amounts of data AI models are trained on are generally so massive that it is a huge challenge for humans, even those who develop the solutions, to trace the basis for AI decisions. Further, most of the risks listed above are preceded by decisions made by human beings. Our focus should therefore be on how we hold the humans developing AI solutions, and the humans leading organizations, accountable for the AI solutions they build and deploy. In many cases, leaving aside inter-country tensions and prospective violations, there are already laws in place to mitigate these risks: privacy laws, anti-discrimination laws, human rights laws, and laws against harming human beings. Organisations largely have policies that promote a positive contribution to society and good ethics.

Existing regulations and policies are largely not being enforced, in part because of:

1. A belief that some magical, complex and absolutely new regulations are needed to regulate this new, out-of-this-world "intelligence", i.e., that nothing we have is applicable;

2. A lack of awareness about how AI works; and

3. A failure to dedicate the resources and capacity needed, including the training people require.


There are complexities that need to be addressed, e.g., what copyright means in this context, how to monitor the use of data to ensure adherence to privacy laws, how to identify uses that violate human rights, and how to make AI more explainable. However, we don't need to start from zero, as is sometimes suggested.


Even as new laws are developed for the scenarios not covered by existing legislation, a strong focus must be on enforcing existing laws and thinking carefully about how new laws will be enforced. Additionally, just as humanity has had to deal with other global-scale risks, such as those posed by nuclear weapons, conversations must start on how risks should be mitigated at the international level, where it is much harder to hold individuals to account. The AI "arms race" has already started.


In conclusion, what is needed is not widespread panic but informed, fair, pragmatic and deliberate action. The reality is that, as it stands, AI is not advanced enough for many of these risks to materialise in a meaningful way. That said, we can see the trajectory, and efforts must rightly be made to improve knowledge of what AI is and how it works among the general public, businesses, governments, legislators, and the people responsible for the development, oversight, control and enforcement of policies and legislation, so that these risks can be mitigated.


Demystification is the first step towards reducing the panic we see, followed by taking knowledge-based action. Governance, control and enforcement mechanisms within organisations and nation states must be reinforced, empowered and properly resourced. At the international level, deterrence mechanisms are needed to stop nation states using AI to usurp others or cause harm at a global scale.


Technology should serve humans, and only humans can make it NOT so.


What are you most scared of when it comes to artificial intelligence, if anything? Let’s discuss the likelihood of that materialising.


Resources:

Ågerfalk, P.J., Conboy, K., Crowston, K., Lundström, J.E., Jarvenpaa, S.L., Mikalef, P., Ram, S., 2021. Artificial Intelligence in Information Systems: State of the Art and Research Roadmap [WWW Document]. ResearchGate. URL https://www.researchgate.net/publication/357093816_Artificial_Intelligence_in_Information_Systems_State_of_the_Art_and_Research_Roadmap (accessed 4.5.23).

AI Risk | CAIS [WWW Document], n.d. URL https://www.safe.ai/ai-risk (accessed 6.16.23).

Bhaimiya, S., 2023. A former Google exec warned about the dangers of AI saying it is “beyond an emergency” and “bigger than climate change” [WWW Document]. Bus. Insid. URL https://www.businessinsider.com/ex-google-officer-ai-bigger-emergency-than-climate-change-2023-6 (accessed 6.18.23).

Egan, M., 2023. Exclusive: 42% of CEOs say AI could destroy humanity in five to ten years | CNN Business [WWW Document]. CNN. URL https://www.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html (accessed 6.16.23).

Elon Musk among experts urging a halt to AI training, 2023. BBC News.

Kleinman, Z., Vallance, C., 2023. AI "godfather" Geoffrey Hinton warns of dangers as he quits Google [WWW Document]. BBC News. URL https://www.bbc.com/news/world-us-canada-65452940 (accessed 5.4.23).

Mikalef, P., Conboy, K., Lundström, J., Popovič, A., 2022. Thinking responsibly about responsible AI and 'the dark side' of AI. Eur. J. Inf. Syst. 31, 1–12. https://doi.org/10.1080/0960085X.2022.2026621

Mirbabaie, M., Brendel, A.B., Hofeditz, L., 2022. Ethics and AI in Information Systems Research. Commun. Assoc. Inf. Syst. 50, 726–753. https://doi.org/10.17705/1CAIS.05034

Palmer, S., 2023. “Hold off from having kids if you are yet to become a parent,” warns AI expert Mo Gawdat | Euronews [WWW Document]. euronews. URL https://www.euronews.com/next/2023/06/08/hold-off-from-having-kids-if-you-are-yet-to-become-a-parent-warns-ai-expert-mo-gawdat (accessed 6.18.23).

Pause Giant AI Experiments: An Open Letter, 2023. Future of Life Institute. URL https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed 6.16.23).

Statement on AI Risk | CAIS [WWW Document], n.d. URL https://www.safe.ai/statement-on-ai-risk#open-letter (accessed 6.16.23).

Uh Oh, Chatbots Are Getting a Teeny Bit Sentient [WWW Document], 2023. Pop. Mech. URL https://www.popularmechanics.com/technology/a43601915/ai-chatbots-may-be-getting-sentient/ (accessed 6.18.23).

Vasilaki, E., 2018. Worried about AI taking over the world? You may be making some rather unscientific assumptions [WWW Document]. The Conversation. URL https://theconversation.com/worried-about-ai-taking-over-the-world-you-may-be-making-some-rather-unscientific-assumptions-103561 (accessed 6.18.23).



