THE REAL DANGER OF AI COMES FROM PEOPLE, NOT MACHINES

THE ALARMS AND THREATS

Recent revolutionary advances in AI have triggered discussions about the feasibility of machines becoming sentient and presenting an existential threat to humanity, just as depicted in dystopian sci-fi films such as “Terminator” and “Ex Machina”. Indeed, many respected industry pundits are making the rounds raising the alarm about the imminent advent and dangers of “Artificial General Intelligence” (AGI) and its successor, “Superintelligence”.

AGI refers to highly autonomous systems or machines that possess general intelligence comparable to human intelligence across a wide range of cognitive tasks. AGI systems would exhibit capabilities such as learning, reasoning, problem-solving, perception, and understanding language. Superintelligence refers to an intelligence that significantly exceeds human intelligence in virtually every intellectual endeavor and potentially possesses an understanding of the world far beyond human comprehension.

According to some views, Superintelligence could emerge from an AGI system that undergoes self-improvement, or from other pathways such as collective intelligence or even direct brain-computer interfaces. Some AI experts predict AGI could arrive by the end of next year, and Superintelligence by the end of this decade. Yes, that soon.

But is this threat real?

These predictions are based on the exponential scaling up of the number of neural network parameters and tokens, combined with improved, unsupervised machine self-learning algorithms. How drastic is that scaling up? Think of a “parameter” as a knob in the neural network whose weight is adjusted during training. GPT-4 is expected to have almost a trillion parameters, compared to the 175 billion parameters found in GPT-3. As for “tokens”, they represent the size of the attention span, or context window, of a transformer model. GPT-3 uses about 4,000 tokens, but Microsoft has recently announced a system that could use one billion tokens.
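To make that scale concrete, here is a minimal back-of-the-envelope sketch in Python. The bytes-per-parameter figure and the trillion-parameter count are illustrative assumptions on my part, not published specifications:

```python
# Rough, illustrative look at what "scaling up parameters" means in memory
# terms alone. All figures are assumptions for illustration, not specs.

def model_memory_gb(num_parameters: float, bytes_per_parameter: int = 2) -> float:
    """Memory needed just to store the weights (2 bytes each in fp16)."""
    return num_parameters * bytes_per_parameter / 1e9

gpt3_like = 175e9       # ~175 billion parameters (published GPT-3 figure)
trillion_scale = 1e12   # "almost a trillion" parameters (speculative)

print(f"175B parameters: ~{model_memory_gb(gpt3_like):,.0f} GB of weights")
print(f"1T parameters:   ~{model_memory_gb(trillion_scale):,.0f} GB of weights")
# ~350 GB vs ~2,000 GB of raw weights -- far beyond any single GPU, which is
# why such models are sharded across large clusters just to run, let alone train.
```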

In principle, these levels of scaling should be of concern. However, the belief that superintelligence will result simply from scaling up current AI systems is not unlike the idea that we can make cars fly by simply adding more pistons to their engines. Ironically, ChatGPT got it right with this response: “Achieving AI capabilities, like human-level understanding or superintelligence, requires a comprehensive approach, including sophisticated algorithms, data processing, and deep understanding of cognition. Increasing the components of an AI model without considering the essential elements won't automatically result in advanced intelligence.”

In my previous article, “An Argument Against the Myth of Conscious AI: Why Machines Will Never Truly Think and Feel”, I made a case refuting the idea that AI could ever become conscious. However, I must admit that I failed to explain why I made “Consciousness” the linchpin of my argument.

In this article I make the case that the most likely dangerous scenarios come from human maliciousness rather than from machines rebelling against humans. I support this premise by analyzing the steps that would need to occur for AI to evolve on its own into an apocalyptic danger, and by showing why Consciousness is key in this progression.

As a caveat, even setting aside the potential for AI to be hijacked by humans for malicious purposes, or for AI to independently evolve into an existential threat to humanity, this technology has other major negative side effects that demand appropriate and urgent societal and governmental oversight. Below is a quick list of those very real issues:

  1. Impact on jobs and professions. It is clear that AI will transform, if not eliminate, the need for specific knowledge-based professions.
  2. Reality obfuscation. It is bad enough that social media has taken us into a so-called post-reality world, but it will become increasingly hard to distinguish genuine content from manufactured content. With Stable Diffusion and similar tools, virtually anything can be faked. More concerning still are convincingly fake human personas and autonomous weapons that people come to trust, and LLMs can be easily manipulated to proffer false information.
  3. Copyright issues and liabilities. Information provided by systems such as ChatGPT may be copyrighted or proprietary. More worrisome, the OpenAI terms of service transfer any liability arising from use of that intellectual property to the user rather than to OpenAI.
  4. Innovation and creative suppression. Normalization of responses reduces alternatives; most AI models provide similar answers almost word for word.
  5. False responses. Lack of understanding generates wrong answers, emergent phenomena produce unwanted results, and so-called hallucinations abound. These models are also not good at math, often delivering wrong results with complete confidence.
  6. Biases from AI training. AI systems have been shown to regurgitate the bias and racism prevalent in the data sets used to train them.

These and other more “pedestrian” concerns will be explored in more detail in a future article. For now, let’s focus on the major perceived AI threat.

AN AI PROGRESSION MIND MAP

The following Mind Map shows a possible AI evolutionary path with two outcomes: C-3PO or TERMINATOR.

[Figure: An AI Progression Mind Map]

Lines in red represent progressions that critics claim could be achieved by technology on its own, without human involvement. The ones in purple represent a potentially more benevolent path in which humans remain in control of every phase. Again, the fact that humans can influence each of these steps is no guarantee against maliciousness; indeed, as of this writing, I argue that the control of AI by malicious or accidental human influence is the most significant danger we must guard against.

THE STATE OF AI TODAY

Starting with section “A” of the Mind Map, shown in green, is the current state of AI technology today. Yes, Large Language Models, or LLMs (also referred to as Large Foundation Models), such as GPT and Bard represent a quantum advance and the state of the art in information extraction, not to mention their ability to parse and produce human-level language. Beyond recent developments in deep neural networks and the adoption of the so-called Transformer attention architecture, these technologies could not have succeeded without low computing costs, powerful GPUs, massive cloud storage farms, high-speed networks, and the existence of big data, primarily from the Internet and social media.

[Figure: The State of AI Today]

As far as Big Data goes, it is estimated that by 2025 there will be 185 zettabytes of data in the global datasphere. A zettabyte is one million times one million gigabytes: the equivalent of a billion one-terabyte USB flash drives. Already, by the beginning of 2020, the number of bytes in the digital universe was 40 times larger than the number of stars in the observable universe.

The amount of data keeps growing exponentially. In 2018 alone, more than 2.5 quintillion bytes were created every single day. By 2025 data generated each day will reach 463 Exabytes globally. In comparison, all the words spoken by humans since the beginning of time would fit into only 5 exabytes[1].
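For readers who want to sanity-check these units, here is a short, illustrative snippet; the figures are simply the ones quoted above, restated in bytes:

```python
# Quick sanity check of the data-volume figures quoted above (decimal units).
GB, TB, EB, ZB = 10**9, 10**12, 10**18, 10**21

datasphere_2025 = 185 * ZB      # projected global datasphere by 2025
daily_2018 = 2.5e18             # "2.5 quintillion bytes" created per day in 2018
daily_2025 = 463 * EB           # projected data created per day by 2025

print(f"{datasphere_2025 / TB:.3g} one-terabyte drives")           # ~1.85e+11 (185 billion)
print(f"{daily_2018 / EB:.1f} exabytes per day in 2018")            # 2.5
print(f"{daily_2025 * 365 / ZB:.0f} zettabytes per year by 2025")   # ~169
```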

We are witnessing rapid AI advances, best characterized by the emergence of ever-larger language model variants (many of them open source) and by low-hanging-fruit applications such as chatbots and Stable Diffusion-based tools for image, video, voice, and music processing.

As of now, the cost and time required to train these AI models remain a significant barrier to more advanced systems. While the cost to train GPT-3, with its 175 billion parameters spread across some 96 neural layers, is estimated to have been around $100 million, the cost of training the reportedly 800-billion-parameter GPT-4 model could exceed half a billion dollars. Even so, it is not at all clear that more parameters will be needed, as newer, more efficiently trained models such as Orca[2] are achieving GPT-3-level results with only 13 billion or so parameters. Furthermore, the decreasing cost and increasing availability of GPUs could make home-based AI systems possible long before the end of the decade.
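To see why training cost, rather than algorithmic insight, is often the binding constraint, here is a hedged sketch using the widely cited rule of thumb of roughly 6 FLOPs per parameter per training token; the token counts below are assumptions of mine, not disclosed figures:

```python
# Back-of-the-envelope training compute using the common ~6 * N * D FLOPs
# approximation (N = parameters, D = training tokens). The token counts are
# illustrative assumptions, not disclosed figures for any real model.

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

orca_scale = training_flops(13e9, 1e12)    # 13B-parameter model, assumed 1T tokens
giant_scale = training_flops(800e9, 5e12)  # hypothetical 800B model, assumed 5T tokens

print(f"{orca_scale:.1e} vs {giant_scale:.1e} FLOPs "
      f"(~{giant_scale / orca_scale:.0f}x more compute)")
# Roughly 300x more compute for the larger run -- and compute translates
# almost directly into GPU-hours and dollars.
```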

So, this is the current stage of AI that is already freaking out so many.

Several AI researchers have been shocked to find their LLMs displaying unexpected capabilities. For example, one model was able to communicate in a foreign language for which it had not been expressly trained[3]. And yet it falls within the laws of complexity that, with hundreds of billions of parameters, some models will end up producing surprising emergent results. However, we should not lose sight of the fact that those emergent capabilities come from deterministic processes. We simply cannot explain them well enough because the techniques being used lack interpretability and transparency; not necessarily because the AI system has jumped the fence toward autonomous understanding.

And “Understanding” is a key ability that a putative AGI must have to break free from human control.

While, per the dictionary definition, “Understanding” means to comprehend, it is also the ability to generate novel knowledge and hypotheses, test them, and apply appropriate inferences and conclusions. Understanding emerges not from information sources that explicitly encode knowledge, but from the ingestion and evaluation of experience. I covered the subject of “Understanding” in one of my previous LinkedIn articles, “Data, Taxonomies, and the Road to Wisdom”.

Current generative AI systems depend on explicit training and on the ability to cross-reference information for inferential purposes. . . nothing more. It is fair to say that today's Large Language Models operate more like sophisticated stochastic parrots. Insofar as language is ultimately a way to model reality, these models lack the contextual knowledge of reality needed to independently widen their cognitive space. As amazing as they appear to be at understanding and generating language, they do not operate by interpreting language as a connector to reality. And without a true understanding of language there cannot be a true understanding of reality. As the philosopher Ludwig Wittgenstein put it, “The limits of my language mean the limits of my world”. Perhaps in the future a more elaborate natural language processing method, based on semantics and symbolic mappings to reality, might achieve this feat.
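To make the “stochastic parrot” metaphor concrete, here is a deliberately toy next-word sampler. Real LLMs are enormously more sophisticated, but the illustrative point stands: generation is driven by statistical continuation of training text, with no reference to the world the words describe.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": it learns which word tends to follow which, then
# samples plausible continuations. Nothing in it models the world itself.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    follows[word].append(next_word)

def parrot(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:          # no observed continuation: stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))  # fluent-looking output, e.g. "the cat sat on the rug",
                      # produced without any notion of cats, mats, or meaning
```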

To be clear, this is not just a linguistic deficiency in current AI systems. The success DeepMind's AlphaGo had in beating the world's human Go champion was widely celebrated, a feat that had previously been deemed impossible. But not many know that a state-of-the-art Go system in the same mold was recently defeated by an American player ranked one level below the top amateur level. The amateur won by playing simple, off-the-book moves that would have required the program to truly understand the game in order to counter them properly. This lack of understanding is a fundamental weakness shared by most of today's widely used AI systems, including the ChatGPT chatbot.

CAN AI EVOLVE CONSCIOUSNESS?

But let’s assume that “Understanding” is ultimately achieved, either via auto-emergent behaviors from LLMs (unlikely) or via improved NLP algorithms created by humans (more likely). What next?

[Figure: AI Consciousness]

Section “B” of the Mind Map suggests that understanding would soon be logically followed by the AI system's ability to autonomously gain a sense of identity.

Now, I don't have pets, but I am convinced that animals do have at least some sense of self, or ego as Freudians would label it. It could be argued that every living entity is endowed with a sense of Self, no matter how primitive, if it is to compete in the Darwinian fight for survival. AI systems that have somehow gained understanding could indeed evolve a sense of “I-me-mine” versus the “World”. This could then quickly be followed by a sense of self-awareness, and this awareness could feed self-interest, driving a sense of self-preservation. Again, this sense of self-interest need not be identical to that possessed by humans. It could be more instinctual and basic, such as the one that drives animals to seek food and protect their offspring.

Understanding, together with ego and the sense of self-preservation, requires the formation of values and constraints to serve as a “rules of the road” framework for the entity's existence. It can even be argued that this kind of AI progression is what constitutes “Consciousness”.

Regardless of the mainstream scientific view that consciousness is “something that the brain excretes”, there is no real evidence that consciousness is a byproduct of physical activity in our brain. None. Don't believe those neuroscientists who tell you otherwise. In my previous article I put forward reasons why I don't believe an AI system will ever gain consciousness. You can also check a YouTube video of mine describing an alternative ontology which posits that consciousness is not part of the physical world[4], but rather something that exists in a separate information plane, and that consciousness requires quantum phenomena such as superposition and entanglement. There is also a very interesting Discover Magazine article, “Your Brain Is Not a Computer. It Is a Transducer”[5], which presents a very similar idea.

My proposition in that article is simple: no traditional computing solution can ever be conscious, and therefore no computing solution based on non-quantum computing can ever be sentient or have a soul.

So, there you have it: If there is no consciousness, there is no cognitive intentionality, and if there is no cognitive intentionality, there cannot be free will. No free will, no ill will.

But for the sake of argument, let's suppose that I am wrong (I have been known to be in the past), and that AI can and does become conscious. We would then have an entity endowed with free will and a concept of self, driven by its own framework of morals and constraints. Such a system would be able to identify the gaps between its intended purpose and the world. Just as a lion can sense the gap between being hungry and having food, and therefore set the goal of hunting, the AI system could do the same in accordance with its own needs and desires. The sophistication of the task would depend on the level of consciousness attained by the AI system. However, since we are already assuming a worst-case scenario here, we should assume that the AI's consciousness could reach the level of Superintelligence.

Such an AI system would be capable of quickly identifying its own “problems” (gaps) and then autonomously setting the goals it determines it must pursue to close those gaps.

[Figure: AI Gap Identification]

CAN AI BECOME THE TERMINATOR?

Whether the goals come from an auto-evolved AI or are manually created, the next natural question is whether they align with the well-being of humanity. This is referred to as the “Problem of Alignment”. It could be that the evolving AI system will naturally conclude that its own existence aligns with humanity's continued existence. Such a perfect AI-human alignment is known as “Superalignment”. This alignment path (case “E” in the Mind Map) is a happy one and takes us to a world aided by helpful C-3POs and R2-D2s. However, if the putative objectives set forth by the AI system turn it into a rogue system with objectives that run counter to humanity's (path “F”), we would indeed end up in trouble.

Will there be alignment? Since this is a known-unknown, it would be fair to claim a 50/50 chance that an evolved autonomous AI intelligence could end up being benevolent. . . or maybe not.

Let's assume that the worst-case scenario envisioned by those raising the AI alarms does in fact occur. That is, AI evolves into an intelligence that simply views humans as cogs: perhaps unnecessary, disposable cogs. Although such a malicious, rogue AI system could cause a lot of damage to our civilization by taking control of the digital sphere, it would still lack the physical agency needed to escape its dependence on humans.

[Figure: AI Possible Outcomes: C-3PO or Terminator?]

Such a system would still need some form of physical agency to have a real impact on the material world. In addition to controlling the Internet of Things, the final step in the process is for the AI system to gain agency in the physical world, perhaps with the assistance of humans. This agency would include access to the materials and energy needed to manufacture physical robots and to expand its computing power. Those GPUs are not going to build themselves, after all!

Such an emerging AI entity would be able to feed back into the original data sources and learning systems in order to maintain a self-sustaining power loop on Earth.

We have finally entered Terminator territory!

Let me summarize the steps that must occur between now and a putative autonomously evolved “evil” AGI or Superintelligence:

  1. The system should gain the capability to understand the world.
  2. Perhaps by leveraging its capability for understanding, the system should gain a sense of self.
  3. The sense of self will make the system become self-interested.
  4. Self-interest will make the system define a framework of values and constraints. A world view if you will. It is here that the system may establish a troublesome zero-sum paradigm for instance.
  5. The system will then be capable of identifying the world’s gaps vis-à-vis its interests and of establishing, on its own, specific non-directed goals.
  6. The goals so derived will not align with humanity.
  7. The system will find a way to exercise agency over the physical world.

In the end, you can make your own assessment of how realistic such a progression truly is and decide accordingly whether or not to panic. Already, in response to this sense of panic, many legislatures and experts are contemplating regulations and safeguards against this scenario. . .

ARE WE TRULY DOOMED? CONTROLS & CHECKS

Autonomous AI systems, even if they were to appear, would not be impossible to defeat. No matter how “intelligent” these systems might be, they will be constrained by the limits of computability. Not only are there intractable problems whose solution time grows beyond any practical bound, but, as proven mathematically by the great Alan Turing, there are some problems that cannot be solved by any kind of computer, whether classical or quantum . . . ever. Even Superintelligence would be constrained by this universal law.
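As a toy illustration of Turing's result, the classic diagonal argument can be sketched in a few lines of Python. The `halts` function below is a hypothetical oracle; the whole point of the sketch is that no such function can actually exist:

```python
# Sketch of the halting-problem argument. Assume, for contradiction, that a
# correct, always-terminating oracle halts(program, data) exists.

def halts(program, data) -> bool:
    """Hypothetical oracle: True if program(data) would eventually halt."""
    raise NotImplementedError("No such total, correct oracle can exist.")

def paradox(program):
    # Ask the oracle about the program applied to itself, then do the opposite.
    if halts(program, program):
        while True:      # loop forever if the oracle predicts "halts"
            pass
    return "done"        # halt if the oracle predicts "loops forever"

# Now consider paradox(paradox): whichever answer the oracle gives, paradox
# does the opposite, so no consistent answer exists. Hence no general halting
# oracle is possible -- a hard limit that binds any computer, however "smart".
```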

Still, looking for ways to algorithmically control the dangers of AI is worth the effort. The questions are what to control and how.

There is growing attention and work dedicated to figuring out possible algorithmic “brakes” or “controls” to prevent a runaway AGI from taking over the world, since, clearly, something more than Asimov's somewhat naïve Three Laws of Robotics is needed.

Google's DeepMind has recently published a white paper, “Model evaluation for extreme risks”[6], proposing a framework for identifying dangerous AI capabilities. OpenAI is also working on achieving “Superalignment”[7]: a lofty research project intended to steer and control AI systems “much smarter than us” within the next four years. Another initiative is the so-called GATO Framework[8], a global, decentralized movement to advance the principles of axiomatic alignment in AI. This latter group has released a proposed framework “to ensure that AI systems, in their ever-growing influence and reach, are fundamentally aligned with the principles that we, as a species, hold most dear”.

But taking a cynical view, allow me to pose this question: how can we expect AI to be aligned with humanity if it can be alleged that humans themselves are not aligned with humanity?

Perhaps we are placing the immediate focus on the wrong area. Most AI alignment initiatives focus on algorithmic or procedural control mechanisms to prevent autonomous AI systems from going rogue. But what the Mind Map hints at is that the more likely danger comes from humans steering AI into dangerous territory . . . at every step of the way.

Think of the way scientists conduct viral gain-of-function research, which, despite being highly regulated, is suspected (albeit not yet proven) to have started the Covid pandemic. It is not that the viruses are inherently evil and out to destroy humanity. Viruses, as far as we know, have no volition; but, as with genetics, authoritarian governments, terrorists, and criminal organizations should be expected to try to manipulate AI systems to their advantage. This situation is fundamentally no different from the risks presented by hacking, malware, and cyber-attacks on critical digital infrastructure. Think of ransomware or software viruses on steroids. Just like genetics, AI can be beneficial, but it can also be terrifying.

The revised Mind Map below shows that the very real danger is not a Terminator outcome, but a Darth Sidious one (sorry for so many sci-fi references, but that's the world we find ourselves in today!).

The most immediate threat comes from an evil human and not from an evil machine.

Viewing this revised Mind Map's red path to danger, we can surmise that AI does not need consciousness to become a source of danger. This is why I recommend that the regulatory and algorithmic focus be placed on the possibility of AI being weaponized by humans who feed or bias AI systems with their desired dark objectives. In this endeavor we can leverage what we have learned from the field of cybersecurity and advance those precepts to better meet the challenges ahead. But the question here is whether we will be able to do so with a sense of urgency, proactively and with imagination.

[Figure: AI Alternative Outcomes: C-3PO or Darth Sidious?]

Well, all of this is just food for thought . . . What do you think?

REFERENCES

[1] SeedScientific. “How much data is generated every day?”. October/2021

[2] Microsoft. Orca: Progressive Learning from Complex Explanations Traces of GPT-4. June/2023

[3] Futurism.com. “Google surprised when experimental AI learns language it was never trained on”. April/2023

[4] Israel del Rio. “COCOON—A Cosmological Consciousness Ontology”. September/2022

[5] Discover. “Your Brain Is Not a Computer. It Is a Transducer”. August/2021

[6] DeepMind. “Model evaluation for extreme risks”. April/2023

[7] OpenAI. “Introducing Superalignment”. Web Site.

[8] GATO. “GATO Framework”. Web Site.

Israel del Rio. “A Baker’s Dozen Things to Know About the Impact of AI”. LinkedIn article.



