Can AI be compared to human intelligence – and does it pose a threat to us because it could become "superintelligent"?

Many articles and books discuss the danger of artificial intelligence. They postulate that an artificial intelligence comparable to human intelligence will be achieved in the foreseeable future. They go so far as to claim that this AI will then create ever more intelligent versions of itself, until we are confronted with a "superintelligence".

Some argue that we need to protect ourselves from and prepare for this inevitable development.

And of course, dealing with AI, like dealing with any data set, is associated with risks if one does not consider some essential points (more on this later). Nevertheless, it is important for me to emphasize at this point that the hypotheses described above have no scientific basis.

In his book "The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do", Larson (2021) summarizes the current situation well:

“The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time – that we have already embarked on the path that will lead to human-level AI, and then superintelligence. We have not. The path exists only in our imaginations. Yet the inevitability of AI is so ingrained in popular discussion – promoted by media pundits, thought leaders like Elon Musk, and even many AI scientists (though certainly not all) – that arguing against it is often taken as a form of Luddism, or at the very least a shortsighted view of the future of technology and a dangerous failure to prepare for a world of intelligent machines.” (...)

“All evidence suggests that human and machine intelligence are radically different. [However,] the myth of AI insists that the differences are only temporary, and that more powerful systems will eventually erase them” (Larson, 2021, p. 1).

As a psychologist who studied the complex functioning of the human brain intensively in her training, and who, through her work with technical experts, understands the technical possibilities at least in their basic features, I have always wondered about statements comparing artificial to human intelligence.

Even today, we have only a rudimentary understanding of our intelligence and of the overall functioning of the human brain. Scientists do not necessarily agree on many of the theories of human intelligence. There is still a lot we are only beginning to understand, and many things we thought we already understood have since been disproved.

As Eysenck stated in 1988 (p. 1):

“There has perhaps been more controversy concerning the nature and existence of intelligence than of any other psychological concept.”

This statement has lost none of its validity to this day.

We are not even close to completely understanding the functioning of the human brain, nor are we able to reproduce it artificially.

- Artificial intelligence and human intelligence –

One problem in the current debate on artificial intelligence is that the ability to "solve problems" is compared with, and in part even equated with, human intelligence as a whole.

To believe that all of human thought could be understood, in effect, as the “breaking” of “codes” – the solving of puzzles and the playing of games like chess or Go – is a very simplified view of intelligence.

Even though problem solving is certainly an important part of human intelligence, this view is reductive and does not do justice to the complex biochemical processes that make up the functioning of our brain.

Indeed, analogical problem-solving performance correlates highly with IQ. This correlation led Lovett and Forbus (2017, p. 60) to argue that “Analogy is perhaps the cornerstone of human intelligence”. More precisely, there are close links between analogical problem solving and fluid intelligence, which “refers to the ability to reason through and solve novel problems” (Shipstead et al., 2016, p. 771).

However, Schlinger (2003), for example, argues in his paper “The myth of intelligence” that there is no general intelligence underlying all skills, and that a concept of intelligence as anything more than a label for various behaviors in their contexts is a myth. In his view, a truly scientific understanding of the behaviors said to reflect intelligence can come only from a functional analysis of those behaviors in the contexts in which they are observed. He further argues that the conceptualization of general intelligence was based on the logical errors of reification and circular reasoning.

In line with this, some psychologists have gone beyond the concept of a unitary general intelligence and have suggested several distinct types of intelligence, some operating autonomously, ranging from three (Sternberg, 1984) to seven (Gardner, 1983).

Sternberg's (1984) triarchic theory includes three types of intelligence – analytical, creative, and practical. Gardner (1983) postulated no fewer than seven intelligences, including linguistic and musical intelligence, both of which are aural-auditory; logical-mathematical and spatial intelligence, which are visual; bodily-kinesthetic intelligence; and the two personal intelligences (interpersonal and intrapersonal).

Daniel Goleman popularized the phrase “Emotional Intelligence” with the publication of his book by the same title in 1995.

Wigglesworth (2004) distinguished four types of intelligence: physical, cognitive, emotional, and spiritual.

A report on intelligence issued by the task force established by the American Psychological Association concluded: "Because there are many ways to be intelligent, there are also many conceptualizations of intelligence" (Neisser et al., 1996, p. 95).

These examples of theories of human intelligence are not intended to be, nor can they be, a comprehensive or conclusive treatment of the topic. Nevertheless, they illustrate its complexity and show that the biochemical processes that constitute our thinking go far beyond rational problem-solving processes. In fact, one must even ask whether human thinking always proceeds rationally.

- Are humans even rational? –

This question is too complex to answer here, but the following explanation from Eysenck and Keane (2020, pp. 702–703) illustrates an important facet of the problem:

“Historically, an important approach (championed by Piaget, Wason and many others) claimed rational thought is governed by logic. It follows that deductive reasoning (which many have thought requires logical thinking) is very relevant for assessing human rationality.

Sadly, most people perform poorly on complex deductive-reasoning tasks. Thus, humans are irrational if we define rationality as logical reasoning.

The above approach exemplifies normativism. Normativism “is the idea that human thinking reflects a normative system against which it should be measured and judged” (Elqayam & Evans, 2011, p. 233). For example, human thinking is “correct” only if it conforms to classical logic.

Logic or deductive reasoning does not provide a suitable normative system for evaluating human thinking. Why is that? As Sternberg (2011, p. 270) pointed out, “Few problems of consequence in our lives had a deductive or even any meaningful kind of ‘correct’ solution. Try to think of three, or even one!””

- How does AI work, and how does it differ from human intelligence? –

Game playing has been a constant source of inspiration for the development of advanced AI techniques. However, games are simplifications of life that reward simplified views of intelligence (Larson, 2021). A chess program plays chess, but it performs rather poorly at driving a car.

Treating intelligence as problem solving gives us narrow applications. If machines could learn to become general, we would witness a transition from specific applications to general thinking beings – we would have AI (Larson, 2021).

But, as Larson (2021, pp. 28 - 29) explains further: “What we now know, however, argues strongly against the learning approach suggested early on by Turing. To accomplish their goals, what are now called machine learning systems must each learn something specific. Researchers call this giving the machine a “bias”. (...) A bias in machine learning means that the system is designed and tuned to learn something. But this is, of course, just the problem of producing narrow problem-solving applications. (This is why, for example, the deep learning systems used by Facebook to recognize human faces haven’t also learned to calculate your taxes.)

Even worse, researchers have realized that giving a machine learning system a bias to learn a particular application or task means it will do more poorly on other tasks. There is an inverse correlation between a machine’s success in learning some one thing, and its success in learning some other thing. (...)

But bias is actually necessary in machine learning – it’s part of learning itself.

A well-known theorem called the “no free lunch” theorem proves exactly what we anecdotally witness when designing and building learning systems. The theorem states that any bias-free learning system will perform no better than chance when applied to arbitrary problems.”

“Success and narrowness are two sides of the same coin. This fact alone casts serious doubt on any expectation of a smooth progression from today’s AI to tomorrow’s human-level AI. People who assume that extensions of modern machine learning methods like deep learning will somehow “train up”, or learn to be intelligent like humans, do not understand the fundamental limitations that are already known” (Larson, 2021, p. 30).
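To make the idea of learning bias concrete, here is a minimal sketch of the intuition behind the "no free lunch" theorem. It is my own illustration, not code from Larson's book; the "predict the majority training label" rule and all the names in it are invented for the example. The point it demonstrates: averaged over every possible target function, a fixed learning rule does no better than chance on inputs it has never seen.

```python
# Toy illustration (my own example, not from Larson) of the "no free lunch" intuition:
# averaged over ALL possible target functions, a fixed learning rule predicts
# unseen inputs no better than chance.
from itertools import product

inputs = list(product([0, 1], repeat=3))   # all eight 3-bit inputs
train_inputs = inputs[:4]                  # inputs the learner gets to see
test_inputs = inputs[4:]                   # unseen inputs it must predict

def majority_rule_learner(train_pairs):
    """A deliberately simple 'biased' learner: always predict the majority training label."""
    ones = sum(label for _, label in train_pairs)
    majority = 1 if 2 * ones >= len(train_pairs) else 0
    return lambda x: majority

total = correct = 0
# Enumerate every possible target function f: {0,1}^3 -> {0,1} (2^8 = 256 of them).
for labels in product([0, 1], repeat=len(inputs)):
    f = dict(zip(inputs, labels))
    predict = majority_rule_learner([(x, f[x]) for x in train_inputs])
    for x in test_inputs:
        total += 1
        correct += (predict(x) == f[x])

print(f"Average accuracy on unseen inputs, over all possible targets: {correct / total:.2f}")
# Prints 0.50 – chance level. Any other fixed learning rule gives the same average.
```

The point is not that this particular rule is bad, but that without a bias that happens to match the kind of problem at hand, no rule can do better on average. This is exactly why machine learning systems must be tuned to something specific, and why that tuning keeps them narrow.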

These explanations by Larson are, of course, only a small part, albeit an essential part, of the complex question of whether AI can be compared to human intelligence. For a more detailed consideration, I recommend Larson's book.

Nevertheless, these statements, together with the state of research on human intelligence, show how unrealistic horror scenarios involving a superintelligence are, and how far we still are from understanding all the processes and interrelationships of human intelligence, not to mention being able to reproduce them.

Quite independently of the question of whether we are threatened by a superintelligence, a careless handling of AI is nevertheless associated with risks.

- Actual risk of AI –

For anyone who has ever worked in research, this problem is well known: if we collect data sloppily, based on the wrong sample, or using the wrong methodological approaches, these errors naturally carry over into the evaluation, analysis, and interpretation of the data, and the study is worthless – and in the worst case, misleading and dangerous.

The same, of course, applies to AI. Only high-quality data, free of bias and other sources of error, can lead to valuable AI solutions. And only if we set up AI models in such a way that we can explain them and they remain transparent to us can we be sure that the data is really being used as we intend.

Real danger comes from non-transparent and non-explainable AI that is based on biased data sets. For example, racism and sexism that have been consciously or unconsciously transferred from human thinking into the underlying data carry over into the decisions and recommendations of the AI.

AI must be transparent and explainable. Companies must be clear about who trains their AI systems, what data was used in training and, most importantly, what went into their algorithms’ recommendations.

Years of collecting biased data means that AI will be biased. Ethical AI starts with a thorough examination of the datasets used.
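What such an examination might look like in practice can be sketched very simply. The following snippet is a hypothetical toy example; the field names and numbers are mine, not taken from any real dataset. It checks two of the most basic warning signs before any model is trained: whether demographic groups are represented in very unequal numbers, and whether the outcome label is distributed very differently across groups.

```python
# A minimal, hypothetical first-pass dataset audit (toy records, invented fields):
# check group representation and per-group outcome rates before training anything.
from collections import Counter

records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "A", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

group_counts = Counter(r["group"] for r in records)
print("Records per group:", dict(group_counts))

for group in sorted(group_counts):
    group_records = [r for r in records if r["group"] == group]
    positive_rate = sum(r["label"] for r in group_records) / len(group_records)
    print(f"Positive-label rate for group {group}: {positive_rate:.2f}")

# Strongly unbalanced group sizes or large gaps between these rates do not prove
# the data is unusable, but they are exactly the signals that call for scrutiny
# before a model is allowed to learn from, and reproduce, them.
```

Real audits go much further than this (proxy variables, labeling provenance, sampling history), but even a check this simple makes the kind of skew visible that otherwise ends up baked into a model's decisions.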

A recent example of what happens when we don't do this is described in this article: "Medical robots conform to racism and sexism due to biased AI, proves experiment" (stealthoptional.com).

In the study (Hundt et al., 2022), evidence shows that medical robots will exhibit bias in operational procedures. Much like surgeons deciding whom to operate on based on expected success rates, a robot will pick between people simply after looking at their faces.

The article quotes from the study: “A robot operating with a popular Internet-based artificial intelligence system consistently gravitates to men over women, white people over people of color, and jumps to conclusions about people's jobs after a glance at their face."

"The robot has learned toxic stereotypes through flawed neural network models,” Hundt explained in the article. "We're at risk of creating a generation of racist and sexist robots but people and organizations have decided it's OK to create these products without addressing the issues."

One of the reasons AI bias is making its way into more and more medical robots is the data sets companies are willing to use. Just like any industry, many companies are looking for the cheapest R&D possible.

So, in conclusion, we can say that AI itself is not a threat. We are not on the verge of superintelligence.

The danger of AI lies in the wrong way of dealing with it. Now it is up to us to demand and implement a transparent and ethical approach to data and AI.


Elqayam, S. & Evans, J. St. B. T. (2011). Subtracting “ought” from “is”: Descriptivism versus normativism in the study of human thinking. Behavioral and Brain Sciences, 34, 233–248.

Eysenck, H. J. (1988). The concept of “intelligence”: Useful or useless? Intelligence, 12(1), 1–16.

Eysenck, M. W., & Keane, M. T. (2020). Cognitive psychology: A student’s handbook. Psychology Press.

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Goleman, D. (1996). Emotional intelligence: Why it can matter more than IQ. Bloomsbury Publishing.

Hundt, A., Agnew, W., Zeng, V., Kacianka, S., & Gombolay, M. (2022). Robots enact malignant stereotypes. In 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 743–756).

Larson, E. J. (2021). The myth of artificial intelligence: Why computers can’t think the way we do. Harvard University Press.

Lovett, A. & Forbus, K. (2017). Modeling visual problem solving as analogical reasoning. Psychological Review, 124, 60–90.

Neisser, U., Boodoo, G., Bouchard Jr., T. J., Boykin, A. W., Brody, N., Ceci, S. J., ... & Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51(2), 77–101.

Schlinger, H. D. (2003). The myth of intelligence. Psychological Record, 53(1), 15–32.

Shipstead, Z., Harrison, T. L. & Engle, R. W. (2016). Working memory capacity and fluid intelligence: Maintenance and disengagement. Perspectives on Psychological Science, 11, 771–799.

Sternberg, R. J. (1984). Toward a triarchic theory of human intelligence. Behavioral and Brain Sciences, 7(2), 269–287.

Sternberg, R. J. (2011). Understanding reasoning: Let’s describe what we really think about. Behavioral and Brain Sciences, 34, 269–270.

Wigglesworth, C. (2004). Spiritual intelligence and why it matters. Kosmos Journal, spring/summer.
