The question of whether a computer can think is no more interesting than whether a submarine can swim. Honestly, what is #intelligence?

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” This quote from Edsger W. Dijkstra neatly frames the debate that surrounds the world of artificial intelligence. These technologies have become so accessible in today’s society that we rarely stop to consider where we should draw the line in projecting human characteristics onto artificial beings. And it is not only technology: humans have the bad habit of projecting our experiences onto animals as well. Is it our fault that we question the relational aspects surrounding artificial intelligence? Better yet, at what point do we consider something human? Is it our ability to form relationships through thought and passion, or the way we can hold a conversation? At some point, humans decided we had found the sweet spot where we can “recognize” and “determine” what is artificial, but chances are, we are wrong. Aren't we?

We often think and deliberate about intelligence with an anthropocentric conception of our own intelligence in mind, treating it as an obvious and unambiguous reference point. We tend to use this conception as a basis for reasoning about other, less familiar phenomena of intelligence, such as other forms of biological and artificial intelligence. This can lead to fascinating questions and ideas. An example is the discussion about how and when the point of “intelligence at human level” will be reached. Ackermann, for instance, writes: “Before reaching superintelligence, general AI means that a machine will have the same cognitive capabilities as a human being.” So researchers deliberate extensively about the point in time when we will reach general AI. Well... let's suppose that these kinds of questions are not quite on target. Better yet, let's consider whether they make any sense at all...

There are, in principle, many different possible types of (general) intelligence conceivable, of which human-like intelligence is just one. The development of AI, for example, is determined by the constraints of physics and technology, not by those of biological evolution. So, just as the intelligence of a hypothetical extraterrestrial visitor to our planet Earth would likely have a different (in)organic structure, with different characteristics, strengths, and weaknesses, than its human residents, the same will apply to artificial forms of (general) intelligence.

Below, let's briefly summarize a few fundamental differences between human and artificial intelligence, taken from a perspective shared by Bostrom:

Basic structure: Biological (carbon-based) intelligence runs on neural “wetware,” which is fundamentally different from artificial (silicon-based) intelligence. Unlike biological wetware, in silicon, or digital, systems “hardware” and “software” are independent of each other. When a biological system has learned a new skill, that skill remains bound to the system itself. In contrast, once an AI system has learned a certain skill, the constituting algorithms can be copied directly to all other similar digital systems.
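As a toy illustration of this copy-ability, here is a minimal sketch in plain Python. The `Agent` class and its trivial threshold “classifier” are hypothetical, invented for this example; the point is only that a learned skill lives in transferable parameters, so a second agent acquires it by copying, with no retraining.

```python
import copy

class Agent:
    """A toy AI 'agent' whose learned skill is just a set of parameters."""
    def __init__(self):
        self.weights = None  # no skill yet

    def learn_threshold(self, samples, labels):
        # 'Learn' the midpoint between the two class means (a trivial classifier).
        pos = [s for s, l in zip(samples, labels) if l == 1]
        neg = [s for s, l in zip(samples, labels) if l == 0]
        self.weights = {"threshold": (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2}

    def predict(self, x):
        return 1 if x > self.weights["threshold"] else 0

# One agent learns the skill...
teacher = Agent()
teacher.learn_threshold([1.0, 2.0, 8.0, 9.0], [0, 0, 1, 1])

# ...and the learned parameters are copied verbatim into another agent,
# which instantly has the same competence -- no retraining needed.
student = Agent()
student.weights = copy.deepcopy(teacher.weights)

print(teacher.predict(7.5), student.predict(7.5))  # identical behaviour
```

A biological brain offers no analogue of that `deepcopy` line: the "weights" cannot be read out and written into another brain.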

Speed: Signals in AI systems propagate at nearly the speed of light. In humans, nerve conduction proceeds at a speed of at most 120 m/s, which is extremely slow on the time scale of computers.
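A quick back-of-the-envelope calculation makes the gap concrete, using the ~120 m/s figure cited above against the speed of light (an upper bound for electronic signals; real signals in silicon are somewhat slower):

```python
# Rough comparison of signal propagation speeds (ballpark, not a measurement).
SPEED_OF_LIGHT = 299_792_458   # m/s, upper bound for electronic/optical signals
NERVE_CONDUCTION = 120         # m/s, peak conduction velocity of myelinated nerves

ratio = SPEED_OF_LIGHT / NERVE_CONDUCTION
print(f"Electronic signals are roughly {ratio:,.0f}x faster than nerve signals")
```

Even with conservative assumptions, the ratio is on the order of millions.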

Connectivity and communication: People cannot communicate with each other directly. They communicate via language and gestures, with limited bandwidth. This is slower and more difficult than the communication of AI systems, which can be connected to each other directly. Thanks to this direct connection, they can also collaborate on the basis of integrated algorithms.

Updatability and scalability: AI systems face almost no constraints when it comes to keeping them up to date, scaling them up, or reconfiguring them, so that they have the right algorithms and the data processing and storage capacity necessary for the tasks they have to carry out. This capacity for rapid, structural expansion and immediate improvement hardly applies to people.

Efficiency: Biology does a lot with a little. Organic brains are millions of times more energy-efficient than computers: the human brain consumes less power than a lightbulb, whereas a supercomputer with comparable computational performance uses enough electricity to power a small village.
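The claim can be sketched with commonly cited ballpark figures. The numbers below are assumptions for illustration (a brain at roughly 20 W, a large supercomputer on the order of 20 MW, a household averaging about 1 kW), not measurements:

```python
# Back-of-the-envelope energy comparison with assumed ballpark figures.
BRAIN_POWER_W = 20             # human brain: roughly 20 W, less than many lightbulbs
SUPERCOMPUTER_POWER_W = 20e6   # large supercomputer: on the order of 20 MW (assumed)
HOUSEHOLD_POWER_W = 1_000      # assumed average continuous draw per household

ratio = SUPERCOMPUTER_POWER_W / BRAIN_POWER_W
households = SUPERCOMPUTER_POWER_W / HOUSEHOLD_POWER_W
print(f"~{ratio:,.0f}x more power; enough for ~{households:,.0f} households")
```

Under these assumptions the supercomputer draws about a million times more power than the brain, enough for tens of thousands of households: a small village indeed.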

The ones above are just a few examples of the many profound differences between human and artificial intelligence. To them we may add cause-and-effect reasoning and critical thinking (human strengths) versus correlation, statistics, and the ability to analyze huge amounts of data (artificial intelligence's main strengths)... and the list could go on endlessly, sincerely!

In addition, our (human) response speed to simple stimuli is many thousands of times slower than that of artificial systems, while computers can very easily be connected to each other directly and as such become part of one integrated system. This means that AI systems need not be seen as individual entities that merely work alongside each other or have mutual misunderstandings. If two AI systems are engaged in a task, they run minimal risk of making a mistake through miscommunication (think of autonomous vehicles approaching a crossroads): after all, they are intrinsically connected parts of the same system and the same algorithm. Well, the same may not always be true for humans... correct?

I look forward to your perspectives and feedback on this, whether you are a human, an AI, or any other form of intelligence...

Alex Armasu

Founder & CEO, Group 8 Security Solutions Inc. DBA Machine Learning Intelligence

9mo

Your post is much appreciated!

Olivier Gomez

Top Voice | Automation & AI Expert & Advisor | CEO & Co-Founder | Speaker | Author | Influencer | Delivered over $100M P&L Impact to clients

11mo

I love this - fully OG approved !

Michael Vermeersch

Accessibility Go To Market Manager @ Microsoft | Driving Disability Inclusion

2y

Actually, the cartoon is progress... I remember that when I was 12 years old my grandparents gave me a book of fairy tales. What I really wanted was a book on genetic engineering, and I had expressed that wish through my own reading habits and the experiments I was doing after school. Humans often make assumptions based on their own experience; making them based on the lived experience of others, as the cartoon shows, is "progress"

Mya Lin Maung

Bachelor of Engineering Science graduate

2y

Ab joco nahi artificiallly lalu hosatar hae mind my Languages

Giovanni Mocchi

Vice President at Zucchetti Group, TEDxSpeaker

2y

Thank you Fabio for your article. In reading it I understood that there is not one single form of intelligence; there are different intelligences, with different strengths and weaknesses. We need to recognize this so that we can take the opportunity to combine them and create the best possible outcomes for the evolution of our species.
