The Unbearable Consciousness of Machines

The original 1956 definition of #AI framed it as the study of #intelligence by implementing its essential features on a computer, thereby equating our basic biology with hardware and software. Fast forward more than half a century, and we routinely pit cognitive science against computer science by asserting that all behavior can be reduced to, and modeled, quantitatively. Yet beyond mimicking basic biology, even with deep learning, AGI (Artificial General Intelligence) remains elusive; our attempts have not moved past narrow/weak AI accomplishing bounded reasoning or problem-solving tasks.

It is this see-saw between cognitive and computational research approaches that makes the notion of machine consciousness, as Hod Lipson defines it in this Quanta article, sound like overreach. He says he wants to meet “something that is intelligent and not human.” My Vizsla knew exactly which window of opportunity to exploit when a thawing steak disappeared off the counter during the couple of minutes I was gone, and then offered a “who, me?” look. Yet he frequently could not tell that the dog barking on TV was not really present. Perhaps AI can mimic that kind of intelligence well, but is that any better than the controversial Skinnerian behaviorist paradigm (an action-reaction model) that Noam Chomsky rightly characterizes as “statistics heavy” data analysis offered as explanatory insight into behavior, language, and consciousness?

Before attributing intelligence to machines, consider that what we are attempting is reverse engineering of highly complex biological systems whose structure and logic we do not really understand. Is that a massive data problem or basic science? Perhaps AI research should take a revised approach (e.g., McClamrock’s re-evaluation) to David Marr’s Tri-Level Hypothesis for biological systems in general: computational, representational (or algorithmic), and physical. I contend that AI today is still at the computational level. It can produce Bach-like music after learning Bach, but without knowing what it means or how it moves us.

Lipson claims that Convolutional Neural Networks (CNNs) brought “new wind into this research.” But isn’t Fukushima’s Neocognitron, a self-organizing/learning network, just performing an approximation of visual matching without truly understanding vision in the brain? True vision is far more than a “matter of degree” removed from machine pattern recognition and matching. Just because a robotic brain has basic proprioception does not make it aware of its vision as sensing. How would that robot’s brain adapt to “loss of sight” if all its sensors are fried? Would it feel joy once its “sight” is restored (sensors replaced)?
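To make concrete what “pattern matching” means here, below is a minimal sketch of a single convolutional filter (my own illustrative example, not Lipson’s or Fukushima’s actual architecture). The filter is just a sliding template: it outputs a high score wherever the input patch resembles the kernel, and that correlation score is all the network “sees.”

```python
import numpy as np

# A convolutional filter is a sliding template match: high output where
# the input patch resembles the kernel, low (or zero) elsewhere.

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Correlate the kernel with the image patch at (i, j).
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge "template": responds where brightness jumps left-to-right.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

# Toy image: dark left half, bright right half -> one vertical edge.
image = np.zeros((4, 4))
image[:, 2:] = 1.0

response = convolve2d(image, kernel)
print(response)  # peaks (2.0) in the column straddling the edge, 0 elsewhere
```

The filter “detects” the edge only in the sense that a correlation peaked there; nothing in the mechanism knows what an edge is, which is the gap the paragraph above is pointing at.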

Just because Lipson asserts that “philosophers have not made a lot of progress in a thousand years,” it does not follow that a reductionist implementation of self-awareness amounts to true consciousness in the broader cognitive sense. Merging physical self-simulation (a robot surviving the loss of a limb) with self-simulation of its brain (knowing why it needs to adapt to a missing limb) will not produce a sentient being. Sentience would at least require an AI that knows how and why it produced an original painting or composition, for example, and what it means to the observer who is moved by it or to another who hates it. It will, however, make a good guard robot dog that knows better than to steal my steak.

I saw an excellent performance of Tom Stoppard’s The Hard Problem in New York. The play takes its title from the “hard problem of consciousness” framed by philosopher David Chalmers, and it stages the kind of debate philosophers such as John Searle and Chalmers have had over whether consciousness can be explained mechanistically. I’ll quote this exchange:

Leo: But can a computer do what a brain can do?

Amal: Are you kidding? — A brain doesn’t come close!

Leo: (to Hilary) Do you want to jump in?

Hilary: Not much.

Leo: Really? Why?

Hilary: It’s not deep. If that’s thinking. An adding machine on speed. A two-way switch with a memory. Why wouldn’t it play chess? But when it’s me to move, is the computer thoughtful or is it sitting there like a toaster? It’s sitting there like a toaster.

Leo: So, what would be your idea of deep?

Hilary: A computer that minds losing

Leo: (takes a moment to reconsider her)

Amal: If I made a computer simulating a human brain neuron by neuron, it would mind losing.

Leo: (to Hilary) Do you agree?


The average human brain has almost 100 billion neurons and on the order of 400-500 trillion synaptic junctions. Let us build a machine like that and find out if it minds losing a game of chess.
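For a sense of scale, here is a back-of-envelope storage estimate for that thought experiment. The per-synapse cost is an assumption I am introducing for illustration (one 32-bit weight per synapse, ignoring neuron state, wiring topology, and the rewiring dynamics the comment below rightly raises):

```python
# Rough storage estimate for a neuron-by-neuron brain simulation.
# Illustrative assumptions: 500 trillion synapses, one float32 weight each.

SYNAPSES = 500e12        # ~500 trillion synaptic junctions
BYTES_PER_WEIGHT = 4     # one 32-bit weight per synapse (an assumption)

weight_bytes = SYNAPSES * BYTES_PER_WEIGHT
petabytes = weight_bytes / 1e15
print(f"Synaptic weights alone: {petabytes:.0f} PB")  # -> 2 PB
```

Two petabytes just to hold static weights, before simulating a single timestep, which is one way to quantify how far “build a machine like that” is from a weekend project.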

Carl Benda

Senior Vice President, Senior Architect Leader at Bank of America

5y

That, and even if you did build a computer simulating a brain neuron by neuron, you've only achieved the infrastructure of a current-state snapshot. Remember that the wiring itself adapts and re-wires based on all of the sensors, experiences, dreams and so on. Identical twins do not think identical thoughts, or compute the same results when asked identical questions.

Mike Duke MBA

Founder of the ISOVERSE

5y

#WhyaiHallucinates

Rameshchandra Ketharaju

DigiTech Inventor - Researcher - Sādhaka

5y

Interesting that man wants to create something similar to himself. Some pointers from the Sanatana Gyaan: in one narration, Lord Brahma created the Kumaras to populate the Vishwam with life forms, but as the creation of the Kumaras carried no bias, they were in unison with the Brahma Gyaan (Vidya) and refused to obey Lord Brahma's idea of creation. Lord Brahma wanted his idea implemented, so he created a bias, that is Avidya alongside Vidya, and then created the Sapta Rishis. So AI, as we speak, has some parallels in the past too. Coming to consciousness, which has been defined variously in terms of qualia, subjectivity, the ability to experience or to feel, wakefulness, having a sense of selfhood or soul, the fact that there is something 'that it is like' to 'have' or 'be' it, and the executive control system of the mind: interestingly, we have another narration in the Sanatana Gyaan which suggests that the mind does not hold consciousness. The narration of Rishi Dadhichi, Indra, and the Ashvins is an apt example. There is also a narration of Dva Suparna in the Sanatana Gyaan reflecting the unbiased observer of actions, a very important concept to research, which answers many questions.

Rashmi Datt

Helping Leaders Overcome Ego Loops & Build Emotionally Intelligent Teams | Leadership Coach | Psychodrama Trainer | Facilitator

5y

Interesting write-up. I wonder if AI will be able to incorporate emotions in its programming one day! Emotions are what make us human, and also the cause of suffering, as emotions come from 'Maya'; except for shantam, the ninth of the navarasas (the Natyashastra lists the nine emotions all humans experience: fear, anger, sadness, disgust, love, courage, curiosity, joy, and calmness). But we have to navigate through all the other eight, including the unpleasant ones, before reaching shantam (or the absence of emotion). And it is in the emotions that the rasas exist...
