The Unbearable Consciousness of Machines
Sandeep Mehta
Award-winning Global CIO & CTO | Innovator | Team Builder | Engineering | Infrastructure | Artificial Intelligence | Technology Risk Management | Financial Services | Board member
The original definition of #AI from 1956 was the study of #intelligence by implementing its essential features on a computer, thereby equating our basic biology with hardware and software. Fast forward more than half a century, and we frequently pit cognitive science against computer science by asserting that all behavior can be reduced to and modeled quantitatively. Yet beyond mimicking basic biology – even with deep learning – AGI (Artificial General Intelligence) remains elusive; our attempts have not carried us past narrow/weak AI that accomplishes specific reasoning or problem-solving tasks.
It is this see-saw between cognitive and computational research approaches that makes the notion of machine consciousness, as Hod Lipson defines it in this Quanta article, sound like overreach. He says that he wants to meet “something that is intelligent and not human.” My Vizsla knew exactly which window of opportunity to exploit: a thawing steak disappeared off the counter in the couple of minutes I was gone, followed by a “who, me?” look. Yet he frequently could not tell that the dog barking on TV was not really present. Perhaps AI can mimic that kind of intelligence well, but is that any better than the controversial Skinner behaviorist paradigm (an action-reaction model) that Noam Chomsky rightly characterizes as “statistics-heavy” data analysis offered as explanatory insight into behavior, language, and consciousness?
Before attributing intelligence to machines, consider that what we are attempting is reverse engineering of highly complex biological systems whose structure and logic we do not really understand. Is that a massive data problem or a basic science problem? Perhaps AI research should take a revised (e.g., McClamrock’s re-evaluation) approach to David Marr’s Tri-Level Hypothesis for biological systems in general: computational, representational (or algorithmic), and physical. I contend that AI today is still at the computational level. It can produce Bach-like music after learning Bach, but without knowing what moves us about it, or how.
Lipson claims that Convolutional Neural Networks (CNNs) brought “new wind into this research.” But isn’t the example of Fukushima’s Neocognitron, a self-organizing/learning network, just performing an approximation of visual matching without truly understanding vision in the brain? True vision is far more than a “matter of degree” removed from machine pattern recognition and matching. Just because a robotic brain has basic proprioception does not mean it becomes aware of its vision as sensing. How would that robot’s brain adapt to “loss of sight” if all its sensors are fried? Will it feel joy once its “sight” is restored (sensors replaced)?
Just because Lipson asserts that “philosophers have not made a lot of progress in a thousand years” does not mean a reductionist implementation of self-awareness amounts to true consciousness in the broader cognitive sense. Merging physical self-simulation (a robot survives the loss of a limb) with self-simulation of its brain (knowing why it needs to adapt to a missing limb) will not produce a sentient being – at least not an AI that knows how and why it produced an original painting or composition, for example, and what it means to the observer who is moved by it or to another who hates it. It will, however, make a good robotic guard dog that knows better than to steal my steak.
I saw an excellent performance of Tom Stoppard’s The Hard Problem in New York. The play is inspired by a dialog on consciousness between a philosopher (John Searle) and a cognitive scientist (David Chalmers). I’ll quote this section:
Leo: But can a computer do what a brain can do?
Amal: Are you kidding? — A brain doesn’t come close!
Leo: (to Hilary) Do you want to jump in?
Hilary: Not much.
Leo: Really? Why?
Hilary: It’s not deep. If that’s thinking. An adding machine on speed. A two-way switch with a memory. Why wouldn’t it play chess? But when it’s me to move, is the computer thoughtful or is it sitting there like a toaster? It’s sitting there like a toaster.
Leo: So, what would be your idea of deep?
Hilary: A computer that minds losing.
Leo: (takes a moment to reconsider her)
Amal: If I made a computer simulating a human brain neuron by neuron, it would mind losing.
Leo: (to Hilary) Do you agree?
The average brain has almost 100 billion neurons and some 400-500 trillion synaptic junctions. Let us build a machine like that and find out whether it minds losing a game of chess.
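For a rough sense of what “a machine like that” implies, here is a back-of-envelope sketch; the 4 bytes per synaptic weight is purely an illustrative assumption, and I take the midpoint of the synapse range above:

4.5 × 10^14 synapses × 4 bytes/synapse ≈ 1.8 × 10^15 bytes ≈ 1.8 petabytes

And that is only a static snapshot of the weights, before modeling any of the dynamics, re-wiring, or learning that the brain performs continuously.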
Senior Vice President, Senior Architect Leader at Bank of America
That, and even if you did build a computer simulating a brain neuron by neuron, you would only have achieved the infrastructure of a current-state snapshot. Remember that the wiring itself adapts and re-wires based on all of the sensors, experiences, dreams, and so on. Identical twins do not think identical thoughts, or compute the same results when asked identical questions.
Founder of the ISOVERSE
#WhyaiHallucinates
DigiTech Inventor - Researcher - Sādhaka
Interesting that man wants to create something similar to himself. Now some pointers from the Sanatana Gyaan: in one such narration, Lord Brahma created the Kumaras to populate the Vishwam with life forms, but as Lord Brahma’s creation of the Kumaras did not carry any bias, the Kumaras were in unison with the Brahma Gyaan (Vidya) and refused to obey Lord Brahma’s idea of creation. Lord Brahma wanted his idea implemented, so he created a bias, that is Avidya alongside Vidya, and then created the Sapta Rishis. So AI, as we speak, has some parallels in the past too. Now coming to Consciousness, which has been defined variously in terms of qualia, subjectivity, the ability to experience or to feel, wakefulness, having a sense of selfhood or soul, the fact that there is something ‘that it is like’ to ‘have’ or ‘be’ it, and the executive control system of the mind. Interestingly, we have another narration in the Sanatana Gyaan which holds that the mind does not contain Consciousness: the narration of Rishi Dadhichi, Indra, and the Ashvins is an apt example. And there is a narration of Dva Suparna in the Sanatana Gyaan referring to the unbiased observer of actions, a very important concept to research, which answers many questions.
Helping Leaders Overcome Ego Loops & Build Emotionally Intelligent Teams | Leadership Coach | Psychodrama Trainer | Facilitator
Interesting write-up. I wonder if AI will be able to incorporate emotions in its programming one day! Emotions are what make us human, and also the cause of suffering, as emotions come from ‘Maya’; except for shantam, the ninth of the navarasas. (The Natyashastra lists the nine emotions which all humans experience: fear, anger, sadness, disgust, love, courage, curiosity, joy, and calmness.) But we have to navigate through all the other eight, including the unpleasant ones, before reaching shantam (or the absence of emotion). Yet it is in the emotions that the rasas exist...