Part Two: Artificial Intelligence: What’s Wrong, What’s Missing, What’s Next?
Bruce Amberden
Co-Founder, CTO, Architect, Engineer, Mentor: Vision + Code + Data + Net + Ability + Teamwork = Success! Let's Work Smarter!
This is the second part of a three-part blog series on Artificial Intelligence. Part One is What's Wrong?, Part Two is What's Missing?, and Part Three is What's Next?. This is Part Two:
Artificial Intelligence: What’s Missing?
The demand to productize AI is exceeding the reach of current AI capabilities.
In the gold rush to capitalize on AI for Big Data Science and Big Automation, the original objective of human-level intelligent AI has been set aside. The rush to market has neglected essential core concepts and technologies required to build intelligent AI. Context and semantics are required for AI to deliver correct and meaningful results, results meaningful both to the AIs and to us. AI architectures need to better emulate biological brain structures as the starting point for better, faster learning methods. Brains operate autonomously, causally associating and merging events in analog real time, not in the dissociated digital time of current AI. We can learn amazing things from biological brains.
Cross-discipline work is required to solve core AI problems. Here’s some of what’s missing.
What’s Missing?
The most essential missing piece is semantic understanding.
Current AI lacks semantics and meaning. AIs are excellent classifiers, but they have no way to associate meaning with their classifications. They have no context. People have deep connections between observation and meaning: we have 86 billion neurons with more than 150 trillion synapses, implementing the extensive connectivity that provides deep context. Current AIs are trivial by comparison; they lack the connection depth for context, and they lack semantics and meaning. They simply cannot know whether their classification results are correct or not, meaningful or not; they can't even provide believable reliability estimates.
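To make the scale gap concrete, here is a quick back-of-the-envelope sketch in Python. The brain numbers are the ones quoted above; the artificial network size is an assumed round figure for illustration, not a measurement of any particular system:

```python
# Back-of-the-envelope: biological vs. artificial connection counts.
# Brain figures are from the text above; the artificial network size
# is a hypothetical large model, assumed for illustration only.
neurons = 86e9      # ~86 billion neurons in a human brain
synapses = 150e12   # >150 trillion synapses

print(f"Average synapses per neuron: {synapses / neurons:,.0f}")  # ~1,744

big_model_weights = 1e9  # assumed weight count for a very large artificial net
print(f"Brain has ~{synapses / big_model_weights:,.0f}x more connections")
```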
With the Chinese Room thought experiment, John Searle argues that semantics is beyond computer science, beyond Turing Machine computation, and hence beyond AIs running on computers. Computers are simple symbol-shuffling syntax machines. They do not know what they are doing; they're only tools. It's people who bring meaning to the computer, and people who take computer output as meaningful. No one knows how the brain understands semantics, only that it does. We don't have a solid definition of semantics, nor do we know how brains acquire meaning. We need a better understanding of the relationship between computation, semantics, context, and meaning. True Artificial Intelligence requires deciphering these issues. Is semantics beyond computation, as Dr. Searle suggests? Is semantics more fundamental than computation, as I think? Serious studies of semantics, context, and meaning are important.
The key problem is that AI has no context for, or understanding of, its source training data or the classifications it delivers. Semantic relations are NOT in the source data. Core learning is needed to build context; situational training is needed to build the operational neural network. Meaning lives in the relationships between context, input data, and results; meaning is a complex semantic network. Context enables trained neural networks to know when they are right, when they are not, and when to keep looking. Without context and semantics, AIs can train on source data to 100% accuracy and still perform poorly in real situations, because real situations are imperfectly represented in the source training data. Real-world data is nearly infinitely variable; the source data is not, limited as it is to a sub-sample of the real world. Without context to support generalization from old training to new situations, AI performance and reliability suffer. Simple context permits basic self-correction and generalization to new situations.
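Here is a minimal sketch of that failure mode, assuming a toy two-class dataset and a one-nearest-neighbor classifier, which memorizes its training data and so scores 100% on it. The data, the distribution shift, and the model are all illustrative assumptions, not any production system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: two Gaussian classes, a finite sub-sample of the "world".
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

def predict_1nn(X):
    """1-nearest-neighbor: memorizes training data (100% train accuracy)."""
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]

print("train accuracy:", (predict_1nn(X_train) == y_train).mean())  # 1.0

# "Real world": the same two classes, but shifted and noisier than the
# sub-sample the model trained on.
X_real = np.vstack([rng.normal(1.5, 2, (50, 2)), rng.normal(4.5, 2, (50, 2))])
y_real = np.array([0] * 50 + [1] * 50)
print("real-world accuracy:", (predict_1nn(X_real) == y_real).mean())  # lower
```

Perfect training accuracy, and no way for the model to notice that the real inputs no longer look like its training sub-sample.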
Computational neural networks need to better emulate biological neural nets.
Another serious, more technical, issue is that computational neurons, networks, and back-propagation only faintly resemble their prototypes in the brain. Biological neurons have tens of thousands of connections and live in a bath of hormones that drives their behavior and performance from millisecond to millisecond. Biological neurons are always firing at a low level, and they fire faster and louder when they have input. They fire differently with different inputs and with different hormones. Different kinds of synapses probably filter neural signals differently, modulating signal transfer in important ways: blocking some signals, passing others. What do inhibitory neurons do? I think that inhibitory neurons sculpt virtual neural networks that run on the base physical biological neural nets, and that these virtual neural nets fluctuate from one configuration to another in milliseconds according to control from up-net neural modules and input from down-net neurons. I think biological neural modules running ephemeral virtual networks make neural signaling in the brain far more complex than is generally understood. Computational neurons need to emulate more of these features of biological neurons.
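As a rough sketch of just two of these missing features, here is a toy leaky integrate-and-fire neuron with a tonic background drive, so it always fires at a low rate even with no input, and a hormone-like gain term that scales its responsiveness. All constants here are illustrative assumptions, not measured biology:

```python
import numpy as np

def lif_spikes(inputs, modulator=1.0, tau=20.0, v_thresh=1.0, baseline=1.05):
    """Leaky integrate-and-fire neuron, one step per millisecond.

    `baseline` is a tonic drive, so the neuron fires at a low rate even
    with zero input; `modulator` is a hormone-like gain on the input.
    All constants are illustrative assumptions, not measured biology.
    """
    v, count = 0.0, 0
    for x in inputs:
        v += (baseline + modulator * x - v) / tau   # leaky integration
        if v >= v_thresh:                           # spike and reset
            count += 1
            v = 0.0
    return count

quiet = np.zeros(1000)        # one second of silence
driven = np.full(1000, 0.5)   # one second of steady input
print(lif_spikes(quiet))                   # low tonic rate (~16 spikes)
print(lif_spikes(driven))                  # fires faster with input
print(lif_spikes(driven, modulator=2.0))   # same input, more "hormone"
```

Even this crude model changes its firing rate with the hormone-like gain, millisecond to millisecond; standard computational neurons have nothing like it.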
Brains do not train on oceans of data; brains can learn from a single happenstance. Brains have 600 million years of evolution behind their structure and basic behavior. Every neural sub-system has been battle-hardened for survival by those 600 million years of brutal competition. Rapid learning is essential to survival: creatures need to survive from the moment they are born. Brains are born pre-trained by evolution; they are born with basic operating behaviors and learn what's what very quickly, or else. Brains have evolved to be good enough. When a new survival problem comes along, brains have to get better to survive. I think that hormones evolved to enhance learning, speeding it up by giving a dopamine kick to light up success and pain to punish failure. Computational neural nets need to better emulate biological neural architectures for fast learning.
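One way to sketch that dopamine-kick idea in code is a three-factor learning rule from the computational-neuroscience literature: a local Hebbian term (pre-activity times post-activity) gated by a global reward signal. The toy task below, with its stimuli, learning rate, and noise level, is an illustrative assumption, not a model of any real circuit:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([1.0, 0.0])   # rewarded stimulus
B = np.array([0.0, 1.0])   # punished stimulus
w = np.zeros(2)
lr = 0.2

for step in range(200):
    pre = A if rng.random() < 0.5 else B
    drive = w @ pre + rng.normal(0, 0.3)   # noisy drive -> exploration
    post = 1.0 if drive > 0.0 else 0.0     # unit fires or stays silent
    reward = 0.0
    if post == 1.0:
        reward = 1.0 if pre is A else -1.0  # dopamine kick vs. pain
    # Three-factor rule: local Hebbian term (pre*post) gated by reward.
    w += lr * reward * pre * post

print(w)  # w[0] grows (fire on A); w[1] is pushed negative (ignore B)
```

After a couple of hundred trials the unit fires for the rewarded stimulus and stays silent for the punished one, with no labeled dataset and no back-propagation.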
Biological neural nets are autonomous and associate events faster than real time.
Brains operate faster than real time; life flows by in real time. Brains respond in milliseconds to real-life events that take seconds; otherwise the brain-body system could not react fast enough to sidestep the charging elephant. Brains learn in analog time, where conscious learning is continuously submerged into fast-reacting pre-conscious brain modules. Brains have memory and context that are always folded into the processing of new input; this is how new input becomes meaningful. Brains are hugely parallel, enabling the real-time relative coordination between multiple signals that is essential to fast learning by association. In fact, brain networks operate fast enough that feedback from up-net neural modules can modify associations, meaning, and down-net neural behavior in milliseconds, before the input stimulus can change. Computational AI runs on serial computers and does not have associative learning; back-propagation is NOT associative learning. Computational neural nets need to be re-architected to operate in real time as autonomous systems that incorporate some of these brain features.
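To make "associative learning" concrete, here is a toy one-shot Hebbian associator: two co-occurring event patterns are bound by a single outer-product update, and a noisy version of one event then recalls the other, with no gradient descent and no repeated passes over a dataset. The pattern sizes and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_event(n=64):
    """A random +/-1 pattern standing in for one sensory event."""
    return rng.choice([-1.0, 1.0], size=n)

sound = random_event()   # e.g., a trumpeting sound...
sight = random_event()   # ...coinciding with the sight of an elephant

# One-shot Hebbian binding: a single outer-product update, no gradients,
# no training epochs. The events are associated because they co-occur.
W = np.outer(sight, sound)

# Recall: a noisy version of the sound still retrieves the sight.
noisy_sound = np.where(rng.random(64) < 0.2, -sound, sound)  # 20% flipped
recalled = np.sign(W @ noisy_sound)
print("recall accuracy:", (recalled == sight).mean())  # 1.0 for one pair
```

One co-occurrence, one update, and a robust association; that is the kind of learning back-propagation does not do.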
Our future AI-driven abundant smart economy is waiting for us.
This is a sampling of what’s missing from current AI. Without progress on these and other important concepts and techniques, AI will continue to be severely limited. Autonomous robo-cars, robots, smart personal assistants, and super-intelligent AI will never be possible without this critical work.
This is the second part of a three-part blog series on Artificial Intelligence. Part One is What's Wrong?, Part Two is What's Missing?, and Part Three is What's Next?
About the Author:
Bruce Amberden is a visionary, inventor, designer, architect, startup founder, CTO, and Engineering VP. He has founded startups and worked with leading technology companies in Silicon Valley to create amazing software products. Bruce has over 20 years of experience as a software engineer and as a leader inspiring tiger teams to do brilliant innovative work. He has a Master of Science in Physics and is an armchair astronomer. Bruce is working on a personal project building breakthrough semantic AI technology. Bruce is seeking new employment and is available for startup and technology consulting.