
Part Three: Artificial Intelligence: What’s Wrong, What’s Missing, What’s Next?

This is the third part of a three-part blog series on Artificial Intelligence. Part One asks What’s Wrong?, Part Two asks What’s Missing?, and Part Three asks What’s Next? This is Part Three:

Artificial Intelligence: What’s Next?

What if AI didn’t need human supervision – that’s what’s next!

Contemporary AIs are amazing classifiers, but they can’t safely drive a car, hold up their end of a conversation, or translate poetry from one language to another. They don’t know that sheep are vegetarian and that cats are not. They can’t do any of these things because semantic relationships are NOT in their training data. Try talking with Alexa or Siri. AI progress is stuck. The many dozens of back-propagation flavors on feed-forward neural nets are as good as this class of AI is going to get. Clearly, a new approach incorporating semantic context is required – along with new architectures inspired by biological brains.

As advanced classification tools, current AI makes advanced new levels of business and operations process automation possible. This dramatically reduces the cost of doing business. AI is rapidly finding more and more applications in the modern economy. This is what’s driving the business gold rush to build and push Big Data and Big Automation AI systems into the market. For safety, though, let’s not neglect the need for appropriate human supervision. Big witless AI running stupidly amok will not be pretty. Big witless AI will need lots of human nannies – lots of new jobs for people. But what if AI didn’t need full-time human supervision – that’s what’s next.

The answers are in connecting-the-dots.

There is an astounding array of ongoing research across the brain sciences, philosophy of mind, computer science, mathematics, physics, and artificial intelligence. So much so that the cross-disciplinary connect-the-dots game is unexpectedly effective in advancing the state of the art in artificial intelligence. Reading papers across these and other science disciplines sparks flashes of insight when results in one area synergistically combine with results in others. Fundamental questions find new, better answers, which can lead to AI breakthroughs. The answers are in connecting-the-dots.

What’s Next?

AI needs a brain.

Inspired by biological brains, simplified computational AI brains will be pre-designed, pre-wired, and pre-optimized with semantic context architectures and fast semantic learning methods. AI brains will have simple semantic context and behavioral neural modules, enabling them to associate basic meaning with input and deliver meaningful classification results. Semantic context modules implement dynamic semantic networks that always associate input with the best-matching semantic content. Learning in AI brains performs double duty: learning new semantic content and training real-time behavior. But even though AI brains will start out laughably trivial compared to biological brains, they will still be startlingly useful.
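
To make “associate input with the best-matching semantic content” concrete, here is a minimal Python sketch of a dynamic semantic network. It is only an illustration of the idea, not the author’s design: the SemanticNode structure, the cosine-similarity matching, and the toy feature vectors are all assumptions.

    # Minimal sketch of a dynamic semantic network (illustrative assumptions only).
    # Each node carries a label, a feature vector, and weighted links to related concepts.
    import math

    class SemanticNode:
        def __init__(self, label, vector):
            self.label = label        # e.g. "sheep"
            self.vector = vector      # toy feature vector standing in for learned context
            self.links = {}           # related concept -> association strength

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    class SemanticNetwork:
        def __init__(self):
            self.nodes = []

        def add(self, node):
            self.nodes.append(node)

        def learn(self, label, related, delta=0.1):
            # Double duty: adding a link is both new semantic content and new behavior.
            for node in self.nodes:
                if node.label == label:
                    node.links[related] = node.links.get(related, 0.0) + delta

        def best_match(self, input_vector):
            # Associate raw input with the closest semantic node and its context.
            node = max(self.nodes, key=lambda n: cosine(n.vector, input_vector))
            return node.label, node.links

    # Usage: a classifier output is mapped to meaning, not just to a label.
    net = SemanticNetwork()
    net.add(SemanticNode("sheep", [0.9, 0.1]))
    net.add(SemanticNode("cat", [0.1, 0.9]))
    net.learn("sheep", "vegetarian")
    print(net.best_match([0.8, 0.2]))   # -> ('sheep', {'vegetarian': 0.1})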

Neural machinery in human brains is beyond the current state of the art.

So let’s start small. The world is full of wonderfully evolved compact brains. Brains are everywhere; every bug has one. A bee’s brain has fewer than a million neurons and is the size of a pinhead. Bees have semantics, meaning, knowledge, understanding, a very simple butt-wiggling language and culture, awareness, consciousness, and emotions. Every living thing smarter than a slime mold knows when bees are angry, knows what happens when they are, and is anxious to get out of their way. Bees can read symbols (as indexes) and make decisions based on those symbols. Bees can recognize individual human faces. A bee’s brain has a full 3D flight model plus flight control and navigation systems - all this in fewer than a million neurons. Surely we can figure out how such a tiny brain works. Yet C. elegans, a minuscule roundworm (1 mm long), has only 302 neurons with about 7,000 connections, and we do not yet fully understand its neural system. And it doesn’t even have a brain. Decoding bee brains is a difficult but worthy project. A bee-brain equivalent on a chip would give IoT devices real intelligence, not to mention fly autonomous drones intelligently. Bee brains have many features that could improve the AI state of the art many times over, making simplified computational AI brains possible.

AI brains will always be intentionally smaller and simpler than biological brains.

Computational AI brains will be intentionally vastly simpler and smaller than biological brains. Biological brains are hugely complex, having evolved through random bottom-up trial and error. We will build simple AI brains from scratch via top-down research and development, using what we learn from biological brains. We will take astute shortcuts and make inspired simplifying design choices. AI brains will be smaller, simpler, faster, more reliable, and ridiculously useful and valuable – and it won’t take us 600 million years.

Computational semantic AI brains will have a biologically inspired three-phase assembly process. First is the architectural building, setup, and configuration phase that predefines, prewires, prebuilds, and preloads AI brains with basic context and rapid learning behaviors. A new AI brain evolves, then is born – and it can be endlessly copied. Next is semantic learning, which loads the new AI brain with application-specific semantic content and fine-tunes its architecture and behavior. Then the AI brain is deployed into autonomous operation in real-world situations, where it learns in baby steps and then rapidly expands its context and capabilities to the limit of its primary design. Most importantly, once an AI brain is deployed into operation, all three assembly phases continue to run simultaneously in real time.
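
As a rough illustration of this three-phase lifecycle, here is a hedged Python sketch. The AIBrain class, its phase methods, and the idea of folding all three phases into every step are illustrative assumptions, not a specification of how real AI brains will be built.

    # Illustrative sketch of the three-phase AI brain assembly lifecycle (assumed API).

    class AIBrain:
        def __init__(self, architecture, base_context):
            # Phase 1 (build): prewired architecture, preloaded basic context and behaviors.
            self.architecture = architecture
            self.context = dict(base_context)
            self.deployed = False

        def semantic_learning(self, examples):
            # Phase 2 (learn): load application-specific semantic content, fine-tune behavior.
            for concept, meaning in examples:
                self.context[concept] = meaning

        def deploy(self):
            # Phase 3 (operate): autonomous operation in real-world situations.
            self.deployed = True

        def step(self, observation):
            # Once deployed, all three phases keep running on every input:
            # structural tuning, new semantic content, and real-time behavior.
            concept, meaning = observation
            self.semantic_learning([(concept, meaning)])    # phase 2 continues
            return self.context.get(concept, "unknown")     # real-time behavior

    # Usage: a brain is designed once, copied cheaply, taught, then keeps learning.
    template = AIBrain(architecture="tiny-semantic-net", base_context={"bee": "insect"})
    brain = AIBrain(template.architecture, template.context)   # endless copies
    brain.semantic_learning([("cat", "carnivorous pet")])
    brain.deploy()
    print(brain.step(("sheep", "vegetarian grazer")))   # -> 'vegetarian grazer'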

Can Turing Machine computation handle semantics?

But first, there are subtle details and important questions. Can Turing Machine computation handle semantics? My research tells me the answer is yes. If the answer is no, then AI Brains and human-level AI cannot be built with computers – entirely new hardware will have to be developed. This would set back the development of AI Brains by decades. To definitively answer this question we need a solid definition of semantics. For that we need to answer other, more fundamental questions, like: How is it possible, in a physical material universe, to know anything? Well, we know things, so there must be an answer, a real physical answer. There must be some physical spark that makes knowing possible. And that spark somehow informs the definition of semantics and meaning. So what, exactly, is a Turing Machine? What, exactly, is Turing computation? Connect-the-dots!
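
For readers wondering what, exactly, a Turing Machine computes, here is a minimal textbook-style simulator in Python. It is standard machinery, unrelated to any particular semantic design; the bit-flipping example machine is just a toy.

    # Minimal Turing Machine simulator, as a reminder of what Turing computation means.
    # The example machine flips the bits of a binary string and then halts.

    def run(tape, rules, state="start", head=0, blank="_", max_steps=1000):
        tape = list(tape)
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape[head] if head < len(tape) else blank
            state, write, move = rules[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += 1 if move == "R" else -1
        return "".join(tape)

    # Transition rules: (state, symbol read) -> (next state, symbol to write, head move)
    flip = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }
    print(run("0110_", flip))   # -> '1001_'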

Vastly simplified computational AI brains inspired by biological brains. 

AI Brain is an over-inflated name for an exceptionally simple neural network design – at least when compared to the outrageous complexity and capabilities of biological brains. The name AI Brain is aspirational, because AI brains will be vastly inferior to biological brains for years to come. But Gordon Moore’s exponential law will drive and accelerate them into our economic infrastructure very quickly, in ways that biological brains never could be. The first AI brains may be vastly simpler than, and inferior to, any biological brain, but they will drive business into a new AI frenzy. AI brains will arrive with features optimized for business operations; they will safely drive cars for the first time, converse simply yet sensibly, know the name of your cat, and remind you of your significant other’s birthday. New AI brain capabilities will fuel a new AI gold rush, driving newer, smarter AI products into the market. Semantically smarter AI brain products will create a multi-trillion-dollar industry and trillion-dollar companies.

Summary

Current AI technology is blocked precisely where this new semantic neural net, AI brain approach will lead the way forward. Semantic neural nets in AI brains with basic associative learning can train in real time from limited observation – potentially reducing training data volume, computational load, and training time by orders of magnitude. Neurons in semantic AI brains need to implement more of the features and capabilities of biological neurons. Training semantic neural nets will grow AI brain networks that reflect the architecture of semantic learning. Products with smart semantic AI brains will change our world.
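
The claim about basic associative learning from limited observation can be illustrated with a tiny Hebbian-style update rule. This is a generic textbook rule, not the author’s method; the feature names and learning rate are assumptions.

    # Tiny Hebbian-style associative learning sketch (generic rule, illustrative only).
    # A link between co-active features is strengthened on each observation, so a
    # handful of examples is enough to form an association; no large dataset needed.

    weights = {}   # (feature_a, feature_b) -> association strength

    def observe(features, rate=0.5):
        # Strengthen every pairwise association among co-occurring features.
        for a in features:
            for b in features:
                if a != b:
                    weights[(a, b)] = weights.get((a, b), 0.0) + rate

    def recall(feature):
        # Return associated features, strongest first.
        related = {b: w for (a, b), w in weights.items() if a == feature}
        return sorted(related, key=related.get, reverse=True)

    # Three observations are enough to link "sheep" with "grass" and "vegetarian".
    observe(["sheep", "grass", "vegetarian"])
    observe(["sheep", "grass"])
    observe(["cat", "mouse", "carnivore"])
    print(recall("sheep"))   # -> ['grass', 'vegetarian']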

AI brain-driven robots are not coming for anyone’s job any time soon. By the time AI brains become as smart as us, they will have become one of us. They will have virtual minds just like us, and our culture will be their frame of reference. So we better make our culture better!

This is the third part of a three-part blog series on Artificial Intelligence. Part One asks What’s Wrong?, Part Two asks What’s Missing?, and Part Three asks What’s Next?

About the Author:

Bruce Amberden is a visionary, inventor, designer, architect, startup founder, CTO, and engineering VP. He has founded startups and worked with leading technology companies in Silicon Valley to create amazing software products. Bruce has over 20 years of experience as a software engineer and as a leader inspiring tiger teams to do brilliant, innovative work. He has a Master of Science in Physics and is an armchair astronomer. Bruce is working on a personal project building breakthrough semantic AI technology. Bruce is seeking new employment and is available for startup and technology consulting.

