AI SERIES: Looking for a “Cognitive Operating System”
Michele Vaccaro
Senior Director @OpenText - Leading people and Inspiring digital transformation curiosity and culture
Orchestrating different techniques for a brand-new generation of AI
“I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.” (Blade Runner, 1982)
It is impossible not to get goosebumps every time I hear the soliloquy delivered by the replicant Roy Batty (portrayed by Rutger Hauer) in Ridley Scott’s masterpiece.
But while Ridley Scott reached perfection with his movie, our current Artificial Intelligence technologies are still very far from being able to produce replicants of any kind.
The film industry, together with the hype surrounding the latest achievements in Deep Learning, has stirred strong emotional responses in people, resulting in a widely distorted perception of, and expectations about, what AI currently is and what it can do for us.
AI is a field of study that seeks to understand, develop and implement intelligent behavior in hardware and software systems, mimicking and extending human-like abilities.
To deliver on this promise, AI draws on various techniques from the field of Machine Learning (ML), a subset of the discipline focused on developing software systems that can learn new skills from experience, by trial and error, or by applying known rules. Deep Learning (DL) is, by a wide margin, the Machine Learning technique that has so far delivered the most exciting results and practical use cases in domains such as speech recognition, image recognition and language translation, and it plays a role in a wide range of current AI applications.
In speech recognition, for example, Deep Learning has led to personal voice assistants like Apple’s Siri and Amazon’s Alexa. In object recognition, it can detect and recognize objects in images, powering a wide variety of applications in fields ranging from video surveillance and security to healthcare and agriculture. By combining these elementary abilities it is possible to achieve even more challenging objectives, as in the case of self-driving cars.
Deep Learning is so popular that its name is often used interchangeably with Machine Learning, and sometimes even with Artificial Intelligence, but essentially it is a statistical technique for identifying and classifying patterns after training sessions on large sample data sets.
And it is exactly here that the limitations of Deep Learning arise: to learn how to correctly classify patterns, DL systems generally require an extraordinary amount of previously cleaned and labelled data, consumed over long training sessions.
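To make that concrete, here is a minimal, purely illustrative sketch of the kind of supervised training loop that sits behind such systems, written in Python with PyTorch; the data set, network shape and hyperparameters are all invented for the example, not taken from any real application.

```python
import torch
from torch import nn

# Toy stand-ins for a real data set: 1,000 "examples" with 64 features each,
# every one of them already cleaned and labelled with one of 10 classes.
inputs = torch.randn(1000, 64)
labels = torch.randint(0, 10, (1000,))

# A small feed-forward network; real systems are far deeper and larger.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# The "training sessions": repeatedly compare predictions with the labels
# and nudge the weights to reduce the error. Everything the model will ever
# "know" has to be present, in labelled form, in this loop.
for epoch in range(20):
    optimizer.zero_grad()
    predictions = model(inputs)
    loss = loss_fn(predictions, labels)
    loss.backward()
    optimizer.step()
```

Nothing in this loop creates knowledge that is not already implicit in the labelled examples, which is exactly why the quantity and quality of the training data matter so much.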
But real-world learning offers data much more sporadically, and problems are not so neatly encapsulated. The lack of a sufficient amount of quality training data reduces the effectiveness of DL techniques and limits their ability to generalize the learned patterns beyond the space of known training examples.
For example, with its Alpha programs, Google's DeepMind has pushed deep learning to its apotheosis: in 2016, AlphaGo beat a human champion at Go, a classic Chinese strategy game. Yet even with state-of-the-art systems like AlphaGo, it is evident that Deep Learning cannot extract the lessons that lead to common sense: to play Go on a 21-by-21 board instead of the standard 19-by-19 board, the AI would have to learn the game anew.
Deep Learning currently lacks a mechanism for learning abstractions. Yes, it may identify previously unnoticed patterns or problems to be solved. But it is not autonomously creative: it will not spontaneously develop new hypotheses from facts (data) not in evidence, or in novel situations where the data differ greatly from the data used during training, leaving it with a limited ability to handle open-ended inferences based on real-world knowledge.
A wide range of initiatives in machine learning are working to expand the capabilities of Deep Learning and overcome its current limitations. Interestingly enough, they often come directly from careful observations of how our natural brain works: it is basically the only example we can refer to when trying to produce AI, so much of our progress stems from ideas generated by new discoveries in neuroscience.
For example, one very fascinating research area is trying to combine the traditional Deep Learning approach with an artificial version of a foundational component of our natural brain: memory.
The resulting machine learning model, called a differentiable neural computer (DNC), can learn to use its external memory to answer questions designed to emulate reasoning and inference problems in natural language.
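At the heart of such models is the idea that reading from memory can itself be a differentiable operation, so the network can learn where to look. The toy sketch below (pure NumPy; the memory contents, query key and sizes are invented for illustration and have nothing to do with DeepMind's actual DNC implementation) shows only the basic content-based addressing step: a query key is compared against every memory row, and the read result is a soft, weighted blend of the rows.

```python
import numpy as np

def cosine_similarity(memory, key):
    """Similarity between a query key and every row of the memory matrix."""
    memory_norms = np.linalg.norm(memory, axis=1) + 1e-8
    key_norm = np.linalg.norm(key) + 1e-8
    return memory @ key / (memory_norms * key_norm)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def content_based_read(memory, key, sharpness=10.0):
    """Soft read: a weighted average of memory rows, differentiable end to end."""
    weights = softmax(sharpness * cosine_similarity(memory, key))
    return weights @ memory, weights

# A toy external memory with 5 slots of 4 numbers each (invented values).
memory = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# A trained "controller" network would emit this key; here it is hard-coded.
key = np.array([1.0, 0.05, 0.0, 0.0])

read_vector, read_weights = content_based_read(memory, key)
print("read weights:", np.round(read_weights, 3))
print("read vector: ", np.round(read_vector, 3))
```

Because every step is a smooth operation, gradients can flow through the read weights, which is what allows the controller network to learn what to store and when to retrieve it.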
Still, even considering these significant achievements, problems that have less to do with categorization and more to do with commonsense reasoning essentially lie outside the scope of what deep learning is appropriate for. Humans integrate knowledge across vastly disparate sources, and tasks of that kind are a long way from the sweet spot of deep-learning-style perceptual classification.
This does not mean, in any way, that Deep Learning is failing to deliver on its promise. As of today, it powers most of the smart technology around us, adding that ‘magic’ touch that so closely resembles human intelligence. Deep Learning is at the heart of what is referred to as ‘Pragmatic AI’, which narrows the scope of learning to the specific fields where it can excel and delivers ‘pieces of intelligence’ that can be used alone or combined, like Lego pieces, to produce even more impressive results.
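As an illustration of that ‘Lego pieces’ idea, here is a deliberately simplified sketch in Python. The function names and their outputs are hypothetical placeholders, not real products or APIs: each function stands in for a narrow, separately trained model, and the value comes from chaining them.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def detect_objects(frame: bytes) -> list[Detection]:
    """Hypothetical object detector (standing in for a trained vision network)."""
    return [Detection("person", 0.97), Detection("bicycle", 0.88)]

def transcribe_speech(audio: bytes) -> str:
    """Hypothetical speech-to-text model."""
    return "stop at the next crossing"

def decide(detections: list[Detection], command: str) -> str:
    """Hypothetical planning step combining the two perceptual outputs."""
    if any(d.label == "person" and d.confidence > 0.9 for d in detections):
        return "slow down"
    return "stop" if "stop" in command else "continue"

# The 'Lego' assembly: narrow models composed into a more capable system.
action = decide(detect_objects(b"<camera frame>"), transcribe_speech(b"<audio>"))
print(action)
```

Each block can be retrained or replaced independently, which is precisely what makes this ‘Pragmatic AI’ style so practical today.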
The real issue around Deep Learning is the misunderstanding of what it actually is and what it can do for us. Instead of looking at Deep Learning as the technology that can bring us the ‘Pure AI’ the replicants in Blade Runner were equipped with, we should think of it as one component of a much broader architecture: one that includes not only the ability to learn, but also the ability to access and use long-term memories, together with the many rules governing the core knowledge and instincts that humans inherited from millions of years of evolution, and that gift us with our extraordinary cognitive intelligence, flexibility and power.
I like to think that what we are missing is a sort of operating system that brings together all these different techniques, orchestrating the resulting abilities to deliver something closer to ‘Pure AI’.
A Cognitive Operating System for a brand-new generation of Artificial Intelligence.
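If such a system ever existed, its skeleton might look something like the toy sketch below: purely speculative pseudocode made concrete in Python, with every module name and interface invented for illustration. The point is not the implementation but the shape: learned perception, long-term memory and rule-based core knowledge sitting behind a single orchestrator.

```python
from typing import Protocol, Any

class Module(Protocol):
    """Anything the cognitive OS can schedule: a learner, a memory, a rule engine."""
    def handle(self, task: str, payload: Any) -> Any: ...

class DeepLearningPerception:
    def handle(self, task: str, payload: Any) -> Any:
        return {"recognized": "pattern"}            # stand-in for a trained network

class LongTermMemory:
    def __init__(self) -> None:
        self.store: dict[str, Any] = {}
    def handle(self, task: str, payload: Any) -> Any:
        if task == "remember":
            self.store[payload["key"]] = payload["value"]
            return True
        return self.store.get(payload["key"])       # task == "recall"

class CoreKnowledge:
    def handle(self, task: str, payload: Any) -> Any:
        return "objects fall when dropped"          # stand-in for innate rules

class CognitiveOS:
    """Routes tasks to the right ability and lets them build on each other."""
    def __init__(self) -> None:
        memory = LongTermMemory()
        self.modules: dict[str, Module] = {
            "perceive": DeepLearningPerception(),
            "remember": memory,
            "recall": memory,       # remember/recall share one memory module
            "reason": CoreKnowledge(),
        }
    def run(self, task: str, payload: Any) -> Any:
        return self.modules[task].handle(task, payload)

cos = CognitiveOS()
percept = cos.run("perceive", b"<sensor data>")
cos.run("remember", {"key": "last_percept", "value": percept})
print(cos.run("recall", {"key": "last_percept"}))
print(cos.run("reason", percept))
```

Whether a real orchestration layer would look anything like this is anyone's guess; the sketch only tries to capture the idea of distinct abilities cooperating under one scheduler.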
I cannot tell whether, by the time we get there, humans will be travelling around the shoulder of Orion or playing with C-beams near the Tannhäuser Gate, but I am sure that, if that is the case, none of those memories will be lost like tears in rain.