Artificial Intelligence: #9 AI’s parrot problem
Welcome to edition #9
I saw this image in David Knickerbocker's LinkedIn post and adapted it slightly.
While it's funny, it is also very insightful.
We are in the 'parrot' era of the evolution of artificial intelligence (like the Jurassic, Triassic, etc.).
Both the benefits and the limitations of AI, as we know it, come from this 'parrot-like', data-driven architecture.
Specifically:
a) AI learns from data.
b) Data lacks meaning, context and rational deliberation, which needs sequential thinking based on concepts; hence the parrot-like gibberish in some cases.
c) The distribution of the data changes over time, and hence the system needs constant retraining.
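Point (c) can be made concrete: a deployed system can monitor its live inputs for distribution drift and flag when retraining is needed. Here is a minimal sketch using a crude drift signal (the shifted data, the feature count and the threshold are all illustrative assumptions, not a production recipe):

```python
import numpy as np

def drift_score(train_sample, live_sample):
    """Mean absolute difference of standardised feature means
    between training data and live data (a crude drift signal)."""
    mu = train_sample.mean(axis=0)
    sigma = train_sample.std(axis=0) + 1e-9  # avoid division by zero
    z = (live_sample.mean(axis=0) - mu) / sigma
    return float(np.abs(z).mean())

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 3))  # data the model was trained on
live = rng.normal(0.8, 1.0, size=(1000, 3))   # live data whose mean has drifted

THRESHOLD = 0.5  # illustrative; tune per application
if drift_score(train, live) > THRESHOLD:
    print("drift detected - schedule retraining")
```

In practice one would use a proper two-sample test per feature, but the point stands: the model itself never notices the shift; the monitoring has to live outside it.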
Many of the current issues with AI, such as bias, come from this direct relationship to data.
And to create better models, we train with more data – effectively the whole of the Web in the case of recent language models.
That was also the concern raised by Timnit Gebru, on environmental grounds but also on bias (i.e. when we train a model on the whole Web, it learns all the bad elements of that data as well).
Effectively, we have an arms race of sorts based on raw computation and more data, with China now claiming to have a model that outperforms other large language models.
The good thing about the current structure is it scales well.
Performance improves simply by more computation or more data.
Hence, we see ever larger models which are better (because they are larger) – e.g. GPT-3 has 175 billion parameters while GPT-2 has 1.5 billion.
But scaling alone does not overcome the fundamental limitations of the current deep learning architecture.
Specifically, a recent paper, "Deep Learning for AI" by Yoshua Bengio, Yann LeCun and Geoffrey Hinton, charts the history and the evolution of deep learning.
To summarise some key ideas from the paper:
There is, however, an alternate paradigm.
Logic-based or symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level "symbolic" (human-readable) representations of problems, logic and search. Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the late 1980s.
John Haugeland gave it the name GOFAI ("Good Old-Fashioned Artificial Intelligence"), based on four key ideas:
1) The essence of intelligence is thought, i.e. rational deliberation, which is necessarily sequential
2) The ideal model of thought is logical inference based on concepts
3) Perception is at a lower level of thought
4) Intelligence is based on ontology
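To make idea (2) concrete, here is a toy forward-chaining inference engine in the GOFAI spirit: explicit, human-readable rules over symbols, applied sequentially until nothing new can be derived. The rules and facts are invented purely for illustration:

```python
# Each rule: if all premises are known facts, conclude the consequent.
RULES = [
    ({"has_feathers", "flies"}, "is_bird"),
    ({"is_bird", "mimics_speech"}, "is_parrot"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_feathers", "flies", "mimics_speech"}, RULES)
```

Every step here is inspectable and explainable – the strength of the symbolic approach – but someone has to hand-author the rules, which is exactly the ontology problem discussed next.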
The problem with the GOFAI / AGI approach is that it needs an ontology to be useful.
See this link for a description of ontology in this context, especially the example of Tolkien's Middle-earth.
In other words, if a Terminator movie character were to function autonomously in the human world, that AI would first need to know everything about our world. Hence, a symbolic AI system can only be realized in a microworld, and the idea never took off.
However, there are elements of the symbolic AI (AGI) idea which are clearly needed in current AI, and the lack of which leads to many of the shortcomings of AI today.
I am actually working on a paper using hybrid cognitive architectures (which combine neural and symbolic models) to address problems in psychology (with Rachel Sava and Dr Amita Kapoor).
Specifically, we are working with the CLARION cognitive architecture, which combines the neural and the symbolic approaches. CLARION was created by Prof Ron Sun.
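CLARION itself is a full cognitive architecture, but the core hybrid idea can be sketched in a few lines: an implicit (sub-symbolic) scorer proposes an answer, and an explicit symbolic rule layer can override it. All function names, weights and rules below are illustrative assumptions, not CLARION's actual API:

```python
def implicit_score(features):
    """Stand-in for a trained neural network: a learned weighted sum."""
    weights = {"green": 0.9, "talks": 0.8, "barks": -0.7}
    return sum(weights.get(f, 0.0) for f in features)

def explicit_rules(features):
    """Symbolic layer: hard, human-readable constraints."""
    if "barks" in features:
        return "not_a_parrot"  # a rule the network cannot override
    return None

def classify(features):
    """Explicit knowledge first; fall back to the implicit scorer."""
    verdict = explicit_rules(features)
    if verdict is not None:
        return verdict
    return "parrot" if implicit_score(features) > 0.5 else "not_a_parrot"
```

The design choice worth noting is the direction of override: the statistical component generalises from data, while the symbolic component encodes constraints the data alone cannot guarantee.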
If you are working in this area, please contact me. We are happy to share a draft copy of our paper in late July for feedback
I read a book on this subject recommended by the Berlin-based researcher Dr Dagmar Monnett: The Promise of Artificial Intelligence: Reckoning and Judgment by Brian Cantwell Smith.
Finally, there is only one job to post today, via Lee Stott – but not every day do you get to work with the creator of Python to manage the CPython team.
Senior Project Manager R&D at Fraunhofer FOKUS | Co-Chair W3C Second Screen WG | Lecturer at TU Berlin
3y: Thanks Ajit for again a great new article
EU Affairs, Vision & Values
3y: Oh yes, it would be great. Let's do it when we can.
EU Affairs, Vision & Values
3y: I love the analogy between AI and the parrot. We are at the start of the AI experience.
Data & AI Leader | Mentor
3y: Sami Alsindi, PhD
It's not true that parrots don't understand the sounds they mimic. See the famous Irene Pepperberg's lifelong experiment with the parrot Alex. "Be good!" :-) And birds in general are intelligent, which has been scientifically proven in many ways...