Artificial Intelligence: #9 AI’s parrot problem
Source: David Knickerbocker (adapted slightly)


Welcome to edition #9

I saw this image in David Knickerbocker's LinkedIn post.

I adapted it slightly.

While it's funny, it is also very insightful.

We are in the 'parrot' era of the evolution of artificial intelligence (like the Jurassic, the Triassic, etc.).

Both the benefits and the limitations of AI, as we know it, come from this parrot-like, data-driven architecture.

Specifically

a) AI learns from data.

b) Data lacks meaning, context, and rational deliberation, which needs sequential thinking based on concepts. Hence the parrot-like gibberish in some cases.

c) The distribution of the data changes over time, and hence the system needs constant retraining (the sketch below illustrates this).
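
To make point (c) concrete, here is a minimal sketch (my own toy example, not from the article): a classifier is trained on one data distribution, evaluated after that distribution drifts, and then retrained. The synthetic Gaussian data, the shift size, and the use of scikit-learn are all assumptions made purely for illustration.

```python
# Toy illustration of distribution shift: a model learned from data degrades
# when the data distribution changes, and retraining restores performance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two Gaussian classes; `shift` moves both clusters, changing the distribution."""
    x0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(500)            # original distribution
model = LogisticRegression().fit(X_train, y_train)

X_same, y_same = make_data(500)              # same distribution as training
X_new, y_new = make_data(500, shift=3.0)     # the distribution has drifted

print("accuracy, original distribution:", model.score(X_same, y_same))  # high
print("accuracy, shifted distribution: ", model.score(X_new, y_new))    # ~chance

model.fit(X_new, y_new)                      # constant retraining on fresh data
print("accuracy after retraining:      ", model.score(X_new, y_new))    # high again
```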

Many of the current issues with AI, such as bias, come from this direct relationship to data.

And to create better models, we train with more data – effectively the whole of the Web in the case of recent language models.

That was also the concern raised by Timnit Gebru, on environmental grounds but also on bias (i.e. when we train a model on the whole Web, we learn all the bad elements from that data as well).

Effectively, we have an arms race of sorts based on raw computation and more data, with China now claiming to have a model that outperforms other large language models.

The good thing about the current structure is that it scales well.

Performance improves simply with more computation or more data.

Hence, we see ever larger models which are better (because they are larger), e.g. GPT-3 has 175 billion parameters while GPT-2 has 1.5 billion.
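
As a rough sanity check on those numbers, the sketch below uses a common back-of-the-envelope approximation for decoder-only transformers (parameters ≈ 12 × layers × width², ignoring embeddings and biases) together with the published configurations of the largest GPT-2 (48 layers, width 1600) and GPT-3 (96 layers, width 12288). Treat it as an estimate, not an exact count.

```python
# Back-of-the-envelope transformer parameter count:
# params ~ 12 * n_layers * d_model^2 (attention + feed-forward weights only,
# ignoring embeddings and biases).
def approx_params(n_layers: int, d_model: int) -> int:
    return 12 * n_layers * d_model ** 2

# Published configurations for the largest GPT-2 and for GPT-3.
for name, n_layers, d_model in [("GPT-2", 48, 1600), ("GPT-3", 96, 12288)]:
    print(f"{name}: ~{approx_params(n_layers, d_model) / 1e9:.1f}B parameters")

# Output:
# GPT-2: ~1.5B parameters
# GPT-3: ~174.0B parameters   (close to the quoted 175B)
```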

But scaling alone does not overcome the fundamental limitations of the current deep learning architecture.

Specifically,

  • Supervised learning needs a lot of data
  • Systems are tied to the distribution of the data, and when that distribution changes, the system needs to be retrained (as the sketch earlier illustrates)
  • Systems are good at perception but not at rational thought, which needs an understanding of concepts and a rational, sequential deliberation on those concepts (much like the human way of rational thought)

A recent paper, Deep Learning for AI by Yoshua Bengio, Yann LeCun, and Geoffrey Hinton, charts the history and the evolution of deep learning.

To summarise some key ideas from the paper:

  • Current work on AI is motivated by the observation that human intelligence emerges from highly parallel networks of relatively simple, non-linear neurons that learn by adjusting the strengths of their connections.
  • The relatively simple networks, comprising layers of neurons, are expected to learn complicated internal and hierarchical representations, e.g. for object detection or natural language processing. We achieve this using deep learning, through many layers of activity vectors as representations. We learn the connection strengths that give rise to these vectors by following the stochastic gradient of an objective function that measures how well the network is performing (a minimal sketch of this mechanism follows this list). This relatively simple mechanism has represented many complex functions well, leading to the current uptake of AI (which is based on this deep learning approach).
  • Thus, the brain-inspired paradigm views learning representations from data as the essence of intelligence and aims to implement learning by hand-designing or evolving rules for modifying the connection strengths in simulated networks of artificial neurons.
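
As a minimal sketch of that mechanism (my own toy code, not code from the paper), the snippet below trains a tiny two-layer network on XOR by following the gradient of a squared-error objective; for simplicity the gradient is computed over the full four-example batch rather than stochastically sampled.

```python
# Layers of non-linear units whose connection strengths are adjusted by
# following the gradient of an objective function (here: squared error on XOR).
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)            # connection strengths
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for _ in range(10_000):
    h = np.tanh(X @ W1 + b1)                             # hidden activity vector
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))             # sigmoid output
    dp = (p - y) * p * (1 - p)                           # grad of 0.5*sum((p-y)^2)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)                        # backpropagate through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1                       # adjust connection strengths
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))                            # approaches [0, 1, 1, 0]
```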

There is, however, an alternate paradigm:

  • The logic-inspired paradigm views sequential reasoning as the essence of intelligence and aims to implement reasoning in computers using hand-designed rules of inference that operate on hand-designed symbolic expressions that formalize knowledge.
  • In the logic-inspired paradigm, a symbol has no meaningful internal structure: Its meaning resides in its relationships to other symbols which can be represented by a set of symbolic expressions or by a relational graph.
  • The main advantage of using vectors of neural activity to represent concepts and weight matrices to capture relationships between concepts is that this leads to automatic generalization.
  • In other words, you do not need to 'hand craft' concepts.
  • For example, if Tuesday and Thursday are represented by very similar vectors, they will have very similar causal effects on other vectors of neural activity (see the sketch below).
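
A toy numerical illustration of that example (every vector and weight here is made up purely for illustration): two nearly identical concept vectors have nearly identical effects when passed through the same weight matrix, which is exactly where the automatic generalization comes from.

```python
# Similar activity vectors => similar causal effects downstream.
import numpy as np

concepts = {
    "Tuesday":  np.array([0.90, 0.10, 0.80]),
    "Thursday": np.array([0.88, 0.12, 0.79]),   # nearly the same vector
    "banana":   np.array([0.10, 0.95, 0.05]),   # a very different vector
}

# A weight matrix capturing relationships between concepts and downstream units.
W = np.array([[ 0.5, -0.2],
              [ 0.1,  0.9],
              [-0.4,  0.3]])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(concepts["Tuesday"], concepts["Thursday"]))  # ~1.0 (very similar)
print(cosine(concepts["Tuesday"], concepts["banana"]))    # much lower

print(concepts["Tuesday"] @ W)    # downstream activity for Tuesday
print(concepts["Thursday"] @ W)   # nearly identical downstream activity
```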

Logic-based or symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level "symbolic" (human-readable) representations of problems, logic, and search. Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the late 1980s.
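
As a minimal sketch of what "hand-designed rules of inference operating on hand-designed symbolic expressions" looks like (a toy of my own, not any real symbolic AI system), the snippet below forward-chains a single inference rule over a handful of facts.

```python
# Symbols have no internal structure; meaning lives in their relationships.
# One hand-designed inference rule: (X is_a Y) and (Y has Z) => (X has Z).
facts = {("parrot", "is_a", "bird"), ("bird", "has", "feathers")}

def forward_chain(facts):
    """Apply the rule repeatedly until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for (x, r1, y) in list(facts):
            for (y2, r2, z) in list(facts):
                if r1 == "is_a" and r2 == "has" and y == y2:
                    new_fact = (x, "has", z)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

# Derives ("parrot", "has", "feathers") by pure symbol manipulation.
print(forward_chain(facts))
```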

John Haugeland gave it the name GOFAI ("Good Old-Fashioned Artificial Intelligence"), based on four key ideas:

1) The essence of intelligence is thought, i.e. rational deliberation, which is necessarily sequential.

2) The ideal model of thought is logical inference based on concepts.

3) Perception is at a lower level than thought.

4) Intelligence is based on an ontology.

The problem with the GOFAI/AGI approach is that it needs an ontology to be useful.

See this link for a description of ontology in this context, especially the example of Tolkien's Middle-earth.

In other words, if a Terminator movie character were to function autonomously in the human world, that AI would first need to know everything about our world. Hence, a symbolic AI system can only be realized as a microworld, and the idea never took off.
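
A toy illustration of that microworld limitation (the ontology entries are invented for this example): the system answers perfectly within its hand-built ontology and fails the moment a query steps outside it.

```python
# A hand-built "microworld": the system knows only what its ontology contains.
ontology = {
    "hammer": {"is_a": "tool", "used_for": "driving nails"},
    "nail":   {"is_a": "fastener", "joins": "wood"},
}

def answer(entity: str, attribute: str) -> str:
    entry = ontology.get(entity)
    if entry is None:
        return f"unknown entity '{entity}': it was never encoded in the microworld"
    return entry.get(attribute, f"unknown attribute '{attribute}'")

print(answer("hammer", "used_for"))      # works: inside the microworld
print(answer("smartphone", "used_for"))  # fails: nobody hand-coded it
```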

However, there are elements of the symbolic AI (AGI) idea which are clearly needed in current AI, and whose absence leads to many of the shortcomings of AI today.

I am actually working on a paper using hybrid cognitive architectures (which combine the neural and symbolic models) to address problems in psychology (with Rachel Sava and Dr Amita Kapoor).

Specifically, we are working with the CLARION cognitive architecture, which combines the neural and the symbolic approaches. CLARION was created by Prof Ron Sun.
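
For flavour only, here is a toy sketch of the general hybrid idea: an implicit (learned, statistical) component proposes an action, and an explicit (symbolic, rule-based) component can override it. This is emphatically not CLARION itself; every name, rule, and number below is hypothetical.

```python
# Toy hybrid decision loop: explicit symbolic rules checked first, then an
# implicit (statistical) score. NOT CLARION, just the general flavour.
def implicit_score(stimulus: dict) -> float:
    # Stand-in for a trained neural network's confidence that acting is appropriate.
    return 0.8 if stimulus.get("looks_safe") else 0.2

# Hand-coded explicit knowledge the statistical component may never have learned.
explicit_rules = [
    lambda s: "refuse" if s.get("violates_policy") else None,
]

def decide(stimulus: dict) -> str:
    for rule in explicit_rules:               # explicit (symbolic) layer
        verdict = rule(stimulus)
        if verdict is not None:
            return verdict
    return "act" if implicit_score(stimulus) > 0.5 else "refuse"  # implicit layer

print(decide({"looks_safe": True}))                           # -> act
print(decide({"looks_safe": True, "violates_policy": True}))  # -> refuse (rule overrides)
```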

If you are working in this area, please contact me. We are happy to share a draft copy of our paper in late July for feedback

I read a book on this subject, recommended by the Berlin-based researcher Dr Dagmar Monett: The Promise of Artificial Intelligence: Reckoning and Judgment by Brian Cantwell Smith.



Finally, there is only one job to post today, via Lee Stott - but it's not every day that you get to work with the creator of Python to manage the CPython team.


Dr. Louay Bassbouss

Senior Project Manager R&D at Fraunhofer FOKUS | Co-Chair W3C Second Screen WG | Lecturer at TU Berlin

3y

Thanks Ajit, again a great new article

Edit Herczog

EU Affairs, Vision & Values

3y

Oh yes, it would be great. Let's do it when we can.

Edit Herczog

EU Affairs, Vision & Values

3y

I love the analogy between AI and the parrot. We are at the start of the AI experience.


It's not true that parrots don't understand the sounds they mimic. See Irene Pepperberg's famous lifelong experiment with the parrot Alex. "Be good!" :-) And birds in general are intelligent, which has been scientifically proven in many ways...


