Limbic thought and artificial intelligence

It will be eons before AI thinks with a limbic brain, let alone has consciousness

An artificial intelligence (AI) computer programme can change itself to take actions that maximize its chances of success at a task. Its base algorithms must first be trained on immense amounts of manually labelled data before they are of any use. In a vital twist that has occurred over the last 48 months, these AI programmes can now generate additional computer code to fine-tune their own algorithms, without the need for an army of computer programmers. In AI-speak, this is now often referred to as “machine learning”.

The problem, though, is that these algorithms are fixed, and no amount of machine learning-induced fine-tuning can change their base nature of analytical pattern recognition. A completely new algorithm would have to be written for the AI programme to work on anything else, even if the problem it is now trying to solve is similar. This is a phenomenon called “catastrophic forgetting”, and researchers have been grappling with the science behind it for a long time. An AI programme “catastrophically forgets” what it learnt from its first set of data and would have to be retrained from scratch on new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition, if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this entirely different algorithm for the altered programme to be of any use. The original programme would have “catastrophically forgotten” what it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
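For readers who code, the effect can be sketched in a few lines of Python. This is a toy illustration invented for this column, not any production AI system: a single model trained on one task, then retrained on a second with the same weights, loses its fit to the first.

```python
# Toy illustration of "catastrophic forgetting": one linear model (y = w * x)
# trained by gradient descent first on task A, then on task B. After training
# on B, its error on task A climbs back up, because the single set of weights
# was overwritten. All data and numbers here are made up for the demo.

def train(w, data, lr=0.05, steps=200):
    """Plain gradient descent on mean squared error over pairs (x, y)."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in range(1, 6)]    # task A: learn y = 2x
task_b = [(x, -3.0 * x) for x in range(1, 6)]   # task B: learn y = -3x

w = 0.0
w = train(w, task_a)
loss_a_before = loss(w, task_a)   # near zero: task A has been learned

w = train(w, task_b)              # the same weights, retrained on task B
loss_a_after = loss(w, task_a)    # large again: task A is "forgotten"

print(loss_a_before, loss_a_after)
```

Running the sketch shows the error on task A collapsing to almost nothing, then ballooning after the model is retrained on task B; nothing of task A survives in the single shared weight.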

Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”. In a blog post on DeepMind, and in a related research paper, she talks of the idea of “compositionality”, which is at the core of human learning but does not yet figure in machine learning. She says that when humans are equipped with just a few familiar conceptual building blocks, our brains allow us to create a large number of new ones, which we do by placing those few concepts in hierarchies that run from specific to general, and recombining them in different and novel ways. For instance, a child can be taught the colour white by looking at a white wall, and can then recognize a white shirt as white even though it does not yet know what a shirt is. While the child would not need to learn all objects and all colours before it can make such associations, an AI programme would not be able to make the leap without first being fed a large spectrum of data on colours, and then again on objects. Higgins and her team claim to have cracked this code so that their programmes can more closely mimic the child’s cognition. That said, Higgins says that her team has only scratched the surface.
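The child’s leap can itself be caricatured in code. This is a hedged sketch of the idea of compositionality, not of DeepMind’s actual method: two concepts acquired independently (a colour learnt from wall examples, and the category “shirt”) recombine to recognize a pairing never seen during “training”. The feature names and tolerances are invented for illustration only.

```python
# Toy compositionality: concepts learnt separately recombine freely.
# "White" is learnt only from examples of white walls; "shirt" is a
# separate category check. The combination "white shirt" was never seen.

def learn_colour(examples):
    """'Learn' a colour as the average RGB of labelled examples."""
    n = len(examples)
    return tuple(sum(e[i] for e in examples) / n for i in range(3))

def is_colour(rgb, prototype, tol=30):
    """Match a colour if every channel is within tolerance of the prototype."""
    return all(abs(a - b) <= tol for a, b in zip(rgb, prototype))

# The child sees white only on walls...
white = learn_colour([(250, 250, 248), (255, 254, 252)])

# ...and separately knows the category "shirt" (here, just a label check).
def is_shirt(obj):
    return obj["kind"] == "shirt"

# A never-seen combination: a white shirt. The two concepts compose.
shirt = {"kind": "shirt", "rgb": (252, 251, 250)}
print(is_shirt(shirt) and is_colour(shirt["rgb"], white))  # True
```

The point of the caricature is that no joint “white shirt” training data was needed; today’s AI programmes, by contrast, would typically need labelled examples of the combination itself.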

I, for one, believe that the true roots of learning among human beings actually go back to the days when our ancestors were still reptilian. We have inherited vestigial areas in our cerebral cortex that give rise to limbic thought; many of our survival-based thoughts arise there. Limbic thinking makes us react emotionally to a domineering boss as an existential threat, much as our primitive ancestors would have reacted to a predator. Our “compositional thinking” building blocks are buried as deep as that limbic level. Paradoxically, as sentient beings, we have also developed “consciousness”, which, at least for the more self-aware among us, can be used to “view” a limbic thought and distance ourselves from it. Sadly, I am far from having reached the nirvana of being a witness to my thoughts.

As far as I am concerned, this limbic thinking is “catastrophic thinking”, the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.

Siddharth Pai has led over $20 billion in technology outsourcing transactions. He is the founder of Siana Capital, a venture fund management company focused on deep science and tech in India.

*This article first appeared in print in the Mint and online at www.livemint.com
