Beyond Backward Propagation - Silicological replica of biological brain!!

The human brain is a biological miracle, and still largely a mystery. It is not yet clear to the scientific community how the brain actually learns.

One of the best-known ideas revolves around the rule introduced by the Canadian psychologist Donald Hebb: neurons that fire together wire together.

Hebb stated it as follows:

Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability. ... When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

This rule formed the basis of the first artificial neural networks, developed back in the 1950s. Each artificial neuron in these networks receives multiple inputs and produces an output, like its biological counterpart. The neuron multiplies each input with a so-called "synaptic" weight (a number signifying the importance assigned to that input) and then sums up the weighted inputs. This sum is the neuron's output.
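To make this concrete, here is a minimal sketch of such a neuron in Python/NumPy (the names and values are illustrative, not taken from any particular library):

```python
# Minimal sketch of the 1950s-style artificial neuron described above.
import numpy as np

def neuron_output(inputs, weights):
    # Each input is multiplied by its "synaptic" weight; the weighted
    # inputs are then summed to produce the neuron's output.
    return np.dot(weights, inputs)

x = np.array([0.5, 1.0, -0.2])   # inputs arriving from upstream neurons
w = np.array([0.8, 0.1, 0.4])    # importance assigned to each input
print(neuron_output(x, w))       # 0.42
```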

Such neurons can be organized into a network with an input layer and an output layer, and the artificial neural network can be trained to solve a certain class of simple problems. These were simple fully connected networks with only input and output layers (an M×N architecture).

Hebbian learning (feed-forward) needs to comply with the following fundamental principles (a minimal sketch of one such update rule follows the list):

  • Locality
  • Cooperativity
  • Synaptic depression
  • Boundedness
  • Competition
  • Long-term stability

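As a concrete illustration of the first and fourth principles, here is a toy sketch of one classic Hebbian variant, Oja's rule (my choice for illustration; all values are made up):

```python
# Toy sketch of a Hebbian update (Oja's variant).
# Locality: each weight change uses only the local input x and the
# neuron's own output y. Boundedness: the -y*w decay term keeps the
# weights from growing without limit.
import numpy as np

def oja_step(w, x, lr=0.01):
    y = np.dot(w, x)                  # "fire together"
    return w + lr * y * (x - y * w)   # "wire together", with bounding decay

rng = np.random.default_rng(0)
w = rng.normal(size=3)
for _ in range(1000):
    x = rng.normal(size=3) * np.array([2.0, 1.0, 0.5])  # toy input statistics
    w = oja_step(w, x)

# w ends up aligned with the dominant input direction, with norm close to 1.
print(w, np.linalg.norm(w))
```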

Around the 1960s it became clear to the scientific community that more complicated problems would require an additional layer of neurons between input and output. We call it a hidden layer, and such networks deep neural networks (DNNs). As soon as we introduce a hidden layer, Hebbian learning no longer applies, as we deviate from the principle of locality.


Multi-layer network

How to train such a network with multiple layers remained a mystery for the next 25 years. Then came the major breakthrough in 1986, when Geoffrey Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm. It has two passes, the forward pass and the backward pass. The forward pass is used to predict the output and calculate the error in the prediction; the backward pass is used to adjust the weights and biases to reduce the error in the output (minimize the loss function).
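Here is a minimal sketch of the two passes on a tiny one-hidden-layer network (NumPy; the sizes, data and learning rate are made-up toy values, and biases are omitted for brevity):

```python
# Toy one-hidden-layer network trained with the two passes of backprop.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 samples, 3 inputs
t = rng.normal(size=(4, 1))          # target outputs
W1 = 0.1 * rng.normal(size=(3, 5))   # input -> hidden weights
W2 = 0.1 * rng.normal(size=(5, 1))   # hidden -> output weights
lr = 0.1

for step in range(200):
    # Forward pass: predict the output and measure the error (loss).
    h = np.tanh(x @ W1)
    y = h @ W2
    loss = 0.5 * np.mean((y - t) ** 2)

    # Backward pass: send the error back and move the weights down
    # the gradient of the loss.
    dy = (y - t) / len(x)
    dW2 = h.T @ dy
    dh = dy @ W2.T * (1 - h ** 2)    # note: reuses the *forward* weights W2
    dW1 = x.T @ dh
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"final loss: {loss:.4f}")
```

Note how the backward pass reuses the forward weight matrix W2; that detail is exactly what the weight transport problem discussed below refers to.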

At that time, many neuroscientists claimed that the invention of backpropagation was the first deviation of the working principle of artificial neural networks from that of human neural networks.

Backprop is considered biologically implausible for two main reasons.

  • The first is that while computers can easily implement the algorithm in two phases, doing so for biological neural networks is not trivial.
  • The second is the weight transport problem: In a biological network, neurons see only the outputs of other neurons, not the synaptic weights or internal processes that shape that output.

In simple words: even though the human neuron can read inputs from other neurons, process them in the cell body and generate an output which acts as an input for neighboring neurons, it is basically a feed-forward network!

This was the 1980s understanding of the human brain that made backpropagation look impossible. It is not really true, though, courtesy of the pyramidal neuron, which has a feedback mechanism!

Gradient Descent


Several new methods have been worked out since then to make things more brain-like:

1. Feedback alignment: One of the strangest solutions to the weight transport problem was designed by Timothy Lillicrap of Google DeepMind in London and his colleagues in 2016. Their algorithm, instead of relying on a matrix of weights recorded from the forward pass, uses a matrix initialized with random values for the backward pass. Once assigned, these values never change, so no weights need to be transported for each backward pass. To everyone's surprise, the network learned. Because the forward weights used for inference are updated with each backward pass, the network still descends the gradient of the loss function, but by a different path. However, it was found to be much inferior to backprop for complex deep neural network architectures.
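A sketch of the idea, reusing the toy network from the backprop example above: the only change is that the backward pass multiplies by a fixed random matrix B instead of the transposed forward weights (illustrative toy code, not the authors' implementation):

```python
# Feedback alignment on the same toy network as the backprop sketch.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
t = rng.normal(size=(4, 1))
W1 = 0.1 * rng.normal(size=(3, 5))
W2 = 0.1 * rng.normal(size=(5, 1))
B = rng.normal(size=(5, 1))        # fixed random feedback weights, never updated
lr = 0.1

for step in range(200):
    h = np.tanh(x @ W1)
    y = h @ W2
    dy = (y - t) / len(x)
    W2 -= lr * (h.T @ dy)
    dh = dy @ B.T * (1 - h ** 2)   # random B replaces the transported W2.T
    W1 -= lr * (x.T @ dh)

print(f"final loss: {0.5 * np.mean((y - t) ** 2):.4f}")
```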

2. Equilibrium propagation: In 2017, a recurrently connected network was proposed by Bengio and his team. That is, if a neuron A activates neuron B, then neuron B in turn activates neuron A. If such a network is given some input, it sets the network reverberating, as each neuron responds to the push and pull of its immediate neighbours. Eventually, the network reaches a state in which the neurons are in equilibrium with the input and each other, and it produces an output, which can be erroneous. The algorithm then nudges the output neurons toward the desired result. This sets another signal propagating backward through the network, setting off similar dynamics. The network finds a new equilibrium. This is an absolute gem of a technique and closely resembles how the human brain works.
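A heavily simplified toy sketch of the two-phase procedure (my reading of Scellier and Bengio's 2017 formulation; all sizes, step counts and rates are made-up values):

```python
# Toy sketch of equilibrium propagation. Connections are symmetric:
# the same weight carries activity forward and backward.
import numpy as np

rho = lambda s: np.clip(s, 0.0, 1.0)          # hard-sigmoid firing rate

def settle(x, h, y, Wxh, Why, target=None, beta=0.0, steps=60, eps=0.2):
    """Let hidden/output units relax to equilibrium (free or nudged phase)."""
    for _ in range(steps):
        dh = -h + (Wxh.T @ rho(x) + Why @ rho(y)) * ((h >= 0) & (h <= 1))
        dy = -y + (Why.T @ rho(h)) * ((y >= 0) & (y <= 1))
        if target is not None:
            dy += beta * (target - y)          # nudge output toward the target
        h, y = h + eps * dh, y + eps * dy
    return h, y

rng = np.random.default_rng(0)
Wxh = 0.5 * rng.normal(size=(3, 5))
Why = 0.5 * rng.normal(size=(5, 2))
x, target = rng.random(3), np.array([1.0, 0.0])
lr, beta = 0.1, 0.5

for epoch in range(200):
    h_free, y_free = settle(x, np.full(5, 0.5), np.full(2, 0.5), Wxh, Why)
    h_ndg, y_ndg = settle(x, h_free, y_free, Wxh, Why, target, beta)
    # Contrastive, purely local weight updates from the two equilibria:
    Wxh += lr / beta * (np.outer(rho(x), rho(h_ndg)) - np.outer(rho(x), rho(h_free)))
    Why += lr / beta * (np.outer(rho(h_ndg), rho(y_ndg)) - np.outer(rho(h_free), rho(y_free)))

print("free-phase output:", settle(x, np.full(5, 0.5), np.full(2, 0.5), Wxh, Why)[1])
```

Note that every weight update uses only the activities of the two neurons the synapse connects, measured at the two equilibria; that locality is what makes the technique brain-like.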

Last but not least, there are pyramidal cells, which appear to be responsible for processing input from the primary visual cortex. These cells might be playing a critical role in complex object recognition within the visual processing areas of the cortex. How these cells and their way of working might lead to advancements in visual recognition networks is a topic of research.


The story is not over yet. Given the rapid advances happening in both neuroscience and computer science, who knows: in another 10 years we may have a complete understanding of how the human brain works (breaking the paradox of the human brain trying to uncover the working of the human brain) and create a silicological replica of the human brain!

