Beyond Backpropagation: A Silicological Replica of the Biological Brain!
Koustubh Tilak
The human brain is a biological miracle, and a huge mystery. It is still not clear to the scientific community how the brain actually learns.
One of the best-known ideas revolves around the rule introduced by the Canadian psychologist Donald Hebb: neurons that fire together wire together.
Hebb stated it as follows:
Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability. ... When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.
This rule formed the basis of the first artificial neural networks, developed back in the 1950s. Each artificial neuron in these networks receives multiple inputs and produces an output, like its biological counterpart. The neuron multiplies each input by a so-called "synaptic" weight (a number signifying the importance assigned to that input) and then sums up the weighted inputs. This sum, often passed through a simple threshold, is the neuron's output.
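For concreteness, here is a minimal sketch of such an artificial neuron; the function name, the toy numbers and the optional threshold are illustrative choices, not a specific historical model.

```python
# A minimal artificial neuron: weighted sum of inputs, optionally thresholded.
# Names, numbers and the step threshold are illustrative assumptions.

def neuron_output(inputs, weights, bias=0.0, threshold=None):
    """Weighted sum of inputs; if a threshold is given, fire 1/0 like a simple perceptron."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    if threshold is None:
        return total                         # raw summed activation
    return 1 if total >= threshold else 0    # thresholded: the neuron "fires" or not

# Example: two inputs with different synaptic importance
print(neuron_output([0.5, 1.0], weights=[0.8, 0.2]))                 # -> 0.6
print(neuron_output([0.5, 1.0], weights=[0.8, 0.2], threshold=0.5))  # -> 1
```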
It is clear that such neurons can be organized into a network with an input layer and an output layer, and such an artificial neural network could be trained to solve a certain class of simple problems. These were simple fully connected networks with only an input and an output layer (an M x N architecture).
Hebbian learning (in a feed-forward setting) needs to comply with the following fundamental principles (a small code sketch follows the list):
· Locality
· Cooperativity
· Synaptic depression
· Boundedness
· Competition
· Long-term stability
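To make locality and boundedness concrete, here is a small sketch of a purely local Hebbian-style update, using a generic Oja-type rule; the learning rate, sizes and toy data are arbitrary assumptions. The weight change uses only quantities available at the synapse itself, and the decay term keeps the weights from growing without bound.

```python
import numpy as np

def oja_update(w, x, learning_rate=0.01):
    """One local Hebbian-style update (Oja's rule).

    Each weight change depends only on its own pre-synaptic input x,
    the post-synaptic output y, and the weight itself (locality);
    the -y*w decay term keeps the weights bounded (boundedness).
    """
    y = np.dot(w, x)                            # post-synaptic activity
    return w + learning_rate * y * (x - y * w)

# Toy usage: the weights drift toward the dominant direction of the inputs
rng = np.random.default_rng(0)
w = rng.normal(size=3)
for _ in range(1000):
    x = rng.normal(size=3) * np.array([2.0, 0.5, 0.5])  # first input varies most
    w = oja_update(w, x)
print(w)   # the largest magnitude should end up on the first component
```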
Around the 1960s it became clear to the scientific community that more complicated problems would require additional layers of neurons between input and output. We call these hidden layers, and such networks deep neural networks (DNNs). As soon as we introduce a hidden layer, Hebbian learning no longer applies, because we deviate from the principle of locality.
How to train such a network with multiple layers remained a mystery for the next 25 years. Then came the major breakthrough in 1986, when Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm. It has two passes, the forward pass and the backward pass. The forward pass is used to predict the output and calculate the error in the prediction; the backward pass is used to adjust the weights and biases to reduce the error in the output (minimize the loss function).
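Here is a compact numerical sketch of those two passes, assuming a single hidden layer, sigmoid units and a squared-error loss; the layer sizes, learning rate and the XOR-style toy data are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny network: 2 inputs -> 3 hidden units -> 1 output, sigmoid activations
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, lr=0.5):
    global W1, b1, W2, b2
    # ---- forward pass: predict the output and measure the error ----
    h = sigmoid(W1 @ x + b1)                 # hidden activations
    y = sigmoid(W2 @ h + b2)                 # network output
    loss = 0.5 * np.sum((y - target) ** 2)

    # ---- backward pass: propagate the error, adjust weights and biases ----
    dy = (y - target) * y * (1 - y)          # gradient at the output layer
    dW2, db2 = np.outer(dy, h), dy
    dh = (W2.T @ dy) * h * (1 - h)           # error routed back through W2's transpose
    dW1, db1 = np.outer(dh, x), dh

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return loss

# Train on XOR-style toy data; the summed loss should fall as the epochs go by
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
for epoch in range(5000):
    total = sum(train_step(np.array(x, float), np.array(t, float)) for x, t in data)
print("final loss:", total)
```

Note how the backward pass reuses the transpose of the forward weights W2; this "weight transport" is exactly the step that the brain has no obvious way to perform.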
At the time, many neuroscientists claimed that the invention of backpropagation was the first deviation of artificial neural networks from the working principles of human neural networks.
Backprop is considered biologically implausible for several major reasons, chief among them the weight transport problem: the backward pass needs an exact copy of the forward synaptic weights to route the error signals backward, something that real synapses, which carry signals in only one direction, have no obvious way to do.
In simple words, even though a human neuron can read inputs from other neurons, process them in its cell body and generate an output, which acts as an input for neighbouring neurons, it is basically a feed-forward network!
This was the 1980s understanding of the human brain that made backpropagation look biologically impossible. It is not entirely true, though, courtesy of the pyramidal neuron, which does have a feedback mechanism!
Several new methods have been worked out since then to make learning more brain-like:
1. Feedback alignment: one of the strangest solutions to the weight transport problem was designed by Timothy Lillicrap of Google DeepMind in London and his colleagues in 2016. Their algorithm, instead of relying on a matrix of weights recorded from the forward pass, uses a matrix initialized with random values for the backward pass. Once assigned, these values never change, so no weights need to be transported for each backward pass. To everyone's surprise, the network learned. Because the forward weights used for inference are updated with each backward pass, the network still descends the gradient of the loss function, but by a different path. However, it was found to be much inferior to backprop for complex deep neural network architectures. A rough sketch of the idea follows.
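Reusing the toy two-layer setup from the backprop sketch above (sizes and names are illustrative assumptions), the only change is that the error sent back to the hidden layer travels through a fixed random matrix B rather than through the transpose of the forward weights.

```python
import numpy as np

rng = np.random.default_rng(2)

W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
B = rng.normal(size=(3, 1))        # fixed random feedback matrix, never updated

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fa_train_step(x, target, lr=0.5):
    global W1, b1, W2, b2
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)

    dy = (y - target) * y * (1 - y)
    dW2, db2 = np.outer(dy, h), dy
    # Feedback alignment: route the error through B instead of W2.T,
    # so no weights need to be transported from the forward pass
    dh = (B @ dy) * h * (1 - h)
    dW1, db1 = np.outer(dh, x), dh

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return 0.5 * np.sum((y - target) ** 2)
```

It can be dropped into the same training loop as the backprop example; Lillicrap and colleagues showed that the forward weights gradually come to align with the fixed matrix B, which is why the random feedback still carries a useful teaching signal.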
2. Equilibrium propagation: in 2017, a recurrently connected network was proposed by Bengio and his team. That is, if a neuron A activates neuron B, then neuron B in turn activates neuron A. If such a network is given some input, the network starts reverberating, as each neuron responds to the push and pull of its immediate neighbours. Eventually, the network reaches a state in which the neurons are in equilibrium with the input and each other, and it produces an output, which can be erroneous. The algorithm then nudges the output neurons toward the desired result. This sets another signal propagating backward through the network, setting off similar dynamics, and the network settles into a new equilibrium. This is an absolute gem of a technique and closely resembles how the brain might work. A toy sketch follows.
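The sketch below is a heavily simplified toy version of the two phases, assuming symmetric connections, a clipped activation, a single training pair and an arbitrary nudging strength beta; it illustrates the free phase, the nudged phase and the purely local contrastive weight update, not the published formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy layered network with symmetric connections: 2 inputs -> 3 hidden -> 1 output
W1 = rng.uniform(0.1, 0.5, size=(3, 2))   # input <-> hidden
W2 = rng.uniform(0.1, 0.5, size=(1, 3))   # hidden <-> output

def rho(s):                                # bounded "firing rate" of a neuron
    return np.clip(s, 0.0, 1.0)

def relax(x, target=None, beta=0.0, steps=50, dt=0.2):
    """Let the network settle: each neuron repeatedly responds to its neighbours.
    With beta > 0 the output neurons are gently nudged toward the target."""
    h, y = np.zeros(3), np.zeros(1)
    for _ in range(steps):
        dh = W1 @ rho(x) + W2.T @ rho(y) - h       # push/pull from both sides
        dy = W2 @ rho(h) - y
        if target is not None:
            dy += beta * (target - y)              # nudge toward the desired output
        h, y = h + dt * dh, y + dt * dy
    return h, y

def train_step(x, target, beta=0.5, lr=0.05):
    global W1, W2
    h0, y0 = relax(x)                              # free-phase equilibrium
    hb, yb = relax(x, target, beta)                # nudged-phase equilibrium
    # Purely local, contrastive updates from the two equilibria
    W1 += lr / beta * (np.outer(rho(hb), rho(x)) - np.outer(rho(h0), rho(x)))
    W2 += lr / beta * (np.outer(rho(yb), rho(hb)) - np.outer(rho(y0), rho(h0)))
    return float(0.5 * np.sum((y0 - target) ** 2))

x, target = np.array([1.0, 0.0]), np.array([0.8])
losses = [train_step(x, target) for _ in range(300)]
print("free-phase error, first vs last step:", losses[0], "->", losses[-1])  # should shrink
```

Note that every weight update uses only the activities of the two neurons that the synapse connects, measured at the two equilibria, which is what makes the scheme attractive from a biological point of view.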
Last but not the least, there are pyramidal cells, which appear to be responsible for processing input from the primary visual cortex. These cells might be playing a critical role in complex object recognition within the visual processing areas of the cortex. How these cells and their way of working might lead to advancements in visual recognition networks is a topic of ongoing research.
The story is not over yet. Given the rapid advances happening in both neuroscience and computer science, who knows, in another 10 years we may have a complete understanding of how the human brain works (breaking the paradox of the human brain trying to uncover the working of the human brain) and be able to create a silicological replica of the human brain!