What is the difference between artificial neural networks and biological brains


What is the master algorithm that allows humans to be so efficient at learning things? That is a question that has perplexed artificial intelligence scientists and researchers who, for decades, have tried to replicate the thinking and problem-solving capabilities of the human brain. The dream of creating thinking machines has spurred many innovations in the field of AI, and has most recently contributed to the rise of deep learning: AI algorithms that roughly mimic the learning functions of the brain.

But as some scientists argue, brute-force learning is not what gives humans and animals the ability to interact with the world shortly after birth. The key is the structure and innate capabilities of the organic brain, an argument that is mostly dismissed in today's AI community, which is dominated by artificial neural networks.



The brain is principally composed of about 10 billion neurons, each connected to about 10,000 other neurons. Each neuron consists of a cell body (soma) plus the input and output channels (dendrites and axons) that connect it to other neurons. A neuron receives electrochemical inputs from other neurons at its dendrites. If the sum of these inputs is sufficiently powerful to activate the neuron, it transmits an electrochemical signal along its axon and passes this signal on to the neurons whose dendrites are attached at any of its axon terminals. Those attached neurons may then fire in turn.

It is important to note that a neuron fires only if the total signal received at the cell body exceeds a certain level. The neuron either fires or it doesn't; there are no intermediate grades of firing.

So our entire brain is composed of these interconnected, electrochemically transmitting neurons. From a very large number of extremely simple processing units (each performing a weighted sum of its inputs and then firing a binary signal if the total exceeds a certain level), the brain manages to perform extremely complex tasks.
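As a toy illustration of that idea (a sketch only, not a model of real neurons), such a unit can be written as a weighted sum followed by a hard threshold; the weights and threshold below are made-up numbers:

```python
def binary_neuron(inputs, weights, threshold):
    """Fire (output 1) only if the weighted sum of the inputs exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Three incoming signals with different connection strengths (illustrative values)
print(binary_neuron([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.5))  # -> 1 (0.7 > 0.5)
print(binary_neuron([0, 1, 0], [0.4, 0.9, 0.3], threshold=1.0))  # -> 0 (0.9 < 1.0)
```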

This is the model on which artificial neural networks are based. Thus far, artificial neural networks haven't come close to matching the complexity of the brain, but they have proven to be good at problems that are easy for a human yet difficult for a traditional computer, such as image recognition and making predictions from past data.


Historical background

The idea behind perceptrons (the predecessors of artificial neurons) is that it is possible to mimic certain parts of neurons, such as dendrites, cell bodies and axons, using simplified mathematical models of what limited knowledge we have of their inner workings: signals are received at the dendrites, and a signal is sent down the axon once enough input has accumulated. This outgoing signal can then serve as an input to other neurons, and the process repeats. Some signals are more important than others and can trigger some neurons to fire more easily. Connections can become stronger or weaker, and new connections can appear while others cease to exist. We can mimic most of this process with a function that receives a list of weighted input signals and outputs a signal if the sum of these weighted inputs exceeds a certain threshold. Note that this simplified model mimics neither the creation nor the destruction of connections (dendrites or axons) between neurons, and it ignores signal timing. However, even this restricted model is powerful enough to handle simple classification tasks, as the sketch below illustrates.
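Here is a minimal sketch of such a perceptron, including the classic error-correction learning rule; the AND-gate data set, learning rate and epoch count are illustrative choices, not anything prescribed by the original perceptron work:

```python
def predict(x, w, b):
    """Binary output: 1 if the weighted sum plus bias crosses zero, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(data, lr=0.1, epochs=20):
    """Classic perceptron rule: nudge weights in proportion to the prediction error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(x, w, b)
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# A simple linearly separable task: the logical AND of two inputs
and_gate = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_gate)
print([predict(x, w, b) for x, _ in and_gate])  # expected: [0, 0, 0, 1]
```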

Let's now compare this with an artificial neural network. The most obvious similarity between a neural network and the brain is the presence of neurons as the most basic unit of the nervous system. But the manner in which neurons take input differs between the two. In the biological neural network, as far as we understand it, input arrives through the dendrites and output leaves through the axon, and these structures process signals in significantly different ways; research suggests that dendrites themselves apply a non-linear function to the input before it reaches the cell body. In an artificial neural network, by contrast, the input is passed directly to the neuron and the output is taken directly from it, both in the same manner.
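To make the contrast concrete, here is a purely illustrative sketch, not a biophysical model: a standard artificial neuron applies its single non-linearity at the output, while a toy "dendritic" variant applies a non-linearity per input branch before the cell body sums them. The weights and the choice of a sigmoid are arbitrary assumptions for the example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def artificial_neuron(inputs, weights, bias):
    """Standard unit: one weighted sum, one non-linearity applied at the output."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def dendritic_neuron(branches, branch_weights, soma_weights, bias):
    """Toy variant: each input branch gets its own non-linearity before the soma sums them."""
    branch_outputs = [
        sigmoid(sum(w * x for w, x in zip(bw, branch)))
        for branch, bw in zip(branches, branch_weights)
    ]
    return sigmoid(sum(sw * o for sw, o in zip(soma_weights, branch_outputs)) + bias)

print(artificial_neuron([0.2, 0.7, 0.1], [0.5, -0.3, 0.8], bias=0.1))
print(dendritic_neuron([[0.2, 0.7], [0.1]], [[0.5, -0.3], [0.8]], [1.0, 1.0], bias=0.1))
```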


While a neuron in an artificial neural network can give a continuous range of outputs, a biological neuron gives only a binary output, on the order of a few tens of millivolts. The signal is passed and aggregated roughly as follows. The resting potential of the neuron's membrane is around -70 mV. If the signals aggregated at the cell body push the membrane past a certain threshold, the axon transmits a fixed high-voltage spike known as the action potential (hence the binary behaviour). For instance, if the threshold is -55 mV and each incoming neuron contributes 5 mV, a minimum of three neurons must activate this neuron for it to pass information forward. After the threshold has been crossed and the signal has travelled down the axon, the cell cannot be stimulated again for a brief period known as the absolute refractory period. So there is a significant difference between the type of output that can be expected from a biological neuron and from an artificial one.
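A back-of-the-envelope sketch of that arithmetic, using only the numbers given above (-70 mV resting potential, -55 mV threshold, +5 mV per input); real neurons are of course far more complicated:

```python
import math

RESTING_MV = -70.0    # resting membrane potential
THRESHOLD_MV = -55.0  # firing threshold
INPUT_MV = 5.0        # contribution of one active input

# How many simultaneous +5 mV inputs are needed to cross the threshold?
inputs_needed = math.ceil((THRESHOLD_MV - RESTING_MV) / INPUT_MV)
print(inputs_needed)  # -> 3

def fires(n_active_inputs):
    """All-or-nothing output: the neuron spikes only if the membrane reaches threshold."""
    membrane = RESTING_MV + n_active_inputs * INPUT_MV
    return membrane >= THRESHOLD_MV

print([fires(n) for n in range(5)])  # -> [False, False, False, True, True]
```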

One major point of difference between an artificial neural network and the brain is that, for the same input, the neural network will always give the same output, whereas the brain may falter: it does not always give the same response to the same input, something commonly referred to in business language as human error.

Many variations of artificial neural networks have since been introduced. A convolutional neural network is one used to process images: each layer applies a convolution followed by other operations that reduce or expand the dimensions of the image, leading the network to capture only the details that matter. The main features and computations of convolutional neural networks were directly inspired by early findings about the visual system. It was discovered that neurons in the primary visual cortex respond to specific features in the environment, such as edges, and two kinds of cells were identified: simple cells and complex cells. Simple cells responded to an edge of a particular orientation at a particular position, while complex cells responded to the same oriented features over a wider range of positions. It was concluded that complex cells pooled the inputs of many simple cells, which is what gave them their spatial invariance. This is the idea that inspired the convolutional neural network, sketched below.
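As a rough sketch of that idea, the convolution below with an oriented edge filter plays the role of the simple cells, and max-pooling over nearby positions plays the role of the complex cells (the pooling is what buys the spatial invariance). The image, filter and pooling size are toy choices for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image and take a weighted sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Keep only the strongest response in each small neighbourhood."""
    h, w = feature_map.shape
    return np.array([
        [feature_map[i:i + size, j:j + size].max()
         for j in range(0, w - size + 1, size)]
        for i in range(0, h - size + 1, size)
    ])

image = np.zeros((6, 6))
image[:, 3:] = 1.0                        # a vertical edge in the middle of the image
vertical_edge = np.array([[-1.0, 1.0],
                          [-1.0, 1.0]])   # responds to a dark-to-bright vertical edge

simple_cells = conv2d(image, vertical_edge)   # strong response only where the edge is
complex_cells = max_pool(simple_cells)        # response survives small shifts of the edge
print(simple_cells)
print(complex_cells)
```

The pooled map still responds if the edge shifts by a pixel or two, which is the kind of invariance the complex cells were thought to provide.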


Thank you for reading my post. To read my future posts here on LinkedIn, simply join my network or click 'Follow'.
