Neuroplasticity and Neuromorphic Computing: Emulating the Brain with Advances in AI
Vivek Mehrotra
Chief Business Development Officer - Digital & AI | Circular Economy | GenAI
Language is one of the fundamental abilities that underlie human cognition. Language and cognition are deeply interconnected, with language acting as a crucial tool for expressing and shaping thought in the human brain. Making machines learn human language through Large Language Models (LLMs) is a first step towards developing biological-brain-like cognition in machines; however, there are challenges, which we will discuss later in this article.
Biological Evolution vs. Technological Advancement
There is a similarity between technology and biological evolution. It's quite easy to understand how biological evolution works: starting from basic life, mutations and natural selection retain beneficial traits, leading to increasingly complex organisms.
I believe machine learning, particularly deep learning, is in a similar state. We have a simple rule that extracts information from data and incorporates it into the model, and we repeat this process continuously. Through this repetition, the complexity of the data is transferred into the complexity of the model. The resulting model becomes so complex that eventually we don't know exactly how it works, yet the underlying algorithm is quite simple, like basic life.
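To make that "simple rule, repeated" concrete, here is a minimal sketch of the core loop behind deep learning, gradient descent, on a toy model. The data, dimensions and learning rate below are illustrative, not taken from any real system:

```python
import numpy as np

# The "simple rule repeated many times": gradient descent on a toy model.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))             # toy input data
true_w = np.array([1.5, -2.0, 0.7, 3.0])  # structure hidden in the data
y = X @ true_w + rng.normal(scale=0.1, size=256)

w = np.zeros(4)   # start from a blank model
lr = 0.05         # learning rate

for step in range(500):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad                    # the simple, repeated update rule

print(w)  # after many repetitions, w has absorbed the structure of the data
```

The update rule itself never changes; the complexity accumulates in the weights purely through repetition over data.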
The Progress of Large Language Models (LLMs)
When LLMs first emerged, the main goal was predicting the next word or token; however, LLMs have come a long way since. Claiming that LLMs are merely next-token predictors is a gross misunderstanding. LLMs are more than capable of reasoning, understanding context and drawing analogies between seemingly unrelated concepts, showcasing creativity and deeper comprehension. This ability to encode common structures and analogies into their weights allows LLMs to go beyond mere repetition, potentially advancing human knowledge and creativity as they evolve.
I gave ChatGPT this prompt: “write me a short story using the four words here son, egg, travel, blue.” Notice how unrelated those four words are. ChatGPT came up with a short story containing elements of curiosity, excitement, hope and determination. This is not just next-token prediction or autocomplete. It shows how complex these models have become, and how they are able to bring cognition and context together. See the story ChatGPT generated at the end of this article.
At the moment, the computation needed for Large Language Models (LLMs) like GPT is immense, because they operate by converting text into numerical representations called vectors, which are then processed through matrix operations within the hidden layers of a neural network. While this appears closer to how the human brain processes language, it is a seriously compute-intensive task.
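To illustrate the mechanics, and why the compute adds up, here is a heavily simplified sketch of the text-to-vector-to-matrix-operations pipeline. The vocabulary, dimensions and random weights are made up for illustration; real LLMs use vocabularies of tens of thousands of tokens, thousands of dimensions and dozens of layers:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["the", "blue", "egg", "son", "travel"]   # toy vocabulary
d_model = 8                                       # real models use thousands

E = rng.normal(size=(len(vocab), d_model))        # token embedding matrix
W_hidden = rng.normal(size=(d_model, d_model))    # one hidden layer of weights
W_out = rng.normal(size=(d_model, len(vocab)))    # projection back to the vocabulary

token_id = vocab.index("blue")
x = E[token_id]                   # text -> numerical vector
h = np.tanh(x @ W_hidden)         # matrix operations in the hidden layer
logits = h @ W_out                # one score per candidate next token

probs = np.exp(logits) / np.exp(logits).sum()     # softmax into probabilities
print(dict(zip(vocab, probs.round(3))))
```

Every generated token repeats these matrix multiplications across every layer, which is where the enormous compute bill comes from.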
The Efficiency of Biological Brain vs ANNs
The energy consumption of current Artificial Neural Networks (ANNs) is unsustainable. For instance, training GPT-4, which required significant computational power, consumed about 7,200 MWh over five to six months.
In contrast, the human brain, during intense mental activity, consumes just 20-25 food calories per hour, not to mention its tiny physical footprint relative to the amount of analysis it can perform. If we assume an average requirement of 2,000 calories per day, the human brain consumes around 20% of this, i.e. about 400 calories per day. That works out to around 16-17 calories per hour, or merely around 19-20 watts of continuous power.

This efficiency is possible because neurons in the brain communicate with each other by means of spikes, short voltage pulses that propagate from one neuron to another. The average spiking activity is estimated to be around 10 Hz (i.e. a spike every 100 ms). This yields very low power consumption, since activity in the brain is extremely sparse.
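Working through that arithmetic, and setting it against the GPT-4 training figure quoted above (both numbers are the ones cited in this article, used purely as a back-of-the-envelope comparison):

```python
# Back-of-the-envelope comparison, using the figures quoted above.
KCAL_TO_JOULES = 4184          # one food calorie (kcal) in joules

brain_kcal_per_day = 0.20 * 2000                # ~20% of a 2,000 kcal diet
brain_kcal_per_hour = brain_kcal_per_day / 24   # ~16.7 kcal per hour
brain_watts = brain_kcal_per_day * KCAL_TO_JOULES / 86_400   # joules per second
print(f"Brain: ~{brain_watts:.0f} W continuous")             # ~19 W

gpt4_training_joules = 7200 * 1e6 * 3600        # 7,200 MWh in joules
brain_seconds = gpt4_training_joules / brain_watts
print(f"GPT-4 training energy ~= {brain_seconds / (86_400 * 365):,.0f} brain-years")
```

By this rough estimate, a single GPT-4 training run consumed as much energy as a human brain uses in tens of thousands of years.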
While we want to achieve Artificial General Intelligence (AGI) comparable to that of biological beings, achieving it efficiently is still far off. For AGI to come closer to the human brain, we would have to do a great deal on the efficiency front: the human brain does not consume megawatts of energy to perform intense tasks. This stark contrast highlights the need for more energy-efficient AI models and more energy-efficient compute.
Moving Towards Energy-Efficient AI: Third-Generation Neural Networks
The current development of second-generation artificial neural networks (ANNs) is primarily focused on deep learning, often overlooking the energy requirements involved. Ongoing research into energy-efficient third-generation networks is exploring Spiking Neural Networks (SNNs), which more closely mimic biological systems. SNNs communicate through discrete spikes or pulses, with the timing of these spikes carrying information, unlike today's ANNs, which use continuous activation functions to compute outputs from the weighted sum of their inputs. This spiking method of information transmission mirrors how biological neurons operate, responding to discrete events that occur at specific times.
One of the significant advantages of SNNs is their energy efficiency. Current ANNs must recompute the entire network for every input, making them extremely energy-intensive when processing new information. In contrast, SNNs, like biological systems, generate spikes only when necessary, leading to sparse activity and significantly reduced energy overhead.
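To show what "spiking only when necessary" looks like, here is a minimal leaky integrate-and-fire (LIF) neuron, a common SNN building block. The time constant, threshold and input statistics below are illustrative choices, not tuned to biology:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. It does nothing unless its
# membrane potential crosses the threshold -- this is where sparsity comes from.
rng = np.random.default_rng(2)

tau = 20.0        # membrane time constant (ms)
v_thresh = 1.0    # firing threshold
v_reset = 0.0     # potential after a spike
dt = 1.0          # simulation timestep (ms)

v = 0.0
spike_times = []
inputs = rng.uniform(0.0, 0.12, size=200)   # random input current, one per ms

for t, i_in in enumerate(inputs):
    v += dt * (-v / tau + i_in)   # leak towards rest, integrate the input
    if v >= v_thresh:             # discrete event: emit a spike
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes in 200 ms: {spike_times}")
```

An ANN unit would produce an output at every one of the 200 timesteps; this neuron fires only a handful of times, and in event-driven hardware the silent steps cost almost nothing.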
Neuroplasticity & Neuromorphic Computing: An Alternative
Training Spiking Neural Networks (SNNs) presents unique challenges compared to traditional Artificial Neural Networks (ANNs) due to their fundamentally different nature. Unlike ANNs, which often rely on supervised training with labelled data, SNNs require specialised training methods. One such method is Spike-Timing-Dependent Plasticity (STDP), a biological learning rule inspired by neuroplasticity. STDP adjusts the weights of synapses based on the precise timing of spikes from pre- and post-synaptic neurons. This concept of neuroplasticity, the brain's ability to adapt and form new neural connections, has inspired advancements in AI, leading to the development of more adaptable and efficient learning algorithms.
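A minimal sketch of the STDP rule just described: the change in a synapse's weight depends on the relative timing of the pre- and post-synaptic spikes. The amplitudes and time constant below are illustrative:

```python
import numpy as np

# Pair-based STDP: if the pre-synaptic spike precedes the post-synaptic one
# (it plausibly helped cause it), strengthen the synapse; otherwise weaken it.
# Amplitudes and time constant are illustrative choices.
A_PLUS, A_MINUS = 0.010, 0.012   # potentiation / depression amplitudes
TAU = 20.0                       # STDP time constant (ms)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                               # pre fired before post -> potentiate
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)       # post fired before pre -> depress

for dt in (2, 10, 50, -2, -10):
    print(f"dt = {dt:+d} ms -> dw = {stdp_dw(0, dt):+.5f}")
```

Notice that the rule is purely local: it needs only the two spike times at one synapse, with no global error signal or labelled data, which is part of what makes it attractive for neuromorphic hardware.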
Additionally, SNNs can be trained through reward-based learning (Reinforcement Learning), similar to how the brain learns from rewards and punishments to optimise behaviours. Other techniques, such as pruning unused neurons, mimic the brain’s natural optimisation processes.
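Pruning can be sketched just as simply: connections whose weights stay near zero contribute little and can be removed, loosely mirroring how the brain trims unused connections. A toy magnitude-based example (the threshold choice is illustrative):

```python
import numpy as np

# Toy magnitude-based pruning: zero out the weakest half of the synapses.
rng = np.random.default_rng(3)
weights = rng.normal(scale=0.5, size=(6, 6))   # a small synaptic weight matrix

threshold = np.quantile(np.abs(weights), 0.5)  # cutoff = median magnitude
mask = np.abs(weights) >= threshold            # keep only the stronger synapses
pruned = weights * mask

print(f"Kept {mask.sum()} of {mask.size} synapses")
```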
Training SNNs on traditional, energy-intensive GPU architectures is often impractical due to their unique operational characteristics. To address these challenges, Neuromorphic Computing has emerged, creating hardware designed to replicate the properties of biological neurons. Neuromorphic chips, such as IBM's TrueNorth and Intel's Loihi, represent a significant shift towards localised memory and signal processing, promising major advancements in AI efficiency by enhancing processing capabilities while reducing energy consumption.
What Next?
The convergence of AI and neuroscience holds exciting possibilities, such as personalised learning, advanced robotics, and sophisticated brain-computer interfaces (BCIs). However, challenges remain, including limited understanding of neuroplasticity, ethical considerations, and computational complexity.
In conclusion, the tech industry's pursuit of more advanced AI is significantly hindered by several challenges, power consumption chief among them. While research into Spiking Neural Networks (SNNs), Neuromorphic Computing and related fields is promising, achieving the efficiency of biological brains (capable of complex tasks on minimal energy from food calories) remains a formidable goal. We may get closer to this ideal, but without overcoming these hurdles, Artificial General Intelligence (AGI) as we envision it will remain on paper.
-- end --
ChatGPT prompt: “write me a short story using the four words here son, egg, travel, blue.”
Output:
Once upon a time, in a small village surrounded by vast, blue mountains, there lived a young boy named Arlo. Arlo was an adventurous son of a humble farmer, and he always dreamed of exploring the world beyond the mountains. One sunny morning, while collecting eggs from the chicken coop, Arlo discovered a peculiar egg, unlike any he had ever seen. It was larger, with a faint blue tint that shimmered in the sunlight.
Curiosity piqued; Arlo decided that this egg was a sign to embark on the journey he had always dreamed of. With his father’s reluctant blessing, Arlo packed his belongings and set out to travel across the blue mountains.
As he travelled, he encountered various wonders and challenges, but he kept the blue egg close, believing it to be his good luck charm. Days turned into weeks, and Arlo ventured farther than he ever imagined possible.
One evening, as the sun dipped below the horizon, the blue egg began to glow. Astonished, Arlo watched as it cracked open, revealing a tiny, magical creature with iridescent blue feathers. The creature, a mythical bird known as a Phoenix, chirped happily and nuzzled Arlo.
Realizing the Phoenix was a gift of companionship and guidance, Arlo felt a newfound sense of purpose. Together, they continued their journey, exploring the world and discovering the many marvels it had to offer. Through their travels, Arlo learned not only about the vastness of the world but also about the strength and courage that lay within him.
And so, with his Phoenix companion by his side, Arlo’s dream of adventure became a reality, filled with wonder and endless possibilities, all sparked by a single, extraordinary blue egg.