Why Are We Growing Brains in Labs to Power AGI?

Introduction

Imagine a world where computers don’t just follow commands but can think, adapt, and learn almost like a human brain. This is the dream of Artificial General Intelligence (AGI): a form of AI that doesn’t just simulate human intelligence but can achieve it, making decisions as naturally as we do. However, despite our rapid advancements in AI, traditional computers struggle with one crucial limitation: they process information in straightforward, linear ways, which limits their ability to mirror the flexible, adaptive processing of the human brain.

Enter the world of Biocomputing, where scientists have begun to look beyond silicon and circuits. In this pursuit, FinalSpark, a groundbreaking biotech company, is working on developing bioprocessors from lab-grown human brain organoids. These organoids – clusters of neuron-like cells cultivated in the lab – offer a unique ability to store, process, and learn from information organically. Imagine a computer that, instead of just calculating numbers, adapts like a brain. Biocomputing with organoids could provide the critical path forward, potentially unlocking AGI by allowing computers to evolve in their problem-solving, just as humans do. In this exciting journey, FinalSpark’s work may just be the breakthrough humankind has been waiting for, bridging biology and technology to create a future where computers think and feel.


Biocomputing

Defining Biocomputing in a few sentences remains a challenging task. It is an emerging field, and, as usually happens with new technological areas, it takes time for a shared definition and understanding to settle.

So, how do we know if a given system combining computation and biological matter is biocomputing or not?

According to engineers at FinalSpark, the intention of the computing makes the difference. If the intention to use biological matter in combination with computer technology is to observe a biological process, then it’s not biocomputing. On the other hand, if the intention is to solve a mathematical problem (for example, to build logic gates), it’s biocomputing.

The development of biocomputers is essential for advancing Artificial Narrow Intelligence (AI that is less capable than human intelligence) into Artificial General Intelligence (AI that is as capable as human intelligence). This is because the development of AI on digital processors is reaching a dead end.

Since the beginning of AI research, roughly 50 years ago, the leading research strategy has fundamentally relied on digital computing, and very few alternative approaches have been actively pursued. Among those alternatives, one could cite analog electronic implementations of artificial neural networks and liquid state machines (reservoir computing) built with real water or with biological neurons.


Three basic reasons can be given to explain why AI research mainly relies on digital computers:

  • Flexibility: digital computing makes experiments easy to design and run, which makes it an effective tool when the goal is to write publications.
  • Proven track record: Digital computing has proven to be effective in many automation tasks, and cognitive processes can be considered as yet another automation task.
  • Understandability: the computer engineer understands what is going on when writing and using a computer program. This point can be debated, however, since understandability has been lost for some classes of approaches. For instance, a deep learning artificial neural network is usually seen as a black box, because its internal computing processes are too difficult for a human being to follow. Another example is genetic programming, where the algorithm itself is invented by the computer and can sometimes hardly be understood by a human being.

This research has produced a number of useful tools, like Bayesian networks, fuzzy logic, or artificial neural networks, to name a few, but these are just tools. Intelligence itself remains elusive: each time one successfully automates a cognitive process (like playing chess or Go, or recognising pictures), we realise we did not get any closer to intelligence.


Let’s consider the most successful tool used in AI: Artificial Neural Networks (ANNs). No serious researcher in the field regards them as a realistic approach to building AGI. ANNs mimic the behaviour of the human brain at the software level, but the digital processors running them, which can only interpret information as binary (0s and 1s), consume a great deal of energy and resources in replicating the functions of the human brain. For example, to train a program to distinguish a dog from a cat, we have to feed thousands of images into the training data set; our brain, with the help of its neurons, can learn the same distinction from only a few images. This example shows both how far today’s AI is from AGI and how many more resources AI needs than the human brain to perform even simple tasks.
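
To make this concrete, here is a minimal sketch, in plain Python with NumPy, of the kind of software neural network described above (the XOR dataset, network size, and learning rate are illustrative choices, not anything from FinalSpark): every “neuron” is just an array of floating-point weights, and learning even a trivial mapping takes thousands of arithmetic-heavy passes over the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a mapping a single artificial neuron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 "neurons": each is just a column of weights.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Thousands of full passes over the data for a task this trivial: an
# illustration of how compute-hungry learning in software is.
for _ in range(5000):
    h = sigmoid(X @ W1)                   # hidden activations
    out = sigmoid(h @ W2)                 # network prediction
    delta2 = (out - y) * out * (1 - out)  # output-layer error signal
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ delta2)            # gradient-descent updates
    W1 -= 0.5 * (X.T @ delta1)

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # should approach [0, 1, 1, 0]
```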

At FinalSpark, they use living cells, specifically human neurons derived from induced pluripotent stem (iPS) cells, to perform computations. The research aims to store data and perform logic operations using neurons as circuits.

They have taken the approach of using neurospheres: artificially created 3D structures of living neurons. These do not resemble the human brain in any form except for their building blocks, neurons. The main objective of FinalSpark’s R&D is to make it possible to use neurons as computation units.

[Image: a neurosphere]

How It Works

Our brains are fascinatingly powerful. They’re quick, adaptable, and capable of learning without needing tons of power or complicated instructions. Neuromorphic processors, often called neuron-based processors, try to mimic these brain qualities. These processors are designed to work like biological neurons and synapses in the brain. So, how exactly do they work, and why are they so exciting? Let’s dive into the main ways these processors operate.

  1. Collocated Processing and Memory: In most computers, the processor (which does the calculations) and the memory (which stores data) are in separate areas. This layout can slow things down because the computer constantly has to move data back and forth. Neuromorphic processors do things differently: every neuron (a tiny processing unit) both processes and stores data at the same time. This integration removes the “von Neumann bottleneck” seen in regular computers, allowing neuromorphic processors to run faster and with less delay.
  2. Spiking Neural Networks (SNNs): One of the biggest differences between neuromorphic processors and traditional processors is the use of Spiking Neural Networks, or SNNs. These networks use “spikes” to send information. Imagine a neuron that collects tiny charges over time; once it collects enough, it sends a spike, or message. This system allows neuromorphic processors to process information only when needed, saving energy and adding efficiency. SNNs work more like real brains, which don’t constantly fire off signals but instead respond to triggers (see the code sketch after this list).
  3. Analog Circuitry and Synaptic Devices: Instead of binary code (0s and 1s), neuromorphic processors use analog signals to carry information, resembling the electrical signals in our brains. This setup relies on artificial synaptic devices that act like biological synapses to connect neurons. Rather than just using “on” or “off” states, these synapses can take on a range of values, allowing them to hold more information in a smaller space.
  4. Massively Parallel Processing: Neuromorphic processors are designed to handle massive numbers of tasks at the same time, just like the brain. Imagine millions of neurons working together, each performing a different task all at once. This “parallel processing” power allows the chip to process more information at once, making it capable of handling complex tasks that would slow down traditional processors.
  5. Event-Driven Computation: Another cool feature of neuromorphic processors is that they’re “event-driven.” Instead of constantly working, they only fire up when triggered by a spike. This means they use energy only when necessary. If a neuron isn’t triggered, it stays inactive, which significantly cuts down on power consumption. This method is more efficient and is a big reason why neuromorphic processors are so energy-conscious.
  6. Adaptability and Plasticity: One of the brain’s greatest strengths is adaptability or “plasticity.” Neuromorphic processors are built to mimic this quality, adapting based on the tasks they’re handling. Neurons and synapses in these chips can adjust their responses, change connections, and learn from past interactions. This flexibility helps neuromorphic processors solve new problems and quickly adjust to different environments.
  7. Fault Tolerance: In traditional computers, if a part fails, it often affects the whole system. Neuromorphic processors, however, are built with a high level of fault tolerance. Information is stored across multiple neurons, so if one neuron fails, others can continue processing without major issues. This resilience is similar to how the brain compensates for damaged areas.
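
Points 2 and 5 above are easiest to see in code. The sketch below assumes the textbook leaky integrate-and-fire model, not FinalSpark's wetware or any specific neuromorphic chip: the neuron accumulates incoming charge, leaks a little of it each step, and produces output only when its threshold is crossed.

```python
import numpy as np

def simulate_lif(input_current, threshold=1.0, leak=0.95):
    """One leaky integrate-and-fire neuron over a trace of input current."""
    v = 0.0           # membrane potential
    spike_times = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in   # leak a little, then integrate the input
        if v >= threshold:    # threshold crossed: fire a spike...
            spike_times.append(t)
            v = 0.0           # ...and reset the membrane potential
    return spike_times

# Sparse, event-like input: mostly silence with occasional pulses of current.
rng = np.random.default_rng(1)
current = np.where(rng.random(100) < 0.15, 0.6, 0.0)

# The neuron only "does work" (spikes) when enough charge has accumulated;
# during the silent stretches there is nothing to compute.
print("spike times:", simulate_lif(current))
```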

Neuron-based computers would hence solve many of the problems we currently face with digital processors. In short, they are more powerful, consume less power, adapt well to changing conditions, do not require hard-coded, complicated instructions to function, and, thanks to parallel processing, can keep working even if part of the organoid is damaged or stops working.


Importance of Biocomputing

Let’s talk numbers to get a better sense of biocomputing versus digital computing, and of which might be more suitable for AI.

Data centres, which house the high-powered servers essential for AI models, are massive energy consumers and contribute substantially to the global carbon footprint. These facilities provide the processing power needed for tasks like training AI algorithms and analysing vast datasets, a process heavily reliant on cloud computing infrastructure and the sophisticated chips within these servers. For AI companies like OpenAI, these resources are vital to run advanced models, though they come at a significant environmental cost.

The study “Carbon Emissions and Large Neural Network Training” showed that training GPT-3, with its 175 billion parameters, consumed 1,287 megawatt-hours of electricity and produced 502 tons of carbon dioxide emissions, comparable to the emissions generated by 112 gasoline-powered cars over a year. Google, on the other hand, estimated GPT-3’s carbon footprint at about 8.4 tons of CO2 per year. The energy source powering data centres significantly influences their carbon emissions: facilities powered by coal or natural gas generate far higher emissions than those relying on renewable sources like solar, wind, or hydroelectricity. This variability in energy sourcing makes it challenging to calculate precise emissions for each data centre.
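
As a quick sanity check of the car comparison, assuming an average of about 4.6 tons of CO2 per gasoline car per year (a commonly cited figure, not one from the study):

```python
# Back-of-envelope check of the figures quoted above.
training_emissions_t = 502        # tons of CO2 for GPT-3 training
car_emissions_t_per_year = 4.6    # assumed annual emissions per gasoline car

print(round(training_emissions_t / car_emissions_t_per_year))  # ~109 car-years, near the 112 quoted
```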

In addition to carbon emissions, recent research from the University of California, Riverside, has highlighted the substantial water footprint of AI models like GPT-3 and GPT-4. Microsoft reportedly consumed approximately 700,000 litres of freshwater in its data centres to train GPT-3 alone, a volume comparable to the water required to produce 370 BMW cars or 320 Tesla vehicles. This high water usage stems from the significant energy demand of the training process, which generates considerable heat; to maintain optimal operating temperatures and prevent equipment from overheating, large volumes of freshwater are needed for cooling. Beyond training, even routine inference tasks (like generating text responses) consume water: a conversation of 20-50 questions consumes roughly the equivalent of a 500 ml bottle of water. Given ChatGPT’s extensive user base, the cumulative water footprint of these operations is considerable.
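
Using only the figures quoted above, the training footprint works out to an enormous number of conversation-equivalents:

```python
# Scale of the water figures, using only numbers from the text.
training_water_l = 700_000        # litres reportedly used to train GPT-3
water_per_conversation_l = 0.5    # one 500 ml bottle per 20-50 question chat

print(f"{training_water_l / water_per_conversation_l:,.0f} conversations")  # 1,400,000
```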

One of the biggest advantages of biological computing is that neurons process information with far less energy than digital computers. It is estimated that living neurons can use about a million times less energy than the digital processors we currently use. Compared with the best computers in the world today, such as Hewlett Packard Enterprise’s Frontier, the human brain achieves approximately the same speed with 1,000 times more memory while using 10-20 W, versus the computer’s 21 MW. This is one of the reasons why using living neurons for computation is such a compelling opportunity. Apart from possible improvements in AI model generalisation, we could also reduce greenhouse gas emissions without sacrificing technological progress.
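
The wattage figures above already bear out the “million times” claim:

```python
# Ratio of Frontier's power draw to the brain's, from the figures above.
brain_power_w = 20         # upper estimate for the human brain
frontier_power_w = 21e6    # Frontier's reported draw, 21 MW

print(f"~{frontier_power_w / brain_power_w:,.0f}x the brain's power draw")  # ~1,050,000x
```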


Ethical Implications of Biocomputing

The advent of biocomputers promises transformative impacts across society, yet it also raises pressing ethical and regulatory concerns. The integration of biocomputing into daily life could result in profound shifts within the job market, potentially displacing jobs and redefining required skills as traditional computing roles evolve. Moreover, the power of biocomputers, especially if they progress toward artificial general intelligence, introduces the risk of misuse, from unauthorised data manipulation to unethical experimentation. This highlights the need for robust governance structures, enforceable regulations, and well-defined laws to guide the development and application of biocomputers responsibly. To safeguard ethical usage, it is essential to maintain substantial human oversight, ensuring these systems remain aligned with human values and purposes. Establishing clear control over biocomputers, with well-defined limitations, will be crucial to prevent potential negative impacts and support a societal shift that benefits humanity.


Is AGI the Dead End of Human Intelligence?

The idea that Artificial General Intelligence (AGI) might mark humanity’s final major technological achievement is a topic of intense discussion, stirring both excitement and concern. AGI, if realised, holds the potential to reshape nearly every area of human life, possessing abilities to tackle complex problems, understand and interpret natural language, adapt to unforeseen situations, automate sophisticated tasks, and deepen the human-machine partnership. This level of impact could catalyse unprecedented progress across sectors like healthcare, manufacturing, finance, and scientific research, potentially offering solutions to some of our most critical global challenges.

One of AGI’s defining features is its capacity for autonomous learning, adaptation, and application of knowledge across diverse fields. This capability could enable AGI to innovate and self-improve without needing human guidance, possibly triggering exponential growth in technological progress. If AGI reaches a stage where it can refine itself at an accelerating pace, it may lead to what is known as a technological singularity—a point where the speed of technological change outpaces human comprehension and adaptation.

Yet, despite AGI’s promise of automating complex functions and propelling innovation, it is unlikely to entirely replace human creativity and insight. Human oversight will still be vital in setting ethical standards, managing AGI’s societal impact, and addressing moral complexities. Consequently, the human role is expected to shift towards higher-level tasks that demand emotional intelligence, ethical judgment, and interpersonal skills, ensuring that people remain essential drivers of technological advancement.

To conclude, AGI could indeed be a landmark innovation with transformative effects, but it will not mark the end of human technological progress. Instead, AGI will likely act as a powerful force driving further innovation, with humans continuing to shape and steer its development, regulation, and applications for the benefit of society.




Conclusion

As we explore the boundaries of organoid-based computing, we stand on the brink of what could be the next great technological breakthrough—one that not only enhances our capabilities in AI but also offers more sustainable and efficient solutions than traditional silicon-based systems. However, with such transformative potential also comes immense responsibility. Ethical governance, societal readiness, and rigorous safeguards must be prioritised to ensure that biocomputers, particularly as they approach Artificial General Intelligence, remain a force for positive change. By embracing biocomputing with foresight and caution, we can pave the way toward a future where technology and biology harmoniously coexist, driving innovation that is both powerful and ethically grounded. The journey is just beginning, but the possibilities are as vast as the human imagination.

要查看或添加评论,请登录

Trinabh Marwah的更多文章

社区洞察

其他会员也浏览了