Analog Superintelligence

Neuromorphic Systems for Cognitive Artificial Intelligence

This excerpt was previously released on Medium.

This is an excerpt from the upcoming fourth book in the Artificial Superintelligence Handbook series, currently available on Amazon, which details the designs used in building and developing the world’s most advanced Cognitive Artificial Intelligence systems, systems intended to surpass human cognition. The fourth book in the series will be released in spring 2022.

by Rob Smith, eXacognition — Director Advanced Cognitive Artificial General Intelligence

Building human-level Cognitive AI requires bridging the gap between binary electrical systems, their digital representations, and our constantly fluid and variable human perception. The variable, dimensional nature of cognition is what gives humans the power to deeply comprehend elements like relevance to changing context, and it is context that gives us the ability to solve problems, communicate, contemplate, feel empathy, create, innovate and evolve, all as part of a continuously flowing cognition. Human cognition also does this using ultra-low power and ultra-low resource levels, while responding to complex stimuli almost instantly. So how do we achieve a consistent level of cognition, as well as consistent resource and power efficiency, in a Cognitive AI? The answer is that we do it by building an infrastructure optimized for ‘human-level’ artificial cognition.

This quest has resulted in some of the leading tech companies in the world (and a few labs like ours) exploring holistic embedded AI designs across the entire software and hardware infrastructure to optimize it for Cognitive AI. In particular, one of our current dev branches extends the Cognitive Artificial Intelligence roadmap into development areas like Neuromorphic tech and other processor designs (see the chapter on ERRIS systems). Neuromorphic systems are analog-based (i.e. variable-level) systems that attempt to build circuits that mimic the variable neural structures of the human mind for use in sensory detection and processing (i.e. computer sensing). This differs from digital systems (dual-signal processing) and has now developed into a symbiosis of analog and digital process signaling that is more consistent with Cognitive AI design and with theories of human intelligence like Cognitive Wave Theory. Such new ‘on-chip’ designs are not a stretch for companies like Google and NVIDIA, who already produce purpose-built infrastructure in areas like graphics rendering and processing (GPUs for autonomous machines) and machine learning (TensorFlow).
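
As a rough illustration of the analog-meets-digital idea (a toy example of my own, not a description of any vendor’s silicon), event-based neuromorphic sensors typically report changes in a continuous signal rather than sampling it at a fixed rate. The sketch below emits an event only when the input drifts past a threshold, which is one reason such sensing can stay sparse and low power. The names, threshold and values are illustrative assumptions.

```python
def to_events(samples, threshold=0.1):
    """Convert a continuously varying signal into sparse change events.

    Emits (index, +1) when the signal has risen by `threshold` since the
    last event and (index, -1) when it has fallen by the same amount,
    mimicking the sparse output of an event-based (neuromorphic) sensor.
    """
    events, reference = [], samples[0]
    for i, value in enumerate(samples[1:], start=1):
        while value - reference >= threshold:
            reference += threshold
            events.append((i, +1))
        while reference - value >= threshold:
            reference -= threshold
            events.append((i, -1))
    return events

signal = [0.0, 0.05, 0.2, 0.45, 0.4, 0.1]
print(to_events(signal))   # only the changes are reported, not every sample
```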

The truth is that today’s binary chip and hardware architectures, and their digital overlays, are simply not optimized for advanced Artificial Intelligence. As new cognitive systems move from the design lab into real-world products (e.g. Google’s LaMDA foundation), these more advanced Cognitive AI systems will require greater processing speed in a smaller, more efficient format for intelligent stand-alone autonomous systems (e.g. cars or robots). This has resulted in holistic AI designs that span infrastructure such as memory, signaling (i.e. communication) and processing, as well as code, so that the whole of the AI development life cycle (AIDLC) is optimized for human-level Artificial Cognitive Intelligence. Most of the largest tech companies in the world are researching, designing and building new hardware in areas of optimized AI infrastructure, and most of them now consider their roadmap a pathway to artificial ‘human-level’ cognition.

The Rise of Neuromorphic Systems

The concept of Neuromorphic systems has been around since the 1980s. It relies on the inclusion of analog circuits as part of the foundation of digital electronic infrastructure, and it was originally intended to mimic the neurobiology of human cognition. I have discussed the nature of human bio-electro-mechanical physiology in cognition in earlier ASIH books. The short version is that we humans use proteins, neural synapses and faint, light-speed electrical fields to perform every act of human cognition, from keeping our heart beating to contemplating the meaning of life. In doing so we are decisively not binary. While this foundation permits us to possess an ultra-optimized fluid cognition in an ultra-low-powered and ultra-fast self-contained unit that can interact with other such units instantaneously, mapping this whole ‘human thing’ into a binary world of on/off light switches is fundamentally incompatible. It can be done to some degree, but a complete build will never be realized unless we try something new. One of these ‘new’ things is Neuromorphic systems. If you can’t beat it, why not join it?

The recent resurgence of interest in Neuromorphic systems has less to do with advancements in the technology over the last decade, including advancements by corporations like Intel, and more to do with changes in the development landscape around cognitive computing. The current push for human-level artificial cognition has exposed deep rifts in the current infrastructure and even in the very software foundations that our current AI systems reside on. Most of these old architectures were designed exclusively around the notion of a binary electronic circuit and its digital representation. This structure lent itself well to basic binary information processing, since the foundation itself, that of solid-state circuits, was also binary in nature (positive and negative charge). However, today we build advanced AI systems that require more than simplistic on/off gate-style circuitry, and even the promise of quantum circuits is not enough to power an artificial ‘human-level’ general cognition that can flow with the stimuli of a world full of constantly changing instigation. While current incarnations of Neuromorphic systems were designed exclusively to mimic physical elements of human sensory perception, like the human eye, newer designs seek to optimize the power behind sensory interpretation, that of human-level cognition itself.

Neuromorphic Processors

While neuromorphic systems have been considered and researched for the past couple of decades, only recently have companies like Intel dug deeper into the nature of neural structures and variant power architectures, with their Spiking Neural Net (SNN) work back in 2017. This included the Loihi test chip, optimized for these networks and therefore for Artificial Intelligence, specifically Cognitive AI. Intel has since increased the number of processing cores on the Loihi chip to improve performance. Their approach is to use binary electrical systems to mimic analog waves without the additional energy or performance loss that is an integral problem with traditional analog systems. However, use of the new architecture is still at an early stage as Intel software architects work to optimize code and functionality for the new chip. If Intel can successfully bridge the gap from binary AI to fluid Cognitive AI, they will be well ahead of the pack in raw processing power optimized for variable, flowing, artificial human-level cognition, but only if they can comprehend the true nature of wave-like human cognition.
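
To make the spiking idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron simulated in discrete time. This illustrates the general SNN principle only; it is not Intel’s Loihi hardware or toolchain, and the function name, parameters and values are hypothetical.

```python
import numpy as np

def simulate_lif(input_current, threshold=1.0, leak=0.9, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: array of input drive per timestep (arbitrary units).
    Returns an array of 0/1 spikes, one per timestep.
    """
    potential = 0.0
    spikes = np.zeros_like(input_current)
    for t, current in enumerate(input_current):
        # Membrane potential leaks toward zero while integrating the input.
        potential = leak * potential + current * dt
        if potential >= threshold:
            spikes[t] = 1.0   # emit a spike
            potential = 0.0   # reset after spiking
    return spikes

# Example: a weak constant drive produces sparse, regular spikes.
drive = np.full(50, 0.15)
print(simulate_lif(drive).sum(), "spikes in 50 steps")
```

The point of the sketch is that information is carried by the timing and sparsity of events rather than by dense numeric activations, which is where the claimed power efficiency of spiking hardware comes from.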

Another company digging into Neuromorphic tech is Samsung, with their nanoelectrode array designed to map the human brain. The topology of the Samsung array is pushing into the multidimensional designs that are the foundation of Cognitive AI methods such as multiangulation and other context-mapping structures. They are implementing these technologies to accelerate the move toward the fundamentals of human-level cognition, including proxy context, ultra-low-power processing, complex contextual cognition and more. Samsung hopes to use this technology to map the function of neurons inside the human mind and successfully duplicate a human mind inside a machine. Their use of the nanoelectrode array to capture and deliver signals from the human brain is consistent with other neural implants I have discussed throughout the ASIH series. Of course, the challenge for Samsung, as it is for most Neuromorphic technology developers, is designing code that effectively leverages the new infrastructure. To achieve maximum optimization, the code of current AI must be retooled for human-level cognition, that is, for fluid variance. To this end, companies involved in Neuromorphic tech are attempting to comprehend how human cognition implements neural connections and power architectures to stimulate the proteins that carry and instigate near-instantaneous perceptual response to stimuli. While the ‘mapping’ of a human mind into a machine is still very far off, a much closer goal exists for Samsung and Intel: the achievement of Artificial General Intelligence, the first step on the path to full Cognitive AI.

The key to designing and building Cognitive AI software on Neuromorphic infrastructure is to comprehend that cognition is a fluid, constantly changing perception of relationships that are weighted probabilities of context, including the context of our own self-awareness. We build this into a Cognitive AI by setting weights to represent the relationships between elements (both the physical and the cognitive elements of a perception) and the layers of flowing contextual dimensions that surround a perception, as well as the relevance of these relationships to specific context. The ‘levels’ are multidimensional ranges captured and set by algorithms, and they represent the probabilities of relevance, occurrence, space, time and so on. Moving these ‘weights’ changes dimensional perception, and this impacts future context. It is important to realize that converting the values of relationships into representations is how we bridge the gap between binary systems and real-world, analog-style variance range settings (doubly so when you realize that these flow levels rise and fall like a wave, and do so within cascades). The output of this variance is perception that flows across and through dimensions such as time (I discuss this structure in greater detail in earlier chapters).
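
As a rough sketch of the weighting idea described above (my own simplified illustration under assumed names, not the production design from the book), the code below stores weighted relationships between elements per contextual dimension and recomputes a relevance score as the surrounding context shifts.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    # Weighted links to other elements: probability-like relevance per context dimension.
    links: dict = field(default_factory=dict)   # {other_name: {dimension: weight}}

def relevance(element, other, context):
    """Score the relationship between two elements under the current context.

    context maps dimension names (e.g. 'occurrence', 'time', 'space') to how
    strongly that dimension matters right now (0..1).
    """
    weights = element.links.get(other, {})
    return sum(weights.get(dim, 0.0) * strength for dim, strength in context.items())

rain = Element("rain", links={"umbrella": {"occurrence": 0.8, "time": 0.3}})

# The same stored relationship scores differently as the flowing context changes.
print(relevance(rain, "umbrella", {"occurrence": 1.0, "time": 0.2}))  # 0.86
print(relevance(rain, "umbrella", {"occurrence": 0.1, "time": 0.9}))  # 0.35
```

The design point is that the weights themselves are fixed, discrete values a binary machine can hold, while the perception they produce varies continuously as the context weights rise and fall.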

To Err is Binary, to Flow is Divine

The problem with today’s binary foundation is that it isn’t human. Not even close. We can replicate some of the actions and nature of human cognition in binary systems, and our lab has wrestled with such designs for the better part of two decades, but it is clear that we are reaching the limits of binary systems, especially as we extend simple Cognitive AI into Deep Cognitive AI. An example of this is the use of layered context to develop artificial response to stimuli as quickly as we humans perform such a feat. We are simply a long way from building the systems and foundation necessary for such nanosecond response to a world full of constantly flowing variance. Even the promise of quantum systems will not be enough to accomplish the task of achieving a full, self-contained and mobile human-level cognition inside an artificial entity, but there does exist a pathway to attaining this lofty goal. However, all of this is predicated on the notion that our human-perceived world is not binary but primarily analog.

This is not to say that binary systems are not relevant or useful, just that they are inefficient and limited, and that they can be greatly optimized by improved representation. The use of analog foundations inside binary digital (or quantum) systems can greatly improve the optimization of existing computers for Cognitive AI, by building the necessary perceptive variance, which requires the use of weights and probabilities to determine contextual relevance, as close to the source of power as possible (i.e. on the chip). You can think of this in terms of the design of GPUs for gaming. Optimizing motion in gaming (i.e. lifelike video motion) has always proven difficult. This is a result of game devs constantly adding additional layers of imagery and motion (calculations of representation) into games, thereby demanding greater resources and speed to attain anything near ‘fluid lifelike motion’. The answer until recently was simply to add more power and processors to the GPUs. However, as the physics of processors began to reach its limits, the GPU companies needed to think outside the GPU box. The change was to move toward optimization in rendering code, toward real-time rendering (I discuss these designs in greater depth in ASIH 3).
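
As a small, hedged illustration of the representation point (my own example, not from the book), the sketch below approximates a smooth analog-style wave with a fixed number of discrete levels and measures the error that a coarser binary representation introduces.

```python
import numpy as np

def quantize(signal, bits):
    """Map a continuous signal in [0, 1] onto 2**bits discrete levels."""
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

t = np.linspace(0, 1, 1000)
wave = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))   # a smooth 'analog' wave in [0, 1]

for bits in (2, 4, 8):
    error = np.abs(wave - quantize(wave, bits)).mean()
    print(f"{bits}-bit representation: mean error {error:.4f}")
```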

The Parallel Universe You Seek is Right Here

Slowly, major corporations around the world are beginning to comprehend the immense value of Cognitive AI and are investing billions into new research like Neuromorphic architecture. On the other side of the equation, software companies are slowly waking up to the realization that they are a key part of this change and that they too need to adapt and evolve to optimize the use of these new architectures. For now, software is behind hardware development, but this is about to change as players in Cognitive AI, from small research labs to global tech giants, seek to move rapidly along the Superintelligence roadmap. Part of this ‘bridging of the chasm’ between hardware and software involves the realization that the representation we seek to create is a world of proxy cognition filled with contextual relevance unique to our own self-awareness (I discuss proxy cognition elsewhere in the ASIH series). Cognitive AI developers are essentially creating another dimension of the physical world we occupy, including the parallel dimensions of artificial reality discussed in the ERRIS chapter. This is because our perception is a fluid motion of knowledge, evolutionary impetus and shared stimuli that define the flowing variance surrounding us. The same is true of any advanced cognition we create. Just as the vision of animals and insects creates a perception different from our own human perception, it also creates a cognition that is unique from ours. There may be consistencies between all cognition, but if you experience the world differently than I do, your perception will be different from mine.

What Neuromorphic engineering and infrastructure provide is a critical component of the Cognitive AI roadmap and, more importantly, a clear pathway to Deep Artificial Cognition and beyond to Superintelligence. It is likely that by mid-century the world will see its first truly stand-alone, self-contained narrow Cognitive Artificial General Intelligence inside an autonomous entity. While the likelihood is very high that this AI will be developed for military purposes, the tech will eventually bleed into the general market in systems, autonomous robots, vehicles and so on. There is also an immense impending benefit in a variety of other areas, including assistive devices, emergency response and sustainability.

Of course, the downside is that the foundation of such systems will ignore critical elements such as ethics and control as ‘cost inefficient’. The risk is that if these components are added ad hoc, after the fact, humans will face the threat of creating a psychopathic AI that is more cognitively advanced and powerful than our own human mind.

On the positive side, the development of Cognitive AI infrastructure, code foundations and products will be one of the largest economic and technological transformations in history, eclipsing even the wealth produced by the internet, powering human cognition to even greater heights and extending our lives significantly. Today we stand on the threshold of another technological revolution, just as the founders of Apple, Google, Amazon and Microsoft stood at the dawn of the graphical internet dreaming of innovation.

The balance of the chapter includes information on Neuromorphic Code vs Hardware, Variant Analog Architecture for Artificial Human-level Context, The Coming ‘World of Sensors’, Indication and Representation in Advanced Cognitive AI Architectures, Cognitive Wave Theory and Instigation, and more, all contained in the upcoming Artificial Superintelligence Handbook IV.


#Intel #Samsung #Google #AI #artificialintelligence #deeplearning #machinelearning #cognitiveai #superintelligence
