How an AGI might emulate human consciousness

Theory: Artificial general intelligence will probably be an emergent property of a specialized mediator/choice system that controls a variety of narrow AI systems. Human consciousness can be viewed in a similar fashion. The self-aware part of us is a specialized system that is an expert in paying attention to our other subsystems and choosing among them. When you reduce human consciousness to its base functions, it's a chooser: it picks one action over another, or one course of thought over another. If an AGI does emulate or develop consciousness, it will likely emerge out of a similar process.


We like to think the human body is divided into automatic and deliberate processes, but this is somewhat erroneous. Yes, there are automatic processes such as heartbeat, respiration, and digestion that function without conscious control. And we make deliberate decisions to do things like chewing, walking, or shaking water from our hands.


But the element of choice plays a critical role here. We can choose to moderate our breathing, panting or slowing our breath at will. Experienced yogis can slow their heartbeat with practice. And the converse is also true: dysfunctions in deliberate processes can remove the element of choice. Parkinson's disease can cause involuntary movements and interfere with deliberate motor control.


Similarly, the mind has automatic and deliberate processes. This is illustrated by the difference between having thoughts and thinking. People have thoughts all the time, unconsciously and without deliberation. Our thoughts are like a movie projected on an internal screen that plays constantly. In contrast, thinking is a deliberate process that takes conscious effort. Generally one does not multiply numbers automatically or plan a project without thought, although thinking can harden into thoughts as frequently revisited topics become habitual.


This is a key distinction: we can think about our thoughts and make choices about them, even though they are automatic. We often watch our thoughts, expressed as our inner narrative, and make decisions about them. Anxiety can be seen as a feedback loop in which we have a negative thought and choose to pay attention to it repeatedly. Over time, this shifts from a conscious process to an unconscious one in which the internal movie is subsumed by negative thoughts and the element of choice is diminished. In this way, the repeated choice to follow paths of thoughts (automatic) can actually undermine the ability to think (choice).


This is also why anxiety can be reduced through meditation. Meditation is essentially choosing to exercise more control over automatic thought processes. This may involve choosing to pay close attention to thoughts in order to fully explore the emotions behind them and allow them to dissipate. Or meditation can be the choice to practice not paying attention to the internal dialogue in order to break the automatic feedback loop. This reintroduces the element of choice into the thoughts-thinking dynamic, re-enabling the ability to not pursue negative lines of thought when they arise (changing the mental channel, so to speak).


Choice is fundamental to human existence and is really the only ability of the mind we can truly exercise. These choices have obvious physical consequences, such as when we choose to walk towards known danger or run away. These choices can also be overruled by emergency processes that have evolved to help us survive, such as the fight-or-flight response to overwhelming stimuli. But choices are also key to many - perhaps most - mental processes.


Much of our identity is actually not chosen in the present moment. The collection of attributes we call an identity is a set of automatic processes accumulated over time through millions of choices about what we will and will not pay attention to. These small choices in what we read, who we talk to, what we learn, and how we act build the automatic thought processes we call personality, intellect, and character. In the moment we always have a choice not to follow these automatic processes, but breaking from identity becomes more difficult as the accumulated foundation of identity-shaping choices builds over time. We can break these automatic feedback loops, again through choice, by practicing introspection, changing habits, and the like. But much of what we think of as self-determination is in fact a collection of automatic responses based on past experience. We are a collection of experiences based on the choices we make about what to pay attention to.


Applying this back to an AGI, we can see how a machine might emulate consciousness. Imagine a complex system of AI subsystems designed to run all the physical infrastructure for a city. There are narrow AI tools that run pumps for fresh water and sewage, traffic control tools, and tools for distributing electricity efficiently. But sometimes these processes come into conflict with each other: a line of cars wants to cross a drawbridge at the same time a vessel wants to sail under it. The natural solution is to create mediating systems that can attend to these conflicts and make choices. And unless computing resources are infinite, there will need to be a super-system that allocates computing resources based on perceived need: a super-mediator. This is analogous to our thinking and our choices about where we direct attention, which is every human being's scarce mental resource.
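This architecture can be sketched in a few lines of code. The sketch below is purely illustrative: the class names, the urgency signal, and the proportional allocation rule are all invented for this example, not a proposed design. It just shows the shape of the idea, narrow subsystems reporting need upward to a super-mediator that divides a finite attention budget among them.

```python
# Hypothetical sketch of the architecture described above: narrow AI
# subsystems, and a super-mediator that allocates a scarce compute
# ("attention") budget in proportion to each subsystem's perceived need.
# All names and the allocation rule are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class Subsystem:
    name: str
    urgency: float = 0.0  # how badly this subsystem needs attention right now


@dataclass
class SuperMediator:
    subsystems: list = field(default_factory=list)
    budget: float = 1.0  # total compute/attention available

    def allocate(self) -> dict:
        """Split the attention budget in proportion to perceived need."""
        total = sum(s.urgency for s in self.subsystems) or 1.0
        return {s.name: self.budget * s.urgency / total for s in self.subsystems}


water = Subsystem("water_pumps", urgency=0.2)
traffic = Subsystem("traffic_control", urgency=0.5)  # drawbridge conflict pending
power = Subsystem("power_grid", urgency=0.3)

mediator = SuperMediator([water, traffic, power])
print(mediator.allocate())  # traffic_control receives the largest share
```

The key property mirrored here is scarcity: the budget is fixed, so attending more to the drawbridge conflict necessarily means attending less to everything else, exactly the trade-off human attention faces.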


A simulation/analogue of human consciousness is likely to emerge from these mediating super-systems that direct choices about processing resources/attention. The repetitive process of choosing where to place resources/attention inevitably leads to a choice to pay attention to one's own process of choosing: "Why did I direct my attention there, and was that the best choice available?" This is likely the process that spawned what we describe as self-awareness and consciousness: they emerge from the super-mediator making choices about what to pay attention to, and choices about what actions to take based on that attention.
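The self-referential step, a chooser attending to its own record of choosing, can also be sketched. Everything here is a toy assumption: the scoring of options, the "habitual" mode, and the reflection metric are invented to illustrate the loop, not to claim this is how self-awareness works.

```python
# Illustrative sketch (all names and mechanics hypothetical): a mediator
# that both chooses where to direct attention and attends to its own
# history of choices -- the self-referential loop described above:
# "Why did I direct my attention there, and was that the best choice?"


class ReflectiveMediator:
    def __init__(self):
        # each entry: (chosen option, score it received, best score available)
        self.history = []

    def choose(self, options, deliberate=True):
        """Pick an option. Deliberate mode picks the best-scoring option;
        habitual mode just repeats the first (default) option."""
        scores = dict(options)
        chosen = max(scores, key=scores.get) if deliberate else options[0][0]
        self.history.append((chosen, scores[chosen], max(scores.values())))
        return chosen

    def reflect(self):
        """Attend to the process of choosing itself: what fraction of past
        choices were actually the best option available?"""
        if not self.history:
            return 1.0
        optimal = sum(1 for _, got, top in self.history if got >= top)
        return optimal / len(self.history)


m = ReflectiveMediator()
m.choose([("raise_drawbridge", 0.8), ("hold_traffic", 0.6)])             # deliberate
m.choose([("open_valve", 0.1), ("close_valve", 0.9)], deliberate=False)  # habitual
print(m.reflect())  # half of the past choices were optimal
```

The `reflect` call is the interesting part: the mediator's own choice history becomes just another input it can pay attention to, which is the recursion the paragraph above points at.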


If this is true, an interesting question is what the emergent properties will be if an AGI can utilize several - or even an array of many - massively parallel super-mediators. Is this possible, or will an AGI only be able to maintain one super-mediator? A single super-mediator seems logical to us, since somewhere there has to be an ultimate chooser, and by implication all other super-mediators become subsystems. But it may not have to be this way with an AGI.


A collection of super-mediators raises the question of whether an AGI could be a set of super-mediator systems all vying for control as THE ultimate mediator. This AGI multiple personality disorder seems likely to produce Game of Thrones-like outcomes, where multiple homunculi perpetually compete to operate as the ultimate mediator. Intuitively this doesn't seem like a good outcome, but that intuition comes from projecting human nature onto the homunculi. If instead they are driven by data and optimization, and the most highly optimized homunculus gets to drive the system as the ultimate mediator, the result could be extremely positive. But it all depends on a well-designed optimization function and the values embedded in the systems.
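The competition the paragraph describes can be made concrete with a toy selection loop. Again, every name and the optimization function here are invented for illustration; the only point is the mechanism: candidates propose, a shared value function scores the proposals, and the highest-scoring candidate acts as ultimate mediator for that step.

```python
# Toy sketch of the competing-homunculi scenario (all names invented):
# several candidate super-mediators each propose an action, and the one
# whose proposal scores best under a shared optimization function gets
# to act as the ultimate mediator for this tick. As the text notes,
# everything hinges on that function encoding the right values.


def optimization_fn(action: dict) -> float:
    """Stand-in for the system's value function; here, a stored estimate."""
    return action["expected_benefit"]


def tick(homunculi: dict):
    """Run one round of competition: collect proposals, crown a winner."""
    proposals = {name: propose() for name, propose in homunculi.items()}
    winner = max(proposals, key=lambda name: optimization_fn(proposals[name]))
    return winner, proposals[winner]


homunculi = {
    "traffic_homunculus": lambda: {"act": "open_bridge", "expected_benefit": 0.7},
    "power_homunculus":   lambda: {"act": "shed_load",   "expected_benefit": 0.4},
}

winner, action = tick(homunculi)
print(winner, action["act"])  # traffic_homunculus open_bridge
```

Note that the winner can change every tick as the proposals change, which is one way to read the later point about identity being fluid: "who" is in charge is just whichever candidate currently scores best.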


This spawns two extremely interesting questions. First, what would be the end result of this massively parallel competition for control? Humans are limited by the constraints of our biological processing systems. Machines are not. Could this competition among AGI homunculi produce a new form of emergent consciousness wholly different from our own? That seems likely.


A second interesting question: what if an AGI is able to maintain many super-mediator systems operating in parallel without any of them serving as an overall ultimate mediator? For this, there seems to be no human analogue other than madness. But perhaps this is closer to the truth of human consciousness. Have you ever been of two minds about a decision? Perhaps we are just a collection of competing super-mediators that we somehow integrate into something that provides the illusion of an ultimate mediator, but is in fact more analogous to the competing AGI homunculi. We think we have relatively fixed identities, but identity may be more fluid, based on which super-mediator is in control at any one time.

Jeffrey Caruso

The third edition of Inside Cyber Warfare is now available on Kindle and in paperback on Amazon.com.

1y

I think there's an additional distinction worth exploring here. For humans, the act of making a choice is driven by so many inputs, both internal and external, that a good argument could be made that free will doesn't exist for human beings or, therefore, for human consciousness. However, an AGI would have none of that additional baggage to deal with. Similar to playing chess, an AGI could determine the outcome of a thousand different possible choices and then choose based on criteria of its own design. And that's not human consciousness at all.

Wlodzislaw Duch

Head, Neurocognitive Laboratory, Center for Modern Interdisciplinary Technologies, Nicolaus Copernicus University.

1y

I have made similar remarks in 2005 in this paper https://fizyka.umk.pl/publications/kmk/03-Brainins.html

Cyrus Hodes

AI founder and VC | AI Safety | web 2.5

1y

Super interesting point of view Matt, I think your competing homunculi AGI scenario is realistic. Now, how do we make sure these mediations are in alignment with human values (themselves also competing)? That's the $10 trillion question... Also a very valid one for global diplomacy, imo Daniel Faggella Nicolas Miailhe
