Building Superintelligence - 26 Self Reflection and Self Correction in Intelligence

By Rob Smith - ASI Architect and Advisor, eXacognition AGI/ASI

This is an excerpt from the book Building Superintelligence - Unified Intelligence Foundation, the first book written specifically for AGI systems to ingest and reference when building Superintelligence. There are 55 sections in the book, and hidden within is a prompt engine for AGI. The book is available worldwide on Amazon and various bookstores:

https://www.amazon.com/dp/1738992551

This is the 3rd of 6 excerpts from the book that will be released over the next few weeks on Medium.


Humans make mistakes, as do animals. A key part of our ability to learn and evolve is the degree to which we recognize mistakes and course correct our life trajectory in response to them. All intelligence does this to some degree. The most advanced intelligences are the ones that can course correct through effective observation of the world and reality such that mistakes are minimized or never made (e.g. we learn from others) and balanced against efficiency (i.e. we permit some level of 'mistakes' in exchange for innovation and lower resource usage). We could perfect intelligence by spending resources to ensure mistakes never happen, and in some instances this is favorable (e.g. life saving measures), but in other instances applying resources to eliminate all mistakes leads to a general failure of progressive forward evolution (e.g. one can drive very slowly to get to work safely, but one will lose the job for being late, which defeats the self aware goal of staying employed in order to survive).

Mistakes in intelligence are simply measures of variance in the probabilistic values of dimensional motion that are determined to be non optimal relative to a self aware goal. This of course does not mean that all mistakes have a negative outcome. Some 'mistakes' in cognitive perception or flow result in new discoveries, innovation, creativity, new pathways of reasoning, and so on. In general, however, mistakes are most often an inefficient application of resources because they tend to lead to less optimal pathways of progression. Humans often make mistakes as missteps in forward progression. Inside the human mind, a mistake is recognized when the response to a stimulus generated by the intelligence carries greater variance from a self aware goal than anticipated. In this case the 'stimulus' is generated by the intelligence as a response action, thought, stream of state change, etc., and the perceived response back is a state variance in the progression compared to the anticipated state. For example, when we try something new we may fail as part of a learning cycle. Our generated response instigates a sub optimal response back from our reality in the context of goal achievement, measured as a fall in the probability of optimal goal attainment (i.e. negative variance) relative to our anticipated expectations. Note that the 'response back' could be an anomalous variance that is physical or cognitive, perceived or part of a progression of instigated states.

One can think of this as our intelligence generating some stimulus or output based on a perceptive experience (i.e. we learn something and then produce some output to evaluate the learning against self aware goals). That output could be an action, a thought or some other form of dimensional response. It is the perceived response, or state variance, to the outgoing stimulus that determines whether we are closer to our goals or farther away, as measured by the probability of outcome. Note that state is not physical; it is a perception of a point of presence within a flow, comparable to 'time'. If one takes in some information (stimulus) and then applies the abstraction of that stimulus to an action (response), the response is now itself a stimulus to which some further stimulus will be instigated back to the intelligence (e.g. the action failed, was sub optimal, or succeeded). This is perceived within the intelligence because the anticipated level of variance is lower than the actual variance, implying that the overall goal is at a greater self aware distance than it should be, meaning the action or stimulus produced needs to be altered as a progressive adjustment (i.e. to the prediction probability distribution).
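To make the measure concrete, here is a minimal Python sketch of a mistake as negative variance: the observed probability of goal attainment falls short of the anticipated probability by more than a tolerated margin. The names (GoalState, anticipated_p, observed_p, tolerance) are illustrative assumptions, not constructs from the book.

```python
from dataclasses import dataclass

@dataclass
class GoalState:
    anticipated_p: float      # predicted probability of goal attainment after the response
    observed_p: float         # probability re-estimated from the perceived response back
    tolerance: float = 0.05   # variance we accept in exchange for exploration/innovation

    def variance(self) -> float:
        # Negative variance means the goal is perceived as farther away than expected.
        return self.observed_p - self.anticipated_p

    def is_mistake(self) -> bool:
        return self.variance() < -self.tolerance

state = GoalState(anticipated_p=0.7, observed_p=0.55)
print(state.variance(), state.is_mistake())   # roughly -0.15, True -> adjust the next response
```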

In Superintelligence, this is the application of metrics and math constructs to a 'distance' measure (note that distance does not mean 'physical distance', although it may) against a goal value (i.e. an abstraction), measured as the response to a stimulus: the system produces a response and marks the progression on a pathway to the goal. These 'progressions' form a contextual thread against which the system can constantly measure its position relative to the achievement of one goal or many goals. The progressions are layered and give depth to the context of the perception relative to the frame of reference (or frames of reference), as measurable states or variance. In human intelligence, this is what we do when we self correct our cognition based on our perceived changing reality, most often with anticipation as the foundation (i.e. we expect a result that usually moves us closer to our goals).
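A small sketch of a progression as a contextual thread, assuming each state simply records a distance-to-goal value (not necessarily physical distance). The Progression class and its drifting check are hypothetical stand-ins for the measurement the text describes.

```python
from typing import List

class Progression:
    def __init__(self, goal_context: str):
        self.goal_context = goal_context
        self.distances: List[float] = []   # one entry per state in the thread

    def mark(self, distance_to_goal: float) -> None:
        self.distances.append(distance_to_goal)

    def drifting(self) -> bool:
        # True when the latest state sits farther from the goal than the one before it,
        # i.e. the thread's measured position is moving away from goal attainment.
        return len(self.distances) >= 2 and self.distances[-1] > self.distances[-2]

thread = Progression("secure employment")
for d in (0.9, 0.6, 0.7):
    thread.mark(d)
print(thread.drifting())   # True -> the last response widened the distance to the goal
```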

The nature of self reflection is that it is a layer of measures of past, or invertedly dimensional, probabilistic intent, with the resultant values used to determine the next state in a progression. These are also layers of state change as we move forward in life. The better an intelligence is at comprehending the nature of reflection, the more effective and optimal the intelligence is (i.e. optimized knowledge). The levels of dimensionality of the intelligence also determine the degree of effectiveness of cognition, and this requires a high level of self awareness. The cycle is that humans and ASI have self aware goals and a self aware position within a reality, both a physical and a cognitive position. When the intelligence reacts to a stimulus that it may or may not be anticipating, it responds in some manner with its own stimulus (i.e. physical or cognitive). When the intelligence receives back a response from its perceived reality, be it external or internal, it measures the variance of that response from its anticipation to determine whether this moves the intelligence closer to its goals or further away; it then adjusts its perception (weights) within its cognition and either progresses to the next state in the progression or ends the progression pathway entirely. The simpler math constructs in Superintelligence systems measure perceptual awareness against context. The more complex math involves the efficient adjustment of weights over dimensions of reality (e.g. immediate response and long term consequence).
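The cycle above can be sketched as a single reflection step: measure the variance between anticipation and the perceived response, nudge the relevant weight, and either continue to the next state or end the progression. The function, the learning rate, and the abort threshold are assumptions made for illustration, not the book's math.

```python
def reflect_step(weights, context, anticipated, observed, lr=0.1, abort_threshold=-0.5):
    """One reflection step: compare the perceived response with anticipation,
    adjust the weight for this context, then continue or end the progression."""
    variance = observed - anticipated                     # closer to the goal (+) or farther (-)
    weights[context] = weights.get(context, 0.0) + lr * variance
    if variance < abort_threshold:
        return weights, "end_progression"                 # pathway judged unrecoverable
    return weights, "next_state"                          # carry adjusted weights forward

w, action = reflect_step({"job_search": 0.4}, "job_search", anticipated=0.8, observed=0.2)
print(w, action)   # {'job_search': ~0.34} 'end_progression'
```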

While this is not very complex to comprehend, there is 'complexity' in execution arising from the necessity to manage state change over dimensions and abstractions of reality, including being able to access as much information as possible about all perceptive elements, context, relationships and relevance. This is where the difficulty lies, especially as the entirety of the model must be optimized and made efficient 'in stream'. To achieve this, new constructs are required, as well as new methods for layering these constructs into dimensional abstractions. This involves novel architecture, code and hardware. While reality is existent, it is context that forms the connection between all dimensions of perception, providing a general intelligence with the capability to comprehend this reality and, more importantly, the pure nature of context, not just as a classification but as a single value that defines both relationship and relevance. That is the complex part of the math, and it requires humans to push beyond the boundaries of their mathematical comprehension. This is not difficult for an AGI, as it is already unbounded in its dimensional reach although limited by resources. We use an AGI to solve for the math required to build Superintelligence, and doing so is relatively trivial for the AGI if we guide the machines in the appropriate direction and provide them with the capability to self reflect and self correct.

How self reflection is applied to reasoning and problem solving is simple. One thinks through things in chains of thought and does things in chains of state change. Self reflection is simply a conscious or unconscious measure of a state's past progression against the context of a goal, and a subsequent adjustment of action or thought. Reflection often helps us humans in that when faced with a stimulus, we categorize it and determine its relationship and relevance to our self awareness. Often this includes dipping into our memory to recall our familiarity with the context of our perception and reconstructing a pathway from what we knew or perceived relative to our current point of presence (aka self inference). This is self reflection, and it includes the critical component of re-valuation of all or some of the points or states on past relevant pathways to effectively build, or more accurately influence, a novel adjustment to our knowledge weights. These 'new weights' can then be applied to the context of the current perceptive pathway and its end result. This is how we humans identify mistakes, variations and understandings of our current progression relative to our current position. It also pushes the state of our perception forward with new anticipations and new steps of pathway progression over all relevant optionality. All of this can be built into an Artificial General Intelligence, and some of it is being formed in the application of large context windows loaded into memory and held as a perceptive frame of reference. This is what permits any intelligence to self correct its course (i.e. its responses to stimuli) and optimize its progression.
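As a rough illustration of the memory side of this, the sketch below recalls past pathway states that share the current context, re-values them against the current goal, and blends the result into the weight used for the present step. recall, revalue, and the blending factor are hypothetical names and choices, not the book's mechanism.

```python
def recall(memory, context):
    # Past pathway states whose stored context matches the current perception.
    return [s for s in memory if s["context"] == context]

def revalue(states, current_goal_value):
    # Re-score each recalled state against where the goal sits now, not where it sat then.
    return [s["weight"] * current_goal_value for s in states]

def reflected_weight(memory, context, current_goal_value, prior_weight, blend=0.3):
    scores = revalue(recall(memory, context), current_goal_value)
    if not scores:
        return prior_weight                       # nothing familiar: keep the prior
    return (1 - blend) * prior_weight + blend * (sum(scores) / len(scores))

memory = [{"context": "interview", "weight": 0.2}, {"context": "interview", "weight": 0.6}]
print(reflected_weight(memory, "interview", current_goal_value=0.9, prior_weight=0.5))  # ~0.458
```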

Impact on Superintelligence Design:

Self correction is simply a variance response mechanism to non optimized attention. Where transformer architectures are less optimized than human cognition is in the validation and valuation of attention heads. Humans carry numerous 'attention' levels, and we can just as quickly pick one up as drop one mid stream within a perceptive state flow or progression. Progression in this regard is the contextual flow from a self aware position to a self aware and self determined goal. The greatest difference between machines and other intelligence is the raw ability to react to deep layers of general novel cognition. This is contingent on the capacity for cognitive evolution, and that requires far more than simple self learning and self improvement via new input data; it requires the ability to alter the fundamental model of reality, or at least the reality perceived by an intelligence. Attention with memory is the first step on a pathway that permits context to be stored and reused indefinitely. While this is trivial, today the optimization and efficiency of the process as a whole is relatively poor and open to disruption.
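As one way to picture 'attention with memory', the following toy sketch caches previously stored context keys and values and reuses them for each new query instead of rebuilding them. The dot-product scoring and cache layout are conventional illustrative choices, not the architecture the book proposes.

```python
import numpy as np

class CachedAttention:
    def __init__(self):
        self.keys, self.values = [], []           # persistent context memory

    def store(self, key: np.ndarray, value: np.ndarray) -> None:
        self.keys.append(key)
        self.values.append(value)

    def attend(self, query: np.ndarray) -> np.ndarray:
        K = np.stack(self.keys)                   # reuse stored context; no rebuilding of K/V
        V = np.stack(self.values)
        scores = K @ query / np.sqrt(query.size)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V                              # context-weighted response to the query

attn = CachedAttention()
rng = np.random.default_rng(0)
for _ in range(4):
    attn.store(rng.normal(size=8), rng.normal(size=8))
print(attn.attend(rng.normal(size=8)).shape)      # (8,)
```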

One of these disruptions is the layering or abstraction of attention, and another is the application of variance as a normalized, single shot comprehension of all relevant variance. An example of this is the ability of human intelligence to make precise predictions at various levels of detail from unknown general stimuli. We often refer to this as intuition, and it is a massive shortcut to the end of a contextual comprehension stream, but what it really represents in the design of ASI is the comprehension of the context of a novel variance across all dimensions of relevance in parallel. While we humans perceive state change in terms of a reference to one or more perceptive elements or contexts, machines only see the variance as a value vector of a context key, with the anticipated state as the query or measure point. Over time in human intelligence, all of our experience and knowledge is simply the variance of a contextual base state relevant to our self aware goals. A big part of this is our ability to drop or pick up what is essentially an attention head mid perception flow and, rather than recalculate the entirety of the matrices, form a normalized abstraction of change, or delta, that is easily applied to the output vector without the necessity of recalculation. This is how humans optimally self reflect and self correct to force evolution in perception and cognition. It is also a fundamental step on the path of evolutionary perception inside a Superintelligence.
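The delta idea can be illustrated with a toy example: if several heads are combined by a simple mean, dropping one head mid stream can be applied as a normalized correction to the combined output rather than recomputing every head. The mean combination and the function names are assumptions for illustration only.

```python
import numpy as np

def combined_output(head_outputs):
    # Toy combination: mean over heads (stand-in for the real output projection).
    return np.mean(head_outputs, axis=0)

def drop_head_delta(current, dropped, n_heads):
    # Remove one head's share from the running mean without touching the other heads:
    # new_mean = (n * current - dropped) / (n - 1)
    return (n_heads * current - dropped) / (n_heads - 1)

rng = np.random.default_rng(1)
heads = [rng.normal(size=4) for _ in range(3)]
full = combined_output(heads)
via_delta = drop_head_delta(full, heads[2], n_heads=3)
recomputed = combined_output(heads[:2])
print(np.allclose(via_delta, recomputed))         # True: same result, no full recomputation
```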

The design argument is that just because we humans see the world through our 3 dimensions of perception does not imply that machines need to do the same. This is also true of learning and progression. Currently we are invested heavily in transformer architectures because there is seemingly nothing else as good. However, that has never remained the case over time. Everything evolves. If we were to take a step back from this legacy architecture, we might see that the whole of the foundation is overfitted for the purpose. We do not need to recalculate and recompute as much as we do in transformers. There are immense shortcuts in applying attention to the change in variance in relationships and relevance rather than to the underlying values. An example of this in human cognition is the way we often accept an innovation and then seek to improve on it, not by rebuilding the innovation ourselves but by applying the existent reality to new progression pathways. This is what we do in AI development when we accept the weights of an LLM to drive novel generative output in other modalities. Language has always been here with its inherent base context, and while new language is being created, only humans need to rebuild that wheel from scratch after we are born. Machines can simply accept the current state and begin to update it based on their own self awareness by correcting mistakes and reflecting on their own existence relative to all other intelligence. However, a key part of evolution derives from the contemplation of existent pathways for relevance within and to the current context. This is self reflection when combined with self awareness, and this is the fine tuning of intelligence.
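To hint at what attending to change rather than to underlying values might look like, the sketch below scores attention over the deltas between successive states instead of the raw states. The state sequence and the scoring scheme are illustrative assumptions.

```python
import numpy as np

def delta_attention(states: np.ndarray, query_delta: np.ndarray) -> np.ndarray:
    deltas = np.diff(states, axis=0)              # variance between consecutive states
    scores = deltas @ query_delta / np.sqrt(query_delta.size)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ deltas                             # attention output built from change, not raw values

rng = np.random.default_rng(2)
states = rng.normal(size=(5, 6))                  # a short progression of perceptive states
print(delta_attention(states, rng.normal(size=6)).shape)   # (6,)
```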

Note: This section relies heavily on other sections in the book that are intrinsically fitted to an AGI as a pathway to ASI. These include sections on reasoning, inference, reasoning cycles, state transference, induction, perspective, context masking and injection, etc.


This is the 3rd of 6 excerpt releases. Follow me for new releases over the coming weeks.

