Human Intelligence versus Machine Intelligence

Dr. Geoffrey Hinton, one of the most influential machine learning researchers of the past few decades, recently left Google to, among other reasons, speak out about the potential risks of AI, such as significant job losses, a flood of misinformation, and the possibility of machine intelligence surpassing human intelligence and posing an existential risk to humanity. To help make sense of the AI debates and this viewpoint in particular, my book "Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era" contains a chapter titled "The Debates, Progress and Likely Future Paths of AI" dedicated to exploring exactly these questions. It can assist us in developing a more realistic, practical, and thoughtful understanding of AI's progress and likely future paths, which in turn can serve as input for shaping a beneficial, human-centric future in the Smart Technology Era (see also my article The Debates, Progress and Likely Future Paths of Artificial Intelligence). As the focus of this article is an introductory exploration of human intelligence versus machine intelligence, I'm sharing an extract of Chapter 9 of my book that deals specifically with this topic.

Multifaceted perspectives on human intelligence versus machine intelligence

Before I share the extract, I would like to briefly mention a subset of researchers and thought leaders, across a wide spectrum of disciplines, who influence my current thinking and sense-making about reality and who inform a multifaceted perspective on human intelligence versus machine intelligence.

David Deutsch: Deutsch's work, particularly in "The Fabric of Reality" and "The Beginning of Infinity," emphasizes the power of human creativity and problem-solving. He argues that humans have the unique capacity for creating explanatory knowledge, which allows them to understand and manipulate the world. Deutsch's perspective on machine intelligence is that it can complement human creativity, but it cannot replace it. In the context of human vs. machine intelligence, Deutsch's views suggest that both forms of intelligence have their unique strengths, and that human creativity and problem-solving abilities are irreplaceable. This is all part of a worldview in which reality consists of four strands, best described by combining four key theories: quantum physics, the theory of evolution, the theory of computation, and the theory of knowledge (epistemology).

Joscha Bach: Bach's research focuses on artificial general intelligence (AGI) and cognitive architectures. His work aims to understand the principles underlying human intelligence and replicate them in machines. Bach posits that there are universal principles of intelligence that can be applied to both humans and machines, with the goal of creating AGI that can approach or even surpass human-level intelligence. In his view, machine intelligence has the potential to reach the same level of adaptability and cognitive prowess as human intelligence, provided we can discover and implement the right cognitive architectures. Joscha also recently tweeted "It's not about Artificial General Intelligence but General Agency. Intelligence is always a means, not an end." and "Ethics requires love and truth. AI research needs to understand love, ethicists need to commit to truthfulness."

John Vervaeke: Vervaeke's work revolves around the study of cognitive science, wisdom, and meaning. His ideas emphasize the importance of cultivating cognitive flexibility, insight, and wisdom in both humans and AI systems. Vervaeke argues that human intelligence is grounded in our ability to make sense of the world and establish meaningful connections. In the context of human vs. machine intelligence, he highlights the importance of developing AI systems that not only possess computational power but also understand the context and meaning behind the information they process. See also "AI: The Coming Thresholds and The Path We Must Take".

Karl Friston: Friston is known for his work in theoretical neuroscience and the development of the Free Energy Principle, which provides a unifying framework for understanding the brain's function. He suggests that both human and machine intelligence can be described as the minimization of surprise or prediction error. In his view, the key difference between human and machine intelligence lies in the nature of their generative models and the way they learn from their environments. Friston's work has implications for building AI systems that can learn and adapt in a more human-like way. See Designing Ecosystems of Intelligence from First Principles.

Michael Levin: Levin's research focuses on morphogenetic engineering and biologically inspired robotics. He explores the principles governing self-organization and decision-making in biological systems and aims to apply these principles to develop more adaptive and intelligent machines. Levin's perspective emphasizes the importance of understanding and replicating the fundamental principles governing human intelligence and adaptability, which could lead to the development of machines that exhibit similar capabilities. See Michael Levin Λ Joscha Bach on Collective Intelligence.

Yann LeCun: LeCun is a pioneer in the field of deep learning and has made significant contributions to the development of convolutional neural networks. His work on machine learning and AI suggests that the key to achieving human-level intelligence in machines lies in unsupervised learning and the ability to learn from raw sensory data. LeCun's views highlight the importance of developing AI systems that can learn from unstructured data, which could potentially bring machine intelligence closer to human intelligence in terms of adaptability and understanding. See also A Path Towards Autonomous Machine Intelligence.

Daniel Schmachtenberger: Schmachtenberger is a systems thinker and futurist who focuses on the ethical and societal implications of emerging technologies, including AI. He emphasizes the need for a comprehensive approach to developing machine intelligence that considers not only the technical aspects but also the ethical, societal, and ecological consequences. Schmachtenberger's perspective on human vs. machine intelligence highlights the importance of aligning AI development with human values and ensuring that technology serves humanity's long-term interests. See also Misalignment, AI & Moloch.

Integrated View and Assessment:

By synthesizing the ideas of these and other notable thinkers, we can derive an integrated view on human intelligence versus machine intelligence. Both forms of intelligence possess unique capabilities and strengths, and their optimal development depends on understanding and leveraging these strengths. Human intelligence excels in creativity, problem-solving, and the ability to make sense of the world, while machine intelligence has the potential to surpass humans in computational power, pattern recognition, and data analysis. The key to bridging the gap between human and machine intelligence lies in developing cognitive architectures, learning algorithms, and ethical frameworks that capture the essence of human cognition, adaptability, and values.

In order to achieve a harmonious integration of human and machine intelligence, we must focus on the following aspects:

  1. Cognitive flexibility and adaptability: Developing AI systems that can learn from unstructured data, as proposed by LeCun, and exhibit cognitive flexibility, as suggested by Vervaeke, will enable machine intelligence to approach human-like understanding and decision-making.
  2. Incorporating principles of self-organization and biological inspiration: Levin's work on morphogenetic engineering and biologically inspired robotics can guide the development of AI systems that exhibit adaptability and intelligence similar to that of biological organisms.
  3. Ethical and societal considerations: Schmachtenberger's emphasis on the ethical and societal implications of AI development serves as a reminder to align machine intelligence with human values and prioritize long-term human interests.
  4. Pursuing AGI through universal principles of intelligence: Bach's research on cognitive architectures and AGI suggests that we should seek to understand and apply the underlying principles of intelligence in both humans and machines to create systems that can match or even surpass human cognitive capabilities.
  5. Building on the Free Energy Principle: Friston's work on the Free Energy Principle provides a unifying framework for understanding human and machine intelligence and can guide the development of AI systems that learn and adapt in a more human-like manner.

By incorporating these ideas into an integrated knowledge framework, we can move closer to achieving a harmonious coexistence of human and machine intelligence. This will not only enhance our problem-solving capabilities across various disciplines but also ensure that technological advancements serve the best interests of humanity as a whole.


Evolution of Machine Intelligence in an Ecosystem of Intelligences

Karl Friston and others authored a white paper, Designing Ecosystems of Intelligence from First Principles, which lays out a vision of research and development in the field of AI for the next decade (and beyond). The paper envisions a cyber-physical ecosystem of natural and synthetic sense-making in which humans are integral participants, what they call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, they understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world, also known as self-evidencing.
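
For readers who want the formal core of that last sentence: in active inference, "surprise" is the negative log evidence for the agent's generative model, and the quantity an agent actually minimizes, the variational free energy, is a tractable upper bound on it. Using standard notation (not necessarily the paper's own symbols):

$$F[q] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s \mid m)\right] \;=\; D_{\mathrm{KL}}\!\left[q(s) \,\Vert\, p(s \mid o, m)\right] \;-\; \ln p(o \mid m) \;\ge\; -\ln p(o \mid m),$$

where $o$ denotes observations, $s$ hidden states, $q(s)$ the agent's approximate posterior, and $m$ its generative model. Minimizing $F$ simultaneously improves the posterior (shrinks the KL term) and accumulates evidence for the model (pushes $\ln p(o \mid m)$ up), which is exactly the self-evidencing described above.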

According to VERSES AI, the evolution of machine or synthetic intelligence includes key stages of development:

  • S0: Systemic intelligence (Ability to recognize patterns and respond. Current state-of-the-art AI. S0 is a machine process in software that maps inputs to outputs and optimizes some value function or cost of states. Examples include deep learning and reinforcement learning.)
  • S1: Sentient intelligence (Ability to perceive and respond to the environment in real time. S1 is based on belief updating and optimization. It responds to sensory impressions and plans based on expected information gain and expected value (see the toy sketch after this list). This intelligence is curious and seeks both information and preferences. Such an AI would respond to sensory impressions and be able to plan based on the consequences of an action or belief about the world, which enables it to solve almost any problem.)
  • S2: Sophisticated intelligence (Ability to learn and adapt to new situations. S2 is based on sentient behavior. It plans based on the consequences of an action for beliefs about the world, rather than the world itself. In other words, it moves on from the question of “what will happen if I do this?” to “what will I believe or know if I do this?” This kind of intelligence uses generative models and corresponds to "artificial general intelligence" in the popular narrative about AI progress.)
  • S3: Sympathetic (or Sapient) intelligence (Ability to understand and respond to the emotions and needs of others. S3 is when sophisticated AI recognizes the nature and thoughts of its users and of other AIs. It is able to take the perspective of its users, to walk in their shoes so to speak, and to understand what its interlocutors are thinking and feeling. This type of intelligence is also called "perspectival" or "sympathetic" because it can understand perspectives other than its own.)
  • S4: Shared (or Super) intelligence (Ability to work together with humans, other agents and physical systems to solve complex problems and achieve goals. S4 is the kind of collective intelligence that emerges when sympathetic intelligence works together with people and other AIs. This stage is called "artificial super-intelligence" in the popular narrative of AI's progress. VERSES believes that this kind of intelligence will come from many agents working together, creating a web of shared knowledge that becomes wisdom, and that their approach is the best way to achieve this kind of intelligence on a global scale.)
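
To make the S1 notion of planning by "expected information gain and expected value" concrete, here is a minimal toy sketch in plain Python. It is purely illustrative (invented numbers, two hidden states, one sensor) and is not VERSES code: the agent updates its belief with Bayes' rule and scores candidate actions by expected utility plus the expected reduction in uncertainty from sensing again.

```python
import math

# Toy "sentient intelligence" loop: Bayesian belief updating plus action
# selection by expected value + expected information gain.
# All numbers are illustrative ASSUMPTIONS; this is not VERSES code.

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def bayes_update(prior, obs):
    # LIK[s][obs] = P(obs | hidden state s)
    unnorm = [prior[s] * LIK[s][obs] for s in range(len(prior))]
    z = sum(unnorm)
    return [u / z for u in unnorm]

LIK = [[0.8, 0.2],   # ASSUMPTION: sensor is 80% reliable in state 0
       [0.2, 0.8]]   # ... and in state 1

def expected_info_gain(belief):
    # Expected entropy reduction from taking one more sensor reading.
    h_before = entropy(belief)
    gain = 0.0
    for obs in (0, 1):
        p_obs = sum(belief[s] * LIK[s][obs] for s in range(2))
        gain += p_obs * (h_before - entropy(bayes_update(belief, obs)))
    return gain

UTIL = [[1.0, -1.0],   # ASSUMPTION: payoff of acting as if state 0 ...
        [-1.0, 1.0]]   # ... or state 1, given the true state

def best_action(belief, curiosity=1.0):
    # Candidate actions: commit to a state now, or sense once more.
    scores = {f"commit_{a}": sum(belief[s] * UTIL[a][s] for s in range(2))
              for a in (0, 1)}
    scores["sense"] = curiosity * expected_info_gain(belief)
    return max(scores, key=scores.get), scores

belief = [0.5, 0.5]                   # flat prior
print(best_action(belief))            # "sense" wins: curiosity-driven
belief = bayes_update(belief, obs=0)  # one noisy reading of "0"
print(best_action(belief))            # now "commit_0" wins: exploitation
```

With a flat belief, sensing scores highest (the curiosity the S1 description refers to); once the belief is confident, the exploitative action wins.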


Cognitive, Emotional, Ethical, and Practical Viewpoints

Human intelligence and machine intelligence have both proven to be incredibly powerful in their respective domains. However, they possess distinct characteristics and capabilities, which make them suitable for different tasks and disciplines; their differences can be viewed from cognitive, emotional, ethical, and practical perspectives.

Cognitive perspective:

  • Human intelligence is characterized by adaptability, creativity, and the ability to learn from experiences. Humans can think abstractly, solve problems, and understand complex concepts. They can also navigate social situations and use their experiences to make informed decisions.
  • Machine intelligence, on the other hand, relies on the processing power of computers and complex algorithms. These systems excel at tasks involving pattern recognition, data analysis, and optimization. However, they lack the intuition and creativity of human intelligence.

Emotional perspective:

  • Human intelligence is deeply intertwined with emotions. Emotions help humans make decisions, communicate effectively, and empathize with others. This emotional intelligence is crucial for collaboration, negotiation, and conflict resolution.
  • Machine intelligence, by contrast, is devoid of emotions. While AI systems can recognize and respond to human emotions to some extent, they don't genuinely experience emotions themselves. This limits their ability to engage in complex social interactions or make decisions that involve human emotions.

Ethical perspective:

  • Human intelligence is accompanied by a sense of morality and ethics. Humans are capable of making moral judgments, understanding the consequences of their actions, and adapting their behavior accordingly. This allows for responsible decision-making in various disciplines.
  • Machine intelligence, however, is only as ethical as its programming. AI systems can be designed to follow certain ethical principles, but they do not inherently possess a moral compass. This raises concerns about the potential misuse of AI, biases in decision-making, and the accountability of AI developers.

Practical perspective:

  • Human intelligence is versatile and can be applied to a wide range of tasks, making humans effective in various disciplines. However, humans have limitations in terms of memory, information processing, and speed.
  • Machine intelligence can perform specific tasks with incredible efficiency, accuracy, and speed. AI systems are particularly useful for repetitive tasks, large-scale data analysis, and simulations. However, they are limited by their programming and often require significant resources for development and maintenance.

It is clear that human intelligence and machine intelligence each have their own unique strengths and limitations. While human intelligence is marked by creativity, adaptability, and emotional understanding, machine intelligence excels at data processing, pattern recognition, and optimization. By combining the best of both worlds, we can enhance our problem-solving capabilities and drive advancements across multiple disciplines. It's crucial to keep ethical considerations in mind when integrating AI into various aspects of our lives, as we strive to harness the potential of both human and machine intelligence responsibly.


Human Intelligence versus Machine Intelligence

Herewith an extract on human intelligence versus machine intelligence from Chapter 9 "The Debates, Progress and Likely Future Paths of AI" of my book "Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era":

"As we contemplate human intelligence versus machine intelligence, let us briefly consider here some broad definitions of intelligence. Those include defining intelligence to be “the ability to acquire and apply knowledge and skills” or “the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context” or “the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving”.[i] Another summarized version in the context of AI research is that intelligence “measures an agent’s ability to achieve goals in a wide range of environments.”[ii] In considering the measure of intelligence Francois Chollet also emphasizes that both elements of achieving a task-specific skill as well as generality and adaptation as demonstrated through skill acquisition capability are key. He goes further and defines intelligence as a “measure of its skill-acquisition efficiency over a scope of tasks with respect to priors, experience, and generalization difficulty”.[iii]?Following an Occam's razor approach, Max Tegmark defines intelligence very broadly as the ability to accomplish complex goals, whereas Joscha Bach simply defines intelligence as the ability to make models, where a model is something that explains information, information is discernible differences at the systemic interface, and the meaning of information is the relationships that are discovered to changes in other information.[iv] Joscha’s definition for intelligence differs from Max’s one, as he sees achieving goals as being smart and choosing the right goals as being wise. So, one can be intelligent, but not smart or wise. For the “making models'' definition of intelligence, aspects such as generality, adaptability, and skills acquisition capability with consideration of priors, experience, and generalization difficulty, would imply that the models being produced will be able to generalize and adapt to new information, situations, environments or tasks. The way an observer can find ground truth is by making models, and then build confidence in the models by testing it in order to determine if it is true and to which degree (which is also called epistemology). So, the confidence of one’s beliefs should equal the weight of the evidence. Language in its most general sense where it includes natural language and mental representation is the rules of representing and changing models. The set of all languages is what we call mathematics and is used to express and compare models. He defines three types of models: a primary model that is perceptual and optimizes for coherence, a knowledge model that repairs perception and optimizes for truth, and agents that self-regulate behavior programs and rewrite other models. He sees intelligence as a multi-generational property, where individuals can have more intelligence than generations, and civilizations have more intelligence than individuals.[v] For a human intellect, the mind is something that perceptually observes the universe, uses neurons and neurotransmitters as a substrate, have a working memory as the current binding state, a self that identifies with what we think we are and what we want to happen, and a consciousness that is the content of attention and makes knowledge available throughout the mind. 
A civilization intellect is similar but has a society that observes the universe, people and resources that function as the substrate, a generation that acts as the current binding state, a culture (the self of civilization) that identifies with what we think we are and what we want to happen, and media (the consciousness of civilization) that provide the contents of attention, making knowledge available throughout society.[vi]
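
As an aside, the Legg and Hutter definition quoted above has a well-known formal counterpart in their universal intelligence measure, which makes the "wide range of environments" explicit and exponentially favors simpler environments:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^{\pi},$$

where $\pi$ is the agent, $E$ the set of computable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V_\mu^{\pi}$ the agent's expected value (achieved goals) in $\mu$.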

As Joscha Bach considers the mind of a human intellect to be a general modeling system with one or more paradigms that interface with the environment, combined with universal motivation, he thinks we predominantly need better algorithms than we currently have to make better generalized models. Our current solutions to modeling include, for example, convex optimization using deep learning, probabilistic models, and genetic algorithms, whereas the general case of mental representation seems to be a probabilistic algorithm, which is hard to achieve with deep learning's stochastic gradient descent based approach, an approach better suited to solving perceptual problems.[vii] If we look at how much source code we need to generate a brain, the Kolmogorov complexity (the length of the shortest software program that generates the object) is limited by the subset of the coding part of the genome involved in building a brain, which is likely far less than 1 Gigabyte (assuming a rough calculation where most of the roughly 70 MB coding part of a genome's roughly 700 MB codes for what happens in a single cell, and perhaps 1 MB codes for the structural organization of the organism). If we assume from an implementation perspective that the functional unit in the human brain is likely the cortical column, the implementation complexity, which can be calculated from the number of effective processing units and their connectivity, seems to be in the order of a few hundred Gigabytes.[viii] Given these high-level complexity calculations, strong AI that emulates the human brain should in principle be possible. AI also has the ability to scale, whereas biological brains, due to evolution, run into limits such as the high level of metabolism that fast and large nervous systems need, the proportionally larger organisms that large brains require, the longer training periods required for better performance, information flow being slowed down by splitting intelligence between brains, communication becoming more difficult with distance, and the interests of individuals not being fully aligned. AI systems, on the other hand, are more scalable, with variable energy sources, reusable knowledge, and cost-effective, reliable, high-bandwidth communication, and without having to align a multi-agent AI system or even make generation changes.[ix] Depending on the objectives and reward functions of AI systems, and on how well they are aligned with human values, they need not be constrained by optimizing evolutionary fitness or by the physiological, social, and cognitive constraints of biological systems, but can focus on achieving their goals, which might in the broadest sense include optimizing their use of negative entropy. Scalable strong AI systems will likely only require consciousness when attention is needed to solve problems, as the rest can be done on "autopilot", similar to how we perform a well-mastered activity automatically without having to think about it.
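
As a rough, illustrative rendering of the arithmetic above (every constant is an order-of-magnitude assumption chosen to match the estimates in the text, not a measurement):

```python
# Order-of-magnitude sketch of the "blueprint vs implementation" argument.
# All constants are ASSUMPTIONS for illustration only.

base_pairs = 3.1e9                      # human genome, ~3.1 billion base pairs
genome_mb = base_pairs * 2 / 8 / 1e6    # 2 bits per base pair -> ~775 MB
coding_mb = 70                          # ASSUMPTION: coding portion of the genome
brain_plan_mb = 1                       # ASSUMPTION: structural wiring plan

# Implementation complexity: treat cortical columns as the functional units
# and count the parameters needed to describe them and their connectivity.
columns = 5e5                           # ASSUMPTION: ~half a million columns
params_per_column = 6e5                 # ASSUMPTION: connectivity parameters each
impl_gb = columns * params_per_column / 1e9   # ~1 byte per parameter

print(f"genome ≈ {genome_mb:.0f} MB, brain blueprint ≲ {brain_plan_mb} MB")
print(f"implementation ≈ {impl_gb:.0f} GB")   # 'a few hundred Gigabytes'
```

The point of the contrast is that the blueprint (Kolmogorov complexity) is many orders of magnitude smaller than the running implementation.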

Frank Wilczek, a Physics Professor at MIT, author of A Beautiful Question: Finding Nature’s Deep Design and recipient of the 2004 Nobel Prize in Physics, makes the “astonishing corollary” in The Unity of Intelligence, an essay in John Brockman’s Possible Minds: Twenty-five Ways of Looking at AI, that natural intelligence such as human intelligence is a special case of artificial intelligence.[x] He infers this by combining the evidence of physics about matter with Francis Crick’s “astonishing hypothesis” in neurobiology that the mind emerges from matter, which is also the foundation for modern neuroscience. So, Frank claims that the human mind emerges from physical processes that we can understand and, in principle, reproduce artificially. For this corollary to fail, he argues, some new significant phenomenon would need to be discovered that has “large-scale physical consequences, that takes place in unremarkable, well-studied physical circumstances (i.e., the materials, temperatures, and pressures inside human brains), yet has somehow managed for many decades to elude determined investigators armed with sophisticated instruments. Such a discovery would be … astonishing.”[xi] With respect to the future of intelligence, Frank concludes that the superiority of artificial over natural intelligence looks to be permanent, whereas the current significant edge of natural intelligence over artificial intelligence seems temporary. In support of this statement, he identifies a number of factors whereby information-processing smart technology can exceed human capabilities. These include electronic processing that is approximately a billion times faster than the latency of a neuron’s action potential, artificial processing units that can be up to ten thousand times smaller than a typical neuron, which allows for more efficient communication, and artificial memory that is typically digital, enabling it to be stored and maintained with perfect accuracy, in contrast to human memory, which is analog and can fade away. Other factors include human brains getting tired with effort and degrading over time, artificial information processors having a more modular and open architecture that can integrate with new sensors and devices, compared to the brain’s more closed and non-transparent architecture, and quantum computing, which can enable qualitatively new forms of information processing and levels of intelligence but seems unsuitable for interfacing with the human brain.[xii] That said, there are also a number of factors that give the human brain, with its general-purpose intelligence, an edge over current AI systems. These include the human brain making much better use of all three dimensions compared to the 2-dimensional lithography of computer boards and chips, the ability of the human brain to repair itself or adapt to damage, whereas computers must typically be rebooted or fixed, and the human brain’s tight integration with a variety of sensory organs and actuators, which makes interpreting internal and external signals from the real world and controlling actuators seamless, automatic, and possible with limited conscious attention.
In addition to this, Frank conjectures that the two most far-reaching and synergistic advantages of human brains are their connectivity and interactive development, where neurons in the human brain typically have hundreds to thousands of meaningful connections as opposed to a few fixed and structured connections between processing units within computer systems, and the self-assembly and interactive shaping of the structure of the human brain through rich interaction and feedback from the real-world that we do not see in computer systems.[xiii] Frank Wilczek reckons that with humanity’s engineering and scientific efforts staying vibrant and not being derailed by self-terminating activities, wars, plagues or external non-anthropogenic events, we are on a path to be augmented and empowered by smart systems and see a proliferation of more autonomous AI systems that are growing in capability and intelligence.
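
As a quick sanity check on the first of Wilczek's speed factors (both constants are optimistic-end assumptions; real gate delays and neural latencies vary over orders of magnitude):

```python
# Rough timescale comparison behind the "billion times faster" remark.
neuron_latency_s = 1e-3   # ASSUMPTION: action potential plus synaptic delay, ~1 ms
gate_delay_s = 1e-12      # ASSUMPTION: fast logic-gate switching, ~1 ps
print(f"speed ratio ≈ {neuron_latency_s / gate_delay_s:.0e}")   # ~1e+09
```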

The better we get at understanding ourselves and how our brains work, the more intelligent, precise, intuitive, and insightful our AI systems can become. Living in the age of knowledge, we are making advancements and discoveries at record pace.[xiv] Many researchers feel that we will not be able to create fully intelligent machines until we understand how the human brain works. In particular, the neocortex, the six-layered mammalian cerebral cortex that is associated with intelligence and involved in higher-order brain functions such as cognition, sensory perception, spatial reasoning, language, and the generation of motor commands, needs to be understood before we can create intelligent machines.[xv] Not everyone believes this. After all, aeroplanes fly without emulating birds; we can find ways to achieve the same goals without building perfect models of each other. For now, we are focusing on the brain and on what people are doing to create AI systems that rely on our understanding of it. Neuroscience has, in the last decade or two, achieved massive success and boosted our knowledge of how the brain works. There are, of course, many theories and many ways of looking at the brain, and these different theories suggest different approaches to AI. The near future of machine intelligence depends on these coming together, and our current and past machine learning already draws on parts of this picture in a more hierarchical and networked manner. This means that as we understand more about the brain, we are enabled to create machines modeled on its workings, machines that can follow the brain's process in its deductions, calculations, and inferences about tasks and reality in general. Perhaps what many people would like to know is this: if a machine can act like a human in its thinking and rational decision-making, how is it different from a human? And where do emotions fit in?

Lisa Feldman Barrett, a Professor of Psychology and Neuroscience at Northeastern University and author of books such as Seven and a Half Lessons About the Brain and How Emotions Are Made, has debunked some neuroscience myths, one of which is the triune brain story: that the brain evolved in layers consisting of the reptile brain (survival instincts), the limbic system (emotions), and the neocortex (rational thought and decision-making), where the latter, via the prefrontal cortex, applies rational thought to control the survival instincts, animalistic urges, and emotions.[xvi] Instead, as we contemplate the workings of the human brain in producing intelligence, it is important to keep in mind that our brain did not evolve so that we can think rationally, feel emotions, be imaginative, be creative, or show empathy, but to control our bodies by predicting the body's energy needs in advance in order to efficiently move, survive, and pass our genes to the next generation. It turns out the brains of most vertebrates develop in the same sequence of steps and are made from the same types of neurons, but develop for shorter or longer durations, which leads to species having different arrangements and numbers of neurons in their brains. Lisa's theory of constructed emotion explains the experience and perception of emotion in terms of multiple brain networks working together, and what we see in the brain and body is only affect (i.e., the underlying experience of feeling, emotion, and mood).[xvii] This theory suggests that emotions are concepts constructed by the brain, not biologically hardwired or produced in specific brain circuits. Instead, emotions are also predictions that emerge momentarily as the brain predicts the feelings one expects to experience. In fact, the brain is constantly building and updating predictive models of every experience we have or think we have. The brain guesses what might happen next and then prepares the body ahead of time to handle it. As all our sensory experiences of the physical world are simulations that happen so quickly that they feel like reactions, most of what we perceive is based on internal predictions, with incoming external data merely influencing our perceptions. Another of Lisa's lessons is that we have the kind of nature that requires nurture, as can be seen with the brains of babies and young children that wire themselves to their world and feed off physical and social inputs. As many brain regions that process language also control the organs and systems in our bodies, we not only impact our own brain and body with our words and actions, but also influence the brains and bodies of other people around us in a similar way. Our brains are also very adaptable and create a large variation of minds or human natures, wiring themselves to specific cultures, circumstances, or social and physical environments. The final lesson is that our brains can create reality and are so good at believing our own abstract concepts and inventions that we easily mistake social or political reality for the natural world. Lisa is also correct that we not only have more control over the reality that we create, but also more responsibility for it than we think.[xviii] These types of insights into the human brain not only provide important context for understanding human intelligence, but also help us think more clearly about the machine intelligence systems that we want to build to better support us.

In a talk about Planetary Intelligence: Humanity's Future in the Age of AI, Joscha Bach gives a high-level layman's rendition of our information-processing cells, which form a nervous system that regulates, learns, generalizes, and interprets data from the physical world.[xix] The nervous system has many feedback loops that take care of regulation, and whenever regulation is insufficient, the brain reinforces certain regulations via a limbic system that learns through pleasure and pain signals. Pleasure tells the brain to do more of what it is currently doing, whereas pain tells it to do less. This is already a very good learning paradigm but has the drawback that it does not generalize very well. So, the system needs to generalize across different types of pleasures and pains as well as predict these signals in the future. The next step is to have a system or engine of motivation that implements the regulation of needs, which can be physiological, social, and cognitive in nature. The hippocampus is a system that can associate needs, pleasure, and pain with situations in the environment, whereas the neocortex generalizes over these need-related associations. As it simulates a dynamic world, it determines what situations create pain or pleasure in different dimensions and what they have in common. A good metaphor is a synthesizer. Sound, for example, is a pattern generated by a synthesizer in your brain that makes it possible to predict patterns in the environment; sound is just one particular class of synthesizer, played in your brain to make sense of the data in the physical world. Synthesizers work not only for auditory patterns, but also for other modalities such as colors, spatial frequencies, and tactile maps. So, the brain can tune into the low-level patterns that it can see and then look for patterns within the patterns, which are called meta patterns. These meta patterns can then be linked together, which allows the brain to organize lots of different sounds into a single sound that, for example, differs only by pitch. So now there is a meta pattern that explains more of the sounds. The same holds for color, spatial frequencies, and other modalities. By lumping colors and spatial frequencies together, visual percepts are obtained. At some point the neocortex merges the modalities and figures out that these visual patterns and sound patterns can be explained by mapping them to regions in the same 3-dimensional space. The brain can explain the patterns in the 3-dimensional space by assuming they are objects in that space which remain the same across the different situations being experienced. To make that inference, the brain needs to have a conceptual representation in the address space of these mental representations that allows it to create possible worlds and generalize over the concepts that have been seen. Making that happen involves several types of learning, such as function approximation (which might include Bayesian, parallelizable, or exhaustive modeling) and scripts and schemas (which might include sparse Markov decision processes, individual strategies, and algorithms). The neocortex organizes itself into small circuits or basic units called cortical columns, which need to bind together into building blocks, similar to Lego blocks, to form representations for different contexts. The cortical columns form little maps that are organized into cortical areas, and these maps interact with one another.
These cortical areas play together like an orchestra, where a stream of music is being produced and every musician listens to the music around them and uses some of its elements to make their own music and pass it on. There is also a conductor that resolves conflicts and decides what is being played. Another metaphor is an investment bank, where lots of these cortical columns in the neocortex anticipate a reward for making models of the universe and of their own actions. The reward is given by management via a motivational system that effectively organizes itself into more and more hierarchies. It effectively functions like an AI built by an organism that learns to make sense of its relationship with the universe. The brain generates a model that produces a 3-dimensional world of hierarchies of synthesizers in the mind. As the brain is not a person but a physical system, it cannot feel anything: neurons cannot feel anything. So, in order to know what it is like to feel something, the brain finds it useful to create the story of a person playing out in the universe. This person is like a non-player character being directed by the result of the brain's activity and computed regulation. As the brain tells the story of the person as a simulated system, the story gets access to the language centre of the brain, which allows it to express feelings and describe what it sees. Joscha believes that, as consciousness is a simulated property, a physical system cannot be conscious and only a simulated system can be conscious.[xx] It is clear that the neocortex plays a key role in making models that learn, predict, generalize, and interpret data from the physical world.

Jeff Hawkins, the co-founder of Numenta, which aims to reverse-engineer the neocortex and enable machine intelligence technology based on brain theory, reckons, along with many researchers, that studying the human brain, and in particular the neocortex, is the fastest way to get to human-level AI.[xxi] As it stands now, none of the AI being developed is intelligent in the ways humans are. The initial theory Jeff and his team at Numenta proposed is called Hierarchical Temporal Memory (HTM), which takes what is known about the neocortex to build machine learning algorithms that are well suited for prediction, anomaly detection, and sensorimotor applications, are robust to noise, and can learn time-based patterns in unlabeled data continuously, including multiple patterns at the same time.[xxii] According to Jeff, the neocortex is no longer a big mystery, and he provides a high-level interpretation as follows: the human brain is divided into an old and a new part (although Lisa Feldman Barrett would say the "new part" is due to a longer developmental run). Only mammals have the new part, the neocortex, which occupies approximately 75% of the volume of the brain, whereas the old parts address aspects such as emotions, basic behaviors, and instincts. The neocortex is uniform, looking the same everywhere, as if it replicated the same thing over and over again, rather than being divided into different parts that do separate things. The neocortex is like a very complex circuit that sends seemingly random signals to certain parts of the body; what we need is to figure out what that circuit does.[xxiii] We do know that the neocortex is constantly making predictions.[xxiv] These are the models we form of the world. So how do the networks of neurons in the neocortex learn predictive models? For example, when we touch a coffee cup without looking at it and move a finger, can we predict what we will feel? Yes, we can. The cortex has to know that this is a cup, where on the cup the finger is going to touch, and how it is going to feel. Our neocortex is making predictions about the cup.[xxv] By touching the cup in different areas, you can infer what the cup looks like: its shape, density, and volume.[xxvi] If you touch the cup with three fingers at a time, each finger has partial knowledge of the cup and can make inferences and predictions about the whole cup as well.[xxvii] If you do this over a few objects, you get to know the objects and their features. The next time you touch an object that you have touched before, you can quickly determine what you are touching and recall the information you have about it.[xxviii] AI built this way would work like a neocortex, making predictions about an object based on senses such as touch. A biological approach, also based on the workings of the neocortex and called the Thousand Brains Theory of Intelligence, says that our brain builds predictive models of the world through its experiences.[xxix] The Numenta team has discovered that the brain uses map-like structures to build hundreds of thousands of models of everything we know. This discovery allows Jeff and his team to answer important questions about intelligence, how we perceive the world, why we have a sense of self, and the origin of high-level thought. This all happens in the neocortex, which processes changing temporal patterns, learns models of the world that are stored in memory, and understands positioning.
The Thousand Brains Theory says that these aspects of intelligence occur simultaneously, in no particular order.[xxx]
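
HTM itself relies on sparse distributed representations and dendritic prediction, which will not fit in a few lines, but the loop it is built around, continuously learning patterns in an unlabeled stream, predicting the next input, and flagging anomalies when predictions fail, can be sketched with a deliberately crude first-order stand-in (illustrative only, and emphatically not Numenta's code):

```python
from collections import defaultdict

# Crude first-order stand-in for HTM's core loop: learn transitions in an
# unlabeled stream online, predict the next symbol, score anomalies.

class ToySequenceMemory:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # prev -> next -> count
        self.prev = None

    def step(self, symbol):
        anomaly = 1.0  # maximal surprise when there is no basis to predict
        if self.prev is not None:
            row = self.counts[self.prev]
            total = sum(row.values())
            if total:
                # Anomaly = 1 - P(symbol | previous symbol) under the model.
                anomaly = 1.0 - row[symbol] / total
            row[symbol] += 1  # continuous, online learning
        self.prev = symbol
        return anomaly

mem = ToySequenceMemory()
stream = ["A", "B", "C"] * 5 + ["A", "X"]   # "X" violates the learned pattern
anomalies = [mem.step(s) for s in stream]
print(f"anomaly for final 'X': {anomalies[-1]:.2f}")   # -> 1.00
```

After a few repetitions of A-B-C the model confidently predicts B after A, so the final X scores an anomaly of 1.0; that prediction-failure signal is the essence of how HTM-style systems flag anomalies.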

It is clear from many recent scholarly neuroscience articles, and popular ones such as Is the Brain More Powerful Than We Thought? Here Comes the Science, that much inspiration and many ideas still await us to help improve the state of the art in machine intelligence.[xxxi] A team from UCLA recently discovered a hidden layer of neural communication buried within the dendrites: rather than acting as passive conductors of neuronal signals, as previously thought, dendrites actively generate their own spikes, five times larger and more frequent than the classic spikes stemming from neuronal bodies (somas).[xxxii] This suggests that learning may be happening at the level of dendrites rather than neurons, using fundamentally different rules than previously thought. This hybrid digital-analog, dendrite-soma, duo-processor parallel computing is highly intriguing and could lead to a whole new field of cognitive computing. These findings could galvanize AI as well as the engineering of new kinds of neuron-like computer chips. In parallel, the article "We Just Created an Artificial Synapse That Can Learn Autonomously" describes how a team of researchers from the French National Center for Scientific Research (CNRS) has developed an artificial synapse, called a memristor, directly on a chip; it is capable of learning autonomously and can improve how fast artificial neural networks learn.[xxxiii]

Although we are making progress on several fronts towards a better understanding of how the human brain functions, the simple truth is that there is still much to uncover. The more we do understand, the more we can apply this to understanding the machines we are creating. Conversely, the more we understand about how to develop intelligent systems, the more tools and insights we gain to enhance our understanding of how the brain works. This is truly exciting for both neuroscience and AI. And as we dive deeper into understanding how both work, and what it is that makes us human, we can also start thinking about a future that is truly human, empowered and aided by machines. Machine intelligence and human intelligence are interlinked. Because AI is reliant on humans, we have the power to steer AI. To decide where it goes. To shape our lives, our world, and our future with the amazing tools at our disposal. This is not a job for the future. It creates the future. It is not a job for other people; it is up to all of us to decide what kind of world we want to live in…"


Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era


Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era" takes us on a holistic sense-making journey and lays a foundation to synthesize a more balanced view and better understanding of AI, its applications, its benefits, its risks, its limitations, its progress, and its likely future paths.?Specific solutions are also shared to address AI’s potential negative impacts, designing AI for social good and beneficial outcomes, building human-compatible AI that is ethical and trustworthy, addressing bias and discrimination, and the skills and competencies needed for a human-centric AI-driven workplace. The book aims to help with the drive towards democratizing AI and its applications to maximize the beneficial outcomes for humanity and specifically arguing for a more decentralized beneficial human-centric future where AI and its benefits can be democratized to as many people as possible. It also examines what it means to be human and living meaningful in the 21st century and share some ideas for reshaping our civilization for beneficial outcomes as well as various potential outcomes for the future of civilization.?


See also the Democratizing AI Newsletter:

https://www.dhirubhai.net/newsletters/democratizing-ai-6906521507938258944/




[i] https://en.wikipedia.org/wiki/Intelligence

[ii] Shane Legg and Marcus Hutter. A collection of definitions of intelligence, 2007.

[iii] https://arxiv.org/pdf/1911.01547.pdf

[iv] Max Tegmark, Life 3.0; https://youtu.be/e3K5UxWRRuY

[v] https://youtu.be/e3K5UxWRRuY

[vi] https://youtu.be/e3K5UxWRRuY

[vii] https://www.youtube.com/watch?v=xqetKitv1Ko

[viii] https://www.youtube.com/watch?v=xqetKitv1Ko

[ix] https://www.youtube.com/watch?v=xqetKitv1Ko

[x] Frank Wilczek, The Unity of Intelligence, an essay in John Brockman, Possible Minds: Twenty-five Ways of Looking at AI.

[xi] Frank Wilczek, The Unity of Intelligence, an essay in John Brockman, Possible Minds: Twenty-five Ways of Looking at AI

[xii] Frank Wilczek, The Unity of Intelligence, an essay in John Brockman, Possible Minds: Twenty-five Ways of Looking at AI

[xiii] Frank Wilczek, The Unity of Intelligence, an essay in John Brockman, Possible Minds: Twenty-five Ways of Looking at AI

[xiv] Yuval Noah Harari, Homo Deus.

[xv] https://www.youtube.com/watch?v=-EVqrDlAqYo&feature=youtu.be

[xvi] Lisa Feldman Barrett, How Emotions Are Made; Lisa Feldman Barrett, Seven and a Half Lessons About the Brain.

[xvii] Lisa Feldman Barrett, How Emotions Are Made.

[xviii] Lisa Feldman Barrett, Seven and a Half Lessons About the Brain.

[xix] https://youtu.be/OheY9DIUie4

[xx] https://youtu.be/OheY9DIUie4

[xxi] https://www.youtube.com/watch?v=-EVqrDlAqYo&feature=youtu.be

[xxii] https://numenta.com/

[xxiii] https://www.youtube.com/watch?v=-EVqrDlAqYo&feature=youtu.be

[xxiv] https://www.youtube.com/watch?v=uOA392B82qs

[xxv] https://www.youtube.com/watch?v=uOA392B82qs

[xxvi] https://www.youtube.com/watch?v=uOA392B82qs

[xxvii] https://www.youtube.com/watch?v=uOA392B82qs

[xxviii] https://www.youtube.com/watch?v=uOA392B82qs

[xxix] https://www.youtube.com/watch?reload=9&v=6ufPpZDmPKA

[xxx] https://www.youtube.com/watch?v=-EVqrDlAqYo&feature=youtu.be

[xxxi] https://singularityhub-com.cdn.ampproject.org/c/s/singularityhub.com/2017/03/22/is-the-brain-more-powerful-than-we-thought-here-comes-the-science/amp/

[xxxii] https://www.physics.ucla.edu/~mayank/

[xxxiii] https://futurism.com/we-just-created-an-artificial-synapse-that-can-learn-autonomously/; https://www.cnrs.fr/index.php; https://www2.cnrs.fr/en/2903.htm
