Artificial Intelligence: The Journey into the Unknown - PART II
OP-ED: Vidya Munde-Müller and Sascha Lambert
We are in the middle of a serious debate about whether human history is approaching a ‘singularity’. This is no longer science fiction. AI technology has the potential to reshape our society within the next few decades. The reshaping has already started, with more and more automation finding its way into our everyday lives, e.g. voice assistants that are always available inside our smartphones. The authors of this op-ed intend to share their understanding of the singularity and the kind of future that may await humanity. Rather than choosing one future or another, the authors want to show different possibilities of AI evolution using the lessons of human evolution.
The Journey into the Great Unknown
As explained in the previous chapter (The Journey - Part I), the second wave is much harder to predict, given the possibility of human-level AI and the existential risk it could pose to humans if things get out of hand. To recap where we are on the scale towards Superintelligence:
Image Credits: Alexa via Amazon, Robot via Unsplash | Alex Knight, Ex-Machina motion picture (Universal Pictures International Germany GmbH). The pictures are meant only to symbolize potential evolutionary steps of AI.
Many of the theories are based on hypothetical scenarios, and it is not clear whether this is what will happen. As can be seen from the chapters ahead, a lot of central questions remain to be answered. To this day, we have not completely deciphered the workings of the brain, and many of its processes are not understood, making it very hard or even impossible to create machines with a real capability to think as humans do. The next question that arises is: is this something we should aim for?
The Long Path Towards Artificial General Intelligence
Excursus: What is Genuine Intelligence?
Before we get started on how to build Artificial General Intelligence, or AGI, it is important to reflect on what we humans consider genuine intelligence. A human being is a generalist, a jack of all trades, so to speak. So the question is: how could a machine gain general intelligence to match humans? If an AI can match average human performance in all or nearly all spheres of intellectual activity, it is said to have human-level intelligence. Humans are creative and have common sense. Endowing machines with common sense, for example, would mean that machines can understand the laws of physics. A child can easily understand and visualize new situations, e.g. that if you dangle a rat by its tail, the part closer to the ground is its nose and not its ears, without ever having seen it before. So machines do need to understand new situations and abstract concepts like numbers, law or money. These concepts form an imagined order; they are not found in nature and are entirely a human creation. This is a necessary component of what we humans define as genuine intelligence.
AI needs to be able to create hypothetical and imaginary scenarios and to invent new categories of useful things as we humans did, such as books, steam engines and the internet. It would need to invent entirely new things. Fulfilling the requirements of human-level intelligence would take more than predictions alone; planning would be essential. Can it plan a large engineering project or design a faster computer? We would consider it intelligent if it had goals and knew how to achieve them. Take a cat with its animal intelligence: it knows to wait patiently by the tree stump if a mouse has disappeared behind it. Will a machine intelligence know this and be able to anticipate it?
What makes us human is not only the ability to perceive, think and talk about everyday concepts like cats, books or trees, but also to imagine stars, cells and atoms as well as magnetic fields, bank accounts and computer programs. We are unique in the animal kingdom in that we can reflect on the human condition using language and transcend our biological imperatives. Human intelligence is collective and was achieved by building on the achievements of previous generations. Additionally, we are shaped by sexual selection and competition for societal status, which resulted in further forms of creativity exemplified by dance, fashion, art, music and literature. As humans we have invented new categories like agriculture, printing and the steam engine, and have come up with concepts like farming, writing and postmodernism. The big question is whether machines would be able to somehow replicate these types of concepts and technologies.
As written in the previous chapter, the evolution of individual neurons gave us complex brain structures that seem to accomplish more than the sum of their individual components. If that works in biology on the basis of chemistry and electricity, then why not in the world of mathematics and algorithms? If every thought we think is based on smaller parts, and all (sub-)processes can be described in a physical, chemical or mathematical way, then it should also be possible to find a way to build machines with genuine intelligence.
Is the magic ingredient access to infinite knowledge? Is it experience, or something we do not know today? Seen from a scientific viewpoint, there cannot be any kind of magic involved. The processes still seem too complex for us to understand, but they can be understood. Perhaps we will decipher the ‘magic’ in a few decades, perhaps not for a hundred years, but someday it might happen.
Building Artificial General Intelligence
Let’s jump into how we could build a genuinely intelligent machine someday. There are broadly two ways in which AGI could be created in the future (for details, please read “The Technological Singularity” by Murray Shanahan):
- Biological Approach
- Engineering Approach
Image Credits: Biological brain via sciencebasedmedicine.org, Digital brain via wallhere.com
Both approaches could lead to AGI but, naturally, there are a lot of challenges involved. One biological approach would be to create AGI through whole brain emulation. The human brain contains around 80 billion neurons and tens of trillions of synapses; the brain of a mouse has around 70 million neurons, each with several synaptic connections. So for this hypothetical experiment, let us assume that we could emulate the brain of a mouse. The emulation would involve three steps: mapping, simulation and embodiment. We would need to map the brain at a high enough spatial resolution, build a real-time simulation of all of its electrochemical activity and, in the final stage, interface the simulation with an external environment. This is where biotechnology (genetically modifying the mouse) and nanotechnology (mapping the neural activity of the mouse) could help.
Simulating the brain of a mouse would be a huge feat, but not enough if we want human-level general intelligence. There would be many challenges; in particular, we would need enormous processing power to simulate the tens of millions of neurons. To show how difficult it would be to replicate the whole human brain, consider the OpenWorm project, which aims to build the first digital life form. Its worm has only 302 neurons, and yet the project is still pending due to a lack of fundamental data on the signaling properties of those 302 neurons. The sketch below gives a feeling for what simulating even a single neuron involves.
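The following is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest standard models of spiking activity. It is a drastic simplification of real electrochemistry, and all parameter values are illustrative rather than biologically calibrated; a whole-brain emulation would need tens of millions of far richer models, coupled by synapses and running in parallel.

```python
# Minimal sketch: a leaky integrate-and-fire neuron. A drastic
# simplification of real electrochemical activity; all parameter
# values are illustrative, not biologically calibrated.
dt = 0.1          # time step in ms
tau = 10.0        # membrane time constant in ms
v_rest = -70.0    # resting potential in mV
v_thresh = -55.0  # spike threshold in mV
v_reset = -75.0   # reset potential after a spike in mV

v = v_rest
spikes = []
for step in range(1000):                          # 100 ms of simulated time
    i_input = 20.0 if 200 <= step < 800 else 0.0  # injected current (arbitrary units)
    # Leaky integration: the potential decays towards rest
    # and is driven upwards by the input current.
    v += dt * (-(v - v_rest) + i_input) / tau
    if v >= v_thresh:                             # threshold crossed: spike and reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in 100 ms of simulated time")
```

Even this toy hints at the scale of the problem: every additional neuron adds state and synaptic coupling, and all of it must first be measured from a real brain.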
Another way to create AGI is to engineer it from scratch. Here there are none of the limitations faced in emulating the brain, since this approach is based not on creating exact digital copies of neurons but on learning from data. The architecture for such an AGI is based on Machine Learning (ML) and optimization algorithms: ML would construct probabilistic predictive models of the world, and optimization would find the actions that maximize expected reward. But for this to work, ML would require huge amounts of data about the everyday world and the behavior of humans in order to build predictive models.
The ML framework would consist of three core functions:
- Reward function
- Learning function
- Optimization
The reward function determines how the AI will behave; it helps the AI to improve. After every action or series of actions, the reward function can be evaluated on the current output, and the resulting information is used by the AI to change its behavior. The reward can also be given by a teacher, as in supervised learning.
The learning function describes how the machine will learn. There are several ways to implement one; the most prominent are supervised learning, unsupervised learning and reinforcement learning. Each learning function has its advantages and drawbacks, and which one to choose depends on the given problem. There is no golden learning function suitable for all kinds of problems, and a combination of learning concepts is often used to tackle more complex ones.
The optimization function maximizes the expected reward. Will AI learn new tools to maximize that reward? Many neural underpinnings of the human brain are yet to be revealed, so it is very difficult to teach an AI what we ourselves do not understand about our own brains. A newer research approach is to equip AI algorithms with curiosity. The problem for AGI is that it has to be capable of adapting to unseen situations using solutions it learned in the past (transfer learning). Imagine how many things you have learned in your life: an unbelievable amount of data. It would not be efficient to implement algorithms for every possible environment and problem situation, however tiny the differences; this could be an infinite task. Curiosity is what drives children, human and animal alike, to keep learning over long periods of time. Curiosity implemented in an AI might be a puzzle piece on the way towards AGI. The sketch below shows how the three functions could fit together.
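Here is a minimal sketch of the three core functions in a toy one-dimensional world, using a simple Q-learning rule with a visit-count curiosity bonus. Everything in it, the environment, the bonus, the constants, is an illustrative assumption, not a blueprint for AGI.

```python
import random

N_STATES, N_ACTIONS = 10, 2                        # a tiny one-dimensional world

def reward_function(state):
    """Extrinsic reward: only the last state pays off."""
    return 1.0 if state == N_STATES - 1 else 0.0

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]   # learned value estimates
visits = [0] * N_STATES                            # counts for the curiosity bonus
alpha, gamma, epsilon = 0.1, 0.9, 0.1              # illustrative constants

state = 0
for step in range(10_000):
    # Optimization: pick the action with the highest estimated value,
    # with occasional random exploration.
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])
    next_state = min(max(state + (1 if action == 1 else -1), 0), N_STATES - 1)

    # Curiosity: an intrinsic bonus for rarely visited states.
    visits[next_state] += 1
    r = reward_function(next_state) + 0.1 / visits[next_state]

    # Learning function: a plain Q-learning update.
    q[state][action] += alpha * (r + gamma * max(q[next_state]) - q[state][action])
    state = 0 if next_state == N_STATES - 1 else next_state  # restart at the goal

print("Learned state values:", [round(max(qs), 2) for qs in q])
```

The curiosity bonus rewards rarely visited states, pushing the agent to explore before the extrinsic reward has ever been seen; this is the intuition behind curiosity-driven learning, scaled down to a few lines.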
Towards Superintelligence
Suppose we end up building AGI, or human-level intelligence; the path to Superintelligence then becomes more plausible. According to Arthur C. Clarke, any sufficiently advanced technology is indistinguishable from magic. If an AI can outwit humans at every turn, then it is a superintelligent AI and would appear to work like magic. It can be built in the same way as AGI, using the same approaches as before:
- Brain-inspired Superintelligence
- Engineered AI Superintelligence
The biological approach could be used to enhance the simulated brain at the anatomical level, for example by enlarging the prefrontal cortex or the hippocampus, especially since a simulated brain does not have to fit inside a physical cranium. One could also run several copies of the AI on different tasks and exploit parallelism. Speedup and parallelism are two of the most important tricks that can make engineered or brain-inspired superintelligence viable.
AI could combine powerful optimization with a very powerful ML algorithm to maximize expected reward in ways we can barely imagine. As with brain-based AI, once an AI is engineered whose intelligence is even slightly above human level, the dynamics of recursive self-improvement can trigger an intelligence explosion. A toy model of this dynamic is sketched below.
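Purely for illustration, assume that each design cycle improves capability at a rate proportional to current capability; the numbers below are invented and carry no predictive weight.

```python
# Toy model of recursive self-improvement: invented numbers, no predictions.
capability = 1.0      # 1.0 = human level (assumed baseline)
rate = 0.1            # assumed improvement per design cycle
for generation in range(1, 51):
    capability *= 1.0 + rate * capability   # better AI improves itself faster
    if capability > 1000:
        print(f"Capability exceeds 1000x the baseline at generation {generation}")
        break
```

Because better AI improves itself faster, the growth is faster than exponential; with these made-up constants the loop crosses the 1000x mark after roughly fifteen generations. Whether real AI development has anything like this feedback structure is exactly the open question.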
A big question remains unanswered. Even though these superintelligent machines might outperform us in every task on earth, will they really think like us, with what we call mind, soul or spirit? Will something like morality, feelings and awareness evolve just from the intelligence described above, or is something still missing? We don’t know.
Endgame 2045: The Dawn of Singularity
What will happen if machines attain a level far beyond that of humanity, if they gain Superintelligence as in the previous step? Could it mean the extinction of humanity, given how bad we are for our planet and the universe? According to the futurist Ray Kurzweil, we will reach the ‘Singularity’ in the year 2045. In layman’s terms, singularity means that machines will be smarter than humans. Kurzweil bases his vision on the technological trends of the past decades (especially Moore’s law) and predicts that the exponential growth in computing power will make simulating the human cortex in real time possible. Although the industry largely predicts the end of Moore’s law soon, there are signs that new types of architectures, such as domain-focused accelerators, could improve computing performance a thousandfold, and quantum computing might be an answer to today’s computational limitations as well. Add to that the belief in the law of accelerating returns, i.e. the more you have, the faster you grow, and huge advances in AI come within the realm of possibility.
Many naturally disagree with Kurzweil on the timetable and consider 2045 a distraction. But few can dispute that AI will be one of the most important technological trends of the 21st century. In the words of the noted AI researcher Jürgen Schmidhuber, who is often called the ‘Father of AI’: “It is much more than just another industrial revolution. It is something that transcends humankind and life itself.”
Central Questions and Hypotheses
In the previous sections we discussed how we could technically build AGI or Superintelligence. In this chapter we ask a series of questions which need to be answered before we can create such an intelligence.
Do We Even Know the Human Brain?
In order to make the brain-inspired approach to AI a reality, the key is knowing the neural underpinnings of the human brain. Our brain is a very sophisticated organ and a master of adaptation, so the central question is whether it is really possible to replicate this unbelievable feat of evolution. Currently, we do not understand the brain well enough to do that. There is a joke that we still have no real cure for baldness in men; how, then, are we advanced enough to build a machine that really understands us? In addition, many open challenges remain: computing power cannot keep increasing as predicted by Moore’s law forever, and the physics of the brain may be non-computable, at least as far as we presently know. We would really need to understand intelligence in order to replicate it.
Do Lessons of Human Evolution Apply to AI?
The creation of all complex life on Earth was due to the sheer brute force of evolution. There is a pattern found in evolution whereby basic elements are replicated, varied and repeated countless times to evolve marvels like the hand, the eye and the brain. Our human brain in turn came up with farming, writing, postmodernism and punk rock. But even while maximizing its reward, i.e. the proliferation of our species, evolution had no global cost function or utility guiding its progress; it simply explores a vast space of possibilities. Are AI algorithms going to invent a hand or an eye to find a solution? The simple answer is: probably not. As mentioned before, curiosity might be an answer. Curiosity can be a great driver to continuously learn new concepts and to adapt to new situations using things already learnt.
The big lesson from evolution is natural selection. Advanced technology can emerge from even a simple, brute-force algorithm if we wait long enough. Similarly, in the case of AI, we could devise the right sort of brute-force algorithm, supply it with an open-ended reward function and unleash it on the environment; then the only thing limiting its capabilities would be computing power. Genetic algorithms provide another approach based on evolutionary concepts. Huge numbers of AI algorithms can be put into competition; only the fittest survive and spawn the next population, perhaps with some mutations in their genes (the code). In this way researchers already try to replicate evolution digitally today, and combined with other concepts like quantum computing it could be a way to repeat evolution in a much shorter time. A minimal sketch of the idea follows below.
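To make the idea concrete, here is a minimal genetic algorithm that evolves random bit strings towards a target pattern. The ‘genes’ and the fitness function are toy assumptions; real research evolves things like network architectures or program code.

```python
import random

TARGET = [1] * 20                                  # the pattern we want to evolve
POP_SIZE, GENERATIONS, MUTATION = 50, 100, 0.02    # illustrative constants

def fitness(genes):
    """Count how many bits match the target."""
    return sum(g == t for g, t in zip(genes, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Selection: only the fittest half survives.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Reproduction: crossover of two random survivors, plus mutation.
    children = []
    while len(survivors) + len(children) < POP_SIZE:
        mom, dad = random.sample(survivors, 2)
        cut = random.randrange(len(TARGET))
        child = [1 - g if random.random() < MUTATION else g
                 for g in mom[:cut] + dad[cut:]]
        children.append(child)
    population = survivors + children
    if fitness(population[0]) == len(TARGET):
        print(f"Perfect fitness reached in generation {gen}")
        break
```

Selection, crossover and mutation are the whole machinery; everything else emerges from repeating them, which is precisely the brute-force lesson of natural evolution.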
The first such AI would be a ‘Seed AI’. As with brain-based AI, once an AI is engineered whose intelligence is even slightly above human level, the dynamics of recursive self-improvement become applicable, possibly leading to an intelligence explosion.
Common Sense and Creativity in Machines
Another question is how to endow machines with generic capabilities concerning everyday things, the laws of physics and human psychology, as well as with abstract models and concepts like integers or money. We talked about some examples of intelligence in the previous chapter and what we would consider genuine intelligence. It would be very important for an AI to be able to discover (or even invent) abstract concepts for itself, in order to cope with a world that cannot be known in advance. Creativity is a very human quality and requires unique skills; it is open-ended combination, like Lego bricks that can be assembled in endless ways. Yoshua Bengio, a deep learning pioneer, believes that a more brain-based approach would be necessary to endow machines with real intelligence and creative thinking skills.
Genie in the Bottle - Inevitability of AI?
One of the central questions for the engineered approach is whether a mathematical model can be created that is effective in predicting human behavior. Can we design a reward function that is guaranteed not to produce undesirable behavior? Or will it be like King Midas, who asked that everything he touched turn to gold and then could not even eat or drink? Do we have a chance to program morality or legality into the reward function? Just imagine that morality is interpreted differently in different regions of the earth. The sketch below shows how easily a naive reward function produces the Midas effect.
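The cleaning-robot scenario below and all its numbers are invented purely for illustration: a robot rewarded only for the dirt it collects will happily create the mess it then cleans up.

```python
# Toy illustration of reward misspecification (the King Midas problem).
# The scenario and numbers are invented for illustration only.
actions = {
    "vacuum_floor":   {"dirt_collected": 5, "damage": 0},
    "knock_over_pot": {"dirt_collected": 9, "damage": 10},
}

def naive_reward(outcome):
    return outcome["dirt_collected"]                       # ignores side effects

def safer_reward(outcome):
    return outcome["dirt_collected"] - outcome["damage"]   # penalizes damage

for reward in (naive_reward, safer_reward):
    best = max(actions, key=lambda a: reward(actions[a]))
    print(f"{reward.__name__}: the optimizer chooses '{best}'")
```

Note that the ‘safer’ reward only moves the problem: now ‘damage’ has to be specified completely and correctly, and no reward function is known that is guaranteed to be free of such loopholes.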
If, as described earlier, a recursively self-improving AI (either brain-based or engineered using ML) with huge computing power is created, then a mouse-level artificial general intelligence is not only possible but a near-term prospect. If mouse-level artificial intelligence is possible, then human-level intelligence is possible. If human-level intelligence is possible, then superhuman intelligence is almost inevitable. The genie is out of the bottle!
Will Machine Learning Surprise Us?
Machine Learning is an important step in engineering an AI from scratch. Such an AI will operate very differently from the biological brain, as it relies on big data and fast processing. It might solve problems in ways that are unintuitive for us humans, and we may not fully understand how it did so. Human-level AI therefore does not, and will not, act like a human. If this is inscrutable already at the level of AGI, how can we even predict or control a superintelligent AI? It is also a question of trust. As mentioned earlier, we do not fully understand our own brain processes, and yet we trust ourselves, more or less. If we could make sure that an AGI or a superintelligent machine operates within well-defined borders, would that be enough for us to trust it even if we don’t understand the processes inside?
How Effective is Data?
There is a very interesting paper, ‘The Unreasonable Effectiveness of Data’, written by the Google scientists Halevy, Norvig and Pereira. It demonstrated an important effect: a messy dataset on the order of a trillion items was highly effective for machine translation, where a clean dataset of a mere million items had been remarkably useless. To come back to a machine’s common-sense grasp of physics, like a rat dangled by its tail having its nose near the ground and not its ears: the bet is that with enough data an adequate model of the world can be created, one that predicts using data from millions of internet videos of dangling objects and of rats in different positions doing different things. It seems that what is needed is a bigger training set, even if it is noisy rather than clean. We have to find ways to use AI even with sparse, incomplete, noisy or biased data. The data in all our databases, and all the data on the internet, somehow reflect our real life. Of course our life is full of bias; everyone has biases, and so does our data. Do we have to eliminate all subjectivity from the data, or can we cope with it? Often data is simply not available, yet this does not have to be a showstopper. The sketch below illustrates one facet of the data effect with invented numbers.
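With enough samples, noise averages out, so a large noisy dataset can beat a small clean one. This toy sketch captures only that statistical facet; the paper’s argument also rests on covering rare cases, which the sketch omits.

```python
import random

# Invented numbers: estimate a quantity from a small clean sample
# versus a much larger but far noisier sample.
random.seed(0)
TRUE_VALUE = 3.0

small_clean = [random.gauss(TRUE_VALUE, 0.5) for _ in range(10)]
large_noisy = [random.gauss(TRUE_VALUE, 5.0) for _ in range(100_000)]

est_clean = sum(small_clean) / len(small_clean)
est_noisy = sum(large_noisy) / len(large_noisy)

print(f"10 clean samples:      estimate {est_clean:.3f}")
print(f"100,000 noisy samples: estimate {est_noisy:.3f}")
```

Despite each noisy sample being ten times less reliable, the large noisy dataset typically lands closer to the true value of 3.0, because its errors cancel out across a hundred thousand samples.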
Unlike machines, children do not need vast amounts of data to learn, for example, the concept of a cat: a kid just needs to see a cat once or twice and has got it. Maybe today’s machine learning algorithms extract the wrong patterns from the data, perhaps because we programmed them with our own incomplete knowledge.
Would AI Come to Human-Level Intelligence?
One of the great abilities of human beings is to transcend the biological reward function through rationality and reflection. By using rationality and principled design, humans could develop new technology more efficiently than in the evolutionary brute-force manner. Would AI ever come to this level? Would it investigate the world and build up know-how? Would it construct rational arguments?
As Yuval Harari argues in the book ‘Sapiens’, one of the things that makes us human is the ability to think in abstract concepts (or an imagined order). We have developed sophisticated language, which was one of the pillars of human development. A very difficult thing for AI, for example, is to learn human language. The question is whether AI can grasp it with ML just as it detects patterns in a crowd. But even language is a form of behavior; can it then not be replicated by the sheer brute-force approach of natural evolution? The deep learning algorithms in place today can recognize patterns in a crowd or patterns in vegetation; can such algorithms not then be used to understand language? If a Superintelligence is built at some point, it may use emotive language, but not out of empathy or deceptive intent; it would do so for purely instrumental reasons. So even if AI reaches human-level intelligence, it won’t really act human.
Excursus: How Humans Could Evolve
In the sections above, we discussed the different possibilities of AI evolution, including the fact that human-level AI might not behave like a human. In this section we want to browse through different ways in which humans might address the threat of AI. One long-standing idea is that humans could use technology to extend their lives and minds. There are two ways to do that:
- Transhumanism
- Mind Uploading
Transhumanism is the belief that humans can evolve beyond their current physical and mental limitations using science and technology; it is a way to cognitively enhance humans. The other way is mind uploading.
A digital mind is a mind that runs on a computer. One type of digital mind is the upload of a human mind that has been transferred to a digital format and runs as a software program on a computer. Another is the mind of an Artificial General Intelligence (AGI). The first case is based on the human mind and on replicating it in software; the second, the AGI, is based on computer science principles and will have little or no resemblance to the human mind.
Unlike a biological brain, a digitally realized brain can be copied arbitrarily many times. It can be sped up and is liberated from biological imperatives like the need for food and sleep.
Could extending consciousness in a digital mind help us mitigate the threat posed by superintelligent machines? We will detail our final thoughts on AI evolution in the next chapter.
In the next article we will dig deeper into the question: is AI a 'Utopia' or a 'Dystopia'? Stay tuned!
-----------------------------------------------------
About the Authors:
Vidya Munde-Müller is the Founder of Givetastic.org (Giving. Made Fantastic) and Women in AI Ambassador, Germany
Sascha Lambert is the Business Owner of Artificial Intelligence at Deutsche Telekom IT and Co-lead of AI Community at Deutsche Telekom