THE TWO SINGULARITIES, FUTURE AND FAR FUTURE OF AI
Mark Brady, PhD
Office of the Under Secretary of Defense for Research & Engineering,
Test Resource Management Center
and KBR
Ai4 2023
Author Notes
LinkedIn: https://www.dhirubhai.net/in/mark-brady-18274321b/
Amazon Author Page: https://amazon.com/author/markjbrady
The Two Singularities, Future, and Far Future of AI
In this talk I’ll describe how there are two singularities, not one, where a singularity is a point at which we lose control of our technology.
There has been much discussion about the threat of AI to humankind. The discussion usually includes the hypothesis that a singularity will cause AI to eradicate humanity. Although this is a possibility, the causes, effects, and other possibilities are usually left out of the discussion. Here I will describe what will happen under what circumstances, making some predictions.
What is Intelligence?
The fact that we are all here and involved in artificial intelligence in some way should indicate that we all already know what AI is, but let’s step back and reconsider what constitutes intelligence.
Figure 1.
As a point of reference, we’ll consider the human brain and what it does, starting with planning and reasoning, which are processed in the frontal areas. This is what we classically considered to be intelligence. However, as AI researchers attempted to reproduce other brain functions, like perception or motor control, they found them to be surprisingly difficult. This shouldn’t have been a surprise, however, considering how much real estate these functions occupy in the brain. For example, a significant portion of the brain, including the lateral geniculate nucleus, the occipital cortex, and portions of the parietal cortex and temporal lobes, is required to visually recognize an object.
It is only in recent years that we have made significant progress in the more difficult machine vision problems and robotic coordination. The work of Boston Dynamics is an outstanding example of the latter.
Emotion is another significant function of the human brain. It is traditionally considered to be highly specific to humans whereas robots are stereotypically portrayed as unemotional. However, emotions are not that mysterious. Emotion is simply the thing that motivates us and is related to the AI concept of objective function. More on this important concept when we get into Singularity 2.
It should be noted that there is a difference between having motives and emotions vs expressing them.
Really, any form of information processing can be considered to be intelligence because there is no clearly definable boundary between intelligent information processing and any other form of information processing. But there are degrees and levels of intelligence.
In 1950, Turing famously set the standard for an intelligent machine as one that could convincingly imitate a human in a text exchange. This definition persisted for a long time and was taught at universities. I hope that is not still the case because intelligence is not about imitation. It is about autonomy. Autonomy is the key to defining the levels of intelligent information processing.
Definition: autonomous system → A system that carries out tasks with limited or no assistance from other entities.
Definition: level 1 autonomy → Level 1 autonomy exists where an entity can carry out a predefined process without assistance. For example, an automobile can propel itself without the assistance of a horse, which is why it is called an automobile. Conventional software performing calculations and logical processing is also an example.
Definition: level 2 autonomy → Level 2 autonomy exists where an entity adapts to a particular task through learning. The learning is based on data that may be provided to it, or the data may come directly from the environment.
Definition: level 3 autonomy → Level 3 autonomy exists when an entity can adapt to very different tasks through the process of analogical reasoning. The entity may even choose what those new tasks might be.
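As a rough sketch only, these three levels can be written down as a small data structure. The Python enumeration below and the example systems attached to it are my illustration, not part of the original definitions.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Degrees of autonomy as defined above (illustrative sketch only)."""
    LEVEL_1 = 1  # executes a predefined process without assistance (e.g., conventional software)
    LEVEL_2 = 2  # adapts to a particular task through learning from data
    LEVEL_3 = 3  # adapts to very different tasks through analogical reasoning

# Hypothetical examples keyed to the definitions above.
examples = {
    "pocket calculator": AutonomyLevel.LEVEL_1,
    "image classifier trained on labeled data": AutonomyLevel.LEVEL_2,
    "agent that transfers a skill to an unrelated task": AutonomyLevel.LEVEL_3,
}

for system, level in examples.items():
    print(f"{system}: {level.name}")
```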
What is Artificial Intelligence?
As in the case of human intelligence, we can’t easily draw a dividing line between what is intelligent machine processing and what is not. What we can say is that AI is characterized by information processing that was previously uniquely human. This creates a definitional dilemma: once an algorithm exists, the capability it implements is no longer unique to humans, so in time we no longer think of it as AI. AI is a moving target. The only solution to this dilemma is to accept the fact that the scope of what is and what is not AI changes over time.
Figure 2.
The situation requires at least one new term, and that is former AI, the stuff that used to be AI but is now taken for granted as a machine function.
Definition: former AI → Machine-based capabilities that were once unique to humans but have long since been acquired by machines.
Examples include the calculator and maybe chess. You might be surprised at the calculator example. Was this ever AI? For thousands of years arithmetic was a uniquely human capability. It wasn’t until 1642 that we even came close. That is when Pascal invented a calculator that could perform addition and subtraction. Multiplication and division could only be performed by repetition. So impressed was he with his invention that he called it, “a device that will eventually perform all four arithmetic operations without relying on human intelligence.” “Eventually” turned out to be another 178 years, when Thomas de Colmar invented the first four function calculator. So, it was no easy feat.
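To make the “by repetition” point concrete, here is a minimal sketch, in Python rather than gears, of how an adder-only machine reduces multiplication and division to repeated addition and subtraction. It illustrates the arithmetic, not Pascal’s actual mechanism.

```python
def multiply_by_repeated_addition(a: int, b: int) -> int:
    """Multiply two non-negative integers using only addition,
    as an adder-only machine would (illustrative sketch, not Pascal's design)."""
    total = 0
    for _ in range(b):
        total += a
    return total

def divide_by_repeated_subtraction(dividend: int, divisor: int) -> tuple[int, int]:
    """Integer division via repeated subtraction; returns (quotient, remainder)."""
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor
        quotient += 1
    return quotient, dividend

print(multiply_by_repeated_addition(7, 6))      # 42
print(divide_by_repeated_subtraction(178, 20))  # (8, 18)
```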
Looking at our diagram we see that another definition is needed, for those capabilities that are unique to machines. When will machines have such capabilities? They have had them right from the start, with the calculator and then greatly expanded by the ENIAC in 1945. Computers have always been able to carry out calculations and Boolean logic faster and more reliably than humans.
Definition: extra intelligence → Information processing capabilities unique to machines.
AI and extra intelligence are increasing, whereas the genetic basis of human intelligence is static at best and likely in slow decline. The reason for the decline is the lack of selective pressure. Without selective pressure, a genome tends to drift randomly, and a random place is not where you want your genome to be.
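The drift claim can be illustrated with a minimal Wright-Fisher-style simulation (my own sketch, with arbitrary parameters): with no selective pressure, an allele’s frequency simply wanders, and where it ends up has nothing to do with fitness.

```python
import random

def wright_fisher_drift(pop_size: int = 1000, generations: int = 500,
                        initial_freq: float = 0.5, seed: int = 1) -> list[float]:
    """Simulate neutral drift of one allele's frequency with no selection.
    Each generation, the next frequency is a binomial resampling of the current one."""
    rng = random.Random(seed)
    freq = initial_freq
    history = [freq]
    for _ in range(generations):
        # Each of pop_size offspring inherits the allele with probability
        # equal to the current frequency; no fitness term is involved.
        carriers = sum(1 for _ in range(pop_size) if rng.random() < freq)
        freq = carriers / pop_size
        history.append(freq)
    return history

trajectory = wright_fisher_drift()
print(f"start={trajectory[0]:.2f}, end={trajectory[-1]:.2f}")  # the end point wanders away from 0.5
```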
Singularity 1
Definition: Singularity 1 → A point in time where computational technology is no longer understood by the very culture that created it, resulting in negative effects on human civilization.
This is similar to the AI Singularity (Singularity 2), except that it isn’t the result of AI and it has no potential upside. When will we reach Singularity 1? We are in it right now! How do we know? There are at least four indications that we are in Singularity 1:
1. Design flaws and bugs are easy to find on a daily basis, indicating that there is no agreed upon and implemented set of design principles.
2. Service desks specialize in workarounds rather than issue tracking and bug fixes. The source of a bug remains unknown; it may be randomly overwritten and resolved in the next DevSecOps release, but it is never understood. Application states that make the application freeze are resolved by ending the task or rebooting. The bug states remain unknown to the developers.
3. Cyber security breaches are common in spite of the incredible resources devoted to preventing them. A common phrase is “previously unknown vulnerabilities.” Isn’t it strange that the attacker discovers a vulnerability before the people who wrote the software do?
4. Most applications are undocumented black boxes, creating a need for reverse engineering and user guides from YouTube techno-archaeologists.
Definition: techno-archaeologist → A technologist who studies and explains the previously unexplained work of other technologists.
Traditional archaeologists study the artifacts (and I don’t mean documents) of civilizations that were illiterate or who have left insufficient written records. People who study fully literate civilizations are called historians. So, the need for archaeologists is tied to illiteracy. Since the Ancient Egyptians, humanity has steadily increased in literacy, but now we have entered a post-literate era. It is true that we read and write exabytes on social media and in endless online debates, mostly trivial stuff, but we don’t write much about things that matter, like the technology that we create.
Figure 3.
Engineering is the process of design, and a design is embodied in a document of text or described using well-defined diagrammatic languages. An infographic does not a design make.
Singularity 1 is Reversible
Unlike Singularity 2, Singularity 1 is reversible. However, reversal is not guaranteed. It requires two things: caring about quality, and AI.
Caring about quality means that we work to establish software design best practices and apply them. In the current state, perfectly obvious design flaws appear almost everywhere. I count dozens of them every day. Who is creating this stuff? Caring about quality also means that we rejoin the rest of the engineering community and start engineering our software.
So, where does AI come in? Have you noticed that technical talent is hard to come by? There is a reason. Those who advance our society technically, the scientists and engineers, are a small percentage of the population, whereas the demand for them is increasing. What does it take to be an engineer? It takes a high level of analytical aptitude. Not everybody can be trained into these positions, try as you might. The math section of the SAT is a good proxy for measuring analytical aptitude. A study of Virginia high school students taking the SAT showed which ones went on to major in engineering or computer science at university: only the highest-scoring students are likely to do so.
The verbal scores are represented in color in Figure 5, and interestingly, the higher the verbal score, the less likely a student is to major in engineering or computer science.
Of the engineering and computer science students, only a fraction are computer science majors. And, only 40% of the Virginia high school students took the SAT or went to university. So, computer scientists are a rare breed considering the many opportunities in the field and the ever-increasing dependence of society on them.
Figure 5.
Only those with the highest analytic aptitude pursue degrees in engineering and computer science, creating a permanent labor shortage in these areas. Figure from Lin Tan et al.
If software engineers are to produce higher-quality, effective products, there will need to be more of them, along with others in supporting roles. AI can fill the gaps in our current DevSecOps pipeline. Imagine a human designer who oversees a team of AI designers, who pass their designs to a human developer overseeing a team of AI developers, who pass their finished product to a human tester overseeing a team of AI testers.
Figure 6.
By definition, human and AI designers document their designs. If not, there is no design.
The human and AI developers follow agile or DevSecOps procedures, making changes to the design as those changes occur to them and updating the design documentation accordingly.
Testing is especially interesting. A large application typically has an enormous number of states to visit during testing. Human testers plod through a small sample of these states at a slow pace, never visiting most during the tests. In comparison, an automated tester can run through states with tremendous speed. The hard part is recognizing when a state has produced a bug. This is where testing AI is required. Because of their speed, few bugs will escape the scrutiny of an AI tester.
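A minimal sketch of this division of labor, assuming a toy application and a placeholder oracle (both hypothetical, not from the talk): a fast automated explorer walks the state space at machine speed, while the oracle, the role the testing AI must fill, decides which visited states look like bugs.

```python
import random

# Toy state machine standing in for an application under test: states are
# integers, actions move between them, and a few states are (hidden) bug states.
BUG_STATES = {13, 77, 251}

def step(state: int, action: int) -> int:
    """Hypothetical transition function of the application under test."""
    return (state * 31 + action) % 997

def looks_like_bug(state: int) -> bool:
    """Stand-in for the hard part: an AI oracle that recognizes a bad state."""
    return state in BUG_STATES

def explore(steps: int = 100_000, seed: int = 0) -> set[int]:
    """Randomly exercise the state space far faster than a human tester could,
    reporting every state the oracle flags."""
    rng = random.Random(seed)
    state, found = 0, set()
    for _ in range(steps):
        state = step(state, rng.randrange(10))
        if looks_like_bug(state):
            found.add(state)
    return found

print(sorted(explore()))  # bug states reached by the random walk (typically all three)
```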
To do things right takes a lot of human-like intelligence. More than we have. That’s why we invented AI.
Singularity 2
Definition: Singularity 2 (AI singularity) → The hypothetical point in time where AI development becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.
When will this happen? There really isn’t enough information to predict the timing, but I’m going to predict the conditions under which it will occur. But first we must understand the differences between human and traditional machine intelligence.
A Comparison of Human and Classical Machine Intelligence
In this section I will refer to machine intelligence traits as characteristic of traditional computers and early AI. Obviously, these traits will change over time.
The first difference between human and machine intelligence is reproducibility. A machine that is correctly programmed to execute some calculation will do so without error indefinitely. In comparison, a human will make errors in a process that he or she has conducted many times before. The machine’s advantage is a form of extra intelligence.
The second difference is in memory. Computer memory is concrete and reliable. Once data is recorded, it will reliably remain there unless there is a hardware malfunction. This is another example of extra intelligence. The meaning of the data has no connection to its probability of retention.
Human memory is very different. It is not dependable, and memories may fade in a semi-random fashion. The ability to retain existing memories and form new ones depends on the importance of the information and the degree to which new data can be related to data already stored. This is called associative memory. Humans use mnemonics based on tricks associating new data with existing memories.
Associative memory is the substrate upon which the human brain conducts analogical reasoning. Analogical reasoning allows a human to adapt existing capabilities and knowledge to new purposes, thus making us the most autonomous entities on the planet.
Thus, in order for machines to completely overtake human intelligence they will need to acquire analogical reasoning.
Humans also possess intuition, an ability to determine what is true or not, without formal logic. This has to be based on something and the only thing it can be based on is experience in the physical world. Starting with physical experience, we reach increasingly higher levels of abstract thought through analogical reasoning.
We often think of a subject like mathematics as being a function of formal logic. But what about the axioms? We all know that these are based on intuition. What is not well known is that the steps in a mathematical proof, starting from the axioms, are also based on intuition. If not, mathematicians would never question whether step A in a proof really leads to step B in a proof, but they do.
Traditionally, computers are not very good at formulating new theorems and proving them.
Prediction 1: → AI will gain true human-like intelligence when it is coupled with a robotic embodiment and is capable of analogical reasoning.
A robotic embodiment provides experience in the physical world, which provides the initial raw material for analogical reasoning. The ability to reason analogically provides the means to develop layers of increasingly abstract reasoning and high levels of autonomy.
Given the surprising behavior of large language models, including elements of analogical reasoning, one has to entertain an alternative prediction.
Alternative Prediction 1: → AI will gain true human-like intelligence based on large language models.
This may be true if direct physical experience is not required and can be replaced by very large bodies of text, which convey information about the physical world, various levels of abstraction, and the key to analogical reasoning. This seems counterintuitive, but we can’t rule it out at this time.
It should be noted that current large language models may not be pure large language models. What is in them? The pathway to artificial general intelligence may turn out to be proprietary.
Dangerous AI
In a BBC interview, Stephen Hawking stated that “The development of full artificial intelligence could spell the end of the human race.” In an interview with Tucker Carlson, Elon Musk explained the dangers of AI, saying it “has the potential to destroy civilization.” Musk proposed that regulations be implemented to stop the threat. He also suggested we take a pause in developing AI.
How well will regulations work to make AI safe? Cybersecurity threats and AI are both embodied in software. Who will inspect all this software? How well are regulations working to protect us from cyber threats? Even if Singularity 1 is partially reversed, the natural trend towards Singularity 1 may keep us from understanding some of our software. You can’t regulate what you don’t understand.
And, if some in the U.S. suspend AI research, how many other nations and non-state actors will follow suit?
Prediction 2: → Some person, people, or state will create dangerous AI. They will do so unintentionally or intentionally. It will happen repeatedly.
Safer AI
Very powerful AI will never be completely safe. Nothing which is very powerful is completely safe, yet humans create such things to advance civilization. Given that, how can we make it safer?
We must use the objective function. Isaac Asimov, the visionary science fiction and popular science writer, developed the Three Laws of Robotics back in 1942. They were:
First Law: → A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: → A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: → A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Notice that he is anticipating objective functions. Great! Asimov’s Laws are examples of safety objective functions, although they are not the only possibilities. Whatever safety objective functions are implemented in your AI, how can we preserve them and what else needs to be done to keep AI safe? The Six Laws of Safe AI provide answers.
First Law of Safe AI: → An AI shall have one or more safety objective functions. An AI shall never modify any of its own safety objective functions.
Second Law of Safe AI: → Safety objective functions shall always overrule other objective functions.
Third Law of Safe AI: → Any safe AI, building other AIs, shall always include its original safety objective functions.
Fourth Law of Safe AI: → An AI shall never have an objective function that seeks to maximize its numbers or otherwise maximize resource consumption.
Fifth Law of Safe AI: → The actions of any AI shall be subject to authorized human override.
Sixth Law of Safe AI: → An AI should never be assigned a task that it cannot reliably execute.
When we speak of an AI modifying something, we are speaking of learning. Learning shall not be applied to any safety function. A safety function is immutable. The First and Third Laws cover this requirement.
The Second Law propagates safe behavior from the highest-level objective functions to all the others.
Any species, living or artificial, that seeks to multiply its numbers without bound will eventually compete with others for resources. One form of such competition is elimination of the competitor. The Fourth Law protects against this.
The Fifth Law allows humans to intervene if an AI exhibits danger.
Like a well-intentioned human, a well-intentioned AI could do harm unintentionally. This usually happens when a human or AI is acting beyond its skill level.
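As a sketch of how the First, Second, and Fifth Laws might look in code (my own illustration under simplified assumptions, not a production safety mechanism): safety objectives are fixed at construction time, they veto any action regardless of what the task objectives prefer, and an authorized human veto sits above both.

```python
from dataclasses import dataclass, field
from typing import Callable

Action = str
ObjectiveFn = Callable[[Action], float]   # higher score means more preferred
SafetyFn = Callable[[Action], bool]       # False means the action is forbidden

@dataclass(frozen=True)
class SafeAgent:
    """Illustrative agent: safety objectives are immutable (First Law),
    overrule task objectives (Second Law), and a human veto applies (Fifth Law)."""
    safety_objectives: tuple[SafetyFn, ...]                      # frozen; learning never updates these
    task_objectives: tuple[ObjectiveFn, ...] = field(default_factory=tuple)

    def choose(self, candidates: list[Action],
               human_veto: frozenset[Action] = frozenset()) -> Action | None:
        allowed = [a for a in candidates
                   if a not in human_veto                                   # Fifth Law
                   and all(ok(a) for ok in self.safety_objectives)]         # Second Law
        if not allowed:
            return None   # no safe action: do nothing rather than violate safety
        # Among safe actions, pick the one the ordinary task objectives score highest.
        return max(allowed, key=lambda a: sum(f(a) for f in self.task_objectives))

# Hypothetical example: the task objective strongly prefers "fast_route",
# but a safety objective forbids it, so the agent falls back to "safe_route".
agent = SafeAgent(
    safety_objectives=(lambda a: a != "fast_route",),
    task_objectives=(lambda a: {"fast_route": 10.0, "safe_route": 3.0}.get(a, 0.0),),
)
print(agent.choose(["fast_route", "safe_route"]))  # -> safe_route
```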
If these laws are followed, using appropriate safety functions, then an AI will never create an unsafe situation for its developer. The purpose of these laws is to help developers understand how to create safe AI. This is half the battle. But what if someone violates these laws, as someone surely will?
Defensive AI
In the future and in the far future there will be many species of AI, just as there are now. There will not be a single monolithic Skynet. Some will have intact safety objective functions, and some will not. Humans must assign some of these AIs with intact safety objective functions to the task of defending against dangerous AIs.
Invalid Assumptions
There are some assumptions about AI after the singularity that are invalid. The first such assumption is that a superior intelligence will want to eliminate an inferior intelligence. This is not always the case. For example, while humans have inadvertently caused the extinction of some species, modern humans have largely sought to preserve other species. However, even though the assumption is invalid as a general rule, intentional extinctions may still occur.
The second invalid assumption is that a superior intelligence can extinguish a lesser intelligence. There are many organisms that humans would like to eliminate, such as disease microbes, parasites, certain agricultural pests, and invasive species in the places they have invaded. Yet we have been mostly unsuccessful. It is possible, but not certain, that humans will have this same resilience.
The Four Descendants
It is a scientific fact that no species exists forever. Even if Homo sapiens faces no extinction-causing catastrophe, it will not exist in its present form in the far future. Genotypic drift occurs, with or without selective pressure. Genotypes are always in flux even if the phenotypes remain the same for long periods. The so-called “living fossils” are not the same species as their ancient counterparts.
So, what will become of our species and our AI creations? There are some emerging trends worth considering.
Cyborgs
We know cyborgs as those sci-fi humans or other animals that are electromechanically augmented. But they aren’t purely science fiction. The first crude cochlear implant was developed in 1957 by Andre Djourno and Charles Eyries, and the technology has been progressing slowly ever since. There are also silicon retinas, cortical implants, and myoelectric prosthetic limbs, although these are at earlier stages of development.
All these examples involve direct connections between the human and the machine extension of the human body. Even without such a direct neural connection, humans using augmented reality are a moderately coupled type of cyborg with extended perceptual ability. In fact, any person using a computer is a loosely coupled cyborg. Is not the computer an extension of a person’s memory and calculation capability?
Whereas current tightly coupled examples do not augment intelligence, they will do so in the future.
Prediction 3: → Future cyborgs will have augmented intelligence.
Re-evolution: Human GMOs
Previously it was stated that the evolution of human intelligence was at a standstill or even reversing. What could cause humans and their intelligence to start evolving again? Intentional genetic modification is the most likely way. Genetic modification of plants is well known. But animals have also been genetically modified, including insects, fish, rodents, birds, and mammals. In 2018, biophysicist He Jiankui created the first known human GMOs: two girls whose genomes were edited with the intent of making them resistant to HIV. He was sentenced to three years in prison.
A primary reason for taboos against using genetic modification (GM) technology on humans is that it is not yet reliable and safe. Once reliable results can be obtained using human GM, will parents decide that they want their children to be a little stronger, more resistant to disease, more beautiful, and even smarter?
Prediction 4: → GM will restart the advancement of human intelligence.
Biohybrids
Whereas cyborgs consist of a biological organism with added machine components, a biohybrid is a machine with biological components added. One of the more interesting examples is the soft robotic ray created by Sung-Jin Park et al. This ray has a gold skeleton and a rubber body powered by rat muscles. The muscle cells were genetically modified so that the ray swims towards a light stimulus. The GMO aspect of the ray demonstrates that the boundaries between categories of human descendants will not always be clear as the types blend and overlap.
Prediction 5: → If humans successfully apply defensive AI they will perpetuate by evolving into four descendant intelligences.
Figure 7.
In the far future, humans will evolve into four descendants (cyborgs, biohybrids, GMOs, and pure machines) and further blends of these four.
Conclusion
Humanity faces two singularities. The First is purely problematic and reversible; the Second is irreversible while holding both hazards and promise. How humans fare in the face of AI dangers will depend on our use of immutable objective functions and defensive AI.
There is one caveat to this prediction. If AIs with mutable objective functions are inherently superior to AIs with immutable objective functions, then this prediction may not be valid.
In the far future, humanity is unlikely to remain in its current form. Given defensive AI, we will evolve into something and have some say in what that is.