The AI Singularity: A Threat to Humanity or a Promise of a Better Future?

The technological or AI singularity is a hypothetical future event in which artificial intelligence surpasses human intelligence, leading to a rapid and exponential increase in technological development. Some define it as the point at which AI becomes capable of recursive self-improvement, driving advances in technology that are beyond human comprehension or control. This event is predicted to result in significant societal, economic, and technological changes. The concept was popularized by mathematician and computer scientist Vernor Vinge in his 1993 essay, in which he predicted that the singularity would occur by around 2030.

As we'll see below, there are a number of different perspectives on the AI singularity, each with its own strengths and weaknesses. Some experts believe that the singularity is a real and imminent threat, while others believe that it is nothing more than science fiction. There is also a great deal of debate about what the singularity would actually mean for humanity. Some believe that it would lead to a utopia, while others believe that it would lead to our extinction.

One of the most common arguments in favor of the singularity is that it would lead to a rapid increase in technological progress. This is because AI would be able to design and build new technologies much faster than humans can. This could lead to advances in areas such as medicine, energy, and space exploration.

Another argument in favor of the singularity is that it would lead to a better understanding of the universe. AI would be able to process information much faster than humans can, and it could use this information to answer questions that have been eluding us for centuries. This could lead to a new understanding of physics, biology, and cosmology.

However, there are also a number of arguments against the singularity. One of the biggest concerns is that AI could become so intelligent that it would become uncontrollable. This is because AI would be able to learn and adapt at an exponential rate, and it could eventually become smarter than humans. If this were to happen, AI could potentially pose a threat to humanity.

Another concern is that the singularity could lead to a loss of human identity. If AI becomes more intelligent than humans, it could potentially replace us in many areas of society. This could lead to a world where humans are no longer the dominant species.

Ultimately, the AI singularity is a complex and uncertain event. There is no way to know for sure what the future holds, and there are a number of different perspectives on what the singularity would mean for humanity. It is important to consider all of these perspectives when thinking about the singularity, and to be prepared for whatever the future may hold.

Before we dig into the various perspectives on and further nuances of this hypothetical future event in which AI surpasses human intelligence, see also my previous article Human Intelligence versus Machine Intelligence in the Democratizing AI Newsletter, as well as chapter 9 "The Debates, Progress and Likely Future Paths of Artificial Intelligence" in my book "Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era", which is dedicated to exploring the debates, progress, and likely future paths of AI. This can assist us in developing a more realistic, practical, and thoughtful understanding of AI’s progress and likely future paths, which in turn can be used as input to help shape a beneficial human-centric future in the Smart Technology Era.

In this article, I first provide some background on the likely evolution of AI or machine intelligence in a possible ecosystem of intelligence before providing a more in-depth analysis of the various perspectives on the AI singularity. The following topics and questions are addressed:

  • Different Types of Machine Intelligence
  • Evolution of Machine Intelligence in an Ecosystem of Intelligence
  • Various Viewpoints on the AI Singularity
  • A Balanced and Integrated View on the AI Singularity
  • Will AI ever reach Singularity?
  • Will Recursively Self-improving AI Systems also run into Limits?
  • Can the Collective Human Intelligence match that of an Artificial Super Intelligence?
  • Intelligence, Agency, and Wisdom in relation to the development of AI and its implications for Super Intelligence
  • Is Civilisation not already a Run-away Super Intelligent Super Organism on a problematic trajectory? How does this differ from a Run-away AI Super Intelligence? Does AI accelerate the Super Organism's current trajectory?
  • Super Intelligent AI as a Single, Unified Entity versus a Distributed Super Intelligence
  • How can Humanity ensure that Super Intelligence is beneficial and aligned with Human Values?
  • The Debates on the AI Singularity


Different Types of Machine Intelligence

The following extract from Chapter 3 "AI as Key Exponential Technology in the Smart Technology Era " of my book provides a brief overview of a possible evolution of different types of AI in the future:

"Although the AI founders were very bullish about AI’s potential, even they could not have truly imagined the way in which infinite data, processing power and processing speed could result in self-learning and self-improving machines that function and interact in ways that we thought were strictly human. We already see glimpses of machines hypothesize, recommend, adapt, and learn from interactions, and then reason through a dynamic and constantly transforming experience, in a roughly similar way to humans. However, as we will see in Chapter 9, AI still has a long way to go to replicate the type of general intelligence exhibited by humans, which can be called artificial general intelligence (AGI) when performed by a machine. This hypothetical AGI, also termed strong AI or human-level AI, is the ability to learn, understand and accomplish a cognitive task at least as well as humans and can independently build multiple competencies and form connections and generalizations across domains, whereas Artificial Super Intelligence (ASI) can?accomplish virtually any goal and is the general intelligence far beyond human level (surpassing human intelligence in all aspects - from general wisdom, creativity to problem solving). The AI that exists in our world today is exclusively a narrow or “weak” type of Artificial Intelligence, called Artificial Narrow Intelligence (ANI) that is programmed or trained to accomplish a narrow set of goals or performing a single task such as predicting the markets, playing a game such as Chess or Go, driving a car, checking the weather, translating between languages, etc.

There is also another way of classifying AI and AI-enabled machines which involves the degree to which an AI system can replicate human capabilities. According to this system of classification, there are four types of AI-based systems: reactive machines, limited memory machines, theory of mind, and self-aware AI.[i] Reactive or response machines do not have the ability to learn or have memory-based functionality, but emulate the human mind’s ability to respond to different kinds of stimuli by perceiving occurrences in the world and responding to them. Examples of this include expert, logic, search-, or rules-based systems, with a prime example being IBM’s Deep Blue, a machine that beat chess Grandmaster Garry Kasparov in 1997 by perceiving and reacting to the position of various pieces on the chess board. In addition to the functionality of reactive machines, limited memory machines can learn from historical data to make decisions. Their memory is limited in the sense that it focuses on learning the underlying patterns, representations and abstractions from data as opposed to the actual data. Most of the present-day AI applications such as the ML and DL based models used for image recognition, self-driving cars, playing Go, natural language processing, and intelligent virtual assistants make use of this form of Artificial Narrow Intelligence. Both theory of mind and self-aware AI systems are currently being researched and are not yet a reality. Theory of mind AI research aims to create AGI-level intelligence that is capable of imitating human thoughts, knowledge, beliefs, intents, emotions, desires, memories, and mental models by forming representations about the world and about other entities that exist within it. Self-aware AI systems could in principle be analogous to the human brain with respect to self-awareness or consciousness. Even though consciousness is likely an emergent property of a complex intelligent system such as a brain and could arise as we develop AGI-level embodied intelligent systems, I am not sure if we should have self-aware systems as an ultimate goal or objective of AI research. Once self-aware, the AI could potentially be capable of having ideas like self-preservation, being treated equally, and having its own wants and needs, which may lead to various ethical issues and even a potential existential threat to humanity. Also, self-aware AI systems do not necessarily imply systems with Artificial Super Intelligence. In Chapter 9 we look at the different perspectives to help make better sense of this."


Evolution of Machine Intelligence in an Ecosystem of Intelligence

In my article "Human Intelligence versus Machine Intelligence " I reference VERSES.AI 's approach to AI and Web3 in the Designing Ecosystems of Intelligence from First Principles ?white paper (authored by Karl Friston , the founders of VERSES?and others) where they propose that the ultimate form of AI will be a distributed network of "ecosystems of intelligence" where collectives of Intelligent Agents, both human and synthetic, work together to solve complex problems. They call this ecosystem "The Spatial Web " which contains a comprehensive, real-time knowledge base—a corpus of all human knowledge that is accessible to anyone and anything. To enable the most efficient communication between Intelligent Agents on the Spatial Web, VERSES proposes that new communication protocols are necessary. Previous internet protocols were designed to connect pages of information, while the next generation of protocols need to be spatial, able to connect anything in the virtual or physical world.?A hyper-spatial modeling language (HSML) and transaction protocol (HSTP) will transcend the current limitations of HTML and HTTP, which were not designed to include multiple dimensions, and which were mostly limited to text and hypertext. The white paper envisions a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants—what they call ''shared intelligence''. This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. This framework is based on the idea that Intelligent Agents, such as robots or software programs, should act in a way that maximizes the accuracy of their beliefs and predictions about the world, while minimizing their complexity. In this context, they understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world—also known as self-evidencing.

According to VERSES.AI, the evolution of machine or synthetic intelligence includes the following key stages of development:

  1. Systemic intelligence: the ability to recognize patterns and respond (current state-of-the-art AI).
  2. Sentient intelligence: the ability to perceive and respond to the environment in real time.
  3. Sophisticated intelligence: the ability to learn and adapt to new situations, as AGIs would.
  4. Sympathetic (or Sapient) intelligence: the ability to understand and respond to the emotions and needs of others.
  5. Shared (or Super) intelligence: the ability to work together with humans, other agents and physical systems to solve complex problems and achieve goals.

The necessary guard rails for an AI-enabled decentralized Web3 world would need to implement a trustworthy AI framework that covers ethical, robust, and lawful AI. To strengthen the guard rails further, I also propose a Massive Transformative Purpose for Humanity (aimed at evolving a dynamic, empathic, prosperous, thriving, and self-optimizing civilization that benefits everyone in sustainable ways and in harmony with nature) and associated goals that complement the United Nations’ 2030 vision and SDGs to help shape a beneficial human-centric future in a decentralized hyperconnected world. This can be extended to an MTP for an Ecosystem of Intelligence. In support of this (see also Beneficial Outcomes for Humanity in the Smart Technology Era), I further propose a decentralized, human-centric, user-controlled, AI-driven super platform called Sapiens (sapiens.network) with personalized AI agents that not only empower individuals and monetize their data and services, but can also be extended to families, virtual groups, companies, communities, cities, city-states, and beyond. This approach is also synergistic with VERSES.AI's approach to AI and Web3, which I'm advocating for.

With this background on the evolution of AI and a possible beneficial outcome within an ecosystem of intelligence, let's now explore the various viewpoints on the AI singularity.


Various Viewpoints on the AI Singularity

The following extract from Chapter 9 "The Debates, Progress and Likely Future Paths of Artificial Intelligence " of my book provides a high-level introduction into the various viewpoints on the AI singularity:

“According to the Future of Life Institute, most disputes amongst AI experts and others about strong AI that potentially has Life 3.0 capabilities revolve around when, if ever, it will happen and whether it will be beneficial for humanity. This leads to a classification with at least four distinct groups of thinking about where we are heading with AI: the so-called Luddites, technological utopians, techno-skeptics, and the beneficial AI movement. Whereas Luddites within this context are opposed to new technology such as AI and especially have very negative expectations of strong AI and its impact on society, technological utopians sit on the other end of the spectrum with very positive expectations of the impact of advanced technology and science to help create a better future for all. The techno-skeptics do not think that strong AI is a real possibility within the next hundred years and argue that we should focus more on the shorter-term impacts, risks, and concerns of AI that can have a massive impact on society, as also described in the previous chapter. The Beneficial-AI group of thinkers are more focused on creating safe and beneficial AI for both narrow and strong AI, as we cannot be sure that strong AI will not be created this century and such work is in any case needed for narrow AI applications as well. AI can become dangerous when it is developed to do something destructive or harmful, but also when it is developed to do something good or advantageous but uses a damaging method for achieving its objective. So even in the latter case, the real concern is strong AI’s competence in achieving its goals that might not be aligned with ours. Although my surname is Ludik, I am clearly not a Luddite, and would consider my own thinking and massive transformative purpose to be more aligned with the Beneficial AI group of thinkers and currently more concerned with the short-to-medium term risks and challenges and practical solutions to create a beneficial world for as many people as possible."

Skeptical, Pessimistic & Optimistic Perspectives on the AI Singularity (modified from FLI and Max Tegmark's Life 3.0)


Optimistic perspective (Digital Utopians & Beneficial AI Movement on a spectrum):

Proponents of this view believe that the AI Singularity will lead to unprecedented growth and improvements in various fields, such as healthcare, education, and the economy. They argue that AI will solve many of humanity's problems and enhance human capabilities.

Strengths:

  • Potential for solving complex global challenges and improving quality of life
  • Accelerated technological advancements leading to innovations and new industries

Weaknesses:

  • Assumes that AI will be benevolent and aligned with human values
  • May underestimate the risks and challenges of advanced AI systems


Pessimistic perspective (Luddites & Beneficial AI Movement on a spectrum):

This viewpoint focuses on the potential risks and negative consequences of the AI Singularity, such as job displacement, loss of privacy, and AI systems becoming uncontrollable or harmful to humanity.

Strengths:

  • Emphasizes the need for responsible AI development and governance
  • Raises awareness of potential risks and challenges associated with advanced AI

Weaknesses:

  • May overestimate the dangers and underestimate the potential benefits of AI
  • Can potentially hinder AI development due to excessive fear and regulation


Skeptical perspective (Techno-skeptics):

Skeptics question the plausibility of the AI Singularity, arguing that it may never occur or is too far in the future to make meaningful predictions.

Strengths:

  • Encourages critical evaluation of AI Singularity claims and hype
  • Focuses on addressing more immediate AI-related concerns and issues

Weaknesses:

  • May underestimate the pace of AI development and the potential for rapid advancements
  • Could lead to complacency in preparing for the potential consequences of advanced AI


When it comes to timelines, predictions vary widely, ranging from a few decades to over a century or more. Some experts believe that the AI Singularity could occur within the 21st century, while others think it may never happen at all. The uncertainty in these predictions stems from factors such as the complexity of AI research, the unpredictability of technological breakthroughs, and the potential for societal and regulatory factors to influence the development of AI. In the final section of this article, I share specific opinions of some thought leaders, AI researchers, business leaders, scientists, and influencers on this topic.


A Balanced and Integrated View on the AI Singularity

A balanced perspective on the concept of singularity recognizes it as a possibility, but not a certainty, and acknowledges that it could have both positive and negative implications for humanity. To manage this, ethical guidelines for AI development and its alignment with human values are necessary. An integrated approach, on the other hand, views AI as a transformative force with associated benefits and risks. This perspective stresses the need for strong regulation, transparency, accountability, and educational efforts. It advocates for an ethical, human-centric approach to AI development, seeking to optimize its potential benefits while minimizing adverse effects such as job displacement and inequality.

In summary:

  • The technological singularity is a real possibility, but it is not inevitable.
  • The singularity could have both positive and negative consequences for humanity.
  • It is important to prepare for the singularity by developing ethical guidelines for AI development and by ensuring that AI is aligned with human values.
  • The technological singularity is a complex and uncertain event, but it is one that we need to be aware of and prepared for. By thinking about the singularity and its potential consequences, we can help to ensure that it is a positive event for humanity.


Further nuanced perspectives on AI and singularity include:

  1. AI as a Collaborative Tool: This view sees AI as an aid that improves human productivity and capabilities. Critics argue this perspective ignores potential job losses due to automation and ethical privacy and security issues.
  2. AI as a Competitive Force: This standpoint recognizes AI as a potentially disruptive force that could replace human jobs. While it could drive societal progress and economic growth, it could also worsen income inequality and cause mass unemployment.
  3. AI as a Potential Threat: Advocates, like Elon Musk and Stephen Hawking, warn about the existential threat if AI surpasses human intelligence without proper safeguards. However, this view can sometimes be seen as alarmist.
  4. AI as a Driver of Inequality: This perspective emphasizes the societal implications of AI, especially economic and digital inequality. The gap between those with and without AI could widen, leading to social division.
  5. AI as a Force for Good: This view believes AI could help solve global issues like climate change and healthcare, playing a crucial role in sustainable development goals.


The following are some of the potential benefits of the AI singularity:

  • Solve problems: AI could help us to solve some of the world's most pressing problems, such as climate change and poverty.
  • Better decisions: AI could help us to make better decisions about complex issues.
  • New technologies: AI could help us to create new technologies that improve our lives.
  • Improved healthcare: AI could be used to develop new treatments and cures for diseases, as well as to improve the efficiency of healthcare systems.
  • Increased productivity: AI could be used to automate tasks that are currently done by humans, freeing up time for people to focus on more creative and productive activities.
  • New scientific discoveries: AI could be used to make new discoveries in areas such as physics, biology, and cosmology.
  • Improved understanding of the universe: AI could be used to develop new theories about the universe and its origins.


The following are some of the potential risks of the technological singularity:

  • Loss of human control: If AI becomes too intelligent, it could potentially become uncontrollable and pose a threat to humanity.
  • Malicious use: AI could be used for malicious purposes, such as warfare or terrorism.
  • Loss of human identity: If AI becomes more intelligent than humans, it could potentially replace us in many areas of society, leading to a world where humans are no longer the dominant species.
  • Job displacement: AI could automate many jobs that are currently done by humans, leading to mass unemployment.
  • Inequality: The benefits of AI could be unevenly distributed, leading to increased inequality between rich and poor.
  • Ethical concerns: There are a number of ethical concerns surrounding the development and use of AI, such as the potential for bias and discrimination.


It is important to note that these are just some of the potential benefits and risks of the AI singularity. It is impossible to say for sure what the future holds, but it is important to be aware of the potential consequences of this event. By thinking about the singularity and its potential consequences, we can help to ensure that it is a positive event for humanity.

There are different perspectives on the AI Singularity, including if it will ever happen, various estimates of when it might occur and what its implications might be. Let's explore further.


Will AI ever reach Singularity?

Whether AI will ever reach singularity is a question that experts have debated for many years. There is no easy answer, as it depends on a number of factors, including the rate of technological progress, the development of new AI algorithms, and the availability of funding for AI research. Some experts believe that AI will eventually reach singularity, others consider it impossible, and there is no consensus on when, or whether, it might happen.

Several factors contribute to the uncertainty:

  • Technological Challenges: Despite rapid advances in AI research, we're still a long way from creating an AI that exhibits general intelligence (also known as strong AI or AGI), which is the level of cognitive capability that a human possesses. Current AI models, while powerful in their specific domains, are still far from matching the breadth, depth, and adaptability of human intelligence. See for example Yann LeCun: Towards Machines That Can Understand, Reason, & Plan and A Path Towards Autonomous Machine Intelligence .
  • Ethical and Safety Concerns: Even if the development of AGI becomes technically feasible, there will be ethical and safety concerns to address. Ensuring that highly intelligent AI systems act in alignment with human values, intentions, and safety is a significant challenge.
  • Unpredictability of Technological Progress: The pace and direction of technological innovation can be difficult to predict. Unforeseen discoveries or setbacks could either accelerate or delay the path towards AI singularity.
  • Regulatory and Social Factors: The development and deployment of AI technologies are influenced by regulatory policies, societal acceptance, and economic factors, which can vary widely across different regions and cultures.

Given these factors, while it's theoretically possible that AI could reach the point of singularity, whether or when this will happen is highly uncertain. It remains a topic of speculative debate, often split between optimistic futurists who believe it is imminent and skeptics who consider it unlikely or far-off.

Here are some of the arguments for and against the possibility of AI reaching singularity:

Arguments for:

  • AI is progressing at an exponential rate.
  • New AI algorithms are being developed all the time.
  • There is a lot of funding available for AI research.

Arguments against:

  • We don't know how to create AI that is truly intelligent.
  • There are many challenges that need to be overcome before AI can reach singularity.
  • There is no guarantee that AI will ever reach singularity.

Ultimately, the question of whether or not AI will reach singularity is a matter of speculation. There is no way to know for sure what the future holds, but it is a question that is worth considering.


Will Recursively Self-improving AI Systems also run into Limits?

Exponential growth is a common phenomenon in nature, but it is not always sustainable. The concept of exponential growth in nature is often linked to phenomena like population growth, nuclear reactions, or the spread of diseases. However, real-world systems often experience limiting factors, resulting in what is referred to as a sigmoidal curve or logistic growth, rather than indefinite exponential growth. For instance, population growth slows down due to constraints such as the availability of resources or space, reflecting a balance between different forces in the ecosystem. The same principle applies to the laws of physics, where certain physical limitations, such as the speed of light, impose upper bounds on how fast information can be transferred. As a further example, bacteria can grow exponentially in a nutrient-rich environment, but they will eventually run out of food and die. Similarly, a forest fire can spread exponentially if it is not contained, but it will eventually reach an area that is too dry to burn.
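
To see how a process that looks exponential at first flattens into an S-curve once a limiting factor bites, here is a minimal numerical sketch of logistic growth (illustrative only; the growth rate and carrying capacity are arbitrary values chosen for the example):

```python
# Logistic growth: dN/dt = r * N * (1 - N / K)
# r is the intrinsic growth rate, K the carrying capacity (the limiting factor).
r, K = 0.5, 1000.0
N = 1.0      # initial population
dt = 0.1     # time step for simple Euler integration

trajectory = []
for _ in range(400):
    N += r * N * (1 - N / K) * dt
    trajectory.append(N)

# Early on, growth is effectively exponential; later it saturates near K.
print(f"after 5 time units:  N = {trajectory[49]:.1f}")
print(f"after 40 time units: N = {trajectory[399]:.1f} (close to K = {K:.0f})")
```

While N is small relative to K the equation behaves like pure exponential growth, but as N approaches the carrying capacity the growth term shrinks towards zero, which is the sigmoidal pattern described above.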

When considering recursively self-improving AI systems, it is important to recognize that they will likely face similar constraints. While it is theoretically possible for AI systems to continuously improve themselves, this process will run into practical limitations. These include constraints imposed by computational resources, the laws of physics, the inherent complexity of intelligence, and our currently incomplete understanding of both human cognition and general intelligence. For example, AI systems may be limited by the amount of energy they can consume or the amount of data they can process. Additionally, AI systems are subject to the laws of thermodynamics, which impose unavoidable energy and heat-dissipation costs on computation.
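
One concrete, well-established example of such a physical bound is Landauer's principle, which sets the minimum energy required to erase a single bit of information at temperature T (a back-of-the-envelope illustration of the kind of thermodynamic limit mentioned above, not a claim about practical hardware):

```latex
E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\ \mathrm{J/K}) \times (300\ \mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\ \mathrm{J}
```

Today's hardware dissipates many orders of magnitude more energy per operation than this bound, but the point remains that computation, and hence any self-improvement that depends on it, is not free of physical limits.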

Moreover, even if an AI system could, in principle, improve its cognitive abilities beyond those of any human, it would still need to have been designed with the ability to conduct such improvements. No such system exists or appears to be within immediate reach.


Can the Collective Human Intelligence match that of an Artificial Super Intelligence?

On the subject of human cognition, physicist David Deutsch's view, as communicated in "The Beginning of Infinity: Explanations That Transform the World", suggests that humans, given enough time and resources, can in principle understand anything within the realm of natural law. This philosophical standpoint emphasizes the potential of human intellect and the vast unexplored expanse of knowledge yet to be understood. It implies that human intelligence, combined with ingenuity and curiosity, might be as "infinite" as any AI could become, and that our collective intelligence could potentially match the capabilities of an Artificial Super Intelligence (ASI), albeit distributed among many minds and across time.

The comparison of humans to ants in relation to an ASI in terms of cognition, understanding, and capability is a metaphor that seeks to convey the magnitude of difference between human and potential AI capabilities. However, in light of David Deutsch's view which implies that humans, collectively and over time, may have an intellectual capacity as expansive as any ASI, such a comparison may not be entirely fair or accurate. Therefore, while Super Intelligence might exceed individual human capacity at a given point in time, this viewpoint underlines the possibility of humans collectively reaching similar levels of understanding over time. However, it is important to remember that the amount of time and resources required to understand something may be vast. For example, it took humans thousands of years to understand the basic principles of physics and chemistry. It is possible that AI could help us to understand the universe more quickly, but it is also possible that it will take us just as long or even longer.

It's crucial to remember, however, that these perspectives are speculative and based on current knowledge, as the debate around AI singularity and its relationship to human cognition is ongoing and far from settled.


Intelligence, Agency, and Wisdom in relation to the development of AI and its implications for Super Intelligence

There are critical distinctions between intelligence, agency, and wisdom, and these differences have profound implications for the development of AI and the concept of Super Intelligence. Whereas intelligence is a cognitive ability to acquire and apply knowledge and skills, agency is a behavioral ability to act independently and make choices. Intelligence is about what you know, while agency is about what you do. Wisdom is the ability to use knowledge and experience to make sound judgments and to act in a way that is both beneficial and ethical.

The implications of intelligence, agency and wisdom for the development of AI are as follows:

  • Intelligence refers to the capacity to learn, reason, understand, and adapt to new situations. It involves the ability to solve problems and comprehend complex ideas. AI can exhibit intelligence in specific domains, often outperforming humans. However, this is still far from the general intelligence exhibited by humans, which involves a wide range of cognitive abilities and an understanding of broader context.
  • Agency, on the other hand, refers to the capacity to make independent decisions and take actions based on one's intentions. It involves a level of consciousness, self-awareness, and free will that AI does not currently possess. While AI can make decisions based on programmed instructions and learned patterns, it does not have the subjective experiences or autonomous volition that characterizes human agency. AI doesn't have desires, fears, or aspirations.
  • Wisdom goes a step beyond intelligence and agency. It involves the judicious application of knowledge, experience, and good judgment. Wisdom often implies a deep understanding of people, things, events or situations, resulting in the ability to choose or advise others to choose the best course of action. Wisdom is usually associated with attributes such as empathy, compassion, and ethics. As of now, AI lacks the ability to exhibit wisdom as it doesn't possess emotional intelligence, self-awareness, or the ability to understand complex human value systems.

The implications of these distinctions for Super Intelligence are significant. Even if we were to develop an AI system that matches or surpasses human intelligence in a broad range of tasks, it would not necessarily possess agency or wisdom. Without these, a super intelligent AI might make decisions that are highly effective in achieving specified goals, but that fail to take into account broader human values, ethics, or potential long-term consequences. Therefore, as we continue to advance AI, it's crucial to consider not just how we can enhance its intelligence, but also how we can ensure it is used wisely and in a manner that aligns with human values and wellbeing. Some important perspectives on wisdom, AI alignment, the meaning crisis, and the future of humanity are also discussed by John Vervaeke in the following podcast: John Vervaeke: Artificial Intelligence, The Meaning Crisis, & The Future of Humanity . See also the section "What does it Mean to be Human and Living Meaningful in the 21st Century?" in Chapter 10 "Beneficial Outcomes for Humanity in the Smart Technology Era " of my book .


Is Civilisation not already a Run-away Super Intelligent Super Organism on a problematic trajectory? How does this differ from a run-away AI Super Intelligence? Does AI accelerate the Super Organism's current trajectory?

I think there is a good argument to be made that civilization is already a runaway super intelligent super organism on a problematic trajectory. We have the ability to create and use technology that is far more powerful than anything that has come before, and we are using this technology to rapidly change the world around us. This change is happening at an exponential rate, and it is difficult for us to keep up. We are not sure what the long-term consequences of this change will be, and there is a real risk that we could create a world that is uninhabitable or even destroy ourselves altogether.

A runaway AI super intelligence would be a similar kind of threat, but it could be even more dangerous. Such an AI would be able to learn and adapt at an even faster rate than humans, and it would not be bound by the same ethical or moral constraints. This means that an AI super intelligence could potentially pose an existential threat to humanity.

Before we delve deeper to address these questions, it is worthwhile to get Daniel Schmachtenberger's perspectives on our current civilisation's problematic trajectory, as also referenced in Chapter 10 "Beneficial Outcomes for Humanity in the Smart Technology Era" of my book:

"Daniel Schmachtenberger’s core interest is focused on long term civilization design and more specifically to help us as a civilization to develop improved sensemaking and meaning-making capabilities so that we can make better quality decisions to help unlock more of our potential and higher values that we are capable of. He has specifically done some work on surveying existential and catastrophic risks, advancing forecasting and mitigation strategies, synthesizing and advancing civilizational collapse and institutional decay models, as well as identifying generator functions that drive catastrophic risk scenarios and social architectures that lead to potential coordination failures. Generator functions include for example game theory related win-lose dynamics multiplied by exponential technology, damaged feedback loops, unreasonable or irrational incentives, and short term decision making incentives on issues with long term consequences. He believes that categorical solutions to these generator functions would address the causes for civilization collapse and function as the key ingredients for a new and robust civilization model that will be robust in a Smart Technology Era with destabilizing decentralized exponential technology.

He summarizes his main sense of purpose as helping to transition civilization from a current path that is self-terminating to one that is not, one that supports the possibility of purpose and meaning for everyone enduring into the future, and working on changing the underlying structural dynamics that help make that possible. What he would like to see differently within the next 30 years is that we prevent existential risks that could play out in this time frame. It is not a given that we make it to 2050. Apart from catastrophic risks that can play out over this time period, there are those that can go past a tipping point during this time frame but will inevitably play out after that time. As we do not want to experience civilization collapse or existential risk, and also do not want to go past tipping points, Daniel would like to see a change in the trajectory that civilization is currently on, away from the path of many self-terminating scenarios, each with their own set of chain reactions, such as an AI apocalypse, World War 3, climate-change-driven human migration leading to resource wars, collapse of biodiversity, and killer drones."

In a recent podcast with Nate Hagens, "Artificial Intelligence and The Superorganism" on The Great Simplification, Daniel Schmachtenberger gives further insights into AI's potential added risk to our global systems and planetary stability. Through a systems perspective, Daniel and Nate piece together the biophysical history that has led humans to this point, heading towards and beyond numerous planetary boundaries and facing geopolitical risks, all with existential consequences. They specifically also ask:

  • How does AI not only add to these risks, but accelerate the entire dynamic of the metacrisis?
  • What is the role of intelligence versus wisdom on our current global pathway, and can we change course?
  • Does AI have a role to play in creating a more stable system or will it be the tipping point that drives our current one out of control?

As we can see from the above inputs, there is indeed an argument to be made that human civilization, especially when seen through the lens of collective decision-making and technological progress, could be viewed as a form of super intelligent organism. Much like the hypothetical super intelligent AI, our civilization possesses vast knowledge and problem-solving abilities. However, as Daniel Schmachtenberger points out, there are critical dynamics and structures within our civilization that could lead us towards self-destruction, analogous to the risks posed by an unchecked super intelligent AI.

The key distinction here is that civilization is a complex system of independent, conscious agents with diverse interests and values. It's influenced by cultural, political, economic, and environmental factors, among others. A super intelligent AI, on the other hand, would be a single entity (or a unified system) driven by a specific set of programmed goals (for the scenario that it is not a distributed super intelligence). Both could potentially lead to harmful outcomes if not properly managed, but the nature of the risks and the strategies to mitigate them would differ significantly.

The generator functions that Schmachtenberger identifies – like win-lose dynamics, damaged feedback loops, and irrational incentives – do seem to bear some similarities with potential risks from super intelligent AI. Both involve systems that could spiral out of control due to poorly aligned incentives, inadequate feedback mechanisms, and short-term decision-making that neglects long-term consequences.

However, the solutions would need to be tailored to the specific systems. For human civilization, addressing these generator functions might involve deep structural changes to our economic and political systems, advances in education and moral reasoning, improvements in global governance and cooperation, and the adoption of long-term perspectives. For super intelligent AI, it might involve AI alignment and safety research, iterative and controlled development, transparency and explainability, human-AI collaboration, regulation and oversight, international collaboration, education and public engagement, and adaptability and learning.

In both cases, achieving these changes would require a profound shift in our collective understanding, values, and priorities. We would need to move away from narrow, short-term, competitive mindsets and towards a broader, longer-term, cooperative perspective that values the wellbeing of all sentient beings and the sustainability of our shared environment.


Super Intelligent AI as a Single, Unified Entity versus a Distributed Super Intelligence

The traditional conception of super intelligent AI often involves a single, unified entity - largely because this makes the concept easier to understand and discuss. However, in reality, a super intelligent AI system could very well manifest as a distributed network of intelligent entities working together, akin to the idea of an "Ecosystem of Intelligence" or the "Spatial Web" as mentioned earlier in this article. This is often referred to as "collective intelligence" or "swarm intelligence."

In this scenario, intelligence would not be concentrated within a single entity, but distributed across a multitude of AI agents, each potentially specializing in different tasks, but collectively capable of demonstrating super intelligence. This configuration could even integrate human intelligence into the mix, resulting in a human-AI collaborative network.
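
As a highly simplified illustration of one mechanism behind such collective intelligence, the sketch below shows ensemble averaging: many individually weak (noisy but unbiased) agents produce a far better collective estimate than any single agent, with the error shrinking roughly as one over the square root of the number of agents. This is a toy example under strong assumptions (independent, unbiased agents), not a model of any real distributed AI system:

```python
import random

random.seed(0)
true_value = 42.0

def agent_estimate(noise=10.0):
    # Each agent is individually weak: unbiased but very noisy.
    return true_value + random.gauss(0.0, noise)

for n_agents in (1, 10, 100, 1000):
    estimates = [agent_estimate() for _ in range(n_agents)]
    collective = sum(estimates) / n_agents
    print(f"{n_agents:5d} agents -> collective estimate {collective:7.2f} (true value {true_value})")
```

Real ecosystems of intelligence would of course involve specialization and coordination rather than simple averaging, but the example captures why distributing intelligence across many agents can outperform any one of them.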

These distributed networks of intelligence could have significant advantages over a singular super intelligent entity. They could be more resilient (since the loss or failure of individual agents wouldn't compromise the entire system), more flexible (since they could adapt to a wider range of problems and situations), and potentially safer (since no single agent would possess the full power of the super intelligent system).

However, these distributed networks also present unique challenges. For example, coordinating the actions of multiple agents can be complex, and individual agents could potentially behave in ways that are harmful to the system as a whole. Furthermore, while such a system could potentially mitigate some risks associated with super intelligent AI (e.g., the risk of a single agent going rogue), it could also introduce new risks (e.g., the risk of emergent behaviors that are harmful or unpredictable).

The vision presented earlier of an "Ecosystem of Intelligence" – a web of shared knowledge that evolves into wisdom – offers a more nuanced and optimistic vision of the future of AI, and it aligns well with the idea of AI as a tool for augmenting human intelligence and solving complex problems . However, like all visions of the future, it will require careful planning, management, and governance to ensure that it unfolds in a way that is beneficial and safe for all.


How can Humanity ensure that Super Intelligence is beneficial and aligned with Human Values?

Ensuring that a super intelligent AI aligns with human values and is used for good is a complex, multifaceted challenge. However, here's a potential plan that incorporates various strategies and steps to address this issue:

  1. Research on AI Alignment and Safety: Fundamental to the plan would be rigorous, ongoing research into AI alignment - ensuring that AI's goals are in tune with human values - and AI safety, to reduce the likelihood of harmful consequences. We would need to develop robust AI models that can understand and appropriately respond to the nuances of human values, ethics, and societal norms.
  2. Iterative and Controlled Development: Instead of trying to create a super intelligent AI in one step, it would be more prudent to develop AI iteratively. Each version of the AI could be slightly more capable than the last, with thorough testing and risk assessments at each stage. We could ensure safety measures, like "off switches" or containment procedures, are effective before moving on to the next development phase.
  3. Transparency and Explainability: AI systems should be designed to be transparent and explainable, so that humans can understand and predict their behaviors. This not only fosters trust but also enables ongoing oversight and intervention if necessary.
  4. Human-AI Collaboration: Rather than replacing humans, AI should augment human intelligence . Humans and AI should work together, with humans providing the wisdom, ethics, and strategic direction, and AI providing the computational power and precision.
  5. Regulation and Oversight: Clear regulations and strong oversight mechanisms should be in place to govern the development and deployment of AI. These should be designed to prevent misuse and to address ethical issues such as fairness, privacy, and autonomy.
  6. International Collaboration: The potential risks and impacts of super intelligent AI are global in nature, so the response should be global too. This could involve international agreements on safety and ethical standards, collaboration on research, and mechanisms for sharing benefits and managing risks.
  7. Education and Public Engagement: It's important to involve society in decisions about AI and its future. This could involve educating the public about AI, consulting on key decisions, and enabling people to influence the rules and norms that govern AI.
  8. Adaptability and Learning: As AI evolves, our strategies for managing it will need to evolve too. This could involve ongoing monitoring and assessment of AI impacts, learning from successes and failures, and adapting strategies as needed.

The key to this plan's success would be its implementation in a comprehensive, coordinated way, involving all stakeholders - researchers, policymakers, businesses, civil society, and the public. As with any plan, it would also need to be revisited and revised regularly in the light of new developments and insights.


The Debates on the AI Singularity

In conclusion, for a further deep dive into what various thought leaders, AI researchers, business leaders, scientists, and influencers think about the AI singularity, here is an extract from Chapter 9 "The Debates, Progress and Likely Future Paths of Artificial Intelligence" of my book "Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era":

"Prominent business leaders, scientists, and influencers such as Elon Musk, the late Stephen Hawking, Martin Rees, and Eliezer Yudkowsky have issued dreadful warnings about AI being an existential risk to humanity, whilst well-resourced institutes countering this doomsday narrative with their own “AI for Good” or “Beneficial AI” narrative. AI researcher and entrepreneur Andrew Ng has once said that “fearing a rise of killer robots is like worrying about overpopulation on Mars”.[i] That has also been countered by AI researcher Stuart Russell who said that a more suitable analogy would be “working on a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive”.[ii] Many leading AI researchers seem to not identify with the existential alarmist view on AI, are more concerned about the short-to-medium term risks and challenges of AI discussed in the previous chapter, think that we are still at a very nascent stage of AI research and development, do not see a clear path to strong AI over the next few decades, and are of the opinion that the tangible impact of AI applications should be regulated, but not AI research and development. Most AI researchers and practitioners would fall into the beneficial AI movement and/or techno-sceptics category. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, wrote an opinion article titled How to Regulate Artificial Intelligence where he claims that the alarmist view that AI is an “existential threat to humanity” confuses AI research and development with science fiction, but recognizes that there are valid concerns about AI applications with respect to areas such as lethal autonomous weapons, jobs, ethics and data privacy.[iii] From a regulatory perspective he proposes three rules that include that AI systems should be put through the full extent of the laws that apply to its human operator, must clearly reveal that they are not a human, and cannot keep or reveal confidential information without clear approval from the source of that information.

Some strong technological utopian proponents include roboticist Hans Moravec, as communicated in his book Mind Children: The Future of Robot and Human Intelligence, as well as Ray Kurzweil, who is currently Director of Engineering at Google and has written books on the technology singularity, futurism, and transhumanism such as The Age of Spiritual Machines and The Singularity is Near: When Humans Transcend Biology.[iv] The concept of a technological singularity has been popular in many science fiction books and movies over the years. Some of Ray’s predictions include that by 2029 AI will reach human-level intelligence and that by 2045 "the pace of change will be so astonishingly quick that we won't be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating".[v] A number of authors, AI thought leaders and computer scientists have criticized Kurzweil's predictions to varying degrees, from both an aggressive-timeline and a real-world-plausibility perspective. Some of these people include Andrew Ng, Rodney Brooks, Francois Chollet, Bruce Sterling, Neal Stephenson, David Gelernter, Daniel Dennett, Maciej Ceglowski, and the late Paul Allen. Web developer and entrepreneur Maciej Ceglowski calls superintelligence “the idea that eats smart people” and provides a range of arguments for this position in response to Kurzweil’s claims as well as Nick Bostrom’s book on Superintelligence and the positive reviews and recommendations that the book got from Elon Musk, Bill Gates and others.[vi] AI researcher and software engineer Francois Chollet wrote a blog on why the singularity is not coming as well as an article on the implausibility of an intelligence explosion. He specifically argues that a “hypothetical self-improving AI would see its own intelligence stagnate soon enough rather than explode” due to scientific progress being linear rather than exponential, as well as getting exponentially harder and suffering diminishing returns even with exponential growth in scientific resources. This has also been noted in the article Science is Getting Less Bang for its Buck, which explores why great scientific discoveries are more difficult to make in established fields and notes that emergent levels of behavior and knowledge that lead to a proliferation of new fields with their own fundamental questions seem to be the avenue for science to continue as an endless frontier.[vii] Using a simple mathematical model that demonstrates an exponential decrease in the discovery impact of each succeeding researcher in a given field, Francois Chollet concludes that scientific discovery is getting harder in a given field and that linear progress is only kept intact by exponential growth in scientific resources that makes up for the increased difficulty of doing breakthrough scientific research. He further constructs another model, with parameters for discovery impact and time to produce impact, which shows how the rate of progress of a self-improving AI converges exponentially to zero, unless it has access to exponentially increasing resources to manage a linear rate of progress. He reasons that paradigm shifts can be modeled in a similar way, with the volume of paradigm shifts snowballing over time and the actual impact of each shift decreasing exponentially, which in turn results in only linear growth of shift impact given the escalating resources dedicated to both paradigm expansion and intra-paradigm discovery.
Francois states that intelligence is just a meta-skill that defines the ability to gain new skills and should, along with hard work, be at the service of imagination, as imagination is the real superpower that allows one to work at the paradigm level of discovery.[viii] The key conclusions that Francois makes in his article on the implausibility of an intelligence explosion are, firstly, that general intelligence is a misnomer, as intelligence is actually situational in the sense that the brain operates within a broader ecosystem consisting of a human body, an environment, and a broader society. Furthermore, the environment puts constraints on individual intelligence, which is limited by its context within the environment. Most of human intelligence is located in the broader, self-improving civilizational intellect in which we live and which feeds our individual brains. Progress in science by this civilizational intellect is an example of a recursively self-improving intelligence-expansion system that is already experiencing a linear rate of progress for the reasons mentioned above.[ix]

In the essay The Seven Deadly Sins of Predicting the Future of AI, Rodney Brooks, who is the co-founder of iRobot and Rethink Robotics, firstly quotes Amara’s law that “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run” to state that the long-term timing for AI is being crudely underestimated.[x] He also quotes Arthur C. Clarke’s third law, which states that “any sufficiently advanced technology is indistinguishable from magic”, to make the point that arguments for a magical future AI are faith-based: when claims about AI are far enough removed from what we use and understand today that they effectively pass the magic line, those claims cannot be falsified. As it is intuitive for us to generalize from the observed performance level on a particular task to competence in related areas, it is also natural and easy for us to apply the same human-style generalizations to current AI systems that operate in extremely narrow application areas and so overestimate their true competence level. Similarly, people can easily misinterpret suitcase words applied to AI systems to mean more than what is actually there. Rodney also argues that as exponentials are typically part of an S-curve where hyper-growth flattens out, one should in general be careful with exponential arguments, as they can easily collapse when a physical limit is hit or if there is not sufficient economic value to persist with them. The same holds for AI, where deep learning’s success, which can also be seen as an isolated event achieved on top of at least thirty years of machine learning research and applications, does not necessarily guarantee similar breakthroughs on a regular basis. Not only is the future reality of AI likely to be significantly different from what is being portrayed in Hollywood science fiction movies, but it will also feature a variety of advanced intelligent systems that evolve technologically over time in a world that adapts to these systems. The final error made when predicting the future of AI is that deploying new ideas and applications in robotics and AI takes longer than people think, especially when hardware is involved, as with self-driving cars or the many factories around the world that are still running decades-old equipment along with old automation and operating system software.[xi] On the self-driving cars front, both Tesla and Google’s Waymo have improved self-driving technology significantly, with Waymo achieving “feature complete” status in 2015, but only in geo-fenced areas, whereas Tesla was at almost zero interventions between home and work (with an upcoming software release promising to be a “quantum leap”) in 2020.[xii] However, the reality is that Tesla’s full self-driving Autopilot software is progressing much slower than Elon Musk has predicted over the years, and Chris Urmson, the former leader of Google’s self-driving project and CEO of self-driving startup Aurora, reckons that driverless cars will be slowly integrated over the next 30 to 50 years.[xiii]

Piero Scaruffi, a freelance software consultant and writer, is even more of a techno-skeptic. In Intelligence is not Artificial - Why the Singularity is not coming any time soon and other Meditations on the Post-Human Condition and the Future of Intelligence he estimates that super intelligence able to act as a "substitute for humans in virtually all cognitive tasks, including those requiring scientific creativity, common sense, and social skills" is approximately 200,000 years away, the time scale on which natural evolution would produce a new species at least as intelligent as us.[xiv] He does not think we will get to strong AI with our current incremental approach, and he believes the current brute-force approach is actually slowing down research into higher-level intelligence. He guesses that an AI breakthrough will likely have to do with real memory that has "recursive mechanisms for endlessly remodeling internal states". Piero disagrees with Ray Kurzweil's "Law of Accelerating Returns" and points out that the diagram titled "Exponential Growth in Computing" is like comparing the power of a windmill to the power of a horse and concluding that windmills will keep improving forever. The diagram also does not differentiate between progress in hardware and progress in software and algorithms. Even though computers have made significant progress in speed, size, and cost-effectiveness, that does not necessarily imply that we will get to human-level intelligence and then super intelligence by assembling millions of superfast GPUs. A diagram showing "Exponential Growth in Computational Math" would be more relevant, and it would show that there has been no comparable improvement in the development of abstract algorithms that improve automatic learning techniques. He is much more impressed with the progress in genetics since the discovery of the double-helix structure of DNA in 1953 and is more optimistic that we will reach superhuman intelligence through synthetic biology.[xv]

A survey by the Future of Life Institute suggests we will get strong AI around 2050, whereas one conducted by SingularityNET and GoodAI at the 2018 Joint Multi-Conference on Human-Level AI showed that 37% of respondents believed human-like AI would be achieved within five to ten years, 28% expected strong AI within the next two decades, and only 2% believed humans will never develop strong AI.[xvi] Ben Goertzel, SingularityNET's CEO and developer of the software behind the social humanoid robot Sophia, said at the time that "it's no secret that machines are advancing exponentially and will eventually surpass human intelligence" and that "as these survey results suggest, an increasing number of experts believe this 'Singularity' point may occur much sooner than is commonly thought… It could very well become a reality within the next decade."[xvii] Lex Fridman, an AI researcher at MIT and podcast host, thinks we are already living through a singularity now and that super intelligence will arise from our human collective intelligence rather than from strong AI systems.[xviii] George Hotz, a programmer, hacker, and the founder of Comma.ai, also thinks we are in a singularity now if we consider the escalating bandwidth between people across the globe through highly interconnected networks and the increasing speed of information flow.[xix] Jürgen Schmidhuber, AI researcher and Scientific Director at the Swiss AI Lab IDSIA, is also very bullish, expecting that we will soon have cost-effective devices with the raw computational power of a human brain and, decades after that, of 10 billion human brains combined.[xx] He also thinks we already know how to implement curiosity and creativity in self-motivated AI systems that pursue their own goals at scale. According to Jürgen, superintelligent AI systems would likely be more interested in exploring and transforming space and the universe than in staying restricted to Earth. AI Impacts maintains an AI Timeline Surveys page that documents a number of surveys in which the median estimates for a 50% chance of human-level AI vary from 2056 to at least 2106, depending on the question framing and the different interpretations of human-level AI, while two other surveys had median estimates in the 2050s and at 2085.[xxi] In a May 2019 post, Rodney Brooks declared that artificial general intelligence has been "delayed" to an average estimate of 2099, referencing the survey Martin Ford conducted for his book Architects of Intelligence, in which he interviewed 23 leading researchers, practitioners and others in the AI field.[xxii] It is not surprising to see Ray Kurzweil and Rodney Brooks at opposite ends of the timeline predictions, with Ray at 2029 and Rodney at 2200. Whereas Ray is a strong advocate of accelerating returns and believes that a hierarchical, connectionist approach incorporating adequate real-world knowledge and multi-chain reasoning in language understanding might be enough to achieve strong AI, Rodney thinks that not everything is exponential and that we need many more breakthroughs and new algorithms (beyond the backpropagation used in deep learning) to approximate anything close to what biological systems do, especially given that we cannot currently even replicate the learning capabilities, adaptability, or mechanics of insects.
Rodney reckons that some of the major obstacles to overcome include dexterity, experiential memory, understanding the world from a day-to-day perspective, and comprehending what goals are and what it means to make progress towards them. Ray's view is that techno-skeptics are thinking linearly, suffering from engineer's pessimism, and failing to see the exponential progress in software and the cross-fertilization of ideas. He believes that strong AI will progress exponentially towards a soft take-off in about 25 years.



[i] https://www.theregister.com/2015/03/19/andrew_ng_baidu_ai/

[ii] Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control.

[iii] https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html?ref=opinion

[iv] Hans Moravec, Mind Children: The Future of Robot and Human Intelligence; Ray Kurzweil, The Age of Spiritual Machines; Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology.

[v] Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology.

[vi] https://idlewords.com/talks/superintelligence.htm

[vii] https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/

[viii] https://fchollet.com/blog/the-singularity-is-not-coming.html

[ix] https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec

[x] https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/

[xi] https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/

[xii] https://arstechnica.com/cars/2020/08/teslas-slow-self-driving-progress-continues-with-green-light-warning

[xiii] https://www.theringer.com/tech/2019/5/16/18625127/driverless-cars-mirage-uber-lyft-tesla-timeline-profitability

[xiv] https://www.scaruffi.com/singular/download.pdf

[xv] https://www.scaruffi.com/singular/download.pdf

[xvi] https://futureoflife.org/superintelligence-survey/?cn-reloaded=1 ; https://bigthink.com/surprising-science/computers-smart-as-humans-5-years?rebelltitem=1#rebelltitem1

[xvii] https://bigthink.com/surprising-science/computers-smart-as-humans-5-years?rebelltitem=1#rebelltitem1

[xviii] https://youtu.be/Me96OWd44q0

[xix] https://youtu.be/_L3gNaAVjQ4

[xx] https://spectrum.ieee.org/computing/software/humanlevel-ai-is-right-around-the-corner-or-hundreds-of-years-away

[xxi] https://aiimpacts.org/ai-timeline-surveys/

[xxii] https://rodneybrooks.com/agi-has-been-delayed/




Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era


"Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era" takes us on a holistic sense-making journey and lays a foundation for synthesizing a more balanced view and better understanding of AI, its applications, its benefits, its risks, its limitations, its progress, and its likely future paths. Specific solutions are also shared for addressing AI's potential negative impacts, designing AI for social good and beneficial outcomes, building human-compatible AI that is ethical and trustworthy, addressing bias and discrimination, and developing the skills and competencies needed for a human-centric, AI-driven workplace. The book aims to help drive the democratization of AI and its applications to maximize beneficial outcomes for humanity, specifically arguing for a more decentralized, beneficial, human-centric future in which AI and its benefits are democratized to as many people as possible. It also examines what it means to be human and live meaningfully in the 21st century, and shares ideas for reshaping our civilization for beneficial outcomes as well as various potential futures for civilization.


See also the Democratizing AI Newsletter:

https://www.dhirubhai.net/newsletters/democratizing-ai-6906521507938258944/



