Looking forward with Artificial General Intelligence: a danger for humanity?

Exploring some of the most fascinating and disturbing questions around artificial intelligence through the insights of four leading figures: Stuart Russell, Nick Bostrom, Elon Musk and Ray Kurzweil. #ai #anticipation

AI is a trendy topic, and the industry's biggest R&D centers frequently make the news: a few weeks ago DeepMind (whose long-term aim is to "solve intelligence") announced a scientific breakthrough in biology by cracking the protein structure prediction problem (article from Nature), and OpenAI recently released GPT-3, a model mastering language and able to produce content in a way never seen before (AI-written essay published by the Guardian and article from Forbes). Both achievements came much earlier than expected.

What next? Several scientific voices are raising concerns about how AI could evolve and the risk it may pose to humanity. Here is why.

1. The Singularity and the questions it raises

The best-selling books Superintelligence (2014) by Nick Bostrom and Human Compatible (2019) by Stuart Russell are incredible eye-openers on the possible future of AI. Nick Bostrom is a Swedish philosopher with a PhD from LSE, a Professor at Oxford University, known for his work on existential risk. Stuart Russell is a computer scientist, Professor at UC Berkeley, PhD from Stanford, and Vice Chair of the World Economic Forum's council on AI.


They explore in great detail the path leading to the Singularity and its potential consequences. The Singularity is the defining milestone at which machine intelligence surpasses all human intelligence combined, the advent of Artificial General Intelligence. This turning point would be followed by an unstoppable intelligence explosion, as the machine could redesign itself at an exponential rate and become the last invention humanity would ever need to make. It would be a system that needs no problem-specific engineering and can simply be asked to teach a molecular biology class or run a government. It would operate in any type of environment, learn from all available resources, ask questions when necessary, and formulate and execute plans that work.

This raises fascinating questions (analyzed in depth in these two books):

  • About intelligence. The human brain is made of roughly a hundred billion neurons and a quadrillion (10^15) synapses, with a cycle time of a few milliseconds, for a theoretical throughput of about 10^17 operations per second. As of Nov 2020, the fastest computers in the world (the Japanese Fugaku by Fujitsu and RIKEN, or Summit by IBM) reach comparable raw speeds, in excess of 10^17 floating-point operations per second, and with immensely superior storage capacity (see the back-of-the-envelope sketch after this list). Yet faster machines don't mean intelligence. Intelligence has been defined in many ways: the capacity for logic, understanding, self-awareness, emotional knowledge, reasoning, planning, creativity, critical thinking and problem-solving. It is also the ability to transform information into knowledge within changing environments or contexts. Can a machine become intelligent sensu lato?
  • About AGI feasibility. Much progress remains to be made: language and common sense, cumulative learning or learning how to learn (unlike deep learning, which is mainly data-driven), constructing a hierarchy of abstract actions, or the ability to "discover" actions and to plan and manage activity hierarchically. Can a machine pass the Turing test, be convincingly emotional and empathetic, and surpass all of humanity combined? Most experts agree it will, if not around 2050, then before the end of the century.
  • About the take-off (before the intelligence explosion). Will it be a slow one with multiple outcomes, or a fast/hard one likely to lead to a single decision-making agency, a singleton with a decisive and irreversible strategic advantage, which would be, in Alan Turing's words, "certainly something which can give us anxiety"?
  • About societal impacts. What would happen to society when multiple digital minds replace humans, eliminating work as we know it or making it unnecessary? That could bring radical change to the economic system: employment, wages, capital ownership. It would challenge the very purpose of humans, their organization, their motivation and their happiness.
  • About the machine's end goals. What would the superintelligence's will be? Self-preservation, cognitive enhancement, technological perfection, resource acquisition? An AI with beliefs and desires? A will of its own? Bostrom argues that intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal, which leads to the last, and most important, questions:
  • About control. All scenarios are expected to lead to an independent agent. There remains the question of control, or at least of how to shape the AGI's end goal: how to make it safe and maintain human supremacy and autonomy in a world that includes machines of substantially greater intelligence? How to ensure it deploys its intelligence in ways that are not harmful?
  • About human values. How to engineer a motivation system that can reliably embed abstract values such as happiness or autonomy into a machine? How to program human values? How to deal with the question of preferences, given that humans are heterogeneous, and the issue of trade-offs among humans (since it is impossible to satisfy everyone)?
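As a back-of-the-envelope illustration of the raw-speed comparison in the first bullet, here is a minimal sketch in Python. The brain figures are rough order-of-magnitude estimates commonly cited in the literature, and the supercomputer figures are the Nov 2020 TOP500 benchmark results; none of this measures intelligence, only raw throughput.

```python
# Rough comparison of brain "throughput" vs. the top supercomputers (Nov 2020).
# The brain figures are order-of-magnitude estimates, not measurements.

SYNAPSES = 1e15          # ~a quadrillion synapses
FIRING_RATE_HZ = 1e2     # ~100 signals per second per synapse
brain_ops = SYNAPSES * FIRING_RATE_HZ   # ~1e17 "operations" per second

# TOP500 Rmax results, Nov 2020 (floating-point operations per second)
fugaku_flops = 442e15    # Fugaku (Fujitsu/RIKEN): ~442 petaflops
summit_flops = 148e15    # Summit (IBM): ~148 petaflops

print(f"Brain (estimate): {brain_ops:.1e} ops/s")
print(f"Fugaku:           {fugaku_flops:.1e} flop/s ({fugaku_flops / brain_ops:.1f}x brain)")
print(f"Summit:           {summit_flops:.1e} flop/s ({summit_flops / brain_ops:.1f}x brain)")
```

In other words, by this crude yardstick the latest machines have only just caught up with a single human brain in raw operations per second, which is exactly why raw speed alone says nothing about intelligence.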


2. Risks and control

In 2017, Vladimir Putin said:

"The one who becomes the leader in AI will be the ruler of the world."

Competition between nations can indeed make them focus more on raw capabilities and less on the problem of control.

Many authoritative figures fear that an existential catastrophe may be the default outcome of an intelligence explosion, and that digital superintelligence could be a danger to the public. Stephen Hawking explained that efforts to create thinking machines pose a threat to our very existence, and criticized widespread indifference in a 2014 editorial: "So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here–we'll leave the lights on?' Probably not–but this is more or less what is happening with AI."

Elon Musk characterizes AI as a danger to mankind far greater than nuclear warheads, and calls for government oversight, regulation and international governance to implement the right controls and set the right motivations.

"AI is humanity's biggest existential threat"

Yet there are a few reasons for optimism:

  • There are strong economic incentives to develop AI systems that align with human preferences and intentions
  • There is industry-wide cooperation and awareness around AI safety, with many non-profit research centers and organizations (such as the Future of Life Institute, FLI) working to mitigate existential risks facing humanity, particularly from advanced AI. The list of signatories of the FLI open letter on AI is impressive
  • Raw data for learning about human preferences are abundant (the most obvious being books, films, television and radio broadcasts)
  • Work has started on defining the foundations of beneficial machines. As an illustration, Russell proposes the three founding principles below (which somehow made me think of Isaac Asimov's laws of robotics!):

1. The machines' only objective is to maximize the realization of human preferences (altruistic machines)

2. The machine is initially uncertain about what these preferences are (humble machines)

3. The ultimate source of information about human preferences is human behavior (for learning to predict human preferences)
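To make these principles concrete, here is a minimal toy sketch in Python. It is my own illustrative construction, not Russell's actual formalism (which is built on assistance games between human and machine): the machine keeps a probability distribution over candidate human preferences, picks actions that maximize expected human preference satisfaction under that uncertainty, and updates its beliefs only from observed human behavior.

```python
# Toy illustration of Russell's three principles (not his actual formalism):
# the machine optimizes *human* preferences (principle 1), is uncertain about
# them (principle 2), and learns about them only from observed human
# behavior (principle 3).

ACTIONS = ["make_coffee", "make_tea"]

# Principle 2: uncertainty — a belief over which drink the human prefers.
belief = {"likes_coffee": 0.5, "likes_tea": 0.5}

def human_reward(preference, action):
    """The reward the *human* gets; the machine has no objective of its own."""
    if preference == "likes_coffee":
        return 1.0 if action == "make_coffee" else 0.0
    return 1.0 if action == "make_tea" else 0.0

def choose_action(belief):
    """Principle 1: maximize expected human preference satisfaction."""
    def expected_reward(action):
        return sum(p * human_reward(pref, action) for pref, p in belief.items())
    return max(ACTIONS, key=expected_reward)

def update_belief(belief, observed_choice):
    """Principle 3: Bayesian update from the human's own behavior,
    assuming the human picks their preferred drink 90% of the time."""
    likelihood = {
        "likes_coffee": 0.9 if observed_choice == "make_coffee" else 0.1,
        "likes_tea": 0.9 if observed_choice == "make_tea" else 0.1,
    }
    unnormalized = {pref: p * likelihood[pref] for pref, p in belief.items()}
    total = sum(unnormalized.values())
    return {pref: p / total for pref, p in unnormalized.items()}

# The machine watches the human choose tea three mornings in a row...
for _ in range(3):
    belief = update_belief(belief, "make_tea")

print(belief)                 # belief now heavily favors "likes_tea"
print(choose_action(belief))  # -> "make_tea"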

3. Predictions for the future?

Ray Kurzweil is an American inventor, futurist and transhumanist. Kurzweil received the National Medal of Technology and Innovation (United States' highest honor in technology) from President Clinton, was inducted into the US National Inventors Hall of Fame, received 21 honorary doctorates and was called "Edison's rightful heir".

He is also known for his predictions about the future, as most of his past ones have come true. You can find here his forecasts up to 2099, such as a future where humans and machines merge, a world where sharp distinctions between man and machine no longer exist thanks to cybernetically enhanced humans and uploaded humans...

In the end

The philosophical questions of morality, consciousness, self-awareness, sentience and sapience will inevitably come to the fore as AI progresses. Will AGI be our last invention?

Let's just hope for a future reality far away from Black Mirror.

