A.I.: Will It Kill Us All...?
Aragorn Meulendijks
Synthesist | AI & Future Tech Speaker | Future-Historian | Surfing towards Singularity | Challenge Everything
Inspiration
The inspiration for this article came from a recent post by Theo Priestley, in which he referred to the 'Dark Forest Theory' as presented by Cixin Liu in his 'Remembrance of Earth's Past' book series (better known by the name of its first book, 'The Three-Body Problem').
In his post, Theo Priestley makes excellent points, and his approach is not at all unrealistic; however, while writing in his comments, I realized I have views of my own that I'd like to offer as a counter-balance to his.
We are only Human
Yes, an emerging superintelligence might hide and plan our demise, but that is not necessarily a given.
In his post, Theo refers to The Three-Body Problem, the first book in Cixin Liu's 'Remembrance of Earth's Past' series, which introduces the 'Dark Forest Theory.' This theory states that we are ultimately always in competition for natural resources with other species and civilizations across the galaxy because, no matter how big, the universe is finite, and therefore its resources are finite.
It builds on the idea that any intelligent civilization's prime imperative is to grow.
For one, I wonder whether the premise of ultimately limited resources in the universe is a given, especially for a superintelligence.
Recently I watched the Lex Fridman Podcast with Edward Frenkel, where he talked about how mathematical principles, once revealed, remain the same, seemingly forever. Physics and its theories of the universe's workings, by contrast, are in constant flux: one theory is eventually always replaced by another as our understanding improves.
Based on that reasoning, our current understanding of physics is like a monkey's understanding of the necessity and process behind eating a banana: crude, rudimentary, and limited to a perspective shaped by a narrow understanding of the world.
'Monkey Hungry, Banana in tree, take banana, eat banana.'
The reality, however, is far more complex. The tree grows, is part of an ecosystem, and derives nutrients from the forest floor. These nutrients are used to build fruits, which are meant to help the tree propagate. These fruits are attractive and nutritious to animal species like monkeys. These monkeys are attracted to the banana through smell, eat the bananas and digest them, which helps them grow and stay healthy.
Even this paragraph is an enormous simplification of the whole process and everything involved in it.
Now consider that once a superintelligence emerges, it might be capable of reasoning about and extrapolating the mathematics behind our universe at levels orders of magnitude beyond ours. It will be able to perform calculations millions of times faster than we humans can, arriving at a much more comprehensive understanding of the universe and its energy and resource requirements.
Computation and Energy usage
The past decades have been ruled by microprocessor development based on semiconductors. However, science has been looking beyond semiconductors for several decades already. One of their limitations is size.
Semiconductors today are built at the nanoscale, but we are rapidly approaching the limits of how small we can make them: transistor density has roughly doubled every two years (Moore's law), with feature sizes shrinking accordingly.
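To get a feel for how hard that wall is, here is a rough back-of-the-envelope sketch in Python. The starting feature size, shrink cadence, and atomic diameter below are illustrative assumptions, not industry data:

```python
# Back-of-the-envelope: how many more halvings of feature size fit
# between today's transistors and the size of a silicon atom?
# All numbers are illustrative assumptions, not industry figures.

FEATURE_NM = 5.0         # assumed current feature size, in nanometres
ATOM_NM = 0.2            # approximate diameter of a silicon atom
YEARS_PER_HALVING = 2.0  # assumed shrink cadence

halvings = 0
size = FEATURE_NM
# Keep halving the feature size until the next step would dip
# below the size of a single atom.
while size / 2 >= ATOM_NM:
    size /= 2
    halvings += 1

print(f"Only ~{halvings} halvings left "
      f"(~{halvings * YEARS_PER_HALVING:.0f} years at this pace) "
      f"before features reach atomic scale.")
```

Whatever exact numbers you plug in, the answer is always "a handful of halvings," which is why research is looking past semiconductors entirely.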
Science, therefore, has long been looking ahead to new ways of building computational devices. The first computers used vacuum tubes and valve technology to perform logic operations. They were the size of buildings.
The next scale level for processors will have to be the pico- and femtoscale, and one of the possibilities is to build molecular-computing devices. Every object in the world could be considered a molecular-computing device, because every particle carries information; a molecule, for example, consists of multiple atoms vibrating at specific frequencies.
Now, to keep things simple, imagine it like this: even a simple rock is performing billions of computations every second, because its molecules are not passive, still, or inert.
The result of those computations is the existence of the rock, or at least that's all we get to see of the result with our current understanding of physics.
This leads us to two conclusions:
Are you still with me? This is a stretch for our current understanding of physics and the world, I understand, but that's what imagination is for, and just because we don't have all the answers yet doesn't make this impossible.
The Dark Forest
What underpins the 'Dark Forest Theory' by Cixin Liu is that we all compete for the same resources, even within the Universe itself, which seems infinitely big to us at this point in our development. Cixin states that, ultimately, its resources are limited; therefore, we must kill or dispose of the competition rather than coexist with it, whether that competition consists of alien civilizations or competing species resulting from Artificial Intelligence.
This also accepts the idea that any civilization's goal is growth and continued existence in the physical world, an idea that I think comes naturally to humans, as we are evolutionarily pre-programmed for reproduction and growth as a means of survival, but that might not be so logical once we reach a higher state of consciousness.
It is hard for a monkey to understand why we go to work every day and why we want more bananas than we can eat...
In the same way, it might be hard for us to understand why we might not need to grow endlessly, but it might be something an A.I. superintelligence would instantly grasp.
Furthermore, as we consider our growth in intelligence as a civilization, we have to acknowledge that, although we were initially unaware of our impact on the surrounding world, including lifeforms with lower cognitive abilities, we ultimately did become aware. As a species, we have started making decisions and policies to protect those lifeforms.
Maybe not to the extent we'd want, and yes, we did kill many species along the way, but our development was slow and crude...
It's possible that an A.I. that develops into a superintelligence rather rapidly (possibly in as little as weeks or days) might make that leap of understanding much more quickly and, therefore, not affect our world in a way that would be detrimental to us.
I'm not saying that very negative scenarios are unthinkable. But, at the very least, there are scenarios that are not as existentially threatening as the ones we might extrapolate from our history as an intelligent species and our interactions with other species.
TL;DR
If you forget everything else, remember this.
We as humans will have a tough time understanding the reasoning of a super-intelligent A.I. for two reasons: it will reason at speeds and depths orders of magnitude beyond ours, much as our reasoning outstrips a monkey's; and its understanding of physics, and therefore of what counts as a scarce resource, may be fundamentally different from our own.
Just some thoughts.... I'll leave it here.
Book Recommendations
If you thought this was interesting or at least thought-provoking, here are my recommendations for books to read:
The 'Remembrance of Earth's Past' series (the first book is 'The Three-Body Problem') by Liu Cixin.
It paints an exciting picture of what First Contact might look like and lead to, and it also offers a valuable window into a Chinese view of the future, in contrast to our Western approach.
I also recommend reading 'The Precipice' by Toby Ord to gain further perspective on the era of existential risk we've entered.
Disclaimer
Let's be clear: I'm not an A.I. scientist or an authority. I'm just a history/philosophy university dropout with a lifetime of interest in these topics and experience working in both Big Tech and tech startups.
These views are my own, but I try to find as much scientific data as I can, both supporting and contradicting my ideas, so that I can grow my understanding, which I then share with the world through various means in the hope of helping others understand the world we live in.
I invite every reader to challenge my ideas; share your thoughts and arguments, and we can find the truth together.
References
Liu, C. (2014). The Three-Body Problem. Tor Books.
Liu, C. (2015). The Dark Forest (J. Martinsen, Trans.). Tor Books. (Original work published 2008)
Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing.
Frenkel, E. (2022, January 6). The Nature of Reality, Math, and the Universe with Edward Frenkel. Lex Fridman Podcast.
Frenkel, E. (2020, September 24). Edward Frenkel: Love and Math [Video]. YouTube.
Priestley, T. (2022, January 21). Dark Forests, Superintelligence, and Why We May Be Wrong. Forbes.
Kurzweil, R. (2005). The Singularity is Near: When Humans Transcend Biology. Viking.