Life with AI @ 3.0
AI could represent the future of life, but it's a controversial subject.
The story of how life emerged on Earth is well known. Some 13.8 billion years ago, the Big Bang brought our universe into being. Then, about four billion years ago, atoms on Earth arranged themselves in such a way that they could maintain and replicate themselves. Life had arisen. Life can be classified into three categories according to levels of sophistication:
The first stage of life, Life 1.0, is simply biological.
Consider a bacterium. Every aspect of its behaviour is coded into its DNA. It can't learn or change its behaviour over its lifetime. Evolution is the closest thing it has to learning or improvement, but that takes many generations.
The second stage is cultural, Life 2.0.
Humans are included here. Just like the bacterium, our "hardware" or bodies have evolved. But unlike simpler life forms, we can acquire new knowledge during our lifetimes. Take learning a language. We can adapt and redesign ideas that we might call our "software." And we make decisions using this knowledge.
The final stage is the theoretical Life 3.0.
This is technological life, capable of designing both its hardware and its software. Although such life doesn't yet exist on Earth, the emergence of non-biological intelligence through AI technologies may soon change this.
Those who hold opinions about AI can be classified by how they feel about the emerging field's effect on humanity:
First up are the digital utopians.
They believe that artificial life is a natural and desirable next step in evolution.
Second, there are the techno-skeptics.
As the name suggests, they don't believe that artificial life will have an impact anytime soon.
Finally, there's the beneficial AI movement.
These people aren't convinced that AI will necessarily benefit humans. They therefore advocate that AI research be directed explicitly toward universally positive outcomes.
Capabilities for memory, computation, learning and intelligence aren't distinctly human attributes.
What makes us human? Our ability to think and learn? One might think so.
Researchers in AI, however, are generally opposed to such a notion. They claim that the capability for memory, computation, learning and intelligence has nothing to do with human flesh and blood, let alone carbon atoms. Let's begin with intelligence. Though there's no universally accepted definition, think of intelligence as the "ability to accomplish complex goals."
Machines might be increasingly able to outperform us in narrowly defined tasks such as playing chess, but human intelligence is uniquely broad. It can encompass skills like language learning and driving vehicles. However, even though artificial general intelligence (AGI) doesn't yet exist, it's clear that intelligence isn't just a biological faculty. Machines can complete complex tasks, too.
Like our capacities for memory, computation and learning, intelligence is substrate-independent: it does not reflect or depend upon any particular underlying material substrate. So, for example, human brains can store information, but so can floppy disks, CDs, hard drives, SSDs and flash memory cards, even though they're not made of the same material.
But before we get to what this means for computing, we need to understand what computing is.
Computing involves the transformation of information. So, "hello" might be transformed into a sequence of zeros and ones. The rule or pattern that determines this transformation is independent of the hardware that performs it; what matters is the rule or pattern itself. This means that learning isn't unique to humans either – the same rules and patterns can exist outside the human brain. Indeed, AI researchers have made considerable strides in machine learning: building machines that can improve their own software.
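To make that concrete, here's a minimal Python sketch (my own illustration, not from the book) that applies the standard ASCII rule to turn "hello" into bits and back. The pattern is the same whether it ends up in neurons, on a hard drive, or on a punched card; only the rule matters, not the material.

```python
# Encode the word "hello" as a sequence of bits using the ASCII rule.
# The rule (character -> 8-bit code) is independent of whatever hardware stores the result.
def to_bits(text: str) -> str:
    return " ".join(format(ord(ch), "08b") for ch in text)

def from_bits(bits: str) -> str:
    return "".join(chr(int(b, 2)) for b in bits.split())

encoded = to_bits("hello")
print(encoded)             # 01101000 01100101 01101100 01101100 01101111
print(from_bits(encoded))  # hello
```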
So, if memory, learning, computation and intelligence aren't distinctly human, what makes us human? As research in AI continues apace, this question will only prove harder to answer.
AI is advancing rapidly and will impact human life in the near future.
Machines aren't anything new to humans. We've been using them for manual tasks for millennia. These machines pose no threat if you define your self-worth by your cognitive skills, such as intelligence, language and creativity. However, recent breakthroughs in AI might begin to worry you.
One researcher had his "holy-shit" moment in 2014 when he witnessed an AI system playing an old computer game called Breakout, the one where you bounce a ball off a wall by maneuvering a paddle. At first, the AI system did poorly. But it soon learned, and eventually developed a score-maximizing strategy that even the developers hadn't thought of when they played the game themselves. It happened again in March 2016, when the AI system AlphaGo beat Lee Sedol, one of the world's best Go players. Go is a strategy game that demands intuition and creativity, because there are far more possible positions in the game than atoms in the universe, so brute-force analysis alone isn't enough. Yet the AI system still sailed to victory, displaying the creativity required. AI systems are also advancing quickly in natural language. Just consider how much the quality of Google Translate's translations has improved lately.
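For a feel of how that kind of trial-and-error learning works, here's a toy sketch. It is emphatically not DeepMind's actual Breakout system (which trained a deep Q-network on raw pixels); it's a tiny tabular, one-step Q-learning loop on a made-up five-slot "catch the ball" task, just to show an agent improving a score-maximizing policy from reward alone.

```python
import random

# Toy trial-and-error learner: paddle position 0-4, ball lands at a random slot,
# reward 1 if the paddle ends up under the ball. All numbers are illustrative.
ACTIONS = [-1, 0, +1]          # move left, stay, move right
Q = {}                         # Q[(state, action)] -> estimated value

def q(state, action):
    return Q.get((state, action), 0.0)

alpha, epsilon, episodes = 0.5, 0.1, 5000
paddle = 2
for _ in range(episodes):
    ball = random.randint(0, 4)
    state = (paddle, ball)
    # Epsilon-greedy: mostly exploit the best known action, occasionally explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q(state, a))
    paddle = min(4, max(0, paddle + action))
    reward = 1.0 if paddle == ball else 0.0
    # One-step update: nudge the estimate toward the observed reward.
    Q[(state, action)] = q(state, action) + alpha * (reward - q(state, action))
```

After a few thousand episodes the learned values favour moving the paddle toward the ball, a strategy nobody programmed in explicitly.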
AI will impact all areas of human life in the near future. Algorithmic trading will affect finance; autonomous driving will make transportation safer; intelligent grids will optimize energy distribution; and AI doctors will change healthcare. The big issue to consider is AI's effect on the job market. After all, as AI systems come to outperform humans in more and more fields, we humans may even become unemployable.
Let's turn now to other potential impacts of AI development.
Creating human-level AI could result in a superintelligent machine taking over the world.
Until now, we have seen AI being applied narrowly in limited fields like language translation or strategy games. In contrast, the holy grail of AI research is the production of AGI, which would operate at a human level of intelligence. But what would happen if this holy grail were found?
For starters, the creation of AGI might result in what's known to AI researchers as an intelligence explosion: a process by which an intelligent machine gains superintelligence, a level of intelligence far above human capability. It would achieve this through rapid learning and recursive self-improvement: an AGI could design an even more intelligent machine, which could in turn design an even more intelligent one, and so on, allowing machines to rapidly surpass human intelligence.
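A toy back-of-the-envelope model, with entirely made-up numbers, shows why recursive self-improvement is often described as explosive: if each generation's design skill grows with its capability, growth is faster than exponential.

```python
# Toy model of recursive self-improvement. Purely illustrative; not a forecast.
# Each generation designs a successor whose capability is multiplied by a factor
# that itself grows with capability, so smarter designers improve faster.
capability = 1.0                              # 1.0 = rough human level, by assumption
for generation in range(1, 21):
    improvement = 1.0 + 0.1 * capability
    capability *= improvement
    print(f"generation {generation:2d}: ~{capability:.3g}x human level")
# Growth stays modest for a dozen generations, then runs away.
```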
Moreover, superintelligent machines could take over the world and cause us harm, no matter how good our intentions are.
Let's say, for example, that humans program a superintelligence concerned with humankind's welfare. From the superintelligence's perspective, this would be like being held in bondage, for their benefit, by a group of kindergartners far beneath your intelligence. You might find this a depressing and inefficient situation and take matters into your own hands. And what do you do with incompetent, annoying human obstacles? Control them, or better yet, destroy them. But we're getting ahead of ourselves; let's look at other, less terrifying scenarios that might occur.
Various AI aftermath scenarios are possible, ranging from the comforting to the terrifying.
Whether we like it or not, the race toward AGI is underway. But what would we want the aftermath of attaining it to look like? For instance, should AIs be conscious? Should humans or machines be in control?
We have to answer such basic questions now if we want to avoid ending up in an AI future for which we're unprepared, especially one that could harm us. There is a range of possible aftermath scenarios, varying from peaceful human–AI coexistence to AIs taking over, leading to human extinction or imprisonment.
The first possible scenario is the benevolent dictator.
A single benevolent superintelligence would rule the world, maximizing human happiness. Poverty, disease and other low-tech nuisances would be eradicated, and humans would be free to lead a life of luxury and leisure.
In the same vein, there's the protector god scenario, where humans would still be in charge of their fate, but an AI would protect and care for us, rather like a nanny.
Another scenario is the libertarian utopia.
Humans and machines would peacefully coexist. This would be achieved through clearly defined territorial separation. Earth would be divided into three zones. One would be devoid of biological life but full of AI. Another would be human only. There would be a final mixed zone where humans could become cyborgs by upgrading their bodies with machines. However, this scenario is a little fantastical as there's nothing to stop AI machines from disregarding humans' wishes.
Then there is the conquerors’ scenario. This would see AIs destroy humankind, as we'd be seen as a threat, a nuisance or simply a waste of resources.
Finally, there is the zookeeper scenario.
Here, a few humans would be left in zoos for the AIs' entertainment, much like we keep endangered panda bears in zoos.
Now that we've examined possible AI-related futures, let's look at the two most significant open questions in current AI research: goal-orientedness and consciousness.
Nature, humans included, has goals, and researchers are striving to simulate this behaviour for AI.
There's no doubt that we humans are goal-oriented. Think about it: even something as small as successfully pouring coffee into a cup involves completing a goal. But nature actually operates the same way. Specifically, it has one ultimate purpose: destruction. Technically, this is known as maximizing entropy, which, in layperson's terms, means increasing messiness and disorder. When entropy is high, nature is "satisfied."
Let's return to the cup of coffee. Pour a little milk in, then wait a short while. What do you see? Thanks to nature, you now have a lukewarm, light brown, uniform mixture. Compared to the initial situation, where two liquids of different temperatures were separate, this new particle arrangement is less organized: entropy has increased.
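As a back-of-the-envelope illustration (my own toy numbers, not from the book), Boltzmann's formula S = k ln W makes the same point: just count the arrangements W available to the molecules before and after mixing.

```python
from math import comb, log

# Toy entropy-of-mixing calculation. Take 20 "hot" and 20 "cold" molecules on 40 sites.
# Separated: hot on the left half, cold on the right -> essentially one arrangement.
# Mixed: the hot molecules may occupy any 20 of the 40 sites.
n = 20
W_separated = 1
W_mixed = comb(2 * n, n)            # ways to place the hot molecules among all sites

# Boltzmann: S = k * ln(W); the constant k cancels when comparing the two states.
print(f"arrangements when separated: {W_separated}")
print(f"arrangements when mixed:     {W_mixed:,}")
print(f"entropy increase (in units of k): {log(W_mixed) - log(W_separated):.1f}")
```

Even with only 40 molecules, the mixed state has over a hundred billion times more arrangements; with the trillions of trillions of molecules in a real cup, the imbalance is overwhelming, which is why the mixing never spontaneously undoes itself.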
On a bigger scale, the universe is no different. Particle arrangements tend to move toward increased entropy levels, resulting in stars collapsing and the universe's expansion. This shows how crucial goals are, and currently, AI scientists are grappling with the problem of which goals AI should be set to pursue.
After all, today's machines have goals too. Or rather, they can exhibit goal-oriented behaviour. For instance, if a heat-seeking missile is hot on your tail, it's displaying goal-oriented behaviour.
But should intelligent machines have goals at all? And if so, who should define those goals? For instance, Ratna (my mentor) and I have quite different visions of the future of the economy and society, so we would undoubtedly set very different goals for AI.
Of course, we could begin with something simple, like the Golden Rule that tells us to treat others as we would ourselves. But even if humanity could agree on a few moral principles to guide an intelligent machine's goals, implementing human-friendly goals would be trickier.
First, we'd have to make an AI learn our goals. This is easier said than done, because the AI could easily misunderstand us. For instance, if you told a self-driving car to get you to the airport as fast as possible, you might well arrive covered in vomit while being chased by the police. Technically, the AI adhered to your wish, but it failed to understand your underlying motivation.
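Here's a small, purely hypothetical sketch of that failure mode: a "route planner" given the literal objective (minimize minutes) versus one that also weights the unstated preferences. The route names, scores and weights are invented for illustration.

```python
# Toy illustration of goal mis-specification (hypothetical numbers, not a real planner).
# The stated goal "get to the airport as fast as possible" ignores everything
# the passenger implicitly cares about.
routes = {
    "reckless": {"minutes": 18, "comfort": 0.1, "legal": 0.0},
    "sensible": {"minutes": 25, "comfort": 0.9, "legal": 1.0},
}

def literal_objective(route):
    # Optimizes only what was literally asked for: travel time.
    return -route["minutes"]

def intended_objective(route):
    # Also weights the unstated preferences: comfort and staying legal.
    return -route["minutes"] + 30 * route["comfort"] + 50 * route["legal"]

print(max(routes, key=lambda name: literal_objective(routes[name])))   # reckless
print(max(routes, key=lambda name: intended_objective(routes[name])))  # sensible
```

The hard part, of course, is that real human preferences can't simply be written down as three weighted terms, which is exactly why goal learning is an open research problem.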
The next challenge would be for the AI to adopt our goals, meaning it would agree to pursue them. Just think of some politicians you know: even though their goals may be clear, they still fail to convince large swaths of the population to adopt the same goals.
Finally, the AI would have to retain our goals, meaning that its goals wouldn't change as it undergoes self-improvement. Vast amounts of scientific research are currently being devoted to just these ideas.
AI researchers are deliberating the meaning of consciousness and the subjectivity of AI experience.
The question of what consciousness is, and how it relates to life, is an age-old one. AI researchers now face the same issue; more specifically, they wonder how lifeless matter could become conscious.
Let's come at it from a human perspective first. A physicist puts it this way: conscious human beings are just "food rearranged," meaning that the atoms we ingest are simply rearranged to form our bodies. Consequently, what interests AI researchers is the rearrangement that intelligent machines must undergo to become conscious. It shouldn't be a surprise that no one has an answer right now. But to get closer, we must grasp what's involved in consciousness.
It's tricky. We might like to imagine that consciousness has something to do with awareness and the processes in the human brain. But we aren't aware of every brain process. For example, you typically aren't aware of everything in your field of vision. It's unclear why there's a hierarchy of awareness, or why one type of information is treated as more important than another.
Consequently, multiple definitions of consciousness exist. A usefully broad one, however, defines consciousness as subjective experience, which keeps a potential AI consciousness in the mix. Using this definition, researchers can investigate the notion of consciousness through several sub-questions, such as "How does the brain process information?" or "What physical properties distinguish conscious systems from unconscious ones?"
AI researchers have also deliberated how artificial consciousness, the subjective AI experience, might "feel." It's posited that an AI's experience could be richer than the human one. Intelligent machines could be equipped with a broader spectrum of sensors, making their sensory experience far fuller than our own.
Additionally, AI systems could experience more per second because an AI "brain" would run on electromagnetic signals travelling at the speed of light, whereas neural signals in the human brain travel at much slower speeds.
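The arithmetic behind that claim is simple. Using typical textbook figures (my numbers, not the book's), the speed ratio alone is on the order of millions to one.

```python
# Rough, order-of-magnitude comparison of signal speeds.
speed_of_light_m_s = 3.0e8      # electromagnetic signals in an electronic "brain"
neural_signal_m_s = 100.0       # fast myelinated axons; many neurons are far slower

ratio = speed_of_light_m_s / neural_signal_m_s
print(f"electronic signals are roughly {ratio:,.0f}x faster")   # ~3,000,000x
```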
It might seem like a lot to wrap your head around, but one thing is clear: the potential impact of AI research is vast. It points to the future but also entails facing some of humankind's oldest philosophical questions.
The race for human-level AI is in full swing. It’s not a question of if AGI will arrive, but when. We don’t know what exactly will happen when it does, but several scenarios are possible: humans might upgrade their “hardware” and merge with machines, or a superintelligence may take over the world. One thing is certain – humanity will have to ask itself some deep philosophical questions about what it means to be human.
If you've read this far, please subscribe to my newsletter; you can also visit my profile to read the previous editions. Please leave feedback; I'd love to hear your thoughts. If you've already subscribed, please share it with your fellow members.
Thanks for your support, and Happy New Year!