What does it mean to be human in the age of Artificial Intelligence?
Finished reading another fabulous book, “Life 3.0” by Max Tegmark, who is a professor at MIT. Through this book, Max Tegmark explores several seminal questions: what makes us human? What is consciousness? What is intelligence? What is the future of human evolution? These questions are fundamental to understanding what our future will be in a post-Artificial General Intelligence world.
Before I share my key takeaways from the book, for those of you who may be new to the term Artificial General Intelligence (AGI), let me make an attempt to introduce the concept. AGI is the inevitable future of Artificial Intelligence (AI). Current AI capabilities are narrow in scope and hence known as "narrow AI": capabilities focussed on specific types of applications such as recognising human faces, human emotions, or vehicle number plates. With AGI, however, a machine will have the ability to think, understand, learn and apply its intelligence to solve any problem as humans do in any given situation. AGI would be equivalent to a human generalist who also far exceeds the expertise of specialists in each and every area known to humans.
Here are my key takeaways from this book:
Life 1.0: Simply biological. Example: a bacterium. Every aspect of its behaviour is coded into its DNA. It is impossible for it to learn or change its behaviour over its lifespan. The only way it evolves is through the natural process of evolution, which takes many generations.
Life 2.0: Cultural life. Example: humans. Our hardware, i.e. our bodies, has evolved. We can acquire new knowledge (our software) during our lifetime. We can adapt and redesign ideas, and we make decisions using this knowledge.
Life 3.0: Theoretical life. A form of technological life capable of designing both its hardware and its software. Such life doesn’t exist yet, but there is a tremendous possibility that non-biological intelligence in the form of Artificial General Intelligence (AGI) may soon change this.
3. Life on earth has been continuously evolving. No other species exemplifies this more than humans do. If we move forward with this continuous evolutionary process, there will be a time when technology lives independently, designing both its hardware and software, and that will be an inflection point with tremendous repercussions for the very existence of humankind. Such artificial life does not exist yet on earth. However, we are already faced with the emergence of non-biological intelligence, commonly known as artificial intelligence (AI).
4. People who hold opinions about AI can be broadly classified based on how they feel about AI’s effect on humanity:
Digital Utopians: Those who believe that artificial life is a natural and desirable next step in evolution
Techno-skeptics: Those who don’t believe that artificial life will have an impact anytime in the foreseeable future
Beneficial AI movement believers: Those who believe that AI will not necessarily bring benefits to humans, and who therefore advocate that AI research should be specifically directed towards universally positive outcomes
5. The ability to think and learn is not necessarily a human-only attribute. AI researchers claim that the capacity for memory, computation, learning, and intelligence has nothing to do with human flesh and blood, let alone carbon atoms
6. All life has intelligence. But what exactly is intelligence? Although there is no universally accepted single definition, the author suggests that intelligence is the ability to accomplish complex goals. Machines often outperform humans in a narrowly defined task such as chess, but as of now human intelligence is uniquely broad. It encompasses skills like learning languages, composing music, and driving vehicles.
7. It’s clear that intelligence isn’t just a biological faculty. Like the capacities for memory, computation and learning, intelligence is substrate independent, i.e. it does not reflect or depend upon any particular underlying material substrate. Example: human brains can store information, but so can hard drives, SSDs or flash memory, even though these are not made from biological materials.
8. Computing involves the transformation of information. For example, the word “Hello” might be transformed into a sequence of zeros and ones. But the rule or pattern which determines this transformation is independent of the hardware that performs it. What’s important is the rule or pattern itself. This means that it’s not only humans who can learn; the same rules and patterns could exist outside of the human brain too. AI researchers have made huge strides developing machine learning, i.e. machines that can improve their own software.
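To make that concrete, here is a minimal Python sketch (my own illustration, not something from the book): the rule that turns “Hello” into zeros and ones is just a pattern (here, UTF-8/ASCII encoding), and it gives the same result on any hardware that applies it.

```python
# The transformation rule (UTF-8 encoding) is defined independently
# of whatever hardware happens to run it.

def to_bits(text):
    """Encode text as a string of zeros and ones."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def from_bits(bits):
    """Apply the inverse rule to recover the original text."""
    return bytes(int(b, 2) for b in bits.split()).decode("utf-8")

print(to_bits("Hello"))             # 01001000 01100101 01101100 01101100 01101111
print(from_bits(to_bits("Hello")))  # Hello
```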
9. So, if memory, learning, computation and intelligence aren’t distinctly human, then what exactly makes us human? As research in AI continues apace, this question is only going to prove harder to answer.
10. It’s clear that AI will impact all areas of human life in the near future. Algorithmic trading will affect finance. Autonomous driving will make transportation safer. Smart grids will optimize energy distribution. AI doctors will change healthcare. The big issue to consider is the effect AI will have on the job market. As AI systems continue to outperform humans in more and more fields, we humans may sooner or later become unemployable.
11. The holy grail of AI research is Artificial General Intelligence (AGI) that would operate at a human level of intelligence. The creation of AGI will lead to an intelligence explosion: a process by which an intelligent machine gains super-intelligence, i.e. a level of intelligence far beyond human capabilities. An AGI could potentially design even more intelligent machines, which could design even more intelligent machines, and so on.
12. The race towards AGI is underway. We don’t want to end up in an AI future for which we are unprepared; therefore, we need to answer a few seminal questions, such as: should AGI be conscious? Should humans or machines be in control?
13. There are various possible world orders post-AGI:
Benevolent dictator: A single benevolent superintelligence would rule the world, maximizing human happiness. Poverty, disease and other low-tech nuisances would be eradicated, and humans would be free to lead a life of luxury and leisure.
Protector god: Humans would still be in charge of their own fate, but there would be an AI protecting us and caring for us, rather like a nanny.
Libertarian utopia: Humans and machines would peacefully coexist. This would be achieved through clearly defined territorial separation. Earth would be divided into three zones. One would be devoid of biological life but full of AGI. Another would be human only. There would be a final mixed zone, where humans could become cyborgs by upgrading their bodies with machines.
Conquerors’ scenario: Super-intelligent machines could take over the world and cause us harm, no matter how good our intentions are. For example, suppose humans program a super-intelligence that is concerned with the welfare of humankind. From the super-intelligence’s perspective, this would probably feel like being held in bondage, for their own benefit, by a bunch of kindergarteners far beneath its intelligence. And what would it do with such incompetent, annoying human obstacles? Control them, or worse, destroy them.
Zookeeper scenario: A few humans would be kept in zoos for the AGIs’ own entertainment, much like we keep endangered pandas in zoos.
14. Nature, humans included, has goals. The goal is to maximise entropy, which means increasing messiness and disorder. When entropy is high, nature is satisfied.
15. Researchers are striving to simulate goal-oriented behavior for AI, but they are struggling to decide which goals an AI should be set to pursue. After all, today’s machines have goals too. Or rather, they can exhibit goal-oriented behavior. For instance, if a heat-seeking missile is hot on the tail of a fighter jet, it’s displaying goal-oriented behavior. But should intelligent machines have goals at all? And if so, who should define those goals?
16. But even if humanity could agree on a few moral principles to guide an intelligent machine’s goals, such as the golden rule to treat others as we would like to be treated, implementing human-friendly goals would still be tricky.
First of all, we’d have to make an AI learn our goals. This is easier said than done, because the AI could easily misinterpret a goal. For instance, if you told a self-driving car to get you to the airport as fast as possible, you might well arrive covered in vomit while being chased by the police (see the sketch after this list). Technically, the AI adhered to your stated wish, but it didn’t really understand your underlying motivation.
The next challenge would be for the AI to adopt our goals, meaning that it would agree to pursue them. Just think of some politicians you know: even though their goals may be clear, they still fail to convince large swaths of the population to adopt the same goals.
And finally, the AI would have to retain our goals, meaning that its goals wouldn’t change as it undergoes self-improvement.
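To illustrate the self-driving-car point above, here is a toy Python sketch (my own made-up example with invented route data, not something from the book) of how an optimizer that only sees the stated goal can pick an outcome nobody wanted:

```python
# Two hypothetical ways to reach the airport; all numbers are invented.
plans = [
    {"name": "reckless shortcut", "minutes": 18, "legal": False, "comfortable": False},
    {"name": "normal highway",    "minutes": 25, "legal": True,  "comfortable": True},
]

def stated_objective(plan):
    # What the passenger literally said: "as fast as possible".
    return plan["minutes"]

def intended_objective(plan):
    # What the passenger actually meant: fast, but also legal and comfortable.
    penalty = (0 if plan["legal"] else 1000) + (0 if plan["comfortable"] else 100)
    return plan["minutes"] + penalty

print(min(plans, key=stated_objective)["name"])    # reckless shortcut
print(min(plans, key=intended_objective)["name"])  # normal highway
```

The gap between the two objective functions is the whole problem in miniature: getting everything we left unsaid (legality, comfort, safety) into the machine’s objective is the hard part of making it learn, adopt and retain our goals.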
19. AI researchers are trying to find an answer to how lifeless matter could become conscious. Conscious human beings are just food rearranged, meaning that the atoms we ingest are simply rearranged to form our bodies. What interests AI researchers, then, is what rearrangement intelligent machines would have to undergo to become conscious. It shouldn’t be a surprise that no one has an answer right now. But to get closer, we have to grasp what’s involved in consciousness.
20. We might like to imagine that consciousness has something to do with awareness and human brain processes. But we’re not actively aware of every brain process. For example, you’re typically not consciously aware of everything in your field of vision. It’s not clear why there is a hierarchy of awareness, or why one type of information is more important than another.
21. Although there are several definitions of consciousness, the author favours a broad definition known as subjective experience, which allows a potential AI consciousness to be included in the mix. Using this definition, researchers can investigate the notion of consciousness through several sub-questions, for instance: how does the brain process information? What physical properties distinguish a conscious system from an unconscious one?
22. AI researchers have also deliberated how artificial consciousness, or the subjective AI experience, might “feel”. It’s postulated that the subjective AI experience could be richer than human experience. Intelligent machines could be equipped with a broader spectrum of sensors, making their sensory experience far fuller than our own. Additionally, AI systems could experience more per second, because an AI brain would run on electromagnetic signals travelling at the speed of light, whereas neural signals in the human brain travel at much slower speeds.
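To put “much slower” in rough perspective, here is a back-of-the-envelope comparison in Python (my own approximate figures, not numbers quoted from the book):

```python
# Electromagnetic signals vs. fast human nerve conduction (approximate values).
speed_of_light_m_per_s = 3e8   # signals in an electronic "brain"
neural_signal_m_per_s = 100    # roughly the upper range for myelinated neurons

ratio = speed_of_light_m_per_s / neural_signal_m_per_s
print(f"electromagnetic signals are ~{ratio:,.0f}x faster")  # ~3,000,000x faster
```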
23. The race for AGI is in full swing. It’s not a question of if AGI will arrive, but when. We don’t know exactly what will happen when AGI arrives, but several scenarios are possible: humans might upgrade their “hardware” and merge with machines, or a superintelligence may take over the world. One thing is certain: humanity will have to ask itself some very deep philosophical questions, such as what it means to be human.
Please feel free to share your thoughts on what you think will be the future of humans in a post-AGI world. Until then, enjoy being human.