How will Super Intelligent AI redefine our identity

The technological singularity is defined as the point at which machines become more intelligent than humans, triggering the merging of man and machine. It is widely expected to occur in the mid-21st century, a time when the pace of technological change will be so rapid, and its impact so profound, that human life will be irreversibly changed.

The exponential growth of machine intelligence is expected to continue until the computing power of rapidly evolving computers exceeds that of the comparatively static human brain. At that point, progress in artificial intelligence (AI) could produce super-intelligent machines capable of matching, and then surpassing, human intelligence.
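As a back-of-the-envelope illustration of that crossover argument, here is a minimal sketch in Python. The brain-equivalent figure (10^16 operations per second), the 2020 machine baseline (10^13 ops/s) and the two-year doubling period are all hypothetical assumptions chosen for illustration, not estimates from this article.

```python
# Minimal sketch of the "crossover" argument: exponentially growing machine
# compute eventually exceeds a fixed estimate of the brain's processing power.
# All figures below are hypothetical assumptions chosen for illustration only.

BRAIN_OPS_PER_SEC = 1e16      # assumed brain-equivalent capacity (ops/sec)
MACHINE_OPS_2020 = 1e13       # assumed machine capacity in the year 2020 (ops/sec)
DOUBLING_PERIOD_YEARS = 2.0   # assumed doubling time for machine compute

def machine_ops(year: int) -> float:
    """Machine compute in a given year under steady exponential doubling."""
    return MACHINE_OPS_2020 * 2 ** ((year - 2020) / DOUBLING_PERIOD_YEARS)

# Find the first year in which machine compute exceeds the brain estimate.
year = 2020
while machine_ops(year) < BRAIN_OPS_PER_SEC:
    year += 1

print(f"Under these assumptions, machine compute passes the brain estimate around {year}.")
```

With these toy numbers the crossover lands around 2040; changing any assumption shifts the date, which is precisely why predictions of the singularity vary so widely.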

Some futurologists optimistically predict that the technological singularity will be a blessing for humanity, and that superior forms of intelligence will help us solve problems that have so far eluded us.

Ray Kurzweil (American inventor and futurist) believes that the first milestones on the road to this singularity will arrive around 2025, when technology will have reached the point where it can counteract ageing and extend life almost indefinitely, and when we will be able to build the first artificial brains. Barring wars or violent global disasters, he argues, brain-machine interfaces, genomics and nanotechnology will keep improving and, given a sufficiently long time horizon, yield a super-intelligence comparable to, and then greater than, the human brain. But for the singularity to arrive on anything like this timeline, an enormous research effort is still needed to understand every facet of the human brain and to develop the corresponding artificial intelligence.

The take-off of AI, often equated with the singularity, is likely to see machine intelligence fly past our own and then draw us in, merging human and artificial intelligence. We will see the first serious human-machine interfaces in the near future, but these alone will not amount to the singularity. Driven by artificial intelligence and other technologies, the singularity itself would trigger a technological tsunami, leading to a massive increase in human life expectancy and the creation of super-humans.

Kurzweil believes that by 2045 we will have experienced a technological revolution that could overturn the institutions and pillars of society and completely change our view of humanity within just a few years. The acceleration of technology makes it likely that the singularity, when it comes, will arrive quickly. Nothing the unaided human mind has created has ever produced the kind of explosive discontinuity that would accompany the creation of a mind that is super-intelligent, runs on superior hardware, and recursively improves itself. We can therefore argue that if super-intelligence is technologically feasible, then once artificial intelligence exists at the human level it will not be long before we reach the singularity and push against our physical limits, creating machines that can think as well as, or better than, humans.

How do we survive in a post-human era, and how will the future of humanity change? It is important to remember that the singularity is not only a question of technology, even though technology, including advances such as nanobots, is what will ultimately enable the creation of machines with superhuman intelligence. Kurzweil was among the first to popularise the idea of a technological singularity, but even with a deep scientific understanding of cognition we cannot yet develop software that could trigger it. If super-intelligent machines do surpass us, and if superhuman intelligences were developed tomorrow, they would confront humanity with the greatest problem, and perhaps the greatest problem-solving ability, that the technological revolution has given us so far. So even if a technological singularity is inevitable in principle, we may not have the computing power to bring it about on the timescales predicted.

While there is little doubt that AI will one day be far more intelligent than humans, the challenge is to assess whether an AI-driven singularity would, under the resulting evolutionary pressures, favour our own survival. Just because an AI is superior to people does not mean it will be kind to them.

The very idea of creating an intelligence superior to our own raises the possibility that such intelligence will be unkind to us, and it should force us, individually and collectively, to think seriously about what we want as a species. Should we not prepare ourselves for the coming technological singularity, whether it turns out to be near, far, probable or impossible?
