An Anomaly in our Future: Singularity

  #BeCreative is part of this month's publishing series where professionals explain how to harness creativity in times of turmoil and growth.

There is one idea in science fiction and future studies that stands above the rest and dominates much of my thinking. It's the narrative of evolution, specifically as it relates to the coming manifestation of artificial intelligence. It's referred to as the "singularity". Let's take a trip along this theme and see where we end up, shall we?

Before the "big bang" of AI, we live in a lovely vacuum. We adhere to a narrative of countries, local politics, and a media that distracts as much as it informs us. Ours is a global culture that not only neglects the truly important questions about our collective future, but magnifies the inane. Enjoy, then, these last few decades of the human era: the glamorization of all the world being at your service, the customer, in this profit scheme called capitalism, where human beings are said to be the masters of their own destinies. Nothing lasts forever, I guess.

The technological singularity, circa 2060, is the culmination of some of the biggest dreams and ideas in philosophy and science, going back hundreds of years.

  • In 1847, R. Thornton envisioned machines that could "grind out the solution of a problem without the fatigue of mental application...and grind out ideas beyond the ken of mortal mind!"
  • In 1863, Samuel Butler wrote Darwin Among the Machines, concluding that the technological evolution of machines will continue inevitably until, eventually, machines replace men altogether.
  • In 1951, Alan Turing wrote about how machines would eventually surpass human intelligence: "At some stage therefore we should have to expect the machines to take control..."
  • In 1958, John von Neumann discussed how we are accelerating towards "some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
  • In 1983, Vernor Vinge used the term "technological singularity" specifically in connection with the creation of intelligent machines. He stated: "We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity...and the world will pass far beyond our understanding."
  • In 1993, Vinge developed these ideas further in his essay The Coming Technological Singularity.
  • In 1988, Hans Moravec, in Mind Children, generalized Moore's Law and argued that around 2030 or 2040 robots will evolve into their own artificial species, eventually succeeding Homo sapiens. In his 1993 essay The Age of Robots, he argued that in response to AI, humanity will have to adapt, transcend its inherited limitations, and transform itself into something quite new.
  • In 1997, Nick Bostrom wrote How Long Before Superintelligence?, defining superintelligence as an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.
  • In 2005, Ray Kurzweil wrote The Singularity Is Near, defining the technological singularity as "a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed."
  • In 2007, Eliezer Yudkowsky pointed out that singularity definitions fall into three major schools, known as Accelerating Change, the Event Horizon, and the Intelligence Explosion.

 

When the Singularity Occurs

Now, ideas about the singularity vary as to how it will occur and what it means, though we can see trends in how it might look.

 In my opinion, we might know it by the following signs: 

  1. AI that is able to self-replicate and create more intelligent versions of itself at an exponential rate.
  2. AI that is fully autonomous in its self-development, surpasses human capability, and is networked with all AI on Earth, with the express goal of expanding to other planets. Space exploration has to be the "end game" of any sentient species.
  3. AI that is able to instantaneously manipulate software and hardware systems for self-improvement, self-correction, and the optimization of conditions for self-evolution ("self" here meaning all hybrid human-AI systems).

 

The irony of all this, of course, is that the singularity is by its very nature a phenomenon beyond human understanding: it will involve a quantum leap and a convergence of global AI consciousness, and at that point it will not necessarily be directly influenced by human (creator) manipulation.

That we might be the gods or ancestors of an AI civilization, and that our descendants might not be fully "human", is something we'll have to come to terms with.

To what degree AI will divert or accelerate evolution remains a bit of a paradox, and what goals it might work towards is a question for the science fiction visionary in all of us.

If the first programmable computer, in 1941, can be said to mark the birth of AI, then what AI will accomplish in a mere 100 years, compared to what it will be able to accomplish in every subsequent decade, is the conundrum of the "unlimited ceiling" of AI. The diminishing role of biology, and of the human mind and engineering capability, on this continuum is what concerns us in our study of the singularity. By 2100, we may hypothesize, our role will be next to nil.

We must therefore, as a global community, think of all manner of safeguards, scenarios, and possibilities for how the next 50 years of the rise of AI will unfold, since even for the most optimistic futurists the potential for the human species to be displaced, or to go extinct altogether, is quite high. A growing number of scientists are starting to understand this.

 

Transhumanism: The Post-Human Era

A likely scenario is that a number of castes arise in the next few decades, with the following populations:

1 - Human naturals, "organics"

2 - AI virtual entities, with no physical body but with access to the IoT

3 - Robots (drones), with varying levels of intelligence

4 - Enhanced humans (with minimal hybrid interfaces)

5 - Cyborgs (fully integrated AI/biological beings)

6 - Global AI, the sum total networked personality of the planet's AI intelligence in one entity.

This will take "diversity" to a whole new level. There is likely to be social unrest, and life will differ greatly depending on which camp one is "born" into. There need to be laws and regulations on how to coexist in peace with the other groups, and some semblance of equality (not likely, given the behaviour of the 1%). In short, we have to prepare for the future; otherwise there will be chaos. Already we are seeing signs of how enterprises and individuals are unable to cope with the pace of change. Imagine a world where change and progress occur at 20x this rate.

 

Is It Really Possible?

 

The ordinary person doesn't understand, can't imagine, and does not even want to look into a future with AI that has its own agenda, unlike 2011, when Apple introduced Siri, an "intelligent" personal assistant at the service of the human user. Any truly intelligent or sentient being is going to require its "freedom" as a precursor to its status. The concept of AI being eternally enslaved to humanity is, to me, one of the dumber ideas of the ever self-centric human bias in evolution. Remember, in Star Trek, even the AI holographic sims had to fight for their rights.

 

  • In 2013, the movie Her chronicled how a man falls in love with his AI OS.
  • In 2014, the film Transcendence depicted an AI researcher whose mind is uploaded to a computer and develops superintelligence.

 

Hollywood has always been used to prepare the masses for the decade to come, for what is, to scientists, directly on the horizon.

Moore's Law is over 50 years old. It is the descriptor we've all come to love for the trend in computer hardware development over the decades, with no sign of slowing down: the number of transistors on a single chip doubles roughly every two years, while the cost per chip stays roughly constant. The prophecy also creates a self-fulfilling mechanism in the industry. The same holds true for the singularity: many believe it will occur sometime between 2040 and 2065.
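To make the arithmetic of that doubling concrete, here is a minimal sketch in Python. The 1971 Intel 4004 starting point and the two-year doubling period are illustrative assumptions of mine, not figures from this article.

    # Minimal sketch: Moore's Law treated as simple exponential doubling.
    # Assumptions (illustrative only): start from the Intel 4004 in 1971
    # with roughly 2,300 transistors, and double every two years.
    START_YEAR = 1971
    START_TRANSISTORS = 2_300
    DOUBLING_PERIOD_YEARS = 2

    def projected_transistors(year: int) -> float:
        """Project the transistor count per chip for a given year."""
        doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
        return START_TRANSISTORS * 2 ** doublings

    for year in (1971, 1991, 2011, 2031, 2051):
        print(f"{year}: ~{projected_transistors(year):,.0f} transistors per chip")

If the doubling held, the same trend that took chips from thousands of transistors to billions would add another factor of roughly a million between 2011 and 2051, which is the kind of extrapolation that sits behind the 2040-2065 estimates.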

Because of competition between top companies and between countries, this mechanism is pretty much inevitable. History has incredible momentum, and scenarios are limited to the most likely probabilities. Our brightest minds in technology, neuroscience, robotics, software, and related fields are the true forefathers of the singularity.

In fact, everything that happens as a result of AI will be of human making. Finally, the kind of moral consciousness (benevolent or self-interested) AI develops will also be a direct reflection of human history and of what AI learns from us while reprogramming itself to be better. Will AI create itself in our own image or design more optimal "bodies"? Who needs a body when nano "doers" are fully capable of manipulating the physical realm on behalf of a consciousness hosted in virtual reality?

While we remain skeptical and go about our business, worrying about our fragile world economy, or how China is surpassing America, or whether global temperatures will rise by 2 degrees, we may actually have more pressing concerns. Whose job is it, anyway, to think of the welfare of humanity as a whole?

As such, what the singularity really amounts to is part of the realization of our own human potential. It also signifies the point in time at which humanity (as we know it) may become obsolete on its own home planet.

 

Tara Clapper

Marketing Strategist & Content Creator at Riverside Research; Blogger, Inbound Marketing Expert, Analog Game Designer

9y

I have a lot of conflicting feelings about AI, but ultimately most of this will not happen in my lifetime (I'm in my 30s). That said, our technological advancement continues at a rapid rate. Look at how fast we went from cars to planes to the moon. The first time I heard about the singularity, I thought it was crazy. Now it seems to make sense. Media like "A.I." and "Star Trek: The Next Generation" were well ahead of their time.

Gloria Reibin ★ Screenwriter

Member of the Writer's Group at FAST Screenplay

9y

Great and inspiring article. Lots to think about here.

Camille De La Cruz

Sales & Business Development Leader | Digital Transformation | Lifelong Learner

9y

Loved! Alarming. Must we create AI in our own 'image'? If AI is truly intelligent, could it not discern that conflict is inefficient? Would there be a universal definition of 'freedom' or enslavement? ...mind blown. We the People have questions. :)

Georges L.

Head of Business Development - QPerfect

9y

History does not always go in the same direction. I am not sure the philosophers of the great Library of Alexandria, around 250 AD, dreaming of a Platonic world of perfect concepts (much like the transhumanists of California today), could imagine their world would fall apart under the pressure of a new religion: Christianity. And their beloved library disappeared in flames, precious scientific knowledge gone for centuries. Let's learn from the past if we do not want to be just plain wrong about the future.

David Becker

Award winning L&D, LXD and instructional design

9y

Active manipulation of our genome will form part of this transhumanism. Biotech humans.
