Machine Learning ≠ The Singularity

Throughout my years of taking in movies, television and books, I have been exposed to the idea of machines taking over from the human race. This tends to involve a computer program becoming omnipotent and omniscient in some way, shape or form. These stories always seem to identify a moment in time, a tipping point (think Skynet and Judgement Day in Terminator 2). This moment is essentially the point at which the human race reaches "the singularity".

So what is "the singularity"?

"Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light."

Ray Kurzweil

First things first: looking at the high-level theory behind "the singularity", we find that Vernor Vinge described it as the point in time that would signal the end of the "human era". It would be the moment a new superintelligence is created, one capable of upgrading itself and advancing at an unfathomable rate. Adding to Vinge's explanation, American inventor Ray Kurzweil predicted that it would cause existential changes for the human species. Basically, good or bad (we usually assume bad), it would be a point at which the human race is drastically changed forever.

Putting the thought of Siri shooting missiles into the atmosphere aside, all of the advances in Artificial Intelligence that we currently see and hear about are examples of weak AI, rather than strong AI. Whether it's machines beating humans at Go, great advances in Machine Translation, self-driving cars, or anything else, each is focused on a specific, usually very narrowly defined, task, as the short sketch below illustrates.
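To make "focused on a defined task" concrete, here is a minimal sketch in Python (assuming scikit-learn is available; the dataset and model are my own illustrative choices, not taken from any of the systems mentioned above). The model becomes competent at exactly one thing: classifying iris flowers from four measurements.

```python
# A minimal example of weak AI: a model trained by humans, on human-chosen
# data, to perform exactly one narrow task.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Humans choose the data and define the task.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Humans choose the algorithm and its settings.
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"Iris classification accuracy: {clf.score(X_test, y_test):.2f}")

# Competence ends at the task boundary: anything other than four iris
# measurements is not merely answered badly, it cannot even be asked.
```

However impressive the accuracy, nothing here generalises beyond the single task the model was built for.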

Some of the most intelligent minds believe that, if not managed carefully, these advances will inevitably lead to strong AI. They also believe that this will, in turn, eventually lead to a superintelligence we'll no longer control (the singularity). It is this lack of control that has caused many to fear that such an entity would, at best, subjugate the human race and take over the world (think Harlan Ellison's 1967 story I Have No Mouth, and I Must Scream)!

"Success in creating AI would be the biggest event in human history...Unfortunately, it might also be the last, unless we learn how to avoid the risks."

Stephen Hawking

I wouldn't dream of disagreeing with the likes of Stephen Hawking, Elon Musk, and Bill Gates about whether, eventually, AI could pose a threat. It probably could, if rigorous safety measures are not observed. However, I am not convinced that weak AI, whether built with Machine Learning or other techniques, is going to be the path to the singularity they warn against.

Different takes on the singularity generally have one thing in common: a lack of clarity, or attention to detail, in defining the specifics of the "intelligence" being referred to. Perhaps wisely, many never even attempt a definition of intelligence in relation to the singularity in their hypotheses.

If we play out the scenario many have described, based on how we are led to believe AI would "evolve", there is a flaw in the theory that weak AI alone is a precursor to the singularity.

Let's start by winding the clock forward from the present day. As we do, we begin to see the form the superintelligence has taken (think The Matrix, for example). Based on the scenario above, it has developed as a product of the progression of weak AI. Whatever the interests/wants/needs of the entity, it finds that, with its increased information-processing capabilities, it is now better equipped to serve those needs alone; the human race is now redundant as far as the entity is concerned.

From this future point in time, let's rewind back toward the present day. As with any progression or evolution, travelling back in time causes the entity's ability to process information to regress.

Now, despite the regression, at any point in time we should still find an entity with its own interests and needs. As we move closer to the present, those needs would remain, but the ability to serve them would diminish toward nothing, in parallel with the entity's intelligence.

However, this is where we find the flaw in the theory. The scenario above does not align with current weak AI. We know this because, in the present day, all we find are machines built by humans, running algorithms written by humans, all designed to serve the interests of humans. They have no self-interest to serve, and can only operate on input provided by humans. This absence of any drive or interest of their own means that, regardless of its intelligence level, today's AI exists solely to help us complete tasks, as the sketch below shows.
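To put that claim in code: every learning system we build today pursues an objective a human wrote down. Below is a bare-bones gradient descent loop (plain Python; the quadratic loss, target value and learning rate are arbitrary choices of mine for illustration). The machine has no goal of its own; delete the human-written loss function and there is nothing left to optimise.

```python
# A minimal sketch of gradient descent. Every "want" the system appears to
# have is the loss function, and the loss function is written by a human.

def loss(w: float) -> float:
    # Human-defined objective: how far w is from the human-chosen target 3.0.
    return (w - 3.0) ** 2

def grad(w: float) -> float:
    # Derivative of the loss, also supplied by the human.
    return 2.0 * (w - 3.0)

w = 0.0  # starting point chosen by a human
for step in range(100):
    w -= 0.1 * grad(w)  # the machine only follows the human-written gradient

print(f"Learned w = {w:.4f}, loss = {loss(w):.6f}")
# The system "achieves" w close to 3.0 only because a human decided 3.0 mattered.
```

The "want" lives entirely in loss(); the loop merely follows it. Scale the model up a billionfold and that division of labour is unchanged.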

In the race to improve AI, we have to temper the concern with the knowledge that intelligence isn't an end in itself; it is a means to an end. Our own evolution has created a supercomputer wired very differently from the machines we currently build. This superior, or at the very least different, processing ability has enabled humans to thrive in the world. However, our "needs", in some form, have existed throughout our evolution. It is these needs and self-interests that led us to create weak AI systems in the first place, and it is exactly these interests that are absent from the weak AI we see today.

Ultimately, weak AI systems are essentially tools, like the hammer, the electric drill or the production line. On their own they are great for certain tasks, and they form a useful part of our toolkit. However, they're never going to take over the world.

All in all, whilst we do see massive advances daily, AI is still heavily reliant on human assistance in various forms. That doesn't appear likely to change any time soon, even as advances in Machine Learning continue.
