Part Eleven of Natural Intelligence - How Artificial Intelligence could spiral downward into real stupidity

Part 11 of 14: Deep Thought - The implications and dangers of trying to form a replica of human thought without context

The word of the day is “Sentient”, or in this context a “Point of Singularity”. Singularities can happen anywhere, and they are surprisingly common in the mathematics that physicists use to understand the universe. Put simply, singularities are places where the mathematics “misbehaves”, typically by generating infinitely large values, as at the “Big Bang” or in “Black Holes”. More recently, the term Singularity has risen to fame because of two thinkers. The first is the scientist and science fiction writer Vernor Vinge, who in 1993 wrote:

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

Blimey…

The other prominent prophet of the Singularity is Ray Kurzweil. In his book “The Singularity is Near”, Kurzweil broadly agrees with Vinge but believes the latter was too optimistic in his view of technological progress. Kurzweil believes that by the year 2045 we will experience the greatest technological singularity in the history of mankind. The implication is that such a capability could, in just a few years, overturn the institutions and pillars of society and completely change the way we view ourselves as human beings. Just like Vinge, Kurzweil believes that we’ll get to the Singularity by creating a super-human artificial intelligence (AI): an AI at a level that could conceive of ideas that no human being has thought of before, and invent technological tools more sophisticated and advanced than anything we have today.

Blimey squared…

Since one of the roles of this AI would be to improve itself and perform better, it seems pretty obvious that once we have a super-intelligent AI, it will be able to create a better version of itself. And guess what the new generation of AI would then do? That’s right – improve itself even further. This kind of race would lead to an intelligence explosion and leave poor old us – the simple, biological machines that we are – far behind.
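The compounding logic behind that “intelligence explosion” can be sketched as a toy calculation. This is purely illustrative, not a prediction: the starting level, the human baseline and the per-cycle improvement rate are all made-up numbers chosen to show the shape of the curve, nothing more.

```python
# Toy model of recursive self-improvement (all numbers hypothetical).
# Each generation designs a successor slightly better than itself,
# so capability compounds geometrically rather than growing linearly.

def generations_to_surpass(human_level=1.0, ai_level=0.5,
                           improvement=1.1):
    """Count self-improvement cycles until the AI exceeds human level.

    The parameters are illustrative assumptions; only the compounding
    behaviour matters for the argument.
    """
    gens = 0
    while ai_level <= human_level:
        ai_level *= improvement  # each version builds a better version
        gens += 1
    return gens

print(generations_to_surpass())  # 8 cycles at 10% improvement per cycle
```

The point of the sketch is that once each cycle feeds the next, the gap closes faster than intuition suggests: even starting at half the human level, a modest 10% gain per generation overtakes the baseline in single-digit cycles.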

Blimey cubed…

So what does this really mean? A number of bodies have taken this as an indicator of equal challenge and opportunity. Peter Diamandis, one of the co-founders of the Singularity University, states: “Creating abundance is not about creating a life of luxury for everybody on this planet; it’s about creating a life of possibility.” What the Singularity University and others teach is that we need to be involved in the progression that will inevitably take place – what Joseph Voros called “a generic foresight process framework”, or more simply a “Cone of Possibilities”. Enough of the long explanations, though. If you have followed the train of thought developed over the previous articles in this series, then I hope you will already have come away with the thought that we have to use technology for good. Whilst we may not know everything about how aspects of technology work, we need to understand its purpose and origin, and somewhere in our midst we have to have the capability, in trusted and visible hands, to maintain a relationship with technology that makes it a tool rather than a dominator of thought. The only way we can do this is to maintain our capability for critical thinking.

Albertism: “A clever person solves a problem. A wise person avoids it.”

So let us consider this very carefully: is that the extent of our ambitions for technology, to make our lives better, or are we striving to make something in our own image in a god-like manner, something that will eventually spiral out of our control? Are we striving to evolve our very being, or are we trying to create a parallel evolution that frees us from our labours, both physical and mental? Do we want to replace ourselves, and if so, why?

This leads into what remains as an ambition to be replicated. If we have already replicated the neural processes of our brains to make connections and learn, what remains is what we categorise as making us human: the emotions, empathy and “soul” of an individual. This is sometimes called making a machine “sentient”, meaning making it “feel”. However, in a digital sense we are not talking about making machines feel the physical sensory inputs (like pain) that define a sentient being. In the digital sense we are talking about self-awareness – the thing you can assess in animals when they can tell that an image in a mirror is them and not another animal.

There are a couple of problems with this when paralleled with human development. Humans are not born with self-awareness; it develops in the first few years of childhood. Many things in our brains evolve over time in this manner. The true ability to assess risk doesn’t develop until you reach your mid-twenties (unless you are an F1 driver, in which case it never does). This is why the young can appear much more reckless than their older peers. But let’s consider why that is the case.

The reason the brain doesn’t go full-on from day one is that it wants us to intertwine experiences into this development. Attitudes, learnings and even mistakes are a critical part of how we develop. The imperfections that we have and the errors that we make are a critical part of what makes us what we are. That may not be totally unique, as we have common experiences in groups or nations that are very similar; we define peer groups that have collections of ideas and protocols. But imperfections remain, and sometimes this is what draws us back from the brink. Think back to the game of noughts and crosses (or, for some reason, tic-tac-toe in America) in the 1983 film “War Games”, which taught the computer (called WOPR – pronounced “whopper”) that there was such a thing as a draw and hence there was no point in trying to win. Actually, this was the wrong lesson in a greater sense, as winning should not be ruled out in such a simplistic manner. If the computer really needed to be taught futility, it should probably have played Monopoly or Jackstraws. I had to look up the origins of Jackstraws, otherwise known as “pick-up sticks”, “pick-a-stick”, “spillikins” or “fiddlesticks”; it was popularised in Europe from 1801, following a much older game in China using yarrow stalks – and I don’t know what those are. The point is, though: would WOPR have known that, or, even more importantly, would it have cared?
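The lesson WOPR learned – that noughts and crosses is futile under perfect play – is one of the few claims here a machine can actually verify. A minimal minimax sketch (my own illustration, not anything from the film) searches every possible game from an empty board and confirms that neither side can force a win:

```python
# Minimal minimax for noughts and crosses: with perfect play by both
# sides, every game from an empty board ends in a draw (score 0) --
# the "futility" lesson WOPR learns in War Games.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a line, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for X: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0  # board full, no winner: a draw
    scores = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            scores.append(minimax(board, "O" if player == "X" else "X"))
            board[i] = " "  # undo the trial move
    # X maximises its score; O minimises it.
    return max(scores) if player == "X" else min(scores)

result = minimax([" "] * 9, "X")
print(result)  # 0: perfect play from an empty board is always a draw
```

Exhaustively proving the draw is easy precisely because the game is tiny; the article’s point stands that knowing the result is futile is a different thing from caring that it is.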

Therein lies the quandary: the idiosyncrasies of human thinking are important in giving variety, diversity and, sometimes, boundaries to thoughts and related actions. With AI we have the potential to accelerate thinking beyond evolution – to make something switch on and be ready, rather than really learning with all the emotions that are attached to each and every decision we make. Lots and lots of mini mistakes create an antidote to big mistakes, as we judge the balance between caution and ambitious action. AI can certainly learn, and it can do so from errors, but those error-based learnings could be somewhat binary in nature rather than nuanced.

The core question is why we would want machines to be better than a human in all senses, to the point of ultimately creating their own race. What really would be the purpose of that? What this requires goes back to the fact that we, as humans, need to retain the capability to constantly ask why, and also to be able to see and act in relation to the answers we generate. Some aspect of humanity always has to remain more developed, and able to provide better context, than pure digital thought can.

Coming Next - Part 12 of 14: Consequences of being passive - The downward spiral of outsourcing thinking.

Gavin S. Fraser

Consulting, Engineering & Technical Professional

7 months ago

Good to question why “Artificial” machines are required. I won’t use the term Intelligence, as machine learning starts with the parameters that programmers have come up with. One can’t copy true intelligence by design, and the sad fact is that companies will spend multi-¥$€£100s of millions on hardware and new AI software, yet not on employees, pensions, communities, facilities, or taxes. I fear a ‘meta’ world could be a sad augmented unreality, a long way from where money should be creating tangible health, wealth & prosperity in local and national communities, where ‘levelling up’ money should be lasting. Instead, billions are squandered by big tech selling what AI glasses can show, or what pictures or stories ChatGPT can come up with. AI could be more counterproductive than useful.
