Part Twelve of Natural Intelligence - How Artificial Intelligence could spiral downward into real stupidity

Part 12 of 14: Consequences of being passive - The downward spiral of outsourcing thinking

If you have read all the other articles in this series you will have picked up the pattern by now: a bit of a rant, a little reminiscing, a smattering of factoids and then something to think about. So you have the measure of me a little, or at least of the public persona that I choose to project, in just over 17,000 words (according to the little word counter at the bottom of my screen). I have opened up bits of my mind to you to illustrate points and (hopefully) entertain, but amidst that I will have demonstrated some of my biases, and I would guess that, without clever algorithms helping, you could probably figure out some things about me as a person: my demographic, my upbringing and my education level. That is not a bad thing. Knowing who you are dealing with helps you judge whether you relate to that person, and ultimately whether you trust them and value their opinion.

However, I do that without knowing anything about my audience – you are all (sort of) anonymous to me. Of course, I can see if you subscribe to my newsletter, and some of you are kind enough to like what I am doing here, but I don’t harvest that information to target things at you; rather, I use it to feed my inquisitive nature and see what interesting things other people have to say. I use it to feed my knowledge so that my next opinion can be a more informed one. That is one commonality I have with AI and its method of absorbing information.

However, with me that process is not systematic and can be flawed, and that lack of precise definition is a good thing. I started off by saying I was going to do this in the style of “ChatKHR”, but actually that was not entirely true. I hope you will notice that I am not endlessly referencing other people or principles; I have been trying to use the original quantum computer, my brain, to recall things in what may be an entirely imperfect way. Have you ever considered what dreams are all about? “Do Androids Dream of Electric Sheep?” was the question posed by Philip K. Dick in his 1968 novel (the date I did have to look up, but it makes the book nearly as old as me!), which was ultimately adapted into the film Blade Runner. For us, the downtime of sleep is used by the brain to make random synaptic connections to see what happens. It is experimenting with memories, which is why you dream some totally random collections of things that you have seen and done before, but not in the way that you did them. How do you represent that element of “playing” in the structured instructions that a machine implies?
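If you will forgive a technologist’s doodle, here is a deliberately toy sketch of that idea in Python – my own loose analogy, not any real machine-learning system. It treats “dreaming” as randomly splicing stored memories into combinations that never actually happened:

```python
# A loose analogy only (a toy sketch, not a real ML system): "dreaming"
# as the random recombination of stored memories during downtime.
import random

# Each memory pairs a place with an action that really happened there.
memories = [
    ("beach", "running"), ("office", "typing"),
    ("kitchen", "singing"), ("garden", "reading"),
]

random.seed(7)  # fixed seed so the "dreams" are repeatable
for _ in range(4):
    place, _ = random.choice(memories)   # take the place from one memory
    _, action = random.choice(memories)  # and the action from another
    # The splice produces scenes never actually experienced together.
    print(f"dream fragment: {action} at the {place}")
```

Real systems do something distantly related in techniques such as experience replay, but the open question above – how to make that genuinely playful rather than merely mechanical – still stands.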

Anyway, what I am trying to illustrate, and it has been the thread through all my comments that I hope you have picked up, is that thinking in the old-fashioned, flawed and sometimes random manner is not just a good thing but a necessary controlling mechanism for decision-making machines.

Albertism: “Unthinking respect for authority is the greatest enemy of truth.”

Healthy scepticism and critical thought: that is my key takeaway phrase. We can ask machines (and is “machines” even a correct definition any more?) to think on our behalf, but we should never abdicate responsibility for thinking, because to do that will genuinely lead to our demise. We need the machines to have input, but accountability must always sit with those who can dream and, perhaps even more importantly, can have nightmares.

Albertism: “The most beautiful experience we can have is the mysterious. It is the fundamental emotion that stands at the cradle of true art and true science.”

Right on, Albert – we need to retain that wonder at things that are new, and we need to keep randomness in life along with the perception of beauty, love, passion and fear, because all of those things contribute to the human condition. In automation, we are seeking a betterment of life. I would maintain that undermining our ability to think, or leading us to believe that we no longer have to think for ourselves, will ultimately undermine our quality of life and will not be a healthy way forward for the very automation we are trying to promote. Let me elaborate on that a little…

In relatively simplistic terms, at the moment we are teaching AI as we would a child. It is learning from things that we give to it. It is absorbing our original thoughts and actions, trying to piece together an order and an “opinion” from a myriad of sources, but with precision of memory. Nonetheless, all of this learning comes from the collective experience that we have accumulated the old-fashioned way. Now, if we get to a point where we decide we no longer need original thought, because machines much cleverer than us will figure everything out for us, then we remove the generation of new, original human thought. So how does AI progress from that point? Like we do now, it has to start making stuff up itself, and that cannot be a good thing. For us to get the most out of what technology can achieve, we have to remain ongoing, active participants in the ideas process.

Without that “partnership”, or segregation of duties, one of two things will happen: either the “Skynet” moment from the Terminator film series or, perhaps more likely, AI getting dumber through lack of original input. Consider nature’s version of this: genes. We are all aware that if you limit the gene pool, the lack of genetic diversity means that bad mutations are much more likely and diseases can become more prevalent, because the random element of how we are made up actually provides some statistical protection. Apply this to a closed loop of AI, where new “randomness” is not being fed in from outside its own ecosystem, and potentially the same will happen. The AI will create more and more anomalies, or “hallucinations” to use the technical term (yes, really, it is called that), and the AI will actually get more stupid over time, or diverge away from its original precepts. A toy illustration of that closed loop is sketched below.
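To make that concrete, here is a deliberately simplified sketch in Python – my own toy illustration, not anyone’s actual training pipeline. The “model” is nothing more than a Gaussian fitted to data; each generation is retrained only on samples drawn from the previous generation, with no fresh human input:

```python
# Toy illustration of the "closed loop" above: a model retrained only on
# its own previous outputs gradually loses the diversity of the original
# human data, because sampling error compounds with nothing to correct it.
import random
import statistics

def fit_gaussian(samples):
    """'Train' the model: estimate the mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n):
    """'Generate' synthetic data from the current model."""
    return [random.gauss(mean, stdev) for _ in range(n)]

random.seed(42)
# Generation zero learns from fresh, diverse "human" data.
human_data = [random.gauss(0.0, 1.0) for _ in range(200)]
mean, stdev = fit_gaussian(human_data)

for generation in range(1, 11):
    synthetic = generate(mean, stdev, 200)  # the model consumes only its own output
    mean, stdev = fit_gaussian(synthetic)   # and is retrained on it
    print(f"generation {generation:2d}: mean = {mean:+.3f}, stdev = {stdev:.3f}")
# Over many generations the fitted stdev tends to drift and shrink: no new
# "randomness" enters from outside the loop, so errors are never corrected.
```

The drift in any single run differs, but the mechanism matches the gene-pool analogy above: sampling error compounds generation after generation because nothing from outside the loop corrects it. In the research literature this degradation of models trained on their own outputs is often called “model collapse”.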

I am not the harbinger of doom here. I am a technologist and appreciate (and encourage) the huge advances that we can make through technology, because we have already demonstrated them. However, my plea is that we do not achieve this by totally outsourcing thought to technology. We need a whole bunch of abstract things that we will always do better as humans, both to temper progress and to keep it relevant to why we wanted it in the first place. That intent should not just be about productivity and profit, but about genuine human progress, assisted by technology. This does not deny the inevitability of machines doing things currently done by humans, but the human element should be that this frees us to move up to a higher plane of thought, not a lower one. We need to teach people to think more, not less.

Albertism: “Weakness of attitude becomes weakness of character.”

Coming Next - Part 13 of 14: The Final Frontier - Unchallenged sentient thinking

Just a quick note to say that I'm enjoying your 'ramblings' on this topic, Kurt. It's nice to see a slightly tangential, pragmatic narrative on the subject of AI from a technologist, rather than just the usual 'gee whizz guys, look at those algorithms' one. Keep well, Colin.
