Part Thirteen of Natural Intelligence - How Artificial Intelligence could spiral downward into real stupidity
Kurt Roosen
Head of Innovation, Isle of Man Govt Digital Agency, BCS Fellow, Freeman WCIT, Member of ISACA, Member of ODI
Part 13 of 14: The Final Frontier - Unchallenged sentient thinking
Here we go then, the last real chapter before the blockbuster conclusion, and it’s about sentient thinking. Now let’s not start by getting into a big convoluted debate about what sentient means. If you want to do that, just go back a few articles and you’ll find it there. Let’s work on the basis that for me it means two things – self-aware and self-evolving. For me these are the crux of what makes people different from machines, together with a smattering of emotion, empathy, conscience and fallibility. Put all these together and you might want to refer to them as the “soul” – the intangible elements that make us unique, and that make us consistently cling to the belief that these elements transcend life and give us a purpose beyond simple biodiversity.
So, that’s all good and dandy then. These are all the things we need to design into machines to make them sentient. But have you noticed that all of these elements are very subjective? If we were asked to define even ourselves in these terms, we would often not be able to do so. That being the case, how can we define within logic machines what these are, and even if we could, how could we be sure the interpretation would be the same? Sometimes our ability to forget or be inconsistent is our greatest asset, and also our firebreak to stop us from doing certain things that, if laid out in pure logic terms, would make sense. Remember the logic bomb that was the basis for the film I, Robot, where the robot saved the adult rather than the girl from drowning in a sinking car, based on probability of survival? As humans, I wonder how many people have drowned trying to save a stricken dog – emotion taking precedence over logic – but not a bad thing to have in your armoury.
Albertism: “Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.”
Now let’s take that thought a bit further. We have over 8 billion sentient beings roaming this earth with the most sophisticated biological quantum computers nestled between their ears. Yet with all of this processing capability, have we really figured out how we function as a unit? In fact, the differences of location and circumstance, even setting aside tribal and racial differences, have never been ironed out over millennia of evolution. In some senses, volume has only made things worse, with differences highlighted and derided rather than celebrated.
With this as a context, if we were trying to model these sentiments, in whose reality would that be? It is likely that this modelling would come from those building and controlling the development of the devices, and that is not the entirety of humanity. Even worse, divided as we already are, is it possible that in the creation of a sentient machine we will create another tribe, another view of the world, another potential to be prejudiced against those that are not of its kind? I know that sounds very like the Skynet view of the world from the Terminator films (I don’t know what it is about films today, but there will be more, so brace yourselves – it seems I am building not a bibliography but a filmography), but there is an interesting path there which we have not refuted in human existence. Namely: one tribe acquires some knowledge, it uses that knowledge to gain attributes, it decides other tribes who do not have the knowledge are stupid, and it subjugates them. This is fundamentally how the British Empire was built – an organised party subjugated other nations because they didn’t know they needed to be organised. Although I do prefer the Eddie Izzard viewpoint that the British Empire was built on the basis that we had a flag that we could plant, and if you didn’t have one you lost…
So, given that making something sentient is inherently difficult and potentially dangerous, why do we want to do this in the first place? For me, technology and technological development is a wonderful thing; it is a natural part of our evolution. When you look at the possibilities for the betterment of the human condition – the curing of diseases, the removal of danger from our lives, the enrichment of education, the solving of environmental problems – all of this can be massively assisted by technology and the ability of AI to increase the speed at which things are done. But that is about processing and analytics and machine learning (very quickly). Making something self-aware and giving it the ability to define its own future evolution is a different matter.
What you have to consider is that alongside all of that is a commercial imperative. Unlike biology, technology does not (yet) evolve without commercial impetus – someone has to pay for it. That payment requires a return of some type, and that will often be in terms of productivity that displaces people. Create an automated, AI-driven chatbot and you don’t have to employ a person to do this any more. Now this is just a different displacement to the call-centre offshoring that has taken place over a number of decades across nations (and one might argue that, in comparison, the AI chatbots can actually be more helpful). But this is about machine learning and pseudo-intelligence. When we talk about sentient machines, we are not just talking about recreating a function that a person once did; that is the natural order of things. We are talking about replacing entire people, and that is where we have to pause and think about why we are doing that – are we running out of people to use? Clearly not…
So the argument that prevails here is that “menial” tasks can be taken away and therefore human life can be devoted to higher and better things. Do we really have such a balanced world right now where everyone can have choices like this? We already have massive global disparities, and a lot of them are based on wealth. If we are truly going to create something to elevate the human experience, how are we going to make that happen when technology will always cost more, be more usable in rich economies, and displace more people in poor economies? Would we simply be enhancing the disparities that already exist, making the rich richer and the poor poorer because they cannot play in the same game?
For me, this is not an inevitability, but it is a point of inflection where we need to consider both consequences and motivations. Ideally, technological advancement should strike a balance between commercial expediency and human benefit. If there is commercial interest but humanity (or at least human customers) gains no advantage, then this is a form of exploitation. There are things that would be to the detriment of people, from both the wider and individual perspectives, and that is what we have to consider as a society. Sometimes, when asked why we should do something, the answer “because we can” is not always the right one.
I want to progress forward and take every advantage that human ingenuity can afford us, to benefit the widest range of use cases possible. But handing over that ingenuity is a step that should be taken only after very serious consideration, not by accident of design. In conjunction with this, the premise of this whole series of articles is that, to participate properly in that debate, you need to be able to understand all sides of it, question the assertions, and be aware of the risks of particular actions. We always take risks, as I was reminded by the film (there I go again) Oppenheimer recently, but we have to know the percentages of risk versus reward to know what to do. I don’t see the sentient argument being compelling enough, in human advancement terms yet, to be worth the risk. However, I stand prepared, and armed with a sword of scepticism, to be proven wrong. But let’s have the debate…
And I finish off with one last film reference – for a wonderfully worked piece that goes through both the human condition and the development of a sentient being, you probably can’t get any better than watching Bicentennial Man with Robin Williams (no, not Robbie Williams, the singer of Angels, but the actor who started off life as Mork). A slight spoiler alert – in the end, the one human attribute that ultimately completed the robot’s ambition to be sentient was the ability to die…?
Na-nu Na-nu (look it up… Mork and Mindy)
Albertism: “Everything should be made as simple as possible, but not simpler.”
Coming Next - Part 14 of 14: Conclusion - What will I do with my Saturday mornings after this?