AI: The Hammer and the Invisible Hand
Will AI surpass human capabilities? Yes, it already has.
Will AI replace humans? No, but it doesn't matter.
Wrong questions don't give us the right answers. The right question is: how will we be controlled?
Does the hammer replace its wielder?
AI is a tool. All tools – not only computers and other machines – exist because they allow us to surpass human capabilities. And not only human: a chimpanzee using a twig surpasses other chimpanzees' termite-picking capabilities. The same goes for every tool, from the lever and the wheel to the computer and its AI applications. They allow us to do things we could do even without them (move heavy objects, calculate with many numbers, recognise patterns in data), but with an efficiency that would otherwise be far beyond our reach.
Different tools are good for solving different problems. If you try to use a tool on a problem it is not a good fit for, the results may not be as good as you hoped. Unfortunately, as the choice of tools available to us is often limited for one reason or another, we end up solving problems with the wrong tools. In other words: the tools we have shape the way we see our problems. "When you only have a hammer, everything looks like a nail" describes a cognitive bias called the law of the instrument: over-reliance on a familiar tool.
AI is a marvellous tool. It seems it can do anything we humans can, only faster and better. When we have this kind of magic hammer, we want to bang everything with it. But like any tool, AI has a confined area of usefulness: it only works with clearly specified goals. What often confuses us is that something exceedingly complex can still be clearly specified, given enough capability to handle very high levels of complexity – as is the case with computers, less so with the human brain.
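To make "clearly specified goal" concrete, here is a minimal, entirely hypothetical sketch: a thermostat setting chosen by minimising a numeric objective. The comfort temperature, energy price, and cost model are all invented for illustration; the point is only that once a goal is written down as a number to minimise, a machine can optimise it mechanically, however complex the formula gets.

```python
# A "clearly specified goal" in machine terms is an objective function.
# Hypothetical example: pick a thermostat setting that balances energy
# cost against discomfort. All numbers here are invented for illustration.

def objective(setting, comfort=21.0, price=0.12):
    """Lower is better: energy cost plus squared distance from comfort."""
    energy_cost = price * abs(setting - 15.0)  # heating above a 15-degree baseline
    discomfort = (setting - comfort) ** 2
    return energy_cost + discomfort

# Exhaustive search over candidate settings in 0.1-degree steps.
# Trivial to optimise, because the goal is fully specified as one number.
best_value, best_setting = min(
    (objective(s / 10), s / 10) for s in range(150, 260)
)
```

Contrast this with a question like "is this good or bad?": there is no agreed-upon function to minimise, so the hammer has nothing to hit.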
"Is this good or bad?" looks like a simple question, and sometimes it might even be simple, but it very rarely involves clearly specified goals. The more relative things get, the less clearly they can be specified. Dichotomies like "good or bad" are inherently relative within themselves, but also relative to correlating dichotomies and especially to observing subjects. Not a good nail for an AI hammer.
Intelligence of course very much distinguishes us from other species, but it would be a gross oversimplification to say that being human is defined by intelligence. While our intellect has allowed our species to populate the whole globe and exploit other species, it is our emotions that actually drive our decisions. We also have imagination: it allows us to create models of how things may lie in the future, or of what emotional state another individual might be in. We create goals for ourselves, and we have a capacity for empathy. Both of these are very far from clearly specified systems, and as such very far from the machines' domain.
No, the hammer will not replace its wielders – but will it control them?
Invisible hand in control
Laws and morals control us – among many other things. Control restricts the scope of choices from which we can pick our actions. It can also force us to take actions we wouldn't want to. Very often control goes unnoticed: we simply act, without giving a second thought to why we chose that action.
Laws and morals evolve, more or less, from common consensus. Even if not all members of a population take part in forming the rules, at least the makers of the rules and their motivations are often visible to everybody. You then have a way to influence the decisions of the lawmakers: you can vote in an election or gather a crowd for a demonstration.
The source of control is not always visible or tangible. One such source is the "Invisible Hand": the mechanism that automatically maximises the benefit to society as a whole when market actors have maximal freedom (the interpretation, and even the existence, of the Invisible Hand is a subject of much debate, but that is not relevant here). It would seem that where there is no regulation, there is no control either. That is not the case – the control is just hidden. The changes in demand and supply, and the way prices are formed, limit the available choices of both consumers and producers. But because the source of this control can be neither identified nor addressed, it is impossible to influence.
Even though the concept of the Invisible Hand originates in economics, it describes very well systems where machine algorithms are the source of control. Whether it is the price we pay for flight tickets or groceries, the selection of news articles we get to see, or the choices of autonomous vehicles, the control comes from a very complex and distributed system whose logic is beyond our comprehension and which cannot be pinned down. As such systems emerge, they become part of a larger, deeply interdependent whole and can be very difficult or impossible to remove.
These systems of algorithms that control us don't have any evil master plan to enslave humanity – they have no emotions or imagination, and they don't create goals for their lives. The creators of those algorithms may have all sorts of plans, benevolent or malicious, but it might not even matter: the more complex a system is, the more chaotic its behaviour – small changes in one place may cause very large changes elsewhere. What matters is that we are creating systems that very much control us, and we have very little means of influencing that control.
We need (transparent) AI
AI can be seen as just a hugely efficient optimisation algorithm. And there is certainly a lot to optimise! The size of the human population, and especially the way it consumes the planet's resources, has put us on an utterly unsustainable path. If we want to avoid dramatic and tragic outcomes in the coming decades, we must optimise the production of raw materials and goods, distribution and transport, services and societies.
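As a toy illustration of what "optimisation algorithm" means, here is a sketch of gradient descent, the mechanism under much of modern AI. The quadratic cost function and its "ideal production level" of 42.0 are made-up assumptions; real systems optimise vastly more complex objectives, but the principle is the same: make a number smaller, step by step.

```python
# Toy gradient descent: repeatedly step downhill on a cost function.
# The quadratic cost and the ideal level 42.0 are invented for illustration.

IDEAL = 42.0

def cost(x):
    return (x - IDEAL) ** 2   # "waste" grows with distance from the ideal

def grad(x):
    return 2 * (x - IDEAL)    # derivative of the cost with respect to x

x = 0.0                       # arbitrary starting point
for _ in range(200):
    x -= 0.1 * grad(x)        # each step moves against the gradient
```

The machine never "wants" anything here: it just reduces a number it was given. That is the entire confined area of usefulness.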
We don't need transparency in all situations. When we use AI to optimise power plants or energy distribution, we don't need to know how the results were achieved – as long as they are the best possible. But when you are denied a bank loan or insurance, a government benefit or an organ transplant, then "computer says no" is not a good enough explanation.
When deciding how and where to implement AI, it's worth asking: are you happy to be controlled by your hammer, wielded by an invisible hand?
Product Manager at Ardoq
5y: Kasimir, you pose a good question, but I'm not sure if you see far enough ahead when answering it. Humans will be the hammers. Or rather, the nails. A nice read that elaborates on this and backs it up better than I am capable of is https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html