What Price AI?
An old idea
Moore's Law has held for the last 40 years: an iPhone 14 stores a hundred thousand times more than the first IBM PC, and its processor is three hundred times faster. Programming, which is nothing but human intelligence behind the results of the machines, took advantage of this constant increase in capacity and speed to create an infinity of applications. These soon made obsolete several objects previously considered irreplaceable: who still uses fax machines, paper maps or telephone directories? And what about collections of VHS tapes, or audio and video discs?
It was inevitable that we would reach the current stage of Artificial Intelligence. The idea was in fact already present in the first robots of 19th-century literature and has been an ongoing project since the beginnings of electronic computing. The foundational Turing Test was proposed in 1950 to answer the question: “Can machines think?” It remains just as pressing today.
Is that thinking?
Chess champion Garry Kasparov’s defeat by IBM's Deep Blue computer in 1997 was not cause for much alarm, probably because the computer's victory had been determined by the simple ability to calculate millions of possibilities in seconds — that is, basically by brute force. But thinking, creating, learning? Not so much, people said. That changed when computers began to win even at the extremely complex game of Go, and it became clear that this was not just a matter of expanded processing power, but of deep analysis.
Neural networks, machine learning and deep learning, loosely inspired by the workings of the human brain, turn computers into machines that reason, analyze and learn. For many, there is no difference between these activities and an acceptable definition of "thinking".
Just like real people
What's more, advances in the ease of communication, spoken or written, dispensed with keyboards and coding experts quite some time ago. ChatGPT puts us very close to Stanley Kubrick's HAL 9000 or the Star Trek computers. Machines now write poetry, compose songs and produce paintings — heck, they are even being sued for plagiarism.
Here is an interim and non-exhaustive list of what AI can already do as well as or better than we can: medical diagnostics, machine handling — including driving cars — data searches and comparisons, personalized recommendations, fraud prevention, administrative tasks, spam filtering, facial recognition, inventory management, agricultural tasks, and financial and economic analysis and forecasting — and the list keeps growing. These are areas particularly suited to conclusions drawn from known data, which clearly constitute a large part of human activity.
Another huge advantage: a machine doesn't complain about salary, doesn't care about overtime and does not need to sleep during breaks. It is no surprise that several movements are trying to limit the effects of AI on the job market. If machines start to replace humans, only a more creative and exceptional minority will preserve their utility; the others will not even become proletarians, but will simply be considered useless.
Don't worry, we're not there yet
All right, let us assume they think. But intelligence is something else. While AI excels at performing specific tasks, it is unable to replicate human intuition and often struggles with common sense reasoning and understanding context. A complex situation that requires judgment, ethical assessments, or iterative decisions may be impossible to resolve, even for an AI programmed at the highest level.
As with everything else, it is Doctor Frankenstein's hand that will define his creature's face. And good intentions are not enough: imagine a machine whose mission is to eradicate the pests attacking the terrestrial ecosystem; long before incinerating weeds or exterminating locusts, its first order of business would be to destroy the human species, arguably the greatest environmental criminal in our planet's history.
Inserting ethical considerations into a list of instructions or algorithms is notoriously difficult. How can we avoid seemingly inescapable conclusions drawn from real-world distortions? We know that machines programmed for security and asset protection have shown racial biases, for example. Simply feeding the program prison statistics, in which the non-white population is disproportionately represented, builds prejudice into the system itself. How do we explain to a machine that correlation does not imply causation?
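The mechanism behind this kind of bias can be made concrete with a minimal, purely illustrative simulation (all numbers and group labels are invented for the example): two groups with identical true offense rates, one of which is policed twice as heavily, so its offenses are twice as likely to enter the records. A naive "risk model" trained on those records then rates that group as riskier, learning the policing, not the behavior.

```python
import random

random.seed(0)

# Hypothetical setup: groups A and B have IDENTICAL true offense rates,
# but offenses in group B are twice as likely to be detected and recorded.
TRUE_OFFENSE_RATE = 0.10
DETECTION = {"A": 0.3, "B": 0.6}  # probability an offense enters the records

def simulate(n=100_000):
    records = {"A": 0, "B": 0}
    totals = {"A": 0, "B": 0}
    for _ in range(n):
        group = random.choice(["A", "B"])
        totals[group] += 1
        offended = random.random() < TRUE_OFFENSE_RATE
        recorded = offended and random.random() < DETECTION[group]
        if recorded:
            records[group] += 1
    # A naive "risk model" trained on the records simply uses the
    # recorded rate per group as its risk score.
    return {g: records[g] / totals[g] for g in records}

risk = simulate()
# The model rates group B roughly twice as "risky" as group A, although
# by construction the underlying behavior is identical in both groups.
print(risk)
```

The point of the sketch is that no malicious rule was coded anywhere: the distortion lives entirely in how the data were collected, which is exactly why it is so hard to legislate away inside the algorithm itself.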
More than intelligence, what about consciousness, which so far has no agreed definition even for humans? The risks pointed out by AI critics stem precisely from a lack of ethics, or the lack of a value system clearly based on our conscious perception of the world: misinformation and manipulation, impossibility of human control, loss of privacy — all of which are well under way. Other possible dystopian developments are increased inequality in access to new technology, a new arms race or, in extreme cases, the disintegration of society itself. You don't have to believe in Murphy's Law to fear any of these risks. But are they inevitable?
Saved by incompleteness
Think of the following sentence: “This statement you are reading is false.” It is of course the famous Liar’s Paradox, a statement that contains its own contradiction. It is also a case of self-reference, as the sentence speaks of itself. Countless paradoxes can be constructed and analyzed by the human mind, but, like humor, they are absolutely beyond the reach of any computer, however intelligent it may become someday.
Any doubt about it was resolved long ago by Kurt Gödel and his Incompleteness Theorems, which show that no sufficiently powerful consistent system can contain all the statements necessary to establish its own coherence. In other words, any theoretical construction (a computer program, for example) will always depend on an external element accepted as true. A computer can therefore prove theorems of incredible complexity, but it will only do so if it is properly fed with axioms — statements which by definition are accepted as true without proof.
So it is proven: the human mind is superior to the machine’s, which will always need us in order to evolve, if only because we will always be the ones feeding it data, axioms, and code. That is precisely the point where we can avoid the worst possible scenario: the definition of fundamental rules for the development, operation and use of Artificial Intelligence.
Isaac Asimov's Laws of Robotics, which in theory would prohibit any harm to the human species, have already proven insufficient: we need a global agreement that prevents the creation of systems based on unethical postulates or simply aimed at the greatest possible good for the smallest number of people. Using AI solely for the profit of a few, or for the domination of one group by another, should be considered a predatory activity.
No Descartes, thank you
This is precisely where the greatest difference between humans and machines must come into play: the ability to make decisions through the best combination of reason and emotion, as Antonio Damasio demonstrated in Descartes' Error. Left solely to its infinite capacity for rational processing — its thinking, so to speak — the machine will always arrive at the most efficient solution, but this could mean, as we have already mentioned in the case of environmental protection, the pure and simple eradication of humanity. Even without going that far, decisions on resource distribution based on mere efficiency algorithms could condemn vast populations to misery or starvation.
Human history is filled with examples of seemingly counterintuitive actions: saving minorities from disaster, even at the expense of scarce resources; fighting superior military forces just to defend principles; dedicating whole lifetimes to research that will only benefit future generations — it is an endless list. Just as extreme rationality can be destructive, there is something purely emotional about human evolution.
We are not superior to machines in most respects, only in the fundamental one which ensured our survival as a species: we know that our most crucial decisions are not demonstrable, only true.
If AI is to serve the common good, it must never dictate our paths, but follow them.
(Excerpts of this article were first published in Portuguese in David Gotlib's excellent blog acertarnamosca)