ARTIFICIAL INTELLIGENCE: POTENTIAL AND DANGERS IN THE SHORT AND MEDIUM TERM
When a technology is truly disruptive, we must be humble enough to recognize that we do not have even a faint idea of where its evolution may lead. In the distant past, for example, some people tried to ban the fitting of X-rays to theater binoculars.
The field of I.T., and the Internet in particular, offers many interesting examples of failed predictions made by experts.
It is hardly surprising, therefore, that celestial and apocalyptic visions coexist when we talk about the future of artificial intelligence (A.I.): humans devoted to a contemplative life while robots equipped with A.I. work for them, killer robots, a wholesale loss of jobs... everything seems possible when we let our imagination run wild in a field whose long-term evolution is so difficult to foresee.
However, with the technological capacity available today, we can anticipate events that are already occurring or will occur in the short and medium term. These are probably the ones that should concern us most at this time.
Artificial intelligence, when it goes beyond a laboratory exercise and is actually applied, has two clearly differentiated parts: a learning module, which builds the system's knowledge, and an execution module, which applies that knowledge to the task at hand.
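The following minimal sketch illustrates that two-part structure. The library, dataset, and model choice are illustrative assumptions, not anything prescribed here; the point is only the separation between a learning phase that runs offline and an execution phase that applies a frozen result.

```python
# A minimal sketch of the two-part structure: learning happens offline,
# execution only applies the frozen result. scikit-learn and the iris
# dataset are stand-ins chosen for brevity.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Learning module: runs offline, typically on the builder's side.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Execution module: deployed to the field, frozen. It can apply what was
# learned, but it has no mechanism to revise the model itself.
def execute(sample):
    return model.predict([sample])[0]

print(execute(X[0]))  # behaves well on inputs like those seen in training
```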
Assuming that the learning algorithms are adequate, this would be a very simple outline of how A.I. works. However, some things get lost along the way.
The biggest problem, however, is related to the interaction between the system and the human who legitimately uses it: it may create a disincentive to learning.
Hubert Dreyfus established a learning scale (shown at the beginning) that runs from novice, through advanced beginner, competent, and proficient, to expert. He argued that a system endowed with artificial intelligence could not go beyond the intermediate stage he called "competent".
When we see cases such as AlphaZero, capable of beating world-class players at games such as chess or Go, this objection would seem difficult to defend, as if Dreyfus's claim had not stood the test of time. However, Dreyfus himself offers a way out of the apparent contradiction by pointing out that the scale refers not to the level of performance but to the quality of knowledge. In other words, a competent system that simply executes a set of automated procedures may perform better overall than an expert, despite having a lower quality of knowledge.
However, if we accept this, we are left with two problems: whether the "competent" system should displace the human expert in critical activities, and who will teach that system.
We can answer both questions, but we will see that, in solving this problem, all we do is push it one step further. The human being does not have the separation between the learning module and the execution module that A.I. has. This means that, while A.I. cannot solve in real time an anomalous situation that was not foreseen in its learning process, a human being can. Humans have resolved numerous situations in which, had they been asked beforehand what they were going to do, they would have had to confess that they did not know... and yet, when the moment came, they did not freeze or enter an infinite loop; they simply solved them.
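A small, self-contained illustration of that limitation, continuing the earlier sketch under the same assumptions: faced with an input unlike anything in its training data, the frozen execution module neither blocks nor adapts on the spot; it simply emits a confident-looking answer it was never prepared to give.

```python
# The frozen execution module has no way to recognize an anomalous input
# or to revise itself in real time; it can only apply what it learned.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)  # learning, offline

anomalous = np.array([[100.0, -50.0, 3.0, 0.1]])     # nothing like an iris
print(model.predict(anomalous)[0])                   # still answers
print(model.predict_proba(anomalous).max())          # near 1.0: unwarranted confidence
```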
Of course, this is a huge difference, and it determines the adequacy or inadequacy of replacing humans with systems in high-risk environments such as aviation or nuclear power plants. However, as mentioned, by claiming a place for the human in activities where serious and unforeseen contingencies may arise, we are pushing the problem one step further:
Who teaches a system endowed with artificial intelligence? It depends: some systems learn from the rules and examples that human experts can supply, while others, like AlphaZero, learn from their own experience.
In categorizing a system based on A.I. as "merely competent", Dreyfus was basing his argument on the fact that the expert learns far beyond procedural knowledge and that precisely the knowledge that makes him an expert is difficult to verbalize and, therefore, difficult to transmit. This difficulty of transmitting knowledge beyond a set of rules means that the ceiling of such a system sits at the level qualified as "competent", with no hope of reaching the "expert" level, even though the handling of large amounts of data, and the speed of that handling, can allow the results of the "competent" to surpass those of the "expert" in some areas.
Steven Pinker, in "The Stuff of Thought", offers some interesting clues about how language can act as an enabling vehicle for the transmission of information while, in other cases, being a limiting factor for progress.
Thus, A.I. may run into a barrier to improving its level of knowledge, and that barrier is precisely the expert's limited ability to transmit what he knows. Translating knowledge into rules that can be encoded in a system implies an inevitable loss.
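A toy sketch of that translation loss, with thresholds and variables invented purely for illustration: whatever the expert senses beyond the rules he can verbalize simply never makes it into the code.

```python
# An expert's judgement flattened into explicit rules. The numbers are
# invented for the example; anything the expert knew but could not
# verbalize falls through to the default branch and is lost.
def landing_advice(crosswind_kt: float, runway_wet: bool) -> str:
    if crosswind_kt > 35:                    # a hard limit the expert can state
        return "go around"
    if runway_wet and crosswind_kt > 20:     # a combined condition he can state
        return "go around"
    return "continue approach"               # everything unverbalized ends here

print(landing_advice(25, runway_wet=True))   # -> "go around"
```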
We have, therefore, a problem in the expert's ability to transmit his level of knowledge without degrading it, but a no lesser problem lies in the availability of such experts. The difficulty in producing them would come not only from the scarce incentive to become an expert in an environment dominated by A.I. but, even more, from the difficulty of going beyond mere operational knowledge in that environment.
Right now, we can see many activities with highly automated processes, with or without A.I., where the human operator ends up with procedural knowledge rather than an in-depth understanding of how the system operates. In other words, just as it is difficult for a human to teach a system to go beyond the level qualified as "competent", it is difficult for a human to advance beyond that same level in an A.I. environment.
These problems exist today in some fields, while in others they are foreseeable in the short term. However, without entering the realm of SciFi, there are some things that could change in the medium term and paint a somewhat different picture.
Consider the Apple-Google example and their difference in behavior regarding privacy: Apple decided that photo recognition would be performed not on its systems but on the device taking the photo; Google performs the recognition on its own systems. Leaving the privacy issue aside, this has a practical consequence: an Apple device can recognize a person in a photograph while another device belonging to the same owner does not, whereas with Google the recognition of specific persons propagates to every device linked to the same account.
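A schematic sketch of that practical difference (this is not Apple's or Google's actual architecture, just an assumed model with hypothetical names): with on-device learning, the knowledge "this face is Alice" lives on one device; with server-side learning, it lives in the account and reaches every linked device.

```python
# Hypothetical model of where recognition state lives.
class Device:
    def __init__(self):
        self.local_faces = {}      # on-device knowledge: stays on this device

class Account:
    def __init__(self):
        self.cloud_faces = {}      # server-side knowledge: shared by all devices
        self.devices = []

acct = Account()
phone, tablet = Device(), Device()
acct.devices += [phone, tablet]

# On-device style: only the phone learns who Alice is.
phone.local_faces["alice"] = "embedding-123"
print("alice" in tablet.local_faces)                 # False: never propagated

# Server-side style: the account learns, and every linked device can ask it.
acct.cloud_faces["alice"] = "embedding-123"
print(all("alice" in acct.cloud_faces for _ in acct.devices))  # True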
If an increase in processing power and miniaturization were sufficient to break down the division between the part of the system that learns and the part of the system that executes tasks, this would have immediate consequences:
Devices equipped with A.I. would learn on an individualized basis. An aircraft that today transmits operational-experience data to its manufacturer would stop doing so and learn on its own, and the individual airframe's experience would become as relevant as a pilot's experience on a given type of flight. It would learn from its own history, not from that of other aircraft of the same type. Today it seems incredible to imagine an aircraft whose software specializes in a high-traffic environment while another, identical aircraft becomes a specialist in oceanic flights; yet that is feasible if the I.T. hardware advances sufficiently (a sketch of the idea follows these points).
Hacking would be more difficult as the aircraft would not receive external data but would learn from its own experience.
Other capabilities, such as the so-called "deep fake", would also be facilitated: voice or image recognition, for example, could be tuned to a particular group of people or ethnic group, becoming more relevant to the user of the system.
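The sketch announced above, under loose assumptions: two identical "aircraft" start from the same factory model, but each one then updates only on its own experience, so they drift into different specialists. scikit-learn's incremental partial_fit is used merely as a stand-in for on-board learning; the features and labels are invented placeholders.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def make_aircraft():
    # Common factory state: one initial batch shared by the whole fleet.
    clf = SGDClassifier(loss="log_loss", random_state=0)
    clf.partial_fit(np.array([[0.0, 0.0], [1.0, 1.0]]), [0, 1], classes=[0, 1])
    return clf

busy_hub = make_aircraft()
oceanic = make_aircraft()

# Each airframe now learns only from the traffic it actually sees.
busy_hub.partial_fit(np.array([[0.9, 0.2]]), [1])
oceanic.partial_fit(np.array([[0.1, 0.8]]), [0])

probe = np.array([[0.5, 0.5]])
print(busy_hub.predict(probe), oceanic.predict(probe))  # the two may now disagree
```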
What would happen if, for example, a speaker could translate in real time and, moreover, do so in a voice identical to the sender's? Nowadays, these processes need large amounts of data and must be run by the part of the system that has the capacity to do so. Putting them in the hands of the user would be a clear advance... although it would also open the door to many inappropriate behaviors.
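Purely to make the scenario concrete, here is a hypothetical pipeline for it; every function below is an invented placeholder, not a real library call, since the point is precisely that each stage currently demands data and compute beyond the user's device.

```python
# Hypothetical end-to-end pipeline: speech in, translated speech out in
# the sender's own voice. All four stages are invented stubs.
def transcribe(audio: bytes) -> str:
    return "hola mundo"                          # placeholder speech-to-text

def translate(text: str, target_lang: str) -> str:
    return "hello world"                         # placeholder translation

def clone_voice(audio: bytes) -> dict:
    return {"timbre": "sender"}                  # placeholder voice profile

def synthesize(text: str, voice: dict) -> bytes:
    return f"[{voice['timbre']}] {text}".encode()  # placeholder text-to-speech

def speak_translated(audio: bytes, target_lang: str) -> bytes:
    text = translate(transcribe(audio), target_lang)
    return synthesize(text, clone_voice(audio))

print(speak_translated(b"<audio>", "en"))
```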
In short, there is great potential in A.I., although it is not without risk, even without resorting to sci-fi or political fiction. Dreyfus's claim that such a system will never pass beyond the "competent" stage to reach the "expert" level remains very relevant, provided we do not forget that, in defining those stages, Dreyfus himself points out that the "competent" can obtain better results than the "expert".
Undoubtedly, this may displace many experts in favor of systems whose quality of knowledge is inferior but whose results are better. What incentive is there to train experts whose only remaining role is to teach systems, knowing that the ability to teach is limited, among other things, by the vehicle of language? Both experts and systems would end up seeing their knowledge degraded in the process.
Finally, there are high-risk activities where the ability to integrate learning and execution, a specifically human ability, can be key to addressing urgent and unforeseen situations; left to the system alone, such situations would lead only to a blockage or an endless loop. In these fields, the introduction of A.I. requires extreme care, preserving the human's learning capacity and, above all, unrestricted situational awareness and capacity for action.