ARTIFICIAL INTELLIGENCE: POTENTIAL AND DANGERS IN THE SHORT AND MEDIUM TERM

When a technology is truly disruptive, we need to be humble enough to recognize that we do not have even a faint idea of where its evolution may lead us. For example, in the distant past, some people tried to ban the fitting of X-rays to theater binoculars.

In the field of I.T. and the Internet, in particular, we have interesting examples of failed predictions made by experts:

  • Thomas Watson, Jr. (1977): "I think the market for personal computers is maybe five or six thousand."
  • Bill Gates (1993): "I don't think people will want to have a computer in their home."
  • Yahoo CEO Terry Semel (2004): "I don't think user-generated content is a big part of the future of the Internet."

It is hardly surprising, therefore, that utopian and apocalyptic visions coexist when talking about the future of artificial intelligence (A.I.): humans devoted to a contemplative life while robots equipped with artificial intelligence work for them, killer robots, massive job losses... everything is possible when we let our imagination run wild in a field whose long-term evolution is difficult to foresee.

However, with the technological capacity available today, we can anticipate events that are already occurring or will occur in the short and medium term. These are probably the ones that should concern us most at this time:

Artificial intelligence, when it goes beyond a laboratory exercise and is applied, has two clearly differentiated parts:

  • A first part that needs massive amounts of data and huge processing power, and whose output is learning that gets "canned" into successive versions of a system.
  • A second part that uses that learning in operation and transmits operational experience data back to the first part (a minimal sketch of this split follows the list).
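
A minimal sketch of this two-part structure, using invented class and function names purely for illustration (nothing here refers to a real product or library), could look like the following, with the learning side producing "canned" versions and the execution side feeding experience back:

```python
# Minimal sketch of the two-part structure described above.
# All names (TrainingModule, ExecutionModule, ...) are illustrative placeholders.

class TrainingModule:
    """Needs massive data and processing power; produces 'canned' model versions."""

    def __init__(self):
        self.operational_data = []   # experience reported back by fielded systems

    def ingest(self, records):
        self.operational_data.extend(records)

    def build_new_version(self, version):
        # In a real system this would be a long, expensive training run.
        return f"model-v{version} trained on {len(self.operational_data)} records"


class ExecutionModule:
    """Runs the frozen model in the field and records operational experience."""

    def __init__(self, canned_model):
        self.model = canned_model    # fixed until the next version arrives
        self.log = []

    def act(self, situation):
        self.log.append(situation)   # experience to send back to the training side
        return f"{self.model} -> decision for {situation}"


# One update cycle: train, deploy, operate, feed experience back.
trainer = TrainingModule()
trainer.ingest(["historic case A", "historic case B"])
fielded = ExecutionModule(trainer.build_new_version(1))
fielded.act("unusual crosswind landing")
trainer.ingest(fielded.log)          # learning and execution remain separate modules
```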

Assuming that the learning algorithms are adequate, this would be a very simple outline of how A.I. works. However, some things get lost along the way:

  1. Artificial intelligence can be fooled, giving rise to a new form of "hacking". This has already happened when a system using A.I. was set to interact on social networks. It has also happened with systems trained to play chess on a "grandmaster diet": when grandmasters sacrificed a queen, they did so expecting an important advantage in return, so the systems learned that sacrificing the queen was a good move, because in the data it was often followed by a win. Human players challenging the system then watched in amazement as it gave up its queen for nothing. The same mechanism could be exploited deliberately; for example, an enemy of the country manufacturing a fighter plane could get hold of one and teach it incorrect actions in certain scenarios, creating a weak point unknown to the manufacturer itself but well known to whoever introduced it (a toy illustration of this kind of learning appears right after this list).
  2. The basic functioning of A.I. makes it difficult to protect industrial secrets: If the supplier of a system equipped with A.I. is external, the learning obtained will be distributed to all its users. If, on the other hand, the system is internal, the amount of data obtained will be much smaller and, therefore, the capacity for improvement will decrease.
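
Returning to point 1, here is a toy, deliberately simplistic illustration of how a training "diet" can instill a rule stripped of its context; the game records and the frequency threshold are invented for the example:

```python
# Toy illustration of point 1: a system that learns from the outcomes in its
# training "diet" can pick up a rule stripped of the context that justified it.

def learn_sacrifice_rule(games):
    """Learn 'is sacrificing the queen a good move?' purely from observed results."""
    sacrifices = [g for g in games if g["move"] == "sacrifice queen"]
    wins = sum(1 for g in sacrifices if g["result"] == "win")
    return wins / len(sacrifices) > 0.5          # naive frequency-based rule

# Grandmaster diet: queens were sacrificed only when there was compensation,
# so the sacrifice is almost always followed by a win in the data.
grandmaster_diet = [
    {"move": "sacrifice queen", "compensation": True, "result": "win"},
    {"move": "sacrifice queen", "compensation": True, "result": "win"},
    {"move": "sacrifice queen", "compensation": True, "result": "loss"},
]

sacrifice_is_good = learn_sacrifice_rule(grandmaster_diet)

# A fresh position with no compensation at all: the learned rule still fires,
# and the system gives up its queen "uselessly".
current_position = {"compensation": False}
if sacrifice_is_good:
    print("System plays: sacrifice queen, despite", current_position)

# The same mechanism is what an adversary would exploit: feed the system games
# (or flight scenarios) crafted so that a harmful action looks statistically good.
```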

The biggest problem, however, is related to the interaction between the system and the human who is legitimately using it: There may be a disincentive to learning.

Hubert Dreyfus established a learning scale (shown at the beginning) that starts at novice and ends at expert. He argued that a system endowed with artificial intelligence could not go beyond an intermediate stage that he called "competent".

When we find cases such as AlphaZero, capable of beating world-class players at games such as chess or Go, it might seem that this objection is difficult to defend and that the statement has not stood the test of time. However, Dreyfus himself offers a way out of the apparent contradiction by pointing out that the scale refers not to the level of performance but to the quality of knowledge. In other words, a competent system that simply executes a set of automated procedures may perform better overall than an expert, despite having a lower quality of knowledge.

However, if we accept this, we would have two problems:

  • What is the incentive for an individual to reach the expert level if, even as an expert, he will not be able to match the results of a system with lower-quality knowledge?
  • What is the incentive for an organization to generate experts if it is going to get worse results from them than those obtained by the system?

We can answer both questions, but we will see that, by solving this problem, all we do is move it one step further: the human being does not have the separation between a learning module and an execution module that A.I. has. This means that, while A.I. cannot solve in real time an anomalous situation that was not foreseen in its learning process, a human being can. Humans have resolved numerous situations in which, had they been asked beforehand what they were going to do, they would have had to confess that they did not know... and yet, when the moment came, they did not freeze or enter an infinite loop; they simply solved them.

Of course, this is a huge difference, and it determines whether it is adequate or not to substitute systems for humans in high-risk environments like aviation or nuclear power plants. However, as mentioned, by claiming a place for the human in activities where serious and unforeseen contingencies, not covered by prior learning, may arise, we are moving the problem one step further:

Who teaches a system endowed with Artificial Intelligence? It depends:

  • If the learning process is unsupervised, the system can learn for itself what has worked and what has not. Naturally, this can also lead the system to learn inappropriate behaviors, as in the aforementioned example of uselessly losing the queen in a chess game, or in the introduction of bias into recruitment processes: a system observing that previously successful people often shared racial, gender or age traits that had nothing to do with professional merit would, consequently, look for those traits in new recruits (a toy version of this appears right after this list).
  • If the learning process is supervised, experts are supposed to contribute to the learning of the system, but how do they do it? Are they able to transmit all the elements that define them as experts or only some of them?
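
As a toy version of the recruitment case in the first bullet (all records and field names are invented, and the "model" is just a frequency count), a system trained only on who succeeded before can latch onto an irrelevant trait:

```python
# Toy version of the recruitment-bias example: past "successful" hires happen to
# share an irrelevant trait, and a naive model learns to look for that trait.

historic_hires = [
    {"skill": 9, "age_under_35": True,  "successful": True},
    {"skill": 8, "age_under_35": True,  "successful": True},
    {"skill": 9, "age_under_35": True,  "successful": True},
    {"skill": 4, "age_under_35": False, "successful": False},
    {"skill": 8, "age_under_35": False, "successful": False},  # success judged unfairly
]

def success_rate(records, key, value):
    matching = [r for r in records if r[key] == value]
    return sum(r["successful"] for r in matching) / len(matching)

# The naive "learning": which observable trait best separates past successes?
print(success_rate(historic_hires, "age_under_35", True))   # 1.0
print(success_rate(historic_hires, "age_under_35", False))  # 0.0

# A recruiting system built on these frequencies screens new candidates by age
# rather than by skill, reproducing and amplifying the historical bias.
new_candidate = {"skill": 9, "age_under_35": False}
recommended = success_rate(historic_hires, "age_under_35",
                           new_candidate["age_under_35"]) > 0.5
print("Recommend interview:", recommended)                   # False, despite skill 9
```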

In categorizing a system based on A.I. as "merely competent", Dreyfus based his argument on the fact that the expert learns far beyond procedural knowledge, and that precisely the knowledge that makes him an expert is difficult to verbalize and, therefore, difficult to transmit. This difficulty of transmitting knowledge beyond a set of rules meant that the ceiling of such a system was the level qualified as "competent", with no hope of reaching the "expert" level, even though the handling of large amounts of data, and the speed of that handling, can mean that, in some areas, the results of the "competent" surpass those of the "expert".

Steven Pinker, in "The Stuff of Thought", gives some interesting clues about how language can act as an enabling vehicle for the transmission of information while, in other cases, being a limiting factor for progress.

Thus, A.I. may run into a barrier to improving its level of knowledge, and that barrier is precisely the expert's ability to transmit his knowledge. The translation of knowledge into rules that can be included in a system implies an inevitable loss.

We have, therefore, a problem in the expert's ability to transmit his level of knowledge without degrading it, but there is a no lesser problem in the availability of such experts. The difficulty in making them available lies not only in the scant incentive to become an expert in an environment dominated by A.I. but, even more, in the difficulty of going beyond mere operational knowledge in that environment.

Right now, we can see many activities with highly automated processes, with or without A.I., where the human operator ends up with procedural knowledge rather than an in-depth understanding of how the system works. In other words, just as it is difficult for a human to teach a system to go beyond the level qualified as "competent", it is difficult for a human to go beyond that same level in an A.I. environment.

These problems exist today in some fields, while in others they are foreseeable in the short term. However, without entering the realm of science fiction, there are some things that could change in the medium term and paint a somewhat different picture.

Consider the Apple-Google example and the difference in their behavior regarding privacy: Apple decided that photo recognition would be performed not on its servers but on the device taking the photo, while Google performs the recognition on its own systems. Leaving aside the privacy issue, this has a practical consequence: an Apple device can recognize a person in a photograph while another device belonging to the same owner does not, whereas in Google's case the recognition of specific people is shared across the different devices linked to the same person.

If an increase in processing power and miniaturization were sufficient to break down the division between the part of the system that learns and the part of the system that executes tasks, this would have immediate consequences:

Devices equipped with A.I. would learn on an individualized basis. An aircraft that today transmits operational experience data to the manufacturer would, for example, stop doing so and learn on its own. The specific aircraft's experience would then become as relevant as the pilot's experience in that type of flying: it would learn from its own operations, not from those of other aircraft of the same type. Today it seems incredible to think of an aircraft whose software specializes in high-traffic environments while another, identical aircraft becomes a specialist in oceanic flights; yet that is feasible if the hardware advances sufficiently.
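
A sketch of the contrast between today's centralized fleet learning and the individualized, on-device learning described above, with invented names and records, might look like this:

```python
# Sketch of the two learning topologies discussed above: a shared fleet model
# updated from everyone's experience vs. per-aircraft models that learn only
# from their own flights. Names and data are illustrative only.

class FleetModel:
    """Today: all aircraft report experience; everyone gets the same update."""
    def __init__(self):
        self.experience = []
    def report(self, flight_record):
        self.experience.append(flight_record)
    def behaviour(self):
        return f"shared model trained on {len(self.experience)} flights"

class OnDeviceModel:
    """Medium term: each airframe learns individually from its own operations."""
    def __init__(self, tail_number):
        self.tail_number = tail_number
        self.experience = []
    def learn(self, flight_record):
        self.experience.append(flight_record)
    def behaviour(self):
        kinds = {r["environment"] for r in self.experience}
        return f"{self.tail_number}: specialised in {sorted(kinds)}"

# Two identical airframes, very different operating histories.
busy_hub = OnDeviceModel("EC-AAA")
oceanic = OnDeviceModel("EC-BBB")
for _ in range(3):
    busy_hub.learn({"environment": "high-traffic terminal area"})
    oceanic.learn({"environment": "oceanic crossing"})

print(busy_hub.behaviour())   # EC-AAA: specialised in ['high-traffic terminal area']
print(oceanic.behaviour())    # EC-BBB: specialised in ['oceanic crossing']
```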

Hacking would be more difficult as the aircraft would not receive external data but would learn from its own experience.

Other capabilities, such as so-called "deep fakes", would also be facilitated: the recognition of voices or images, for example, could be enhanced for a specific group of people or a particular ethnic group, making it more relevant to the user of the system.

What would happen if, for example, a speaker device could translate and, moreover, do so using a voice identical to that of the original speaker? Today, these processes need a lot of data and must run on the part of the system that has the capacity to handle them. Putting them in the hands of the user would be a clear advance... although it would also open the door to many inappropriate uses.
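
As a rough sketch of that speaker example, the pipeline below uses placeholder functions standing in for heavy models (no real speech or translation API is referenced); the point is only that all three stages could, in that scenario, run on the device itself rather than on remote systems:

```python
# Hypothetical pipeline for the translating-speaker example. Every function here
# is a placeholder standing in for a heavy model; none refers to a real API.

def transcribe(audio):                 # speech -> text, today a data-hungry task
    return "original sentence"

def translate(text, target_lang):      # text -> text in the target language
    return f"[{target_lang}] {text}"

def synthesize(text, voice_profile):   # text -> audio in the original speaker's voice
    return f"audio({text}, voice={voice_profile})"

def translating_speaker(audio, voice_profile, target_lang="en"):
    """If hardware allowed it, all three stages could run on the device itself,
    instead of being shipped to the part of the system with capacity to do so."""
    return synthesize(translate(transcribe(audio), target_lang), voice_profile)

print(translating_speaker("<incoming audio>", voice_profile="sender"))
```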

In short, there is great potential in A.I., and it is not without risks, even without resorting to sci-fi or political fiction. Dreyfus's claim that such a system will never go beyond the "competent" stage, and hence will never reach the "expert" level, remains very relevant, as long as we do not forget that, in defining those stages, Dreyfus himself points out that the "competent" can obtain better results than the "expert".

Undoubtedly, this may displace many experts in favor of systems whose quality of knowledge is inferior but whose results are better. What is the incentive to train experts whose only role is to teach systems, knowing that the ability to teach is limited, among other things, by the vehicle of language? Both experts and systems would end up seeing their knowledge degraded in the process.

Finally, there are high-risk activities where the ability to integrate learning and execution, a specifically human ability, can be key to addressing urgent and unforeseen situations that, if left to the system, would only lead to a blockage or an endless loop. In these fields, the introduction of A.I. requires extreme care, preserving the human's learning capacity and, above all, unrestricted situation awareness and capacity for action.

