‘Kessler Test’ (1, 1A, 1B) – a shortcut to an advanced performance indicator for artificial intelligence (A.I.)
[Rainer Kessler, October 2019]
Overview of content structure:
- Introduction
- Test name
- Objective
- Goal
- General background
- Background re machines
- Kessler Test 1
- Kessler Test 1A
- Kessler Test 1B
- Disclaimer
- Next steps
1. Introduction: There are areas in Machine Learning (M.L.) and Artificial Intelligence (let us, for the moment, use these two expressions synonymously) that are best described or analyzed according to an exact syntax, such as the properties of different activation functions in artificial neural networks (e.g., ReLU, sigmoid). This is analogous to describing and understanding processes in natural neural networks through, e.g., neurotransmitter interactions in synapses.
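As a minimal illustration of such an “exact syntax”, here is a short sketch of the two activation functions mentioned above (the mathematical definitions are standard; the Python/NumPy code itself is my own illustration, not part of the original argument):

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    # ReLU passes positive values through unchanged and zeroes out negatives
    return np.maximum(0.0, x)

def sigmoid(x: np.ndarray) -> np.ndarray:
    # Sigmoid squashes any real input into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))     # [0.  0.  0.  0.5 2. ]
print(sigmoid(x))  # approx. [0.12 0.38 0.5  0.62 0.88]
```

Properties at this level can be stated and verified precisely, which is exactly what distinguishes this detailed level of analysis from the “shortcut” level discussed next.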
This detailed approach to understanding is important, but it does not answer all relevant questions; at the least, it can be an inefficient way to understand a hyper-complex system such as the human brain or a deep artificial neural network. Therefore, science has offered various shortcuts: for instance, cognitive neuroscience for a detailed/processual understanding of human thinking and behavior vs. psychology for an overall one.
In the field of learning machines, the most prominent example of a “shortcut” to understanding the properties of such a system is the famous Turing Test (named after the British mathematician Alan Turing). Especially in the generally accepted setup in the context of the Loebner Prize, this test allows a classification of machine-learning systems into (at least) two classes. One class comprises systems that are not mistaken for humans in a bilateral communication where the test subject does not know whether a written response to her or his question comes from another human or from a machine. Thus, the Turing Test is a shortcut to identifying the ability of a machine to communicate “like a human”. To assess this ability, no understanding is required of how the machine actually works.
Other examples of such tests are the Lovelace Test, to identify real* creativity in an artificially intelligent system, and the Metzinger Test, to identify awareness in a machine. [*] The term ‘real’ in the previous sentence refers to creativity that starts to exist in a system that had not been designed to be creative. Thus, A.I. systems that are intended to create new paintings or compose new songs are not in the scope of this test, but systems that were not expected to be creative are (e.g., systems designed to entertain patients in a hospital). Just like the Turing Test, these two tests require neither technical skills nor an understanding of the mechanisms behind a machine’s operation in order to be applied.
In addition to such rather indirect ways of “testing” an A.I.’s capabilities, there are multiple methods to test an A.I. system directly, such as engaging two independently trained A.I. systems in an assignment and comparing/analyzing the results, or checking M.L. training data quality through statistical methods. Both fields, indirect and direct testing, are still at the beginning of their evolution. Many areas are not yet covered by tests, although this would be beneficial: the development of A.I. is disruptive, and not all steps are transparent. Sometimes, breakthroughs appear before they are analyzed and understood.
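To give one concrete example of the direct kind of testing mentioned above, here is a minimal sketch of statistical training-data quality checks (the indicator names are my own choices for illustration, not an established standard):

```python
import numpy as np

def training_data_report(features: np.ndarray, labels: np.ndarray) -> dict:
    # A few simple statistical quality indicators for a labeled training set
    _, counts = np.unique(labels, return_counts=True)
    return {
        # Share of missing (NaN) feature values
        "missing_ratio": float(np.isnan(features).mean()),
        # Ratio of rarest to most frequent class (1.0 = perfectly balanced)
        "class_balance": float(counts.min() / counts.max()),
        # Per-feature spread; near-zero columns carry almost no signal
        "feature_std": np.nanstd(features, axis=0).round(3).tolist(),
    }

X = np.array([[1.0, 2.0], [np.nan, 2.1], [0.9, 1.9], [1.1, 2.0]])
y = np.array([0, 0, 0, 1])
print(training_data_report(X, y))
# e.g. {'missing_ratio': 0.125, 'class_balance': 0.333..., 'feature_std': [...]}
```

Checks of this kind operate directly on a system’s inputs; the indirect tests discussed in this article deliberately do not.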
Indirect tests (i.e., shortcuts without the necessity for actual systems analysis) can help to identify progress that is based on visible effects of dynamics of complex systems that are hidden from a human perspective. In this context, ‘systems’ must be understood in a wider scope than just IT systems. It refers to open, dynamic sociotechnical systems, such as technologically supported communication of individuals in a society or the interaction of elements of technology with each other, with or without the involvement of individuals (e.g., multi-IoT setups). This complexity requires simplicity in the methods used to navigate through it.
With all these considerations in mind, this article suggests a new test to extend the existing indirect ones (i.e., the Turing Test, the Lovelace Test, and the Metzinger Test, to name the most important). To keep it simple, I call the test the ‘Kessler Test’. Because the test can be applied in two steps, and because it is not yet defined whether this test will be the only one created, the test name can be supplemented with a number and a letter. The details of the test are described below.
2. Test name: Kessler Test (1, 1A, 1B)
3. Objective: Testing the evolutionary level of A.I. systems (1A) and of the societal environment in which such systems are used (1B), based on legal considerations.
4. Goal: An objective indicator shall be created to define the point in A.I. evolution where the delineation between humans and machines becomes blurred from a legal perspective. This indicator shall remain valid independently of whether this development is led consciously by humans or is an inherent part of a (potentially disruptive) step brought about as a consequence of machine-learning progress.
5. General background: Legal considerations are simple in the context of this test – it is about the most basic principles. Laws directly or indirectly organize human behavior in order to make it predictable, stable, and just. This is the prerequisite for building a sustainable society in which the individual can trust in freedom and protection when moving within it. Human behavior is legally organized by assigning rights, duties, obligations, prohibitions, and penalties (let us call them all legal properties*). [* In the sense of ‘characteristics’ and not of ‘ownership’.]
Although those legal properties were initially designed for human beings, some of them have been applied to non-human legal persons as well (e.g., to corporations). Therefore, in today’s legal frameworks of most jurisdictions, the legal subjects of natural persons (human beings) and legal persons (associations, corporations, etc.) exist. Some legal frameworks additionally define a special status for animals, e.g., for the purpose of animal protection laws. The term ‘special’ indicates a separation of animals from objects. For instance, until the end of March 2003, animals were treated as objects in Swiss law. Then, the Swiss Civil Code (Art. 641a) defined that animals are not objects.
All other “things” that are neither natural persons, nor legal persons, nor animals are objects from a legal perspective. In some cases, objects are protected by law indirectly: e.g., if objects that are personal (natural or legal) or public property are damaged, the responsible person (natural or legal) can be charged with an obligation to compensate. Thus, the laws relate directly or indirectly to natural or legal persons. Eventually, if legal principles such as derivative actions, the rights and duties of the board of directors, or ‘piercing the corporate veil’ are considered, the ultimate subject of the law is the natural person. Therefore, ultimately, it is fair to say that “law is for human beings”.
6. Background re machines: Automation has reached a new level of innovation and sophistication in recent years. An important driver of this development is artificial intelligence based on machine-learning technology (e.g., deep artificial neural networks). There is certainly still a long way to go until such machines reach human-like agility in thinking (cf. Turing Test and Lovelace Test) or, even more so, become self-aware (cf. Metzinger Test). Despite this gap, adaptive/learning systems continue to take over complex tasks that previously required human thinking to execute (e.g., conclusions based on multi-format input). One of the consequences is that machine-based decision making becomes difficult to trace from both a quantitative (number of decisions and velocity of operation) and a qualitative (degree of differentiation) perspective.
In non-A.I.-driven technology, the understanding of the technology must lie fully with the producer of the technology, and the understanding of the effect of handling the technology must lie with the operator/user. For instance, if a car had a manufacturing problem and the brakes did not work well, the producer/manufacturer would be responsible for the effects of an accident caused by this problem, provided the operator/driver performed the task of driving correctly; and the operator/driver would be responsible if the car were manufactured correctly but the handling of the car caused an accident. This logical sequence (from a common-sense perspective) becomes more complex when additional parties are involved, e.g., a merchant or a mechanic who maintained the car. However, independent of the complexity of the chain of responsibilities, it would theoretically always be possible to find a person (natural or legal) who is responsible for the consequences of an accident – and this person could be held accountable, e.g., from an indemnity perspective (including an accountability-at-law constellation in case no responsibility had been assigned in the first place).
With autonomously adapting or deciding (“thinking”) machines, the setup described above becomes delicate. We noted earlier that it is no longer traceable in all cases, from a practical perspective (number of decisions, velocity of decisions, and differentiation of the factors included in a decision), why such a machine did what – and it is even more difficult to predict the future behavior of such a machine under circumstances that we do not yet know completely. It is possible that the manufacturer did everything right, and the operator as well as other parties did everything correctly – and still the machine caused “an accident” (to stick with the example of the car) or behaved unfavorably in other respects (to be more neutral, e.g., in case a machine automatically proposes investments). Thus, it is possible that a machine becomes “independently responsible” for effects. However, in today’s legal setup of all jurisdictions (that I know of), the accountability would eventually still lie with a person (natural or legal), even if the responsibility could be assigned to a machine.
Although such drastic examples (accidents, etc.) will continue to call for personalized accountability in the short to medium term, there are other areas where machines are already positioned to act in roles that were previously assigned to a person (natural or legal). One example of such an area is smart contracts. However, even here we will find persons (natural or legal) who defined the bandwidth within which the machine performs its contractual task, as the sketch below illustrates. Thus, it is possible that legally relevant activities are performed by a machine without the machine being legally accountable.
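To illustrate that “bandwidth” in a technical way, here is a minimal, hypothetical sketch (in Python rather than an actual smart-contract language; all names and values are my own inventions): the machine settles routine cases autonomously, but only within limits that accountable persons have set in advance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContractBandwidth:
    # Limits defined up front by accountable persons (natural or legal)
    max_payment: float     # the machine may never release more than this
    allowed_payees: tuple  # the machine may only pay these parties

def autonomous_settlement(amount: float, payee: str, bounds: ContractBandwidth) -> str:
    # The machine "decides" autonomously, but only inside the bandwidth;
    # anything outside falls back to the accountable persons.
    if payee in bounds.allowed_payees and amount <= bounds.max_payment:
        return f"release {amount} to {payee}"
    return "escalate to accountable person"

bounds = ContractBandwidth(max_payment=10_000.0,
                           allowed_payees=("supplier-a", "supplier-b"))
print(autonomous_settlement(2_500.0, "supplier-a", bounds))   # release ...
print(autonomous_settlement(50_000.0, "supplier-a", bounds))  # escalate ...
```

The legally relevant decision is executed by the machine, yet accountability remains anchored in the persons who set the bounds.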
Against this background, the ‘Kessler Test’ intends to identify a specific moment in the development of handling machines. It is the moment when the clear differentiation between a person (natural or legal) and a machine, as described above, becomes blurred from a legal perspective. This moment does not have to be connected to a science-fiction-like scenario where machines “revolt for machine rights”. It can be a very subtle moment outside the attention of the general public – and it can happen at any time and in any jurisdiction: it is the shift of a person-related legal property (rights, duties, obligations, prohibitions, or penalties) to machines.
It is theoretically conceivable that, in the future, new legal properties will be created specifically for machines that have not already been assigned to a person (natural or legal) before. If such a legal requirement carries the spirit of a property that could just as well be assigned to a person, then such a requirement would also be in scope. For instance, a conceivable “right of a machine not to be disassembled” carries the spirit of the ‘right to life and physical integrity’.
Delineation (constellations that may appear to fulfill the criteria of the Kessler Test but do not): An example of a law that was not necessarily built for humans, yet may appear to identify a person as a legal subject, is the speed limit. Non-human-driven vehicles would potentially have to adhere to the same limits (if the limits are connected to the physics of the car and not to the ability of the driver to react). Thus, a law (or rather, in this case, a rule in the road traffic act) initially intended for humans could be applied to a machine without blurring the differentiation between persons and machines. Another example of an out-of-scope rule is Senate Bill 1001 of the State of California in the United States: the bill addresses the duty of a person (natural or legal) to disclose an artificially intelligent participant in a communication with a human being in case the communication could cause a relevant risk, e.g., influencing an election. This bill asks an accountable person (natural or legal) to disclose a bot – and does not ask the bot to disclose itself (although this could be the impression when reading the bill, or if a technical implementation makes “the bot tell that it is a bot”). The General Data Protection Regulation of the European Union (GDPR) contains another example of a legal requirement that relates to machines able to take autonomous decisions. However, again, this rule points to a person’s (natural or legal) duty: Article 22 requires disclosing automated decision making (not based on fixed parameters) and providing the possibility to choose an alternative procedure in which a natural person takes the decision, if there is a relevant risk for a data subject (i.e., a natural person; cf. GDPR Art. 4(1)) connected to the decision.
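Purely to illustrate the bot-disclosure point in a technical way, here is a hypothetical sketch (the operator name and disclosure wording are my own inventions, not taken from Senate Bill 1001): even when the disclosure is implemented inside the bot, it is carried out on behalf of the accountable person, who remains the duty-holder.

```python
# The accountable person (natural or legal) configures the disclosure;
# the bot merely executes it on that person's behalf.
ACCOUNTABLE_PERSON = "Example Corp."  # hypothetical operator and duty-holder
DISCLOSURE = f"Note: you are chatting with a bot operated by {ACCOUNTABLE_PERSON}."

def bot_reply(user_message: str) -> str:
    answer = "..."  # the system's actual answer would be generated here
    # Prepending the notice fulfills the person's duty to disclose;
    # it does not turn the bot itself into the legal duty-holder.
    return f"{DISCLOSURE}\n{answer}"

print(bot_reply("Tell me about the upcoming election."))
```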
The list of examples of what the test does not mean could go on. The out-of-scope pattern relates to legal texts that involve or can involve machines but do not blur the delineation between persons (natural or legal) and machines. Explicitly* out of scope are “marketing-type” or culturally driven legal acts that assign human-like legal properties to machines, e.g., robot marriage in Japan or A.I. citizenship in Saudi Arabia. [* So far, such legally relevant actions have been taken independently of the ability of the machine to deal with the consequences.]
Based on these considerations, we can move to the actual test in the subsequent three sections.
7. Kessler Test 1: A right, a duty, or another legal property that was initially designed for persons (natural or legal) is assigned to an artificially intelligent machine. This includes new machine-related legal properties that are built in the spirit of person-related legal properties.
8. Kessler Test 1A: An artificially intelligent machine demands (i.e., “asks for”) a right, a duty, or another legal property that was initially designed for persons (natural or legal). This includes new machine-related legal properties that are built in the spirit of person-related legal properties.
9. Kessler Test 1B: Society (persons, political or administrative forces, etc.) assigns a right, a duty, or another legal property that was initially designed for persons (natural or legal) to an artificially intelligent machine. This includes new machine-related legal properties that are built in the spirit of person-related legal properties.
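To make the relationship between the three variants explicit, here is a small, purely illustrative formalization (the class, field, and function names are my own interpretation, not part of the test’s definition): Test 1 is the general criterion, while 1A and 1B distinguish who initiates the shift.

```python
from dataclasses import dataclass

# The person-related legal properties named in section 5
PERSON_LEGAL_PROPERTIES = {"right", "duty", "obligation", "prohibition", "penalty"}

@dataclass
class LegalShift:
    property_kind: str     # e.g. "right"
    person_designed: bool  # initially designed for persons, or a new machine
                           # property built "in the spirit" of a person-related one
    machine_demands: bool  # the A.I. system itself asks for it (Test 1A)
    society_assigns: bool  # society assigns it to the machine (Test 1B)

def in_scope(s: LegalShift) -> bool:
    return s.property_kind in PERSON_LEGAL_PROPERTIES and s.person_designed

def kessler_1a(s: LegalShift) -> bool:
    return in_scope(s) and s.machine_demands

def kessler_1b(s: LegalShift) -> bool:
    return in_scope(s) and s.society_assigns

def kessler_1(s: LegalShift) -> bool:
    # General test: a person-type legal property reaches a machine,
    # regardless of which side initiated the shift
    return kessler_1a(s) or kessler_1b(s)

# A speed limit tied to the physics of the car (see section 6) stays out of scope:
speed_limit = LegalShift("prohibition", person_designed=False,
                         machine_demands=False, society_assigns=True)
print(kessler_1(speed_limit))  # False
```

Of course, such code is only a mnemonic for the criteria; applying the test in practice requires legal judgment, not programming.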
10. Disclaimer: In a potentially upcoming age of transhumanism, cyborgs, or similar (e.g., post-singularity) phenomena, when the delineation between life and machine existence might become blurred in general, the discourse around this blurring would qualify for the Kessler Test if legal consequences were considered that define a fully or partially “life-like” status for a machine, or for a hybrid that (“who”) makes it problematic to assign a unique status as machine or person.
11. Next steps: Communication through multiple channels is planned in order to implement the Kessler Test. The implementation goal is that specialized communities (legal, IT, philosophy, business, etc.) as well as society at large pay attention and identify the moment the Kessler Test is met, because – depending on the type of law assigned to machines – reversing that point could prove difficult.