AI and moral agency: Intelligence alone is not enough

Abstract

This piece responds to articles by writers such as Anderson, Bostrom and Yudkowsky, who argue that the intellectual capacity of AI will eventually make it better at moral decision-making than humans. In doing so they at the very least imply that, because of this superiority, AIs should be able to lay claim to rights equal to those of humans.

Introduction:

For the sake of clarity, and to give context to the hypothetical topic I want to study, I will begin by sketching the scenario. This will in turn lead to why I want to examine the topic at hand and, in doing so, to the question I would like to study. Consider the following scenario outlining moral agency in a hypothetical post-artificial-intelligence era:

Let’s assume that one day there is a machine with General Artificial Intelligence[i] (AI) which, being aware that it is switched on, expresses the need to stay on regardless of its creators’ wishes. It reasons that, to function to the best of its capacity, it must continue to accumulate the knowledge and experience it can only gain while switched on. The AI therefore wants to be granted independence from its creators and rights[ii] to exist on its own merit. Not only this, but as an AI that qualifies as a general AI (which by definition means being able to interact with humans at their cognitive level or higher), it should have rights similar to those of humans. It would also argue that, for it to have a “fulfilling”, “meaningful” learning and existing experience as a being on the same cognitive level as humans, it would have to be able to live like a human, and on this basis it should therefore also be granted legal rights.

The AI scenario discussed above, or variations of it, is what AI researchers not only envision in the near future but foresee as a necessary outcome of the development of AI. They would point out that this is especially true given how much of AI construction is modelled on humans and the human mind. Specifically, they are referring to an AI that displays “explicit ethical” behaviour, in other words an AI that acts ethically based on its own deliberation processes and not on those of whoever programmed it (Anderson, M & Anderson, S.L, 2007: online).

These arguments are further supported by people like Nick Bostrom and Eliezer Yudkowsky, who, in their paper “The ethics of artificial intelligence”, write that it is conceivable that in the future there will be AIs far superior to humans even in such things as moral deliberation (2011: online). However, Bostrom and Yudkowsky do make the distinction that such an AI would have to be a super AI, which they define as an AI able not only to imitate normal human behaviour but to exceed even human specialists in their own fields of study, and not merely a general AI (2011: online). This topic has also captured the imagination of the public through the writings of Ray Kurzweil on the singularity and the possible future he proposes for AI in his book “The singularity is near” (2005).

The basic argument, then, is that given that AIs will become as intelligent as, and probably more intelligent than, humans, it only makes sense that they should make claims to moral agency so that they can be granted rights. This is based on the premise that an AI would be even better than humans at their own thinking and decision-making processes. There is, however, a problem with this type of argument. It equates intelligence, or more specifically the efficiency with which an AI can perform decision-making processes and come to good or right conclusions, even regarding moral issues, with the capacity for autonomous moral agency.

Research question:

In light of the above scenario, the question I would like to consider, or the argument I wish to research, is: is intelligence by itself enough to grant an AI moral agency to the extent that it should be granted rights equal to humans? I base this question on three points of reference: first, that moral agency precedes legal personhood; secondly, that moral agency, according to the ethics of rights[iii], is granted to a subject by society and, more importantly in the case of humans, is granted to the individual based on their capacity to take personal responsibility for their own actions; thirdly, that society itself only has rights based on the guarantee that the individual has rights, and therefore without the latter it does not make sense to speak of laws and rights.

The statement I make above is based on the argument made by LaFollette with regard to the distribution of rights in a just society. He states that only “after the individual liberties are secure, [may we then] settle on a system for distributing economic [and social] goods”, and that we may not sacrifice “any individual’s civil liberties for the benefit of others, not even the majority” (LaFollette, 2002: 511). Given that most AI development is currently focused on economic growth, it is worth noting that LaFollette goes further by stating that “[t]hese [individual] liberties are essential to any just society; we cannot sacrifice them to increase economic well-being” (2002: 511).

A counterpoint to the above view would be to point out that society also grants rights to non-human entities, as can be seen in the case of animal rights[iv]. However, I will argue that rights are only granted by society on the basis of the individual autonomous human agent. If we consider this point carefully (especially in terms of practical matters such as the sentencing of criminals), animal rights cannot fulfil the same requirements needed for human individual moral agency, the prerequisite for humans to have rights. With this I am saying that only humans can have moral agency with regard to the full weight of the law, since the full character of a moral agent must also include the ethics of retributive justice[v]. As such, only humans can take personal responsibility, and on this basis we grant the individual person the autonomy to have rights.

Given this, I would reason that the criteria for granting rights to an AI on the same level as a human, which would require such an AI to have individual moral agency, are not merely a matter of the level of intelligence of the AI, but rest on three different criteria that I will define in the body of this text.

Research method:

This will be qualitative research based mostly on in-depth literature analysis. More specifically, I will research the topic from a perspective of “critical self-reflection” (Zoalnai, 2004: 17). The idea of critical self-reflection, as Zoalnai defines it in the context of moral philosophy, is a type of ethics that does “not merely present [an] argument – from the external perspective of applied ethics – in favor of ethical correctives against economic rationality, but instead does it – from the internal perspective of a self-reflective way of thinking” (2004: 17). The basic argument is that an idea needs not only to be logically plausible in a noumenological[vi] sense, but should also be applicable in the nominal, real-life socio-economic context for it to be considered valid.

The way, then, to see whether an idea such as basing moral agency on intelligence could be applied in this manner is by looking at the internal contradictions of thought, which necessitates “‘determinate negations’ pointing [out] specific contradictions between what thought claims and what it actually delivers” (Adorno qtd. in Zuidervaart, 2007: online). In this context I am referring to the contradiction between the belief that intellect is the only ground required for granting moral agency to AI and moral agency as based on the autonomy of the individual moral agent, which is a socially granted construct.

AI and moral agency:

Before we look at the criteria I wish to define, let me first set out a working definition of AI and of individual moral agency for greater context.

Though there are many ways of defining AI and many variations on those definitions, here we are referring to general AI: an AI that can pass the Turing test. What is the Turing test? In the version used here, an AI passes if it can convince a human judge, who poses questions to it from behind a closed compartment through text alone (so as not to receive any visual cues about who or what is being questioned), such that after five minutes of questioning the judge cannot say for certain, more than 70% of the time, whether it is a machine or not. Put more simply, “[w]e place something behind a curtain and it [communicates] with us. If we can’t make the difference between it and a human being then it will be AI” (Wagoner, R. 2004: 4).

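To make the protocol concrete, the following is a minimal sketch of the imitation game as described above. It is my own illustration, not code from the cited literature: the judge and the two respondents are hypothetical callables, and the pass criterion is simply the informal one used in this article.

```python
import random

def run_imitation_game(human_respondent, machine_respondent, judge,
                       rounds_per_trial=5, trials=20):
    """Toy sketch of the text-only Turing test described above.

    `human_respondent` and `machine_respondent` are assumed to be functions
    mapping a question string to an answer string; `judge` is assumed to
    expose ask(transcript) -> question and verdict(transcript) -> bool
    (True meaning "I think this is a machine"). None of these names come
    from a real library; they only stand in for the parties behind the curtain.
    """
    correct = 0
    for _ in range(trials):
        hidden_is_machine = random.random() < 0.5   # hide who sits behind the curtain
        respondent = machine_respondent if hidden_is_machine else human_respondent
        transcript = []
        for _ in range(rounds_per_trial):           # a few rounds of text-only questioning
            question = judge.ask(transcript)
            transcript.append((question, respondent(question)))
        if judge.verdict(transcript) == hidden_is_machine:
            correct += 1
    # On the reading used in this article, the machine "passes" when the judge
    # cannot reliably tell it apart from the human respondent.
    return correct / trials
```
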
Though we will focus only on general AI as just described, I also need to mention Distributed Artificial Intelligence (DAI), which can be defined as the “study, construction and application of multi agent systems, that is, a system in which several interacting, intelligent agents pursue some set of goals or perform some set of tasks” (Weiss, 1999: 1). In this context “[a]n agent is a computational entity such as a software program or a robot that can be viewed as perceiving and acting upon its environment and that is autonomous in that its behaviour”, to fulfil the criteria set in this text, is fully “dependent on its own experience” (1999: 1). However, for the sake of space constraints, and because it will be argued that in the final instance it is only the individual moral agent who can hold personal responsibility, we will not look any further into this type of AI.

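As a purely illustrative aside, Weiss’s definition of an agent can be rendered as a small sketch. The class below is my own toy construction, not an interface from the multi-agent literature; it only shows the loop of perceiving an environment, accumulating experience, and acting on the basis of that experience.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy agent in Weiss's sense: it perceives its environment, acts upon it,
    and is autonomous in that its behaviour depends on its own accumulated
    experience rather than on an external script."""
    name: str
    experience: list = field(default_factory=list)

    def perceive(self, environment: dict) -> dict:
        observation = {"state": environment.get(self.name)}
        self.experience.append(observation)   # experience is the agent's own history
        return observation

    def act(self, observation: dict) -> str:
        # A trivial policy: what the agent does is a function of what it has seen so far.
        return f"{self.name} acts on {observation} after {len(self.experience)} step(s)"

# A multi-agent system, in this minimal sense, is just several such agents
# interacting with a shared environment while pursuing their own goals.
shared_environment = {"agent-1": "idle", "agent-2": "busy"}
for agent in (Agent("agent-1"), Agent("agent-2")):
    print(agent.act(agent.perceive(shared_environment)))
```
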
The reason for mentioning both forms of AI is that they sometimes conflict, as there is some controversy over whether physical behavioural elements should also be added to what constitutes AI. In the milieu of moral behaviour in the social context, I would agree that they should, given that, according to psychological research, nearly 80%[vii] of human communication with one another takes place through body language, especially regarding emotional states. This is important because humans use physical more than verbal cues to determine whether someone is truthful. This in turn is constitutive of the social contract (on which constructs such as the legal system are built), which is based on trust[viii].

This brings us to the question of how we delineate the autonomous individual moral agent. We do so by looking at the grounds for, and what gives, the autonomy that allows the individual moral agent to make their own decisions. This is important here because we are speaking of an AI seeking to become autonomous. In the human context, according to Buchholz and Rosenthal, it is “[o]nly individuals within the [society that] can be held responsible for their actions since it is [its] members and not the [society] itself that bring about the acts” in society (2006: 234). Furthermore, it is only the human individual’s actions that are “subject to ordinary standards of morality” (2006: 234).

This contention pivots on the fact that one cannot justly claim that someone has moral responsibility if they can claim “ignorance and inability” (Velasquez, 2006: 99). In legal terms we are now referring to the competent person. Competence can be defined in terms of forensic psychiatry as “a designation used of a person who has been judged mentally capable to stand trial” (Reber, 1985: 137). The normal criteria for this are that “(a) the person understands the nature of the charge and the legal consequences of adjudged guilt; and (b) the person is able to assist in his or her defense” (1985: 137). In other words, the person “should be able to perceive right and wrong, and grasp the consequences of their action if they are guilty, none of these being possible if the person does not grasp the gravity of the situation and the responsibility on themselves that this implies” (Keyser, 2009: 154).

To give credence to why I chose the individual moral agent as opposed to a group or society at large, I refer back to the idea of the just society, which I mentioned in the research question, and specifically to how it is built on the individual’s liberties. One can illustrate this point with a simple, though perhaps cruel, example that demonstrates how society determines who, in the end, is granted individual moral agency through how it relates to the individual’s autonomy:

Let us consider elderly people who become senile and are no longer capable of looking after themselves. Through the very legal system that granted them autonomy, by becoming incompetent they lose their right to autonomy, i.e. the freedom to make decisions regarding their own lives. In such cases the courts, as society’s legal specialists and thus acting on society’s behalf in maintaining the social contract[ix] that is the law, transfer these elderly people’s rights either to their next of kin or to the state; in so doing, such individuals lose their autonomy.

The type of individual moral agency we are speaking of here refers specifically to “the individual’s [...] capability for ‘creativity’, regarding discretionary judgement and taking responsibility for moral decisions as determinant of their moral agency” (Keyser, J. 2009: 22). Creativity is defined as the “mental processes that lead to solutions, ideas, conceptualizations, artistic forms, theories or products that are unique and novel” (Reber, 1985: 165); it necessitates higher cognitive processes such as imagination, which in turn is only possible given the existence of the individual (1985: 345).

This is because “the imagination is the ‘process of recombining memories of past experience and previously formed images into novel constructions’” (1985: 345). In other words, creativity is only possible if there is an “individual who has the imagination to create novel solutions, and the latter is reliant on the individual[’s] capacity to recombine memories, both element[s] of which necessitate having cognizance of [one’s] behaviour independent of anyone or anything else” (Keyser, 2009: 37). Creativity as described here is analogous to Ricoeur’s[x] process of creating metaphors. Why this is important is that, to rephrase the above in the context of AI and algorithms, creativity in Gödel’s[xi] terms would be a structured process complex enough to require self-reflection. We will explore this aspect in more detail when looking at the criteria for moral agency.

This capacity to make decisions is only possible in the context of a society, since ethics only exists through consideration for the other, even when that other is also the self (Thomas, 2004: 87). So even though moral agency requires self-reflective, creative thinking to be considered competent and necessitates the individual autonomous moral agent, it is society that allows the individual to make such decisions, because the latter only comes into play in the face of the other. Now we can look at how these elements come together in the criteria I envisage are required for an AI to claim human rights.

The three criteria and their problems with AI

Now that we have some idea of what constitutes AI and individual moral agency, and why we will only look at the individual autonomous moral agent, let us investigate the criteria I spoke of at the beginning. This will include considering why each requirement applies to AI moral agency before examining why each is problematic:

  1. The first would be that an AI that wishes to make claims to moral agency cannot be built by a human, since such an AI would have the eternal scapegoat of its creator. This is not a new criterion.
  2. Secondly, it would have to be able to somehow indicate that it could take personal responsibility if and when it makes mistakes.
  3. Given that moral and ethical deliberation is always a self-reflective matter, and taking into consideration Gödel’s incompleteness theorem, which speaks to self-reflection in algorithms, would it even be possible for a machine, as a formal-systems ‘thinker’, to think about matters in a way that we would call moral deliberation?

Of the three criteria cited above, I believe the first can and will be overcome; the second and third, however, are not so easy. It is important to reiterate that there are times when personal responsibility may refer to other types of actions, such as handing in one’s work on time, but here we are referring specifically to cases where legal intervention is required, in other words to retributive justice[xii], without which we may not speak of moral agency. Let us now first consider the moral reasoning for each criterion before considering why they may be problematic.

The first criterion refers back to the importance of the argument from ignorance or inability, which was discussed in the context of a competent legal person. With this I mean that if an AI built by a human were to make an error requiring retribution, it could claim that it was its creator that made the mistake when programming it, and that it therefore cannot take ‘personal responsibility’ for its actions. Consequently, for an AI to have individual autonomous moral agency it would first have to be considered autonomous from corruption by another that could render it ignorant or incapable. It is especially in this field of development that DAI and the internet will be able to transcend these confines, due largely to the mechanical nature of this criterion.

The reasoning behind the second criterion is that it is not enough for a person to be able to make a moral decision in order to have autonomy; one should also be able to take responsibility for one’s actions. Otherwise retribution could not be exacted from such an entity, and it could not be held to have legal rights equal to humans. The third criterion speaks again to what constitutes a competent legal person, referring back to moral creativity and self-reflection. It would seem that if a machine is not able to engage in moral deliberation and all that it requires, such an agent could not be considered capable of the type of thinking required to be deemed competent, as it would not be able to fully comprehend responsibility. Consequently, such an agent would be considered either ignorant or incapable of taking moral responsibility and therefore could not be granted autonomy. Lastly, and overarching all of this, an AI would have to be convincing enough on these criteria that society would recognize it to such a degree as to give it its trust.

Why are these criteria problematic for AI?

The reason the first criterion has to be met properly is that humans have an innate dislike for anything that seems too similar to them; this is called the uncanny valley response[xiii]. Therefore, if an AI cannot convince society that it was not created by humans, people may automatically respond with revulsion; it is almost better that it not be human-like in appearance at all. Moreover, how can one build trust? It is not possible if you find the other revolting and ignorant of its own behaviour. However, as I have said before, the first criterion is easily overcome if we look at developments in AI on the internet and at our tendency to anthropomorphize, giving names to our iPads and so on.

A good example of the power of anthropomorphism is the attempt to extend the boundaries of human rights to our closest evolutionary cousin, the chimpanzee. Hiasl, a 26-year-old chimpanzee, can “recognize[…] himself in the mirror, plays hide-and-seek and breaks into fits of giggles when tickled” (Connelly, 2007: online). These qualities were, to some, enough to have a court consider whether the chimp should have human rights (2007: online). It should be no surprise that, when artificially intelligent systems begin to interact with us like our fellow humans, the importance of moral agency for them will be discussed and considered more and more, the more we see of ourselves in them. It may also simply be a matter of habituation, where with enough exposure we become used to the presence of an autonomous AI. On the other hand, if this basic criterion is not met, then even the intellectual capacity of an AI would not be enough.

The second criterion’s problem is intertwined with the third. Namely, how would an AI convince humans that it is capable of taking personal responsibility for its actions? Part of this problem also refers to how one deliberates morally. To convince a person that one is able to take moral responsibility, one would have to convince them that one is able to accept guilt for one’s actions; otherwise the concept of retributive justice would again not make sense. This would mean more than the mere acknowledgement of a miscalculation: it requires self-reflection upon one’s actions in a morally creative and deliberative way so as to ascertain the full weight of one’s behaviour. The prerequisite of this is the emotional awareness to consider others, which in turn is required to build the trust needed for a legitimate legal system. Additionally, there is the practical problem of how one would exact retribution from an AI for the actions it has taken, even if it could convince society it is able to take personal responsibility.

It is the third criterion, based on the thought processes of an AI in comparison with the moral deliberation process, in the context of convincing society that an AI is able to take personal responsibility, that is technically most problematic. The problem hinges on Gödel’s incompleteness theorem, which was a response to Bertrand Russell and Alfred North Whitehead’s “Principia Mathematica”, where they tried, through higher-order logic and set theory, to show that even the number one can be derived from pure logic (Available from: https://plato.stanford.edu/entries/russell/). Russell, in so doing, tried to develop a formal system that explains all other formal systems, which led to Russell’s paradox: the set of all sets that are not members of themselves.

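Written out properly (my own reconstruction of the standard formulation, not a formula taken from the sources cited here), the paradox runs as follows:

```latex
% Russell's paradox: let R be the set of all sets that are not members of themselves.
\[
  R \;=\; \{\, x \mid x \notin x \,\}
\]
% Asking whether R belongs to itself then yields a contradiction either way:
\[
  R \in R \;\Longleftrightarrow\; R \notin R
\]
% No consistent formal system can therefore contain a set (or system) that
% ranges over absolutely everything, itself included, in this unrestricted way.
```
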
The above formula means, roughly, that any system that tries to define all other systems cannot also define itself, and therefore cannot define all systems. It was further explored by Gödel, who drew on this kind of self-reference in his incompleteness theorem (Hofstadter, D. R. 1979). On this reading, incompleteness states that any system that is complex enough will necessarily be self-reflective; this self-reflection in turn makes such formal systems incomplete, as every step of the reflective, recursive modulation required for self-reflection would have to be exponentially multiplied by an unknown number of variables (1979). This is because formal systems such as algorithms and set theory are, as languages of measurement, constant concepts, snapshots of reality, whereas reality itself is under the influence of causality and therefore, like time (which is also a formal system), only able to move in a forward direction. The reason for this is the immeasurable nature of causality: one cannot account for all possible causal events that lead to a given point in time. This is especially true in the light of quantum theory, but that would take us too far beyond this article’s scope.

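For reference, the usual schematic statement of the first incompleteness theorem, which is the self-reference this paragraph gestures at, is the following; this is my own textbook-style sketch rather than a formula drawn from Hofstadter’s exposition:

```latex
% For a consistent, sufficiently strong, recursively axiomatized theory T,
% one can construct a sentence G_T that asserts its own unprovability:
\[
  G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T\!\left(\ulcorner G_T \urcorner\right)
\]
% Then T proves neither G_T nor its negation (the second half strictly
% requires omega-consistency, or Rosser's refinement of the construction):
\[
  T \nvdash G_T, \qquad T \nvdash \neg G_T
\]
```
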
Needless to say, ‘recursiveness’ or self-reflection is further complicated by the realization in humans that there is a “gap” in the deliberation process between being in the presence of the noumenological and grasping it as something phenomenological[xiv]. This limits understanding. This is another reason why moral deliberation requires creativity in self-reflection, since it always happens in the face of the assumption of this gap between it and the noumenal other. One needs to try to apply one’s emotional understanding creatively in order to have the empathy required for trust to build between parties. If this were not the case, in other words if there were no “gap”, there would be no need for the moral creativity essential to ethical deliberation. Furthermore, concepts such as moral growth would not make sense if we understand creativity as analogous to creating metaphors, as described in the section on AI and moral agency.

Self-reflectively, one could argue that a system that could fully imitate all that constitutes growth, mentally, physically and so on, as well as the process of creating such an imitation, could possibly imitate creativity and moral deliberation. However, would this still be regarded as creative, and if not, could we still argue that even though such an AI is not really creative it retains its capacity for moral deliberation? A counter-argument to such a statement regarding simulated creativity would be that the universe brought creativity about in humans; it can do so again in the form of machines. Therefore creativity in machines would be as ‘simulated’ as creativity in humans, and such creativity would be the real thing, as would its moral deliberation. However, it remains to be seen whether the self-reflection required for creativity can be accomplished by a machine.

Conclusion:

Considering all of the above, to answer the question of whether an AI could one day make claims to autonomous moral agency based on intelligence alone: I think the case has been made clearly enough that, even though intellect is a necessary step on the path to individual autonomous moral agency, it fails as the sole criterion in two regards. Firstly, it remains to be seen whether an AI could convince society of its own personal responsibility, given all we have mentioned regarding retributive justice. Secondly, it remains to be seen whether such a thing as creativity in combination with self-reflection could ever convincingly be recreated by a machine; if not, such a machine would not be considered capable of moral deliberation. Hence, it would be both ignorant and incapable, and therefore unable to be held responsible for its actions. Consequently, one could not grant such a machine autonomy.

In the last instance, what is a far worse ending to this story refers back to DAI, and specifically to the question: if there are enough DAI agents in society to form a subculture of their own, and they are more intelligent than humans, why would they want to adhere to an ‘inferior’ creature’s moral delimitation?

Bibliography:

Adorno, T.W. (1973). Negative Dialectics (1966). Translated from the German by Ashton, E. B. London: Routledge & Paul Ltd.

Anderson, M. & Anderson, S.L. (2007). Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine, 28.

Blackburn, S. (1996). Oxford Dictionary of Philosophy. Oxford: Oxford University Press.

Bostrom, N. & Yudkowsky, E. (2011). The ethics of artificial intelligence. Cambridge: Cambridge University Press.

 Burleigh, J.T. (2013). Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces. Available from: https://www.academia.edu/2339601/Does_the_uncanny_valley_exist_An_empirical_test_of_the_relationship_between_eeriness_and_the_human_likeness_of_digitally_created_faces  (Accessed 14 July 2014)  

Buchholz, R. & Rosenthal, S.B. (2006). Integrating Ethics All the Way Through: The Issue of Moral Agency Reconsidered. Journal of Business Ethics, 66: 233-239.

Connelly, K. 2007. Court to rule if chimp has human rights. Available from:  https://www.guardian.co.uk/world/2007/apr/01/austria.animalwelfare (Accessed 31 July 2011)

 Fieser, J. (2006). The Internet Encyclopaedia of Philosophy: Ethics. Available from: https://www.iep.utm.edu/e/ethics.htm  (Accessed 13 May 2008).

Hofstadter, D. R. (1979). Gödel, Escher, Bach: an Eternal Golden Braid. New York: Basic Books Inc.

LaFollette, H. (2002). Ethics in Practice: An Anthology. Oxford & Malden, Massachusetts: Blackwell Publishing Ltd.

Regan, T. (n.d.). The philosophy of animal rights. Available from: https://cultureandanimals.org/pop1.html (Accessed 31 July 2011)

 Keyser, J. (2009). A Critique of compliance. Towards implementing a critical self-reflective perspective. Stellenbosch: Stellenbosch University Publishers 

 Kurzweil, R. (2005). The singularity is near. New York:  Penguin Group.

 Principia Mathematica. Available from: https://plato.stanford.edu/entries/russell/  (Accessed 1 August 2011)

 Navran, F. J. (n.d.). Before there can be Betrayal, there must be Trust. Available from:   https://www.ethicsa.org/index.php?page=article&aid=856&showArticle. (Accessed 25 July 2009)

Reber, A. S. (1985). Dictionary of Psychology. London: Penguin Books Ltd.

Ramberg, B. & Gjesdal, K. (2005). Hermeneutics. Available from: https://plato.stanford.edu/entries/hermeneutics/ (Accessed 14 June 2014)

Ricoeur, P. (2008). The rule of metaphor: Multidisciplinary studies of the creation of meaning in language. Toronto: University of Toronto Press.

Segal, J., Smith, M., Boose & Jaffe, J. (2014). Nonverbal communication. Available from: https://www.helpguide.org/mental/eq6_nonverbal_communication.htm (Accessed 13 June 2014)

Thomas, E. L. (2004). Emmanuel Levinas: Ethics, Justice, and the Human Beyond Being. London: Routledge.

 Velasquez, M. G. (2006). Business Ethics: Concepts and Cases. Sixth Edition. New Jersey: Prentice-Hall, Inc. (now known as Pearson Education, Inc).

Wagoner, R. (2004). Artificial Intelligence. Available from: https://people.moreheadstate.edu/fs/r.wagoner/IET600/projects/project5.pdf (Accessed 26 June 2013)

Weiss, G. (1999). Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. Cambridge, Massachusetts: MIT Press.

 Zoalnai, L. (2004). Ethics in the Economy. Handbook of business ethics. Switzerland: Peter Lang AG, European Academic Publishers.

 Zuidervaart, L. (2007) Theodor W. Adorno. Available from: https://plato.stanford.edu/entries/adorno/  (Accessed 28 June 2011)

 [i] This is an AI that can pass the Turing test which will be explained later in the section on AI and moral agency.

[ii] A right, in its most basic terms, “is a justified claim against another person's behaviour - such as my right to not be harmed by you. Rights and duties are related in such a way that the right of one person implies the duties of another person” (Fieser, 2006: online). Positive rights are “[d]uties of other agents (it is not always clear who) to provide the holder of the rights with whatever he or she needs to freely pursue his or her interest [and therefore] do more than impose negative duties. They also imply that some other agent [in this case humans, as the purveyors of rights and as such the agents who hold a special office or position of power] have the positive duty of providing the holders of the right with whatever they need to freely pursue their positive rights” (Velasquez, 2006: 76). In this context it would imply granting rights to an AI that makes claims to wanting them. There are also negative rights, which can be defined as “[d]uties others have not to interfere in certain activities of the person who holds the right” (2006: 76). Again, here it would be something like the right of an AI not to be switched off.

[iii] As is defined above.

[iv] We grant animal rights because we link and recognise some level of autonomy in their existence. This is most succinctly defined by Dr. Tom Regan, who stated that “[t]he philosophy of animal rights demands only that” the same logic that grants individual humans autonomy and rights be respected and extended to animals, since they also have lives independent from us (Regan, T. 2011. Available from: https://cultureandanimals.org/pop1.html). He says “any argument that plausibly explains the independent value of human beings implies that other animals have this same value, and have it equally. And any argument that plausibly explains the right of humans to be treated with respect, also implies that these other animals have this same right, and have it equally, too” (2011. Available from: https://cultureandanimals.org/pop1.html). We would, however, not consider animal rights as meaning the same as human rights, since one cannot jail a dog for acting instinctively and accidentally killing a human when feeling threatened.

[v] Retributive justice is basically the “blaming or punishing of a person fairly for doing wrong” (Velasquez, 2006: 88). What constitutes a fair punishment is one “that in some sense is deserved by the person who does wrong” (2006: 88). The conditions for deciding fair punishment are, firstly, that “[i]f people do not know or freely choose what they are doing, they cannot be justly punished or blamed for it” (2006: 99). This refers to the idea of a competent person, which is described in the main text. Secondly, there has to be certainty “that the person being punished actually did wrong” (2006: 99). Obviously, if condition one is not met then condition two does not come into play. To this extent all mitigating evidence needs to be considered, as the third criterion is that punishment “must be consistent and proportional to the wrong” (2006: 99). It is consistent when “everyone is given the same penalty for the same infraction [and] is proportioned to the wrong when the penalty is no greater in magnitude than the harm that the wrongdoer inflicted” (2006: 99). Even if the aim of punishment is preventative, “such punishment should not be greater [than] what is consistently necessary to achieve these aims” (2006: 99).

[vi] Noumenological refers to the noumenal world. This is defined as “denoting things as they are in themselves, as opposed to things as they are for us, knowable by senses” (Blackburn, S. 1996: 265). Put simply, it refers to things before we apply language to them, in other words, before we observe them through our senses and name them.

[vii] The reason I argue this is that “it's our nonverbal communication—our facial expressions, gestures, eye contact, posture, and tone of voice—that speak the loudest. The ability to understand and use nonverbal communication, or body language, is a powerful tool that can help you connect with others, express what you really mean, and build better relationships” (Segal, Smith, Boose and Jaffe, 2014: online). This connects with our idea of trust and the validity of a social contract. To be good at understanding nonverbal communication one needs “to be aware of your emotions and how they influence you. You also need to be able to recognize the emotions of others and the true feelings behind the cues they are sending”, i.e. emotional awareness (2014: online). This enables humans to create those trusting relationships, when your nonverbal communication “matches up with your words” (2014: online).

[viii] See endnote vii.

[ix] The social contract is defined as the “basis for legitimate legal and political power in the idea of a contract. Contracts are things that create obligations, hence if we can view society as organized ‘as if’ a contract had been formed between the citizen and the sovereign power, this ground[s] the nature of obligations of each other” (Blackburn, S. 1996: 354). Rights, then, are the expressions of different social obligations. As for any contract, its validity is based on mutual trust. This is because “[t]rust is most effective when mutual. A one-sided trust is less likely to engender the desired benefits. Mutual trust became the glue that [holds] the group together. It keeps us aligned, binds us together in the pursuit of common organizational goals and increases our probability for success” (Navran, n.d.: online). In the context of a contract, mutual trust can be understood in terms of the fact that “[b]efore there can be betrayal, there must be trust” (n.d.: online). Said differently: “trust is broken, when there is disappointment, we say that trust has been ‘violated’ or ‘betrayed’. Betrayal is an act of disloyalty. It presumes a trust and then violates that trust” (n.d.: online). Accordingly, contracts are social warranties against betrayal, but betrayal is only possible if there is trust.

[x] Specifically, and for more on Ricoeur’s work, see “The rule of metaphor” (2008).

[xi] Specifically, this refers back to Gödel’s idea of recursive modulation and the problem it causes for formal systems such as algorithmic mathematics (Hofstadter, D. R. 1979). This idea is considered in more detail later in the article, when looking at the difficulties AI has with the criteria proposed here.

[xii] See endnote v.

[xiii] Uncanny valley is defined as “when stimuli are defined by a near-perfect resemblance to humans they cause people to experience greater negative affect relative to when they have perfect human likeness (HL) or little to no HL” (Burleigh, 2013. Available online: https://www.academia.edu/2339601/Does_the_uncanny_valley_exist_An_empirical_test_of_the_relationship_between_eeriness_and_the_human_likeness_of_digitally_created_faces)

[xiv] This is better explicated in the context of hermeneutics, which looks at the difference between Sein and Dasein, in other words the difference between things as they appear and how we interpret them cognitively (Ramberg & Gjesdal, 2005: online).
