Morals in the World of AI.

“Artificial intelligence is growing up fast, as are robots whose facial expressions can elicit empathy and make your mirror neurons quiver.” —Diane Ackerman.

“Ethics is knowing the difference between what you have a right to do and what is right to do.” —Potter Stewart

***

Large language models promised to deliver expert-level performance in writing, reading, information gathering, creative work, art generation, and more.

Researchers from Germany and Brazil investigated this question and found that, despite the impressive capabilities of Google's Gemini Pro, Anthropic's Claude 2.1, OpenAI's GPT-4, and Meta's Llama 2 Chat 70b, and their high correlations with human performance, differences remain.

According to the scientists, the models exaggerate effects that are present among humans, in part by reducing variance. The authors recommend caution with regard to proposals to replace human participants with current state-of-the-art LLMs [1].

Another study, conducted in Ottawa, Canada, demonstrated that a basic level of education is required to master any instrument: AI cannot replace basic academic skills or emotional, rational, creative, and critical thinking [2].

Despite the discrepancies in results and perspectives, the most controversial question is the understanding of the moral aspects of AI. How good can a machine get at addressing legal or moral questions?

The Question of Trustworthiness

How do we, as human beings, trust machines, and how ready are we to delegate tasks to AI?

The question is not as easy as it seems. The communication involves multiple aspects, including psychological ones.

A study conducted in Germany shows that the chance of a human professional making a mistake is 20-30%, while the chance of a machine making a mistake is about 10%.

Despite these statistics, people prefer to hear recommendations from a real human [3]. A good example is communication with doctors. There are no statistics on the likelihood that a doctor makes a correct diagnosis, yet any statistics produced by a machine still seem suspicious. Most people believe a doctor will make personalized, correct decisions.

Self-driving cars are another good example: statistics tell us that an autonomous vehicle is less likely to make a mistake than a human, yet the human driver is still considered more reliable. Similarly, the tolerance for mistakes by skin-scanning devices is even lower; the machine is less likely to err, yet trust still rests with the human.

One reason is that AI can be used by people with malicious intentions to scale up criminal or unethical behavior. A machine can generate malicious content that goes viral, act independently, and cause harm with unparalleled efficiency and at scale. According to top experts, the most dangerous such tool is the deepfake, which enables scams at scale [4].

The types of mistakes AI can make are different from the types of mistakes a human being can make [5]. A good example is face recognition, where the algorithms have a higher chance of misclassifying dark-skinned faces. The algorithms also make more positive predictions about white defendants during court hearings.

The Question of Blame and Penalty

When a human makes a mistake, we consider it normal to get angry and to seek penalties and fines. How do things work with machines? Humans tend to feel more rage toward machines and show less tolerance for their mistakes. Humanization strategies do not apply when it comes to penalties: a mistake made by a machine tends to be perceived as more serious than the same mistake made by a human. Scientists predict that a special adaptation strategy will be required to adapt robots to everyday human activities [6,7].

The Question of Value Alignment

Whom should we ask, and whose knowledge and logic should we draw on, when encoding judgments into machine algorithms? From one perspective, ethicists would be the right choice, but many dilemmas are still under debate, and there are too many groups with different perspectives to settle on a single right answer.

A good example is when a driver must choose between being killed and killing a pedestrian. What would be the right decision? This question remains an open ethical dilemma [8].


Scientists recommend exploring an exhaustive range of methods and controls to ensure that public consensus is robust across experimental designs and demographics [9].

The Moral Machine experiment, developed by Awad and colleagues in 2018, defined nine possible priorities for AVs to decide which group of road users to save or to sacrifice, leading to millions of possible scenarios. This was only possible because the experiment went viral, collecting data from millions of participants [10].

Conclusion

Despite the great amount of work, we as humankind are still in the process of understanding and developing methods for communicating with machines about moral questions.

From an ethical perspective, delegating moral and ethical questions to machine decision-making is still not possible.

References

[1] Guilherme F.C.F. Almeida, José Luiz Nunes, Neele Engelmann, Alex Wiegmann, Marcelo de Araújo. Exploring the psychology of LLMs’ moral and legal reasoning. Artificial Intelligence, Volume 333, 2024, 104145, ISSN 0004-3702. https://doi.org/10.1016/j.artint.2024.104145

[2] Brovko, A. (2024). The impact of Chat GPT on cognitive functions of university students. Scientific Collection «InterConf», (188), 64–68. Retrieved from https://archive.interconf.center/index.php/conference-proceeding/article/view/5380

[3] Rebitschek FG, et al. People underestimate the errors made by algorithms for credit scoring and recidivism prediction but accept even fewer errors. Sci Rep. 11:20171.

[4] Caldwell AB, Liu Q, Schroth GP, Galasko DR, Yuan SH, Wagner SL, Subramaniam S. Dedifferentiation and neuronal repression define familial Alzheimer's disease. Sci Adv. 2020 Nov 13;6(46):eaba5933. doi: 10.1126/sciadv.aba5933. PMID: 33188013; PMCID: PMC7673760.

[5] Birhane A. The unseen Black faces of AI algorithms. Nature. 2022 Oct;610(7932):451-452. doi: 10.1038/d41586-022-03050-7. PMID: 36261566.

[6] https://osf.io/preprints/psyarxiv/8ptdg/download

[7] Cushman JD, Drew MR, Krasne FB. The environmental sculpting hypothesis of juvenile and adult hippocampal neurogenesis. Prog Neurobiol. 2021 Apr;199:101961. doi: 10.1016/j.pneurobio.2020.101961. Epub 2020 Nov 23. PMID: 33242572; PMCID: PMC8562867.

[8] Bonnefon, Jean-François & Shariff, Azim & Rahwan, Iyad. (2016). The Social Dilemma of Autonomous Vehicles. Science. 352. 10.1126/science.aaf2654.

[9] Makovi, K., Sargsyan, A., Li, W. et al. Trust within human-machine collectives depends on the perceived consensus about cooperative norms. Nat Commun 14, 3108 (2023). https://doi.org/10.1038/s41467-023-38592-5

[10] Jurgis Karpus, Adrian Krüger, Julia Tovar Verba, Bahador Bahrami, Ophelia Deroy. Algorithm exploitation: Humans are keen to exploit benevolent AI. iScience, Volume 24, Issue 6, 2021, 102679, ISSN 2589-0042. https://doi.org/10.1016/j.isci.2021.102679

