Moral agent or simply "embedded values"? Ethics of Artificial Intelligence. In search of a proper definition.

Ethics of artificial intelligence is often confused with (or at least equated to) the concept of moral agents[1] and treated solely as protection from harmful actions taken by autonomous machines[2]. The public debate focuses more on the set of norms and values that, for example, autonomous cars[3] (or other vehicles) should implement to protect passengers, pedestrians, and other drivers, or on whether futuristic, conscious[4] robots could eliminate human targets[5]. While these are exciting issues that should not be overlooked, we – humans – should focus on more critical matters that are important right now – the way we gather and process data and the consequences such processing may have for human beings. Data is not only the fuel for models and algorithms but also our responsibility towards other humans. Concepts and approaches are evolving, with B. C. Stahl even going further with the concept of an ethics of digital ecosystems[6].

A. Simpson Rochwerger and W. Pang – practitioners in so-called “artificial intelligence” – proposed the term “real-world AI”[7], which refers to machine learning and similar techniques and approaches that solve real problems, like predicting cancer or the probability of occurrence of certain events. Such a focus on real problems may bring risks for the developer, operator, and (end) users of “AI” systems. The recent World Health Organization document – Ethics and governance of artificial intelligence for health: WHO guidance – clearly states[8] that “[n]ew approaches to software engineering in the past decade move beyond an appeal to abstract moral values, and improvements in design methods are not merely upgraded programming techniques. Methods for designing AI technologies that include moral values in health and other sectors have been proposed to support effective, systematic, transparent integration of ethical values[9]”. This approach is often connected with data protection engineering, which emphasizes security and privacy – elements of every piece of software, including software based on “AI”, that have to be implemented in line with the “by design and by default” principle[10].
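
To give a concrete flavour of what “by design and by default” can mean at the code level, consider the following minimal sketch. It is not drawn from any of the cited documents; the class and field names are invented purely for illustration, and the idea is simply that the privacy-protective option is always the default.

```python
# A hypothetical sketch of "data protection by design and by default":
# privacy-protective settings are the defaults, data collection is opt-in,
# and only the minimum necessary fields are processed.

from dataclasses import dataclass


@dataclass(frozen=True)
class TelemetryConfig:
    collect_usage_data: bool = False           # opt-in rather than opt-out
    retention_days: int = 30                   # short retention by default
    fields_collected: tuple = ("error_code",)  # data minimization


def build_config(user_opted_in: bool) -> TelemetryConfig:
    """Doing nothing yields the most privacy-preserving configuration."""
    if user_opted_in:
        return TelemetryConfig(collect_usage_data=True)
    return TelemetryConfig()
```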

This is especially true regarding real-life applications of machine and deep learning. As M. Kearns and A. Roth indicate, “(...) it’s essential that the scientific and research communities who work on machine learning be engaged and centrally involved in the ethical debates around algorithmic decision making[11]”. However, we must answer a crucial question: what should the ethics of machine learning models and algorithms look like? What principles should form the background of a (new) framework for truly responsible “AI”? D. Martens focuses on data and its role in the five stages of data science, i.e.: (i) data gathering; (ii) data preprocessing; (iii) modeling; (iv) evaluation; and (v) deployment[12]. Data (its quality, quantity, and availability) seems to be the most crucial element of any algorithm and model, but are efficient and effective data management and governance the only prerequisites for ethical and responsible “AI”[13]? And maybe an even more important question – what tools should we use to ensure that “AI” is ethical[14]?
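
To make these five stages more tangible, here is a minimal sketch of where ethics checkpoints could sit in such a pipeline. The structure follows Martens’ five stages, but the checkpoints themselves, the dataset (“patients.csv”), the column names, and the confidence threshold are all hypothetical assumptions made for illustration; a real audit would require far more than these crude checks.

```python
# An illustrative pipeline: Martens' five stages of data science, each with
# a simple, hypothetical ethics checkpoint. All names and thresholds are
# invented for this sketch; features are assumed to be numerically encoded.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# (i) Data gathering: verify there is a documented legal basis for processing.
data = pd.read_csv("patients.csv")  # hypothetical file
assert "consent_given" in data.columns, "no documented legal basis for processing"

# (ii) Data preprocessing: keep only consented records, drop direct identifiers.
data = data[data["consent_given"] == 1].drop(columns=["name", "national_id"])

# (iii) Modeling: prefer a simple, explainable model over a black box.
X = data.drop(columns=["diagnosis", "consent_given"])
y = data["diagnosis"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# (iv) Evaluation: report performance overall and per subgroup, since an
# aggregate score can hide large disparities between groups.
print("overall accuracy:", accuracy_score(y_test, model.predict(X_test)))
for group, frame in X_test.groupby("sex"):  # 'sex' assumed numerically encoded
    print("group", group, "accuracy:",
          accuracy_score(y_test.loc[frame.index], model.predict(frame)))

# (v) Deployment: keep a human in the loop for low-confidence predictions.
def predict_with_oversight(record: pd.DataFrame) -> dict:
    """Return a prediction plus a flag signalling that a human must review it."""
    confidence = model.predict_proba(record)[0].max()
    return {"prediction": int(model.predict(record)[0]),
            "confidence": float(confidence),
            "requires_human_review": confidence < 0.9}
```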

Should we agree on the definition of the ethics of AI?

As mentioned in the previous paragraphs, the lack of a definition and scope of the ethics of AI is apparent. Still, the question is whether we should seek to define it and at least propose the minimum content (requirements) it should include, or let it evolve freely. D. Martens rightly pointed out that “[e]thical data science is all about a balancing act: what data can I use, for what purpose, and how should I go about it[15]”, and while the ethics of artificial intelligence may be understood as a broader concept than data science, this underlines that data should be at the forefront. Similarly, M. Coeckelbergh states that “(…) many ethical questions about AI concern technologies that are entirely or partly based on machine learning and related data science (…)”[16]. A recent report by one of the internal bodies of the European Parliament goes even further with the concept of a ‘data-centric’ approach to AI, indicating that “(…) unprecedented amounts of personal data will be collected, and digital technologies will affect the most intimate aspects of our life more than ever, including in the realms of love and friendship”[17].

But should we try to define what AI ethics is? R. Blackman makes an interesting distinction between (i) “AI for Good” and (ii) “AI for Not Bad”[18] to draw a line between “AI” aimed at a positive impact on the environment and society and applications of AI that should be at least ethically neutral. According to Wikipedia, “[t]he ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI”[19]. While this definition is imperfect, it is a good starting point for further deliberations.

We need a definition of the ethics of AI to ensure that we pay attention to the right components and do not confuse ethics with “hard” requirements, i.e., law and partially binding soft law. This distinction is not always clear, as we can see in one of the proposals for amendments to the Artificial Intelligence Act[20] put forward by a group of Members of the European Parliament. In line with the proposed Article 4a(1), “[t]he developer of an AI system shall, on all stages of development of the AI system, take into account the EU Charter of Fundamental Rights and place on the market or putting into service only trustworthy AI that is lawful, ethical and robust. (…) (b) ‘ethical’ means that the AI system was developed to respect the freedom and autonomy of human beings, to protect human dignity as well as mental and physical integrity, and to be fair and explicable”. As we can see from the above example, the “standards” included in the paragraph may be called either “legal” or “ethical”. The notion of the ethics of AI also carries many issues and challenges, as A. Azoulay noted in one of the United Nations papers[21].

According to IEEE Standard 7000-2021 – Standard Model Process for Addressing Ethical Concerns during System Design[22] – “ethical” means “supporting the realization of positive values or the reduction of negative values,” while “ethics” is defined as a “branch of knowledge or theory that investigates the correct reasons for thinking that this or that is right”. V. Dignum has proposed an interesting distinction that captures all the critical and essential parts of the ethics of artificial intelligence and consists of:

1. “Ethics by Design: the technical/algorithmic integration of ethical reasoning capabilities as part of the behavior of the artificial autonomous system;

2. Ethics in Design: the regulatory and engineering methods that support the analysis and evaluation of the ethical implications of AI systems as these integrate or replace traditional social structures;

3. Ethics for Design: the codes of conduct, standards, and certification processes that ensure the integrity of developers and users as they research, design, construct, employ and manage artificial intelligent systems.”[23]

The above-mentioned distinction, however, does not propose the definition we are looking for but rather indicates all the elements necessary to make autonomous systems ethical. V. C. Müller also rightly points out that there is a notion of machine ethics that can be understood as “(…) ethics for machines, for ‘ethical machines’, for machines as subjects, rather than for the human use of the machine as objects”[24]. This topic, however, will not be part of the article, as its author is not a proponent of the “subjectivity” of machines, including autonomous ones. The reason is simply that we should overcome “the tendency to anthropomorphize it [artificial intelligence – author]”[25] and focus on the human side of artificial intelligence. Therefore, the ethics of artificial intelligence will not – at least in the author’s view – be linked to the issue of machines and digital systems as moral agents.

In one of my recent working papers, I proposed the following approach to the ethics of artificial intelligence: “(…) the application of certain ethical principles both at the conceptual stage, creation, and application of these solutions”[26]. This also brings us to the notion of “computational ethics,” which can be understood as “scholarly work that aims to formalize descriptive ethics and normative ethics in algorithmic terms, as well as work that uses this formalization to help to both (i) engineer ethical AI systems, and (ii) better understand human moral decisions and judgments”[27]. As a result, the application of values and principles will be important both from the perspective of the algorithms or models of artificial intelligence and of the humans responsible for developing, deploying, and monitoring them in operation. This approach generally aligns with J. Bryson’s approach to “non-trustable” artificial intelligence systems[28].
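
To illustrate, in the simplest possible terms, what “formalizing a normative rule in algorithmic terms” might look like, consider the following toy sketch. The rule, names, and data structure are hypothetical and are not drawn from the cited paper; real computational-ethics work must handle conflicting norms, context, and uncertainty.

```python
# A toy illustration of computational ethics: one normative rule expressed
# as an executable check. Everything here is hypothetical and deliberately
# simplistic.

from dataclasses import dataclass


@dataclass
class ProposedAction:
    subject: str
    purpose: str
    uses_sensitive_data: bool
    has_explicit_consent: bool


def violates_consent_norm(action: ProposedAction) -> bool:
    """Normative rule: sensitive data may be processed only with explicit consent."""
    return action.uses_sensitive_data and not action.has_explicit_consent


action = ProposedAction("applicant-42", "credit scoring",
                        uses_sensitive_data=True, has_explicit_consent=False)
if violates_consent_norm(action):
    print("Blocked: the proposed action violates the consent norm.")
```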

I understand that the above-mentioned definition is not perfect, and I am confident that it will evolve further. However, as artificial intelligence matures, we should focus on creating standards and requirements for (more) ethical systems, to ensure that they are run for our good and that neither they nor the humans responsible for them do harm to human beings. Let's discuss not only the "substance" but also a definition that will lay the foundations for further concepts.


[1] A. Martinho, A. Poulsen, M. Kroesen, Perspectives about artificial moral agents, AI and Ethics, 1/2021, p. 477-478.

[2] B. Brożek and M. Jakubiec rightly point out that “(…) autonomous machines cannot be granted the status of legal agents” and therefore will not be accountable and liable for any actions made by “themselves”. B. Brożek, M. Jakubiec, On the legal responsibility of autonomous machines, Artificial Intelligence and Law, No 25, Springer 2017, p. 303.

[3] F. M. Kamm, The Use and Abuse of the Trolley Problem: Self-Driving Cars, Medical Treatments, and the Distribution of Harm [in:] S. M. Liao (ed.), Ethics of Artificial Intelligence, Oxford 2020, p. 79.

[4] J. J. Bryson argues that “(…) the potential of ‘uploading’ human intelligence in any meaningful sense is highly dubious”. J. J. Bryson, The Artificial Intelligence of the Ethics of Artificial Intelligence. An Introductory Overview for Law and Regulation [in:] M. D. Dubber, F. Pasquale, S. Das (eds.), The Oxford Handbook of Ethics of AI, Oxford 2020, p. 3. Indeed, we are not able to say how our brain works in practice, and therefore ‘mimicking’ it is highly questionable, at least at the current state of science.

[5] LAWS stands for Lethal Autonomous Weapons Systems. More: L. Righetti, Q. Pham, R. Madhavan, R. Chatila, Lethal Autonomous Weapon Systems [Ethical, Legal, and Societal Issues], IEEE Robotics & Automation Magazine, vol. 25, no. 1, p. 123-126, March 2018.

[6] B. C. Stahl, From computer ethics and the ethics of AI towards an ethics of digital ecosystems, AI and Ethics, No 2, 2022, p. 71-72.

[7] A. Simpson Rochwerger, W. Pang, Real World AI. A Practical Guide for Responsible Machine Learning, Appen 2021, p. 12 et seq.

[8] Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021, p. 65.

[9] A good example of guidance for ethical AI based on values is the document prepared by the European Commission’s experts – Ethics Guidelines for Trustworthy AI, April 2019, available at: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419 (access: 24/07/2022). This document is, however, sometimes unrealistic in its approach to the values and principles that should be the ground for trustworthy AI systems. We will discuss this topic in the following pages.

[10] ENISA, Data Protection Engineering. From Theory to Practice, January 2022, available at: https://www.enisa.europa.eu/publications/data-protection-engineering/@@download/fullReport (access: 24/07/2022).

[11] M. Kearns, A. Roth, The Ethical Algorithm. The Science of Socially Aware Algorithm Design, Oxford 2022, p. 17.

[12] D. Martens, Data Science Ethics. Concepts, Techniques and Cautionary Tales, Oxford 2022, p. 20.

[13] J. A. Kroll, Data Science Data Governance [AI Ethics], IEEE Security & Privacy, vol. 16, no. 6, 2018, p. 61-62.

[14] R. Blackman, Ethical Machines. Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, Harvard 2022, p. 163-166.

[15] D. Martens, Data Science…, op.cit., p. 40.

[16] M. Coeckelbergh, AI Ethics, Cambridge 2020, p. 83.

[17] https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729543/EPRS_STU(2022)729543(ANN1)_EN.pdf (access: 26/07/2022).

[18] R. Blackman, Ethical machines…, op.cit., p. 3.

[19] https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence (access: 26/07/2022).

[20] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206 (access: 26/07/2022).

[21] A. Azoulay, Towards an Ethics of Artificial Intelligence, UN Chronicle, Vol. 55, Issue 4, January 2019, p. 25.

[22] https://ieeexplore.ieee.org/browse/standards/reading-room/page/viewer?id=9536679 (access: 26/07/2022).

[23] V. Dignum, Ethics of artificial intelligence: introduction to the special issue, Ethics and Information Technology, No 20, 2018, p. 2.

[24] V. C. Müller, Ethics of artificial intelligence [in:] A. Elliott (ed.), The Routledge Social Science Handbook of AI, London 2021, p. 14.

[25] M. Ryan, In AI We Trust: Ethics, Artificial Intelligence, and Reliability, Science and Engineering Ethics, No 26, 2020, p. 2749-2750.

[26] M. Nowakowski, Ethical FinTech. The importance of ethics in creating secure financial products and services, under review, June 2022.

[27] E. Awad, S. Levine, M. Anderson (et al.), Computational ethics, Trends in Cognitive Sciences, No. 2272, p. 1, https://reader.elsevier.com/reader/sd/pii/S1364661322000456?token=48436E819E376881AB57851F965DFFD981E67510BA13A9027EBC1D1164DB1A5AE9C1E480407DD4620F3A9DCAAB906530&originRegion=eu-west-1&originCreation=20220330182910 (access: 31/07/2022).

[28] J. Bryson, AI & Global Governance: No one should trust AI, United Nations, 2018, article available at: https://cpr.unu.edu/publications/articles/ai-global-governance-no-one-should-trust-ai.html (access: 31/07/2022).
