Navigating Ethics and AI
The current debate around AI ethics is often muddied by confusion between distinct concepts. As in any field of knowledge, it is important to clearly define the contours and scope of each one. We can only address a few of them here, but we will sketch a map of them and propose some starting points for reflection and discussion.
A first level of reading is to clearly distinguish normative ethics, which addresses the why and the what, from its counterpart, applied ethics, which defines the how and the to whom.
In the case of AI, standards are "relatively" easy to transpose into the programming of a decision tree or a cause-consequence-action analysis, once it has been defined, of course, which standards will be integrated as "law tables".
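To make this concrete, here is a minimal sketch of what transposing such "law tables" into code could look like: each standard becomes an explicit rule that a proposed action is checked against before it is taken. All names here (`Rule`, `evaluate`, the example rules) are illustrative assumptions, not an actual framework.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: normative standards encoded as explicit rules
# ("law tables") that a decision pipeline checks before acting.

@dataclass
class Rule:
    name: str
    # Predicate returning True when the proposed action complies.
    check: Callable[[dict], bool]

def evaluate(action: dict, rules: list[Rule]) -> tuple[bool, list[str]]:
    """Return (compliant, names_of_violated_rules) for a proposed action."""
    violations = [r.name for r in rules if not r.check(action)]
    return (len(violations) == 0, violations)

# Two toy rules loosely inspired by the principles discussed below.
rules = [
    Rule("do_no_harm", lambda a: a.get("expected_harm", 0) == 0),
    Rule("proportionality", lambda a: a.get("scope") == "necessary"),
]

ok, violated = evaluate({"expected_harm": 0, "scope": "necessary"}, rules)
print(ok, violated)  # True []
```

The point of the sketch is only that the normative level can, in principle, be expressed as machine-checkable conditions; the hard part, as the rest of the article argues, is deciding which standards to encode and how.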
We could get lost in endless debates on the advantages and disadvantages of each ethical school, from the Kantian categorical imperative to consequentialism, and why not integrate aspects of Epicureanism or Stoicism into our algorithms. Concretely, however, a good starting point seems to me to be the principles defined by UNESCO in its recommendation "Ethics of Artificial Intelligence" (https://www.unesco.org/en/artificial-intelligence/recommendation-ethics):
1. Proportionality and Do No Harm - The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harms which may result from such uses. Science-fiction enthusiasts will recognize here an echo of Asimov's famous laws of robotics.
2. Safety and Security - Unwanted harms (safety risks) as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.
3. Right to Privacy and Data Protection - Privacy must be protected and promoted throughout the AI lifecycle. Adequate data protection frameworks should also be established.
4. Multi-stakeholder and Adaptive Governance & Collaboration - International law & national sovereignty must be respected in the use of data. Additionally, participation of diverse stakeholders is necessary for inclusive approaches to AI governance.
5. Responsibility and Accountability - AI systems should be auditable and traceable. There should be oversight, impact assessment, audit and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.
6. Transparency and Explainability - The ethical deployment of AI systems depends on their transparency & explainability (T&E). The level of T&E should be appropriate to the context, as there may be tensions between T&E and other principles such as privacy, safety and security.
7. Human Oversight and Determination - Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.
8. Sustainability - AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals, including those set out in the UN’s Sustainable Development Goals.
9. Awareness & Literacy - Public understanding of AI and data should be promoted through accessible education, civic engagement, skills & ethics training, media & information literacy.
10. Fairness and Nondiscrimination - AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all.
The second level, applied ethics, is the most complex, "operational" part. A first source of confusion is that applied ethics in fact covers several different domains, interrelated but each with its own specificities:
The first domain is the ethics of the AI data analyst and trainer, and how they apply the principles. This is where we find the most errors in foundation models, with often uncontrolled transposition from one context to another that creates biases, false content, and unethical decisions. Hence the importance of multi-skilled (is there a philosopher, a historian, or a lawyer in the room?) and multicultural teams.
Another domain is the ethics of the AI owner: for which usage and audience, and in which context, it deploys its solution, whether with the best of intentions or through a diversion of use; and whether it deploys the AI in compliance with international and local regulation on data privacy and ownership, and on fundamental rights and principles, be it for civilian, medical, security, or military use.
Then comes applied ethics for the user, where we find further biases and misuse, intentional or not, of the data produced, but also the strongest effects of bias and discrimination, and the lack of transparency and explainability of AI decisions for end users. This calls for human responsibility, accountability, and control over AI decisions.
My approach would be to build transversal ethical controls into the application of the principles that AI must respect in its decisions. We are here mostly in the scope of generative AI systems that learn from their training datasets, learning processes, and human guidance. We could then define that non-compliance with any of these controls triggers an ethical dilemma, puts the decision process on hold, and requires a human decision.
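A minimal sketch of such a transversal control layer, under the approach just described: every automated decision passes through a set of ethical checks, and any failure puts the decision on hold and escalates it to a human reviewer, in line with the "Human Oversight and Determination" principle above. All names and thresholds here are hypothetical stand-ins for real controls.

```python
# Illustrative sketch (all field names and thresholds are assumptions):
# a gate that holds any decision failing an ethical check and requires
# a human determination instead of proceeding automatically.

def ethical_checks(decision: dict) -> dict:
    """Stand-in checks for privacy, fairness, and explainability controls."""
    return {
        "privacy": not decision.get("uses_personal_data", False),
        "fairness": decision.get("bias_score", 1.0) < 0.1,
        "explainable": decision.get("has_explanation", False),
    }

def gate(decision: dict) -> dict:
    """Approve the decision, or hold it and escalate to a human."""
    results = ethical_checks(decision)
    failed = [name for name, ok in results.items() if not ok]
    if failed:
        # Non-compliance: an ethical dilemma is raised, the automated
        # decision is put on hold, and a human decision is required.
        return {"status": "on_hold", "escalate_to_human": True, "failed": failed}
    return {"status": "approved", "escalate_to_human": False, "failed": []}

outcome = gate({"uses_personal_data": True, "bias_score": 0.05, "has_explanation": True})
print(outcome)  # on_hold, escalated, failed=['privacy']
```

The design choice worth noting is that the controls are transversal: they sit outside any single model and apply uniformly to every decision, so a failure in any one dimension is enough to return control to a human.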
In conclusion, we now have a more precise reading grid for the two aspects of normative and applied ethics, allowing us to refine our analytical model of the various issues and of the corrective actions that could be implemented for a more ethical use of AI, putting people at the center of our thinking and our practice.