Framework for Trustworthy AI, Simplified!
Rajesh Dangi
Technology Advisor, Founder, Mentor, Speaker, Author, Poet, and a Wanna-be-farmer
This article summarizes the draft AI Ethics Guidelines produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG), of which a final version was due in March 2019. You may read the AI Ethics Guidelines here.
Trustworthy AI
Artificial Intelligence helps improve our quality of life through personalized medicine or more efficient delivery of healthcare services. It can help achieve the sustainable development goals, such as promoting gender balance, tackling climate change, and helping us make better use of natural resources. It helps optimize our transportation infrastructure and mobility, as well as supporting our ability to monitor progress against indicators of sustainability and social cohesion. AI is thus not an end in itself, but rather a means to increase individual and societal well-being.
Trustworthy AI has two key components:
(1) its development, deployment and use should respect fundamental rights and applicable regulation, as well as core principles and values, ensuring an “ethical purpose”.
(2) it should be technically robust and reliable. Indeed, even with good intentions or purpose, the lack of technological mastery can cause unintentional harm.
Moreover, compliance with fundamental rights, principles and values entails that these are duly operationalized by implementing them throughout the AI technology’s design, development, and deployment. Such implementation can be addressed both by technical and non-technical methods.
Having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly governed and managed. While AI’s benefits outweigh its risks, to ensure that we stay on the right track a human-centric approach to AI is needed, reminding us that the development and use of AI should not be seen as a means in itself, but as having the goal of increasing human well-being.
Seven essentials for achieving Trustworthy Artificial Intelligence
Trustworthy Artificial Intelligence should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:
- Human agency and oversight: Enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
- Robustness and safety: Algorithms should be secure, reliable and robust enough to deal with errors or inconsistencies during all life-cycle phases of Artificial Intelligence systems.
- Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
- Transparency: The traceability of Artificial Intelligence systems should be ensured.
- Diversity, non-discrimination and fairness: Consider the whole range of human abilities, skills and requirements, and ensure accessibility.
- Societal and environmental well-being: Enhance positive social change and enhance sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for Artificial Intelligence systems and their outcomes.
Principles and values in the context of AI
The Principle of Beneficence: “Do Good”
- AI systems should be designed and developed to improve individual and collective wellbeing.
- AI systems can do so by generating prosperity, value creation, wealth maximization and sustainability.
- AI can be a tool to bring betterment to the world and/or to help with the world’s greatest challenges.
The Principle of Non-maleficence: “Do No Harm”
- AI systems should not harm human beings.
- By design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work.
- AI systems should not threaten the democratic process, freedom of expression, freedom of identity, or the possibility to refuse AI services.
- Ensure that the research, development, and use of AI are done with an eye towards environmental awareness.
The Principle of Autonomy: “Preserve Human Agency”
- Autonomy of human beings in the context of AI development means freedom from subordination to, or coercion by, AI systems.
- Human beings interacting with AI systems must keep full and effective self-determination over themselves.
- Interaction with AI systems entails a right to decide whether to be subject to direct or indirect AI decision-making, a right to know when one is interacting directly or indirectly with an AI system, a right to opt out, and a right of withdrawal.
The Principle of Justice: “Be Fair”
- The development, use, and regulation of AI systems must be fair. Developers and implementers need to ensure that individuals and minority groups maintain freedom from bias, stigmatization and discrimination.
- Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in terms of access to education, goods, services and technology amongst human beings, without discrimination.
- AI systems must provide users with effective redress if harm occurs, or effective remedy if data practices are no longer aligned with human beings’ individual or collective preferences.
- Those developing or implementing AI should be held to high standards of accountability. Humans might benefit from procedures enabling the benchmarking of AI performance against (ethical) expectations; one simple example of such a benchmark is sketched below.
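To make the idea of benchmarking against ethical expectations a little more concrete, here is a minimal sketch of one common fairness check, the demographic parity difference, i.e. the gap in positive-decision rates between groups. The predictions, group labels and any threshold used to act on the gap are hypothetical placeholders, not part of the EU guidelines.

```python
# A minimal sketch of one possible fairness benchmark: the demographic
# parity difference (gap in positive-decision rates between groups).
# The data below is a hypothetical placeholder for illustration only.

from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return the largest gap in positive-prediction rates across groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs and protected-group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps  = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_difference(preds, grps)
print(rates)                      # positive-decision rate per group
print(f"parity gap: {gap:.2f}")   # review if above an agreed threshold
```

A gap close to zero means the groups receive positive decisions at similar rates; what counts as an acceptable gap is an ethical and legal judgement, not a property of the code.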
The Principle of Explicability: “Operate transparently”
- Transparency is key to building and maintaining citizens’ trust in the developers of AI systems and in AI systems themselves. Both technological and business-model transparency matter from an ethical standpoint.
- Technological transparency implies that AI systems be auditable, comprehensible and intelligible by human beings at varying levels of comprehension and expertise; a minimal sketch of one such explanation method follows this list.
- Business model transparency means that human beings are knowingly informed of the intention of developers and technology implementers of AI systems.
- Explicability is a precondition for achieving informed consent from individuals interacting with AI systems; to ensure that the principles of explicability and non-maleficence are upheld, informed consent should be sought.
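As an illustration of technological transparency, here is a minimal sketch assuming a simple linear scoring model: each feature’s contribution is its weight times its value, so a prediction can be reported together with the features that drove it. The feature names, weights and applicant values are hypothetical, and this is only one of many possible explanation methods.

```python
# A minimal sketch of a simple explanation method for a linear scoring
# model: report the prediction alongside per-feature contributions.
# Feature names, weights and values are hypothetical.

def explain_linear_score(weights, bias, sample):
    contributions = {name: weights[name] * value for name, value in sample.items()}
    score = bias + sum(contributions.values())
    # Rank features by absolute influence so the largest drivers come first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

weights = {"income": 0.8, "missed_payments": -1.5, "tenure_years": 0.3}
bias = 0.1
applicant = {"income": 1.2, "missed_payments": 2.0, "tenure_years": 4.0}

score, ranked = explain_linear_score(weights, bias, applicant)
print(f"score = {score:.2f}")
for feature, contribution in ranked:
    print(f"  {feature:>16}: {contribution:+.2f}")
```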
KEY GUIDANCE from EU
Achieving Trustworthy AI means that the general and abstract principles need to be mapped into concrete requirements for AI systems and applications.
Realizing Trustworthy AI:
- Incorporate the requirements for Trustworthy AI from the earliest design phase: Accountability, Data Governance, Design for all, Governance of AI Autonomy (Human oversight), Non-Discrimination, Respect for Human Autonomy, Respect for Privacy, Robustness, Safety, Transparency.
- Consider technical and non-technical methods to ensure the implementation of those requirements into the AI system. Moreover, keep those requirements in mind when building the team working on the system, the system itself, the testing environment and the potential applications of the system.
- Provide information to stakeholders (customers, employees, etc.) about the AI system’s capabilities and limitations in a clear and proactive manner, allowing them to set realistic expectations. Ensuring traceability of the AI system is key in this regard.
- Make Trustworthy AI part of the organization’s culture, and provide information to stakeholders on how Trustworthy AI is implemented into the design and use of AI systems. Trustworthy AI can also be included in organizations’ deontology charters or codes of conduct.
- Ensure participation and inclusion of stakeholders in the design and development of the AI system. Moreover, ensure diversity when setting up the teams developing, implementing and testing the product.
- Strive to facilitate the auditability of AI systems, particularly in critical contexts or situations. To the extent possible, design your system to enable tracing individual decisions back to your various inputs: data, pre-trained models, etc. Moreover, define explanation methods for the AI system. A sketch of such decision tracing follows this list.
- Ensure a specific process for accountability governance.
- Foresee training and education, and ensure that managers, developers, users and employers are aware of, and trained in, Trustworthy AI.
- Be mindful that there might be fundamental tensions between different objectives (transparency can open the door to misuse; identifying and correcting bias might conflict with privacy protections). Communicate and document these trade-offs.
- Foster research and innovation to further the achievement of the requirements for Trustworthy AI.
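Picking up the traceability point above, here is a minimal sketch of what logging an individual decision against its inputs and model version might look like. The file name, record fields and example values are assumptions for illustration, not a mechanism prescribed by the guidelines.

```python
# A minimal sketch of decision traceability: each prediction is logged with
# a hash of its inputs, the model version and a timestamp, so an individual
# decision can later be traced back to the data and pre-trained model that
# produced it. Names, fields and values are assumptions.

import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version, inputs, prediction, log_path="decision_log.jsonl"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the record can be matched to archived data
        # without storing personal data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage: a credit-scoring model refers an application to a human.
print(log_decision("credit-model-v2.3",
                   {"income": 1.2, "missed_payments": 2},
                   "refer_to_human"))
```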
Assessing Trustworthy AI:
- Adopt an assessment list for Trustworthy AI when developing, deploying or using AI, and adapt it to the specific use case in which the system is being used.
- Keep in mind that an assessment list will never be exhaustive, and that ensuring Trustworthy AI is not about ticking boxes, but about a continuous process of identifying requirements, evaluating solutions and ensuring improved outcomes throughout the entire lifecycle of the AI system. A minimal machine-readable form of such a list is sketched below.
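One way to keep such an assessment list adaptable per use case is to treat it as a small data structure: each key requirement carries a few check items with a status and a pointer to evidence, so open items remain reviewable across the lifecycle. The wording of the items below is illustrative, not taken from the official assessment list.

```python
# A minimal sketch of a machine-readable assessment list, adaptable per
# use case. Requirement names follow the seven key requirements; the
# individual check questions are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class CheckItem:
    question: str
    status: str = "open"   # e.g. "open", "in_progress", "done"
    evidence: str = ""     # link or note pointing to supporting material

@dataclass
class Requirement:
    name: str
    items: list = field(default_factory=list)

    def open_items(self):
        return [item for item in self.items if item.status != "done"]

assessment = [
    Requirement("Human agency and oversight",
                [CheckItem("Can a human override or contest the system's decisions?")]),
    Requirement("Privacy and data governance",
                [CheckItem("Is personal data minimized and access to it logged?")]),
    Requirement("Transparency",
                [CheckItem("Can individual decisions be traced to their inputs and model version?")]),
]

for requirement in assessment:
    print(f"{requirement.name}: {len(requirement.open_items())} open item(s)")
```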
Conclusion
Europe has a unique vantage point based on its focus on placing the citizen at the heart of its endeavors, and the EU is striving to preserve that focus. It forms part of a vision of human-centric artificial intelligence that will enable Europe to become a globally leading innovator in AI rooted in ethical purpose. This ambitious vision aims to be a rising tide that raises the boats of all global citizens and helps create a culture of “Trustworthy AI”.
***
Compiled and summarized from public-domain EU publications and media articles.