ISO 42001 and the Common Good: A Mandatory Path

by Oliviero Casale

The ISO/IEC 42001:2023 standard represents a fundamental turning point in the management of artificial intelligence (AI) systems. Created to ensure that AI is developed and used responsibly, this standard fits into a global context increasingly attentive to the ethical and social implications of emerging technologies. In this article, we delve into the concept of the common good, the importance of generative AI, and the need for careful and responsible AI management.

Artificial Intelligence and the Common Good

From an operational standpoint, reference should be made to ISO/IEC 22989, which establishes the terminology for AI and describes its concepts. There, AI is described as a discipline concerned with the "research and development of mechanisms and applications of AI systems," and its central object, the AI system, is an "engineered system that generates outputs such as content, predictions, recommendations, or decisions for a given set of human-defined objectives."

ISO 42001 aims to create a framework for the implementation and management of AI systems, ensuring that these are used for the common good. This concept implies improving the quality of life for all people, without discrimination, and promoting responsible and transparent use of AI.

The Role of Generative Artificial Intelligence

Generative Artificial Intelligence (GAI) represents one of the most advanced frontiers of AI, with applications ranging from creating multimedia content to simulating complex scenarios. However, the use of GAI raises significant ethical issues. For example, the ability to generate realistic but false content (deepfakes) can be exploited to spread misinformation, undermining public trust and social cohesion.

ISO/IEC JTC 1/SC 42, the technical committee for AI, has initiated several projects to address these ethical and social aspects, including the identification of biases and the transparency of AI systems. ISO 42001 emphasizes the need for an ethical and responsible approach, requiring organizations to consider the impact of their AI applications on people and society in general.

Implementation Process of ISO 42001

Implementing ISO 42001 requires a systematic and iterative approach. Here are the key steps in the process:

1. Understanding the Organization's Context (Section 4.1):

Determine the internal and external issues that affect the organization's ability to achieve the intended results of the AI management system.

2. Identifying Interested Parties (Section 4.2):

Understand the needs and expectations of relevant interested parties for the AI management system.

3. Defining the Scope of the AI Management System (Section 4.3):

Establish the boundaries and applicability of the AI management system.

4. Leadership and Commitment (Section 5.1):

Top management must demonstrate leadership and commitment, ensuring that necessary resources are available and promoting a responsible approach to the development and use of AI.

5. AI Policy (Section 5.2):

The AI policy must be consistent with the organization's strategic direction and set the objectives for AI.

6. Roles, Responsibilities, and Authorities (Section 5.3):

Define and communicate roles, responsibilities, and authorities related to the AI management system.

7. Actions to Address Risks and Opportunities (Section 6.1):

Plan actions to address risks and opportunities, including AI risk assessment, AI risk treatment, and AI system impact assessment (an illustrative sketch of how such results might be recorded follows this list).

8. AI Objectives and Planning to Achieve Them (Section 6.2):

Establish AI objectives that are consistent with the AI policy, measurable, monitored, and updated as necessary.

9. Planning of Changes (Section 6.3):

When the organization determines the need for changes to the AI management system, changes must be carried out in a planned manner.

10. Resources (Section 7.1):

The organization must determine and provide the resources needed for the establishment, implementation, maintenance, and continual improvement of the AI management system.

11. Competence (Section 7.2):

The organization must determine the necessary competencies of persons performing work under its control that affects AI performance and ensure that these persons are competent based on appropriate education, training, or experience.

12. Awareness (Section 7.3):

Persons working under the organization's control must be aware of the AI policy, their contribution to the effectiveness of the AI management system, and the implications of not conforming to the AI management system requirements.

13. Communication (Section 7.4):

The organization must determine the internal and external communications relevant to the AI management system.

14. Documented Information (Section 7.5):

The AI management system must include the documented information required by the standard and any documented information the organization determines necessary for the system's effectiveness.

15. Operational Planning and Control (Section 8.1):

The organization must plan, implement, and control the processes needed to meet requirements and implement actions to address risks and opportunities.

16. Performance Evaluation (Section 9):

The organization must monitor, measure, analyze, and evaluate the performance and effectiveness of the AI management system, conduct internal audits, and review the management system to ensure its continuing suitability, adequacy, and effectiveness.

17. Improvement (Section 10):

The organization must continually improve the suitability, adequacy, and effectiveness of the AI management system, addressing nonconformities and taking corrective actions.
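To make the planning clauses above a little more concrete, here is a minimal sketch of how the outputs of AI risk assessment, risk treatment, and impact assessment (step 7) and measurable AI objectives (step 8) might be recorded. The class names, fields, and the 1-5 scoring scale are illustrative assumptions; ISO/IEC 42001 does not prescribe any particular data format, and each organization defines its own risk criteria.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register (illustrating Clause 6.1)."""
    description: str
    likelihood: int                 # assumed 1-5 scale (not prescribed by the standard)
    impact: int                     # assumed 1-5 scale
    treatment: str                  # planned risk treatment
    affected_parties: List[str] = field(default_factory=list)  # input to the impact assessment

    @property
    def level(self) -> int:
        # Simple likelihood x impact score; real criteria are defined by the organization.
        return self.likelihood * self.impact


@dataclass
class AIObjective:
    """A measurable AI objective consistent with the AI policy (illustrating Clause 6.2)."""
    statement: str
    metric: str                     # how progress is measured
    target: float                   # target value for the metric
    review_date: date               # when the objective is monitored and updated


if __name__ == "__main__":
    register = [
        AIRisk(
            description="Generative model may produce misleading content",
            likelihood=3,
            impact=4,
            treatment="Label generated output and add human review for public channels",
            affected_parties=["end users", "general public"],
        ),
    ]
    objectives = [
        AIObjective(
            statement="Complete an impact assessment for every high-risk AI system",
            metric="share of high-risk systems with an approved assessment",
            target=1.0,
            review_date=date(2025, 12, 31),
        ),
    ]
    for risk in sorted(register, key=lambda r: r.level, reverse=True):
        print(f"[risk {risk.level}] {risk.description} -> {risk.treatment}")
    for obj in objectives:
        print(f"[objective] {obj.statement} (target {obj.target:.0%} by {obj.review_date})")
```

In practice, records like these would also feed the monitoring, internal audit, and improvement activities described in steps 16 and 17.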

Ethical Considerations and Respect for Human Rights

ISO 42001 places a strong emphasis on respect for human rights and ethical considerations. The standard requires organizations to evaluate the impact of their AI systems on privacy, security, and human rights, ensuring that AI applications are developed and used in a fair and transparent manner.
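By way of illustration only, the sketch below models a simple impact-assessment record covering the areas mentioned above. The field names and checklist areas are assumptions made for this example; the standard leaves the exact content and format of such assessments to the organization.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical impact-assessment record; not a format defined by ISO/IEC 42001.

@dataclass
class ImpactAssessment:
    system_name: str
    # Each area maps to a short documented finding; an empty string means "not yet assessed".
    findings: Dict[str, str] = field(default_factory=lambda: {
        "privacy": "",        # what personal data is processed, and why
        "security": "",       # how the model and its data are protected
        "human_rights": "",   # risk of unfair or discriminatory outcomes
        "transparency": "",   # how affected people are informed about the system
    })

    def open_areas(self) -> List[str]:
        """Return the areas that still lack a documented finding."""
        return [area for area, note in self.findings.items() if not note]


assessment = ImpactAssessment(system_name="candidate-screening assistant")
assessment.findings["privacy"] = "Processes applicant CVs; retention limited to six months."
print("Areas still to assess:", assessment.open_areas())
# -> Areas still to assess: ['security', 'human_rights', 'transparency']
```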

Conclusions

The ISO 42001:2023 standard is an essential guide for organizations that wish to develop and use AI responsibly. By adopting this standard, organizations can significantly contribute to the common good, ensuring that AI technologies are used to improve the quality of life for all people, without discrimination, and addressing the ethical and social challenges that accompany these powerful technologies.

Sources:

1) ISO/IEC 42001:2023, Information technology – Artificial intelligence – Management system

2) ISO/IEC 22989:2022, Information technology – Artificial intelligence – Artificial intelligence concepts and terminology

