AI-Related Standards from ISO/IEC, AAMI, NIST, OECD, and IEEE
ISO/IEC, AAMI, NIST, OECD, and IEEE have developed several standards to ensure the responsible development, deployment, and governance of artificial intelligence (AI) systems.
The following is a list of key AI-related standards.
General AI Standards
- ISO/IEC 5338:2023 Information technology - Artificial intelligence - AI system life cycle processes
- ISO/IEC 22989:2022 Information technology - Artificial intelligence - Concepts and terminology
- ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
- ISO/IEC 23894:2023 Information Technology - Artificial Intelligence - Risk Management
- ISO/IEC 38507:2022 Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations
- AAMI CR34971 Guidance on the Application of ISO 14971 to Artificial Intelligence and Machine Learning.
- ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system
Trustworthiness in AI
- ISO/IEC TR 24029-1:2021 Assessment of the robustness of neural networks — Part 1: Overview (an illustrative robustness check appears after this list)
- ISO/IEC TR 24028:2020 Overview of trustworthiness in artificial intelligence
- ISO/IEC TR 5469:2024 Artificial intelligence — Functional safety and AI systems
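ISO/IEC 24029-1 surveys statistical, formal, and empirical approaches to assessing neural-network robustness. As a minimal sketch of the empirical flavor only (the standard itself prescribes no particular code), the example below compares a classifier's accuracy on clean inputs with its accuracy under small random input perturbations; the model, data, and noise level are hypothetical placeholders.

```python
import numpy as np

def accuracy(predict, X, y):
    """Fraction of samples the model labels correctly."""
    return float(np.mean(predict(X) == y))

def perturbation_robustness(predict, X, y, epsilon=0.05, trials=10, seed=0):
    """Empirical robustness check: accuracy under small uniform input noise.

    `predict`, `X`, `y`, and `epsilon` are placeholders for this sketch; a real
    assessment would follow the methods described in the ISO/IEC 24029 series.
    """
    rng = np.random.default_rng(seed)
    clean = accuracy(predict, X, y)
    noisy = []
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        noisy.append(accuracy(predict, X + noise, y))
    return {
        "clean_accuracy": clean,
        "mean_noisy_accuracy": float(np.mean(noisy)),
        "worst_noisy_accuracy": float(np.min(noisy)),
    }

if __name__ == "__main__":
    # Toy stand-in classifier: label a point by the sign of its first feature.
    def toy_predict(X):
        return (X[:, 0] > 0).astype(int)

    X = np.random.default_rng(1).normal(size=(200, 2))
    y = (X[:, 0] > 0).astype(int)
    print(perturbation_robustness(toy_predict, X, y))
```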
Reliability in AI
- ISO/IEC JTC 1/SC 42: This is a joint technical committee of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) focused on artificial intelligence. It is working on a range of standards related to AI, including those addressing reliability, robustness, and risk management.
- IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: The IEEE has developed standards and guidelines to ensure ethical and reliable AI systems. This includes the IEEE P7000 series, which covers various aspects of AI ethics and reliability.
- NIST AI Risk Management Framework: The National Institute of Standards and Technology (NIST) in the United States published the AI Risk Management Framework (AI RMF 1.0) in January 2023 to help organizations manage risks associated with AI, including reliability. The framework aims to improve the trustworthiness of AI systems by addressing issues such as bias, transparency, and accountability (a minimal risk-register sketch follows this list).
- OECD AI Principles: The Organisation for Economic Co-operation and Development (OECD) has established AI principles that emphasize the importance of reliability, safety, and accountability in AI systems. These principles are intended to guide governments and organizations in the responsible development and deployment of AI.
- EU AI Act: The European Union is implementing the AI Act, which aims to regulate AI systems based on risk levels. The act includes provisions for ensuring the reliability and safety of AI systems, particularly those used in high-risk applications.
- ISO 9001 and ISO 27001: While not specific to AI, these standards for quality management systems and information security management systems, respectively, provide frameworks that can be applied to ensure the reliability and security of AI systems.
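The NIST AI RMF 1.0 organizes its guidance into four core functions: Govern, Map, Measure, and Manage. As a loose illustration only (the framework does not define any data format or schema), the sketch below keeps a simple risk register tagged by function; all field names and example entries are invented for this illustration.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class RmfFunction(Enum):
    # The four core functions of NIST AI RMF 1.0.
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class RiskItem:
    # Field names are illustrative, not taken from the framework text.
    description: str
    function: RmfFunction
    severity: str          # e.g. "low", "medium", "high"
    owner: str
    mitigation: str = ""

@dataclass
class AiRiskRegister:
    system_name: str
    items: List[RiskItem] = field(default_factory=list)

    def add(self, item: RiskItem) -> None:
        self.items.append(item)

    def by_function(self, fn: RmfFunction) -> List[RiskItem]:
        return [i for i in self.items if i.function is fn]

# Example usage with hypothetical entries.
register = AiRiskRegister("triage-model")
register.add(RiskItem("Training data may under-represent some patient groups",
                      RmfFunction.MAP, "high", "data-team",
                      "Audit demographic coverage before each release"))
register.add(RiskItem("No documented accountability for model updates",
                      RmfFunction.GOVERN, "medium", "quality-team",
                      "Assign a model owner and a change-control procedure"))
print([i.description for i in register.by_function(RmfFunction.MAP)])
```

Tagging each risk by function makes it straightforward to report coverage against the framework's four functions; that is the only design intent of this sketch.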
Data and Machine Learning
- ISO/IEC 5259 series (2024) Artificial intelligence - Data quality for analytics and machine learning (ML) (see the data-quality sketch after this list)
- ISO/IEC 20546:2019 Information technology — Big data — Overview and vocabulary
- ISO/IEC TR 24029-1:2021 Assessment of the robustness of neural networks — Part 1: Overview
- ISO/IEC 25012:2008 Data quality model
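ISO/IEC 25012 defines data quality characteristics such as completeness, accuracy, and consistency, and the ISO/IEC 5259 series applies data quality management to analytics and ML. As a rough sketch of what minimal automated checks might look like (not something either standard specifies), the example below computes per-column completeness and a simple validity rate for a tabular dataset; the column names, value ranges, and rules are invented for illustration.

```python
import pandas as pd

def completeness(df: pd.DataFrame) -> pd.Series:
    """Share of non-missing values per column (one facet of 'completeness')."""
    return 1.0 - df.isna().mean()

def validity(df: pd.DataFrame, rules: dict) -> pd.Series:
    """Share of values satisfying a per-column rule (a crude validity proxy)."""
    return pd.Series({col: float(rule(df[col]).mean()) for col, rule in rules.items()})

if __name__ == "__main__":
    # Hypothetical lab-results table; columns and ranges are illustrative only.
    df = pd.DataFrame({
        "patient_id": [1, 2, 3, 4, None],
        "glucose_mg_dl": [95, 110, -5, 600, 102],   # -5 is clearly invalid
    })
    rules = {"glucose_mg_dl": lambda s: s.between(10, 500)}
    print("Completeness:\n", completeness(df))
    print("Validity:\n", validity(df, rules))
```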
Ethics and Governance
- ISO/IEC TR 24027:2021 Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making
- ISO/IEC TR 24030:2021 Artificial intelligence - Use cases
- ISO/IEC TR 24368:2022 Artificial intelligence - Overview of ethical and societal concerns
- ISO 26000:2010 Guidance on Social Responsibility
- IEEE 7000-2021 IEEE Standard Model Process for Addressing Ethical Concerns during System Design
Domain-Specific Standards
- ISO/IEC TS 4213:2022 Information technology — Artificial intelligence — Assessment of machine learning classification performance
- ISO/IEC 24029-2:2023 Assessment of the robustness of neural networks — Part 2: Methodology for the use of formal methods
- ISO 11238 Health informatics — Identification of medicinal products — Data elements and structures for the unique identification and exchange of regulated information on substances
Data Governance Framework
- ISO 9001:2015 Quality management systems — Requirements
- ISO/IEC 27001:2022 Information security, cybersecurity and privacy protection — Information security management systems - Requirements
- ISO/IEC 25012:2008 Data quality model
Together, these standards provide a framework for promoting the ethical, secure, and effective use of AI across industries.