AI-Related ISO/IEC, AAMI, NIST, OECD, and IEEE Standards

ISO/IEC, AAMI, NIST, OECD, and IEEE have developed a range of standards, frameworks, and guidance to support the responsible development, deployment, and governance of artificial intelligence (AI) systems.

The following is a list of key AI-related standards.


General AI Standards

  1. ISO/IEC 5338:2023 Information technology - Artificial intelligence - AI system life cycle processes
  2. ISO/IEC 22989:2022 Information technology - Artificial intelligence - Concepts and terminology
  3. ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
  4. ISO/IEC 23894:2023 Information Technology - Artificial Intelligence - Risk Management
  5. ISO/IEC 38507:2022 Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations
  6. AAMI CR34971:2022 Guidance on the Application of ISO 14971 to Artificial Intelligence and Machine Learning
  7. ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system


Trustworthiness in AI

  1. ISO/IEC TR 24029-1:2021 Assessment of the robustness of neural networks — Part 1: Overview (a simple perturbation-based robustness check is sketched after this list)
  2. ISO/IEC TR 24028:2020 Overview of trustworthiness in artificial intelligence
  3. ISO/IEC TR 5469:2024 Artificial intelligence — Functional safety and AI systems
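
The robustness report listed above (ISO/IEC TR 24029-1) describes statistical, formal, and empirical assessment approaches at a conceptual level rather than giving code. As a loose, hypothetical sketch of the empirical flavour of such an assessment, the following Python snippet measures how often small random input perturbations change a model's prediction; the stand-in model, the perturbation bound, and the trial count are assumptions for illustration, not requirements of the standard.

```python
# Hypothetical illustration: ISO/IEC TR 24029-1 surveys approaches to assessing
# neural-network robustness; it does not prescribe this perturbation test,
# the epsilon bound, or the stand-in model used here.
import random

def classify(features):
    """Stand-in for a trained model: a fixed linear rule over two features."""
    score = 0.8 * features[0] - 0.5 * features[1]
    return 1 if score > 0.0 else 0

def empirical_robustness(inputs, model, epsilon=0.05, trials=100, seed=0):
    """Fraction of random perturbations (bounded by epsilon) that leave the
    predicted class unchanged, averaged over the given inputs."""
    rng = random.Random(seed)
    stable, total = 0, 0
    for x in inputs:
        baseline = model(x)
        for _ in range(trials):
            perturbed = [v + rng.uniform(-epsilon, epsilon) for v in x]
            stable += int(model(perturbed) == baseline)
            total += 1
    return stable / total

samples = [[0.9, 0.2], [0.1, 0.7], [0.5, 0.5]]
print(f"Empirical robustness: {empirical_robustness(samples, classify):.2f}")
```

A real assessment would of course use the deployed model and domain-relevant perturbations (noise, occlusion, paraphrasing, and so on) rather than uniform numeric jitter.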


Reliability in AI

  1. ISO/IEC JTC 1/SC 42: This is a joint technical committee of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) focused on artificial intelligence. It is working on a range of standards related to AI, including those addressing reliability, robustness, and risk management.
  2. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: The IEEE has developed standards and guidelines to ensure ethical and reliable AI systems. This includes the IEEE P7000 series, which covers various aspects of AI ethics and reliability.
  3. NIST AI Risk Management Framework: The National Institute of Standards and Technology (NIST) in the United States has published the AI Risk Management Framework (AI RMF 1.0) to help organizations manage risks associated with AI, including reliability. The framework aims to improve the trustworthiness of AI systems by addressing issues such as bias, transparency, and accountability (a minimal risk-register sketch follows this list).
  4. OECD AI Principles: The Organisation for Economic Co-operation and Development (OECD) has established AI principles that emphasize the importance of reliability, safety, and accountability in AI systems. These principles are intended to guide governments and organizations in the responsible development and deployment of AI.
  5. EU AI Act: The European Union is implementing the AI Act, which aims to regulate AI systems based on risk levels. The act includes provisions for ensuring the reliability and safety of AI systems, particularly those used in high-risk applications.
  6. ISO 9001 and ISO 27001: While not specific to AI, these standards for quality management systems and information security management systems, respectively, provide frameworks that can be applied to ensure the reliability and security of AI systems.
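
None of the risk-oriented documents above (for example ISO/IEC 23894 or the NIST AI RMF) prescribe a particular data structure or scoring scheme, but a minimal, hypothetical Python sketch of the kind of risk register that such guidance motivates might look like the following; the field names, the 1-5 scales, and the example risks are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical illustration only: neither ISO/IEC 23894 nor the NIST AI RMF
# mandates this record structure or the likelihood x impact scoring used here.
@dataclass
class AIRisk:
    risk_id: str
    description: str
    likelihood: int          # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int              # assumed scale: 1 (negligible) to 5 (severe)
    mitigation: str = ""
    identified_on: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        """Simple likelihood x impact score used to rank risks for review."""
        return self.likelihood * self.impact

def top_risks(register, n=3):
    """Return the n highest-scoring risks."""
    return sorted(register, key=lambda r: r.score, reverse=True)[:n]

if __name__ == "__main__":
    register = [
        AIRisk("R-001", "Training data not representative of the target population", 4, 4,
               "Re-sample and document data provenance"),
        AIRisk("R-002", "Model performance drift after deployment", 3, 4,
               "Schedule periodic performance monitoring"),
        AIRisk("R-003", "Unexplained predictions in high-impact decisions", 2, 5,
               "Add a human review step"),
    ]
    for risk in top_risks(register):
        print(f"{risk.risk_id}: score={risk.score} - {risk.description}")
```

In practice a register would also capture risk owners, acceptance criteria, and links to verification evidence, which risk-management guidance typically calls for.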

Data and Machine Learning

  1. ISO/IEC 5259 series (2024) Artificial intelligence - Data quality for analytics and machine learning (ML) (a basic data-quality check is sketched after this list)
  2. ISO/IEC 20546:2019 Information technology — Big data — Overview and vocabulary
  3. ISO/IEC TR 24029-1:2021 Assessment of the robustness of neural networks — Part 1: Overview
  4. ISO/IEC 25012:2008 Data quality model
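
The data-quality entries above (the ISO/IEC 5259 series and ISO/IEC 25012) define quality characteristics such as completeness, accuracy, and consistency rather than code. As a small, hypothetical sketch of how such checks are often automated in practice, the snippet below computes completeness and validity ratios for one field; the record layout, the plausibility rule, and the example data are assumptions for illustration and are not taken from the standards.

```python
# Hypothetical illustration of completeness and validity checks in the spirit of
# data-quality characteristics such as those named in ISO/IEC 25012 and the
# ISO/IEC 5259 series. Field names, rules, and data are assumed, not prescribed.

records = [
    {"patient_id": "P1", "age": 54,   "sex": "F"},
    {"patient_id": "P2", "age": None, "sex": "M"},   # missing age
    {"patient_id": "P3", "age": 230,  "sex": "F"},   # implausible age
]

def completeness(rows, column):
    """Fraction of rows with a non-missing value in `column`."""
    filled = sum(1 for r in rows if r.get(column) is not None)
    return filled / len(rows)

def validity(rows, column, is_valid):
    """Fraction of non-missing values in `column` that satisfy `is_valid`."""
    values = [r[column] for r in rows if r.get(column) is not None]
    return sum(1 for v in values if is_valid(v)) / len(values) if values else 0.0

print(f"age completeness: {completeness(records, 'age'):.2f}")                        # 0.67
print(f"age validity:     {validity(records, 'age', lambda a: 0 <= a <= 120):.2f}")   # 0.50
```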


Ethics and Governance

  1. ISO/IEC TR 24027:2021 Information technology - Artificial intelligence (AI) - Bias in AI systems and AI aided decision making (a bias-metric sketch follows this list)
  2. ISO/IEC TR 24030:2021 Artificial intelligence - Use cases
  3. ISO/IEC TR 24368:2022 Artificial intelligence - Overview of ethical and societal concerns
  4. ISO 26000:2010 Guidance on Social Responsibility
  5. IEEE 7000-2021 IEEE Standard Model Process for Addressing Ethical Concerns during System Design
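
ISO/IEC TR 24027 (item 1 above) surveys sources of bias and ways to assess and treat it; it does not mandate a specific metric. As a concrete, hypothetical illustration of the kind of group-level measurement such assessments often use, the sketch below computes a demographic parity difference on toy predictions; the data, the group labels, and the choice of metric are assumptions for illustration only.

```python
# Hypothetical illustration only: ISO/IEC TR 24027 describes bias in AI systems
# and AI-aided decision making; it does not prescribe this metric or any threshold.

def selection_rate(predictions, groups, group):
    """Fraction of positive (favourable) predictions for members of `group`."""
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds)

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy data: 1 = favourable decision, 0 = unfavourable
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # group A 0.75 vs group B 0.25 -> 0.50
```

A large gap does not by itself establish unfair bias; context, legitimate explanatory factors, and complementary metrics (such as equalized odds or calibration) also matter.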


Domain-Specific Standards

  1. ISO/IEC TS 4213:2022 Information technology — Artificial intelligence — Assessment of machine learning classification performance
  2. ISO/IEC 24029-2:2023 Assessment of the robustness of neural networks — Part 2: Methodology for the use of formal methods
  3. ISO 11238:2018 Health informatics — Identification of medicinal products — Data elements and structures for the unique identification and exchange of regulated information on substances


Data Governance Framework

  1. ISO 9001:2015 Quality management systems — Requirements
  2. ISO/IEC 27001:2022 Information security, cybersecurity and privacy protection — Information security management systems — Requirements
  3. ISO/IEC 25012:2008 Data quality model


Together, these standards and frameworks provide a foundation for the ethical, secure, and effective use of AI across industries.

