ISO 42001: Building Responsible AI

Since the rollout of ChatGPT, technology news has largely focused on the capabilities of AI and its potential, for all the right and wrong reasons. Because AI is a fairly new technology (in terms of its use cases), its ability to mimic or exceed human intelligence has rightly been questioned, especially on aspects such as fairness, transparency, bias, and ethics.

Today, organizations across sectors are leveraging or exploring the power of AI to drive innovation, enhance efficiency, and improve decision-making in their applications and businesses.

With great power comes great responsibility. The ethical implications of AI technologies have prompted the development of various frameworks and standards aimed at ensuring “responsible AI” practices.

Among these, the recently published ISO/IEC 42001 stands out as a comprehensive set of guidelines that can guide organizations towards building ethical AI.

For an easy introduction, see Responsible AI for beginners.

Who Should Be Covered Under ISO/IEC 42001:2023?

Whether it is ISO 42001, the EU AI Act, or any similar framework, these are designed to apply to all organizations involved in the development, deployment, and use of AI technologies. This includes, but is not limited to:

- Technology companies developing AI software and hardware.
- Enterprises integrating AI into their operations, products, or services.
- Research institutions conducting AI-related studies and experiments.
- AI consumers or third parties engaged by organizations for AI development.



The key aspects of Responsible AI

Below are some of the key characteristics usually expected when building responsible AI. It is crucial to ensure that AI systems are built with accountability and the values below, because AI harms can be severe and detrimental to society.

1. Security and Privacy

Privacy is a fundamental aspect of responsible AI: individuals' sensitive information should be protected throughout the AI lifecycle. ISO 42001 emphasizes incorporating privacy considerations into AI systems from inception to deployment. For example, consider a healthcare organization developing an AI-driven diagnostic tool. By adhering to ISO 42001 guidelines, the organization would consider implementing data anonymization techniques to safeguard patient data, and such risks would be covered in the AI risk assessment.
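To make this concrete, here is a minimal, purely illustrative sketch of pseudonymizing patient records before they reach a training pipeline. The field names and salt handling are hypothetical and not taken from the standard itself.

```python
import hashlib

# Illustrative only: pseudonymize patient records before they enter an AI
# training pipeline. Field names and salt handling are hypothetical.
SALT = "replace-with-a-secret-salt"          # in practice, manage via a secrets store
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed or hashed."""
    clean = {}
    for key, value in record.items():
        if key == "patient_id":
            # One-way hash so records can still be linked without exposing the raw ID.
            clean[key] = hashlib.sha256((SALT + str(value)).encode()).hexdigest()[:16]
        elif key in DIRECT_IDENTIFIERS:
            clean[key] = None                 # drop direct identifiers entirely
        else:
            clean[key] = value                # keep clinical features needed by the model
    return clean

record = {"patient_id": "P-1042", "name": "Jane Doe", "age": 54, "blood_pressure": 130}
print(pseudonymize(record))
```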

Also read: Potential deepfake risks in upcoming elections.

2. Fairness

Fairness focuses on mitigating biases and ensuring fair outcomes for all individuals. ISO 42001 encourages fairness by advocating the use of quality datasets. For example, a financial institution using AI for credit scoring could adopt ISO 42001 principles to detect and rectify biases. The planning phase of ISO 42001 (clauses 4-7) would normally take fairness principles and their risks into account.
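As a rough illustration of the kind of check such an institution might run, the sketch below computes a simple demographic-parity gap across two hypothetical applicant groups. Real fairness reviews would use several metrics and domain expertise.

```python
# Illustrative only: a minimal demographic-parity check for a credit-scoring
# model. Group labels and the alerting tolerance are hypothetical.
def approval_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions) if decisions else 0.0

def demographic_parity_gap(decisions: list[int], groups: list[str]) -> float:
    """Difference between the highest and lowest approval rates across groups."""
    rates = {}
    for g in set(groups):
        rates[g] = approval_rate([d for d, grp in zip(decisions, groups) if grp == g])
    return max(rates.values()) - min(rates.values())

decisions = [1, 0, 1, 1, 0, 1, 0, 0]          # 1 = approved, 0 = declined
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"approval-rate gap: {gap:.2f}")         # flag for review if above an agreed tolerance
```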


3. Transparency

Transparency strengthens trust and accountability in AI systems by demonstrating how decisions are made and the rationale behind them. ISO 42001 encourages organizations to provide clear documentation and explanations of AI processes. For example, a retail company employing AI-powered recommendation systems could adhere to ISO 42001 guidelines by disclosing how customer data is used to generate personalized recommendations.
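As an illustration of what such documentation could look like, the sketch below captures a lightweight, hypothetical "model card" style record. ISO 42001 does not prescribe this exact format; the fields shown are only a starting point.

```python
# Illustrative only: a lightweight model-card record of the kind of
# documentation ISO 42001 encourages. All field names are hypothetical.
recommendation_model_card = {
    "system": "product-recommendation-engine",
    "purpose": "Rank products for logged-in customers on the storefront",
    "data_used": ["purchase history", "browsing events", "declared preferences"],
    "data_not_used": ["payment details", "support-chat transcripts"],
    "decision_logic": "Collaborative filtering with a popularity fallback for new users",
    "human_oversight": "Merchandising team reviews top recommendations weekly",
    "customer_disclosure": "Explained in the privacy notice and an in-app 'Why am I seeing this?' link",
    "last_reviewed": "2024-03-01",
}

for field, value in recommendation_model_card.items():
    print(f"{field}: {value}")
```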

4. Bias

Consider a recruitment agency leveraging AI for candidate screening. The agency can include bias detection mechanisms to prevent the perpetuation of gender or racial biases in hiring decisions, thereby promoting diversity and inclusion in the workforce. Bias is one of the serious risks that should be documented in the organization's AI risk register.

5. Continuous Improvement

Continuous improvement is integral to responsible AI, especially for unsupervised or continuously learning systems. Dataset quality, algorithms, and similar factors are dynamic in AI development and should be continuously monitored for deviations. Correction and learning should be an integral part of your AI development.

Clause 10 of ISO 42001, in particular, explicitly calls for continuous improvement across the AI development lifecycle.

For example, a social media platform regularly reviews its AI content-moderation algorithms, incorporating user feedback and emerging best practices. In fact, there was recent news of an individual taking a social media company to court for wrongly terminating his account based on an AI content-moderation tool; his child's photo had been tagged incorrectly by the system.
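As a minimal illustration of the kind of monitoring discussed above, the hypothetical sketch below flags a shift in one of a model's input features relative to its training baseline. Production monitoring would track many features and use proper drift statistics rather than a single mean-shift check.

```python
import statistics

# Illustrative only: a minimal data-drift check comparing a recent batch of a
# model input feature against its training baseline. Threshold is hypothetical.
def mean_shift(baseline: list[float], recent: list[float]) -> float:
    """Shift of the recent mean, in units of the baseline standard deviation."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - base_mean) / base_std

baseline_scores = [0.42, 0.45, 0.40, 0.47, 0.44, 0.43, 0.46, 0.41]
recent_scores   = [0.58, 0.61, 0.57, 0.60, 0.59, 0.62, 0.56, 0.60]

shift = mean_shift(baseline_scores, recent_scores)
if shift > 2.0:                      # hypothetical alerting threshold
    print(f"Drift detected (shift = {shift:.1f} sigma): trigger review / retraining")
else:
    print(f"No significant drift (shift = {shift:.1f} sigma)")
```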

6. Enhanced Governance

ISO 42001 promotes robust governance structures that oversee AI development, deployment, and monitoring processes. It is recommended to establish an AI governance council in the organization and to secure leadership support. Centralized governance and oversight help ensure that AI development manages risk for all stakeholders and assures ethical use of the system.


7. Increased Stakeholder Confidence

Responsible AI practices bolster stakeholder confidence by demonstrating a commitment to ethical principles and societal well-being, reassuring customers, regulators, and investors of the organization's ethical approach.


8. Systematic Risk Assessment

AI risk assessments can be conducted at the strategic, system, or AI/component level. ISO 42001 guides organizations in conducting systematic risk assessments across the various stages of the AI lifecycle.
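To illustrate, the sketch below shows a minimal, hypothetical AI risk-register entry of the kind such an assessment might produce. The fields and scoring scale are illustrative, not prescribed by the standard.

```python
from dataclasses import dataclass

# Illustrative only: a minimal AI risk-register entry. The fields and the
# 1-5 likelihood/impact scale are hypothetical, not mandated by ISO 42001.
@dataclass
class AIRiskEntry:
    risk_id: str
    description: str
    lifecycle_stage: str          # e.g. "data collection", "training", "deployment"
    likelihood: int               # 1 (rare) .. 5 (almost certain)
    impact: int                   # 1 (negligible) .. 5 (severe)
    owner: str
    treatment: str
    status: str = "open"

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRiskEntry("AIR-001", "Gender bias in candidate-screening model",
                "training", likelihood=3, impact=5,
                owner="HR Analytics Lead",
                treatment="Bias testing before each release; balanced training data"),
    AIRiskEntry("AIR-002", "Patient data exposure in diagnostic-tool logs",
                "deployment", likelihood=2, impact=5,
                owner="CISO",
                treatment="Pseudonymize logs; restrict access; periodic audit"),
]

# Review highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.risk_id} [{risk.score}] {risk.description} -> {risk.treatment}")
```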


In a nutshell,

ISO 42001 may not be a fully holistic framework, but it serves as a valuable resource at this point, providing guidelines and best practices to work towards AI ethics. I think Infosys was one of the early players to be certified against ISO 42001.

I hope this article helped you understand the context of responsible AI. For guidance, implementation, or training on AI frameworks, feel free to DM me or reach out to [email protected]




Takahide Maruoka

Credly Top Legacy Badge Earner | ISO/IEC FDIS 42001 | ISO/IEC 27001:2022 | NVIDIA | Google | IBM | Cisco Systems | Generative AI

4 months ago

Thank you for the info. I have an ISO 42001 certificate.

manoj desale

Proprietor at Atharva Associates

10 months ago

Thanks for sharing

Commander Varun Gupta

Risk Consulting @ PwC | Naval Veteran | Risk Consulting | Cybersecurity | CISSP | CCSP | AWS SAA | MBA(ITSM) | ISO 27001 LA

10 months ago

Very well summarised Santosh Kamane

Arup Majumder

Cybersecurity Architect | Risk and Vulnerability Management | AppSec | GenAI-Driven Security Automation

10 months ago

I believe bias originates from human nature itself. Language models like LLMs draw data from publicly sourced information such as articles, news, comments, and paid journals etc. Frankly, the world seems divided into two groups: left-minded and right-minded, and their perspectives are mirrored in this open-source information. Ultimately, it's the LLM data processor's discretion regarding what data to utilize and at what depth, as human control is limited. Therefore, bias is inevitable.......

Bob Korzeniowski

Wild Card - draw me for a winning hand | Creative Problem Solver in Many Roles | Manual Software QA | Project Management | Business Analysis | Auditing | Accounting |

10 months ago

AI is based on a dehumanizing philosophy. There is no such thing as a responsible dehumanizing philosophy, and thus no responsible AI.
