ISO 42001 : Building Responsible AI
Santosh Kamane
Cybersecurity and Data Privacy Leader | Independent Director | Entrepreneur | PECB Certified ISO 42001 Trainer and Advisor | Virtual CISO | GRC | DPO as a Service | Empowering Future Cybersecurity Professionals
Since the rollout of ChatGPT, technology news has largely focused on the capabilities of AI and its potential when used for all the right and wrong reasons. AI being a fairly new technology (in terms of its use cases), its ability to mimic or exceed human intelligence has rightly been questioned, especially when it comes to aspects such as fairness, transparency, bias, and ethics.
Today, organizations across various sectors are leveraging or exploring the power of AI to drive innovation, enhance efficiency, and improve decision-making in their applications and businesses.
With great power comes great responsibility. The ethical implications of AI technologies have prompted the development of various frameworks and standards aimed at ensuring “responsible AI” practices.
Among these, the recently published ISO/IEC 42001 stands out as a comprehensive set of guidelines that can take organizations towards building ethical AI.
Who Should Be Covered Under ISO/IEC 42001:2023?
Whether it is ISO 42001, the EU AI Act, or any similar framework, these are designed to apply to all organizations involved in the development, deployment, and use of AI technologies. This includes, but is not limited to:
- Technology companies developing AI software and hardware.
- Enterprises integrating AI into their operations, products, or services.
- Research institutions conducting AI-related studies and experiments.
- AI consumers or third parties used by organizations for AI development.
The key aspects of Responsible AI
Below are some of the key characteristics and aspects usually expected when building responsible AI. It is crucial that AI systems are built with accountability and the values below in mind, as AI harms can be quite adverse and detrimental to society.
1. Security and Privacy
Privacy is a fundamental aspect of responsible AI: individuals' sensitive information should be protected throughout the AI lifecycle. ISO 42001 emphasizes the importance of incorporating privacy considerations into AI systems from inception to deployment. For example, consider a healthcare organization developing an AI-driven diagnostic tool. By adhering to ISO 42001 guidelines, the organization would consider implementing data anonymization techniques to safeguard patient data, and such risks would be covered in the AI risk assessment.
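As a minimal illustration of one such anonymization technique (not a procedure prescribed by ISO 42001 itself), direct identifiers in a record can be pseudonymized before the data ever reaches the AI pipeline. The record fields and salt below are hypothetical:

```python
import hashlib

# Hypothetical patient record; field names are illustrative only.
record = {"patient_id": "P-10293", "name": "Jane Doe", "age": 47, "diagnosis_code": "E11.9"}

def pseudonymize(rec, salt="org-secret-salt"):
    """Replace the direct identifier with a salted hash and drop free-text names."""
    out = dict(rec)
    out["patient_id"] = hashlib.sha256((salt + rec["patient_id"]).encode()).hexdigest()[:16]
    out.pop("name", None)  # remove the direct identifier entirely
    return out

safe = pseudonymize(record)
# The pseudonymized record keeps clinical utility (age, diagnosis) without direct identifiers.
```

In practice the salt would be managed as a secret, and the residual re-identification risk would still be captured in the AI risk assessment.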
2. Fairness
Fairness focuses on mitigating biases and ensuring fair outcomes for all individuals. ISO 42001 encourages fairness by advocating for the use of quality datasets. For example, a financial institution utilizing AI for credit scoring could adopt ISO 42001 principles to detect and rectify biases. The planning phase of ISO 42001 (Clauses 4-7) would normally take fairness principles and their risks into account.
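One common way such a credit-scoring bias check could be sketched is a demographic parity comparison: measure approval rates per group and flag a material gap. The data, group labels, and 0.1 threshold below are illustrative assumptions, not values from the standard:

```python
# Toy fairness check: demographic parity gap between two applicant groups.
approvals = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def approval_rate(data, group):
    outcomes = [y for g, y in data if g == group]
    return sum(outcomes) / len(outcomes)

# With this toy data: group_a approves 3/4 = 0.75, group_b approves 1/4 = 0.25.
gap = abs(approval_rate(approvals, "group_a") - approval_rate(approvals, "group_b"))
if gap > 0.1:  # illustrative policy threshold
    print(f"flag for review: approval rates differ by {gap:.2f}")
```

A gap alone does not prove unfair treatment, but it gives the planning phase a concrete signal to investigate and document.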
3. Transparency
Transparency strengthens trust and accountability in AI systems by demonstrating how decisions are made and the rationale behind them. ISO 42001 encourages organizations to provide clear documentation and explanations of AI processes. For example, a retail company employing AI-powered recommendation systems adheres to ISO 42001 guidelines by disclosing how customer data is utilized to generate personalized recommendations.
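One simple transparency practice is to record, alongside each recommendation, which data signals drove it, so the rationale can later be disclosed or audited. This sketch uses a hypothetical log schema; the function and field names are illustrative, not an ISO 42001 requirement:

```python
import json
from datetime import datetime, timezone

def log_recommendation(user_id, item_id, signals):
    """Record which customer-data signals produced a recommendation (illustrative schema)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "item_id": item_id,
        "signals_used": signals,  # the disclosable "why" behind the decision
    }
    return json.dumps(entry)

line = log_recommendation("u-42", "sku-9", ["purchase_history", "category_affinity"])
```

Kept as structured records, such logs double as the documented explanations of AI processes that the standard encourages.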
4. Bias
Consider a recruitment agency leveraging AI for candidate screening. The agency can include bias detection mechanisms to prevent the perpetuation of gender or racial biases in hiring decisions, thereby promoting diversity and inclusion in the workforce. Bias is one of the serious risks that should be documented in an organization's AI risk register.
5. Continuous Improvement
Continuous improvement is integral to responsible AI, especially for unsupervised learning. Dataset quality, algorithms, and similar factors are dynamic in AI development and should be continuously monitored for deviations. Correction and learning should be an integral part of your AI development.
Clause 10 of ISO 42001 explicitly calls for continual improvement in the AI development lifecycle.
For example, a social media platform regularly reviews its AI algorithms for content moderation, incorporating user feedback and emerging best practices. In fact, there was recent news of an individual taking a social media company to court for wrongly terminating his account based on an AI content moderation tool, after his child's photo was tagged incorrectly by the system.
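At its simplest, such continuous monitoring compares a tracked quality metric (say, the rate at which moderation decisions are overturned on appeal) against a baseline and flags the model for review when it drifts. The metric name and tolerance here are illustrative assumptions:

```python
def needs_review(baseline_rate, current_rate, tolerance=0.05):
    """Flag the model for review if the monitored rate drifts beyond tolerance.

    baseline_rate: the rate observed when the model was last validated.
    current_rate:  the rate observed in the latest monitoring window.
    """
    return abs(current_rate - baseline_rate) > tolerance

# A jump in overturned moderation decisions (2% -> 9%) would trigger a review.
print(needs_review(0.02, 0.09))  # True
print(needs_review(0.02, 0.04))  # False: within tolerance
```

Real deployments would track several such metrics and feed the flagged cases back into retraining, closing the Clause 10 improvement loop.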
6. Enhanced Governance
ISO 42001 promotes robust governance structures that oversee AI development, deployment, and monitoring processes. It is recommended to establish an AI governance council in the organization and to secure leadership support. Centralized governance and oversight ensure that AI risks are managed for all stakeholders and help assure the ethical use of the system.
7. Increased Stakeholder Confidence
Responsible AI practices bolster stakeholder confidence by demonstrating a commitment to ethical principles and societal well-being, reassuring customers, regulators, and investors of the organization's ethical approach.
8. Systematic Risk Assessment
AI risk assessments can be carried out at the strategic, system, or AI component level. ISO 42001 guides organizations in conducting systematic risk assessments across the various stages of the AI lifecycle.
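A risk register entry for such assessments could be modeled as a simple record with a likelihood-times-impact score, so risks can be ranked across levels. The fields and 1-5 scales below are a common risk-management pattern, not a schema mandated verbatim by ISO 42001:

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One row of an illustrative AI risk register."""
    risk_id: str
    level: str          # "strategic", "system", or "component"
    description: str
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    impact: int         # 1 (minor) .. 5 (severe)
    treatment: str = "TBD"

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRiskEntry("R-001", "system", "Training data encodes gender bias", 4, 5, "bias audit per release"),
    AIRiskEntry("R-002", "component", "PII leakage via model outputs", 2, 5, "output filtering"),
]
register.sort(key=lambda r: r.score, reverse=True)  # highest-priority risks first
```

Sorting by score gives the governance council a defensible order in which to apply treatments and track residual risk.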
In a nutshell,
ISO 42001 may not be a fully holistic framework, but it serves as a valuable resource at this point, providing guidelines and best practices to work towards AI ethics. I believe Infosys was one of the early players to be certified for ISO 42001.
I hope this article helped you understand the context of responsible AI. For any guidance, implementation, or training on AI frameworks, feel free to DM me or reach out to [email protected]