How can organizations adopt the NIST AI Risk Management Framework (AI RMF) and the GDPR together to strengthen their Responsible AI framework?
Dr Mahendra Samarawickrama (GAICD, MBA, SMIEEE, ACS(CP))
ICT Professional of the Year 2022 | IEEE AI Standards Committee Member | Emerging Technology | AI Governance | AI Strategy and Risk | Technology Foresight | Ethical & Sustainable Technology Advocate | Keynote Speaker
The National Institute of Standards and Technology (NIST) in the USA released its Artificial Intelligence Risk Management Framework (AI RMF 1.0) on January 26, 2023, developed in collaboration with the private and public sectors to better manage the risks that AI poses to individuals, organizations, and society. The framework is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. It was developed through a consensus-driven, open, transparent, and collaborative process that included a request for information, multiple draft versions for public comment, workshops, and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others. NIST also published a companion playbook, roadmap, crosswalk, video explainer, and various perspectives to support the framework.
The NIST AI RMF is a guidance document that provides a comprehensive approach to managing the risks associated with AI systems. The framework is organized around four core functions: Govern, Map, Measure, and Manage.
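For teams that want to track RMF adoption in code, the four functions can be modeled as a simple activity backlog. The following is a minimal sketch: the function names come from the framework itself, but the RmfActivity class, its fields, and the example activities are hypothetical illustrations, not part of NIST's specification.

```python
from dataclasses import dataclass

# The four core functions named by the NIST AI RMF.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RmfActivity:
    """One risk-management activity, tagged with the RMF function it serves.

    This class and its fields are illustrative assumptions, not NIST's.
    """
    function: str   # must be one of RMF_FUNCTIONS
    description: str
    owner: str
    completed: bool = False

    def __post_init__(self) -> None:
        if self.function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown AI RMF function: {self.function!r}")

# Hypothetical activities for an AI system under review.
backlog = [
    RmfActivity("Govern", "Publish an organization-wide AI risk policy", "CRO"),
    RmfActivity("Map", "Document the system's context, users, and data sources", "Product"),
    RmfActivity("Measure", "Track fairness and robustness metrics per release", "ML team"),
    RmfActivity("Manage", "Prioritize and respond to the highest-rated risks", "ML team"),
]

for item in backlog:
    status = "done" if item.completed else "open"
    print(f"[{item.function}] {item.description} (owner: {item.owner}, {status})")
```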
The NIST AI RMF is intended to be flexible and adaptable to the specific needs of different organizations and industries, and can be used to guide the development and deployment of AI systems in a way that promotes transparency, explainability, and robustness while minimizing the risk of unintended consequences.
In contrast, the General Data Protection Regulation (GDPR) is a European Union regulation that came into effect on May 25, 2018. It replaced the 1995 EU Data Protection Directive and strengthened EU data protection law. The GDPR applies to any organization that processes the personal data of EU residents, regardless of where the organization is located. It sets out six key principles (Article 5(1)) that organizations must adhere to when processing personal data. These principles are:

1. Lawfulness, fairness and transparency
2. Purpose limitation
3. Data minimisation
4. Accuracy
5. Storage limitation
6. Integrity and confidentiality
In the context of AI risk management, organizations must ensure that they comply with these principles and that they have appropriate technical and organizational measures in place to protect personal data. Additionally, the GDPR requires organizations to conduct regular risk assessments to identify and mitigate the potential risks associated with the processing of personal data, and to provide transparency to data subjects about their data processing activities. These same assessment and transparency practices help organizations identify and mitigate the potential risks associated with their AI systems.
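To make the idea concrete, here is a minimal sketch of what such a risk screen might look like in code. The likelihood and severity scales, the threshold, and the example activities are all assumptions for illustration; the GDPR prescribes none of these values.

```python
# Illustrative DPIA-style screening: score each personal-data processing
# activity by likelihood and severity of harm (1-5 each), and flag anything
# at or above an assumed threshold for a full assessment and mitigation.

HIGH_RISK_THRESHOLD = 15  # assumed cut-off; each organization sets its own

activities = [
    # (processing activity, likelihood, severity) -- hypothetical examples
    ("Profiling customers for automated credit decisions", 4, 5),
    ("Publishing aggregated, anonymized usage statistics", 1, 2),
    ("Training a support chatbot on raw customer tickets", 3, 4),
]

for name, likelihood, severity in activities:
    score = likelihood * severity
    if score >= HIGH_RISK_THRESHOLD:
        action = "HIGH RISK: conduct a full DPIA and mitigate before processing"
    else:
        action = "apply standard technical and organizational measures"
    print(f"{name}: risk score {score} -> {action}")
```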
Adopting the NIST AI RMF and the GDPR together can help organizations strengthen their Responsible AI framework. The two are complementary: the AI RMF provides guidance on managing the risks of AI systems, while the GDPR regulates the protection of personal data, and together they help organizations manage AI risk while complying with data protection law. Here are a few steps organizations can take to adopt them together:

1. Govern: establish AI risk and data protection policies, roles, and accountability structures that cover both frameworks.
2. Map: inventory AI systems and document where, why, and on what legal basis they process personal data.
3. Measure: assess AI risks alongside the GDPR's required risk assessments, using agreed metrics for trustworthiness and data protection.
4. Manage: prioritize, mitigate, and monitor the identified risks, and provide transparency to data subjects throughout.
By following these steps, organizations can effectively adopt the NIST AI RMF and GDPR together, and ensure that their AI systems are developed and deployed in a way that promotes transparency, explainability, and robustness while also protecting personal data and complying with data protection regulations.
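One lightweight way to put the combined approach into practice is a shared risk register in which every identified AI risk is tagged with both the AI RMF function that addresses it and the GDPR principle it touches. The sketch below assumes this register design; the RiskEntry class, its field names, and the example entries are hypothetical.

```python
from dataclasses import dataclass

# GDPR Article 5(1) principles (see the list above).
GDPR_PRINCIPLES = {
    "lawfulness, fairness and transparency",
    "purpose limitation",
    "data minimisation",
    "accuracy",
    "storage limitation",
    "integrity and confidentiality",
}

@dataclass
class RiskEntry:
    """An AI risk tagged with its AI RMF function and GDPR principle."""
    risk: str
    rmf_function: str    # Govern, Map, Measure, or Manage
    gdpr_principle: str  # one of GDPR_PRINCIPLES
    mitigation: str

# Hypothetical register entries.
register = [
    RiskEntry(
        risk="Training data contains more personal data than the model needs",
        rmf_function="Map",
        gdpr_principle="data minimisation",
        mitigation="Strip direct identifiers before ingestion",
    ),
    RiskEntry(
        risk="Model decisions cannot be explained to affected individuals",
        rmf_function="Measure",
        gdpr_principle="lawfulness, fairness and transparency",
        mitigation="Produce a per-decision explanation report",
    ),
]

for entry in register:
    assert entry.gdpr_principle in GDPR_PRINCIPLES
    print(f"[{entry.rmf_function}] {entry.risk}\n  -> {entry.mitigation}")
```

A register like this gives compliance and engineering teams a single artifact to review, rather than separate AI-risk and data-protection silos.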
The 16th United Nations Sustainable Development Goal (SDG 16), Peace, Justice and Strong Institutions, can play a leading role in motivating and guiding the adoption of the NIST AI RMF and the GDPR for the governance, regulation, and compliance of AI.
This goal focuses on promoting peaceful and inclusive societies, providing access to justice for all, and building effective, accountable, and inclusive institutions at all levels. In the context of AI, it can help ensure that AI systems are developed and used in ways that promote peace and social inclusion and do not perpetuate discrimination or bias. This can be achieved by embedding compliance with international human rights and ethical principles in AI development and deployment, by making systems transparent and explainable, and by establishing robust governance mechanisms, such as clear policies and procedures and accountability for the actions of AI systems and their impact on individuals and society. Strong institutions are also needed for effective and accountable AI governance: regulatory bodies that oversee the development and deployment of AI systems and enforce compliance with laws and regulations, and initiatives that promote civil society participation in AI governance and the development of international standards and guidelines for Responsible AI.
Nevertheless, beyond laws, compliance and regulations, human ethics can also play a vital role in Responsible AI by setting guidelines for organizations to design fair, accountable, transparent, and trustworthy AI systems, thereby fostering trust in the technology. For instance, preventing bias and discrimination in AI systems by making them transparent and explainable, and by considering diverse perspectives, is essential. Organizations must also have proper governance measures, such as clear policies and procedures, and be open about their data processing, so that they can take responsibility for the consequences of their AI systems on individuals and society. Transparency is equally crucial for Responsible AI: it allows individuals to understand how AI systems make decisions and use data, thereby promoting trust and alignment with societal values and interests.
Human augmentation and human ethics play a pivotal role in building Responsible AI by ensuring that the technology is developed and used in ways that align with societal values and promote the well-being of individuals and communities. Human augmentation, the use of technology to enhance human capabilities such as cognitive, physical, or emotional abilities, can be applied to design AI systems that work alongside and enhance people rather than replace them, avoiding unintended consequences or the displacement of human workers. Human ethics, the branch of ethics concerned with how human beings ought to treat one another, is likewise crucial: it ensures that the technology respects human rights and promotes well-being through principles such as fairness, transparency, and accountability. Both are also important for sustainability and social impact; by promoting the well-being of individuals and communities and aligning with societal values, they help create AI systems that are not only technically advanced but also socially responsible, with a positive impact on society and the environment.
To succeed in a Responsible AI strategy, leaders can focus on the key dimensions of the KITE abstraction framework to understand stakeholders, strategy, social justice and sustainable impact.

[Figure: The KITE abstraction framework]

As shown in the figure, the KITE abstraction framework analyses the synergy and social impact of AI from organizational, social and sustainability perspectives. These interdependent perspectives enable the evaluation of motivations for AI ethics and good governance, AI for good, AI for sustainability, and social diversity and inclusion in AI strategies and initiatives. In our experience, this framework enables organizations to systematically engage with the community, volunteers and partners to collaborate towards ethical and sustainable AI for social justice. It hides the application-specific complexities of AI and generalizes the key success factors (KSFs) of AI initiatives so that stakeholders can easily understand their responsibilities for sustainability and social justice. These key success factors include, but are not limited to, social DEI (Diversity, Equity and Inclusion), the SDGs (Sustainable Development Goals), and strategy, ethics and governance in AI. Moreover, the framework supports mitigating AI risks related to biases in various aspects, including bias in data, algorithms, and the people involved in AI. For the complete strategy, please refer to my submission (bit.ly/AIESG) in response to the Australian Government AI regulation consultation process.
In summary, the NIST AI RMF, the GDPR and the UN SDGs provide guidance and regulation for responsible AI development and use. Leaders, policymakers and society all have a role to play in ensuring AI supports ESG initiatives, by setting regulations and guidelines and by holding organizations accountable for their use of AI.