How to adopt NIST AI Risk Management Framework (AI RMF) and GDPR together to strengthen the Responsible AI framework?
NIST AI Risk Management Framework (AI RMF 1.0) Launch.

On January 26, 2023, the National Institute of Standards and Technology (NIST) in the USA released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), developed in collaboration with the private and public sectors to better manage the risks AI poses to individuals, organizations, and society. The framework is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. It was developed through a consensus-driven, open, transparent, and collaborative process that included a request for information, multiple draft versions for public comment, and workshops and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others. NIST also published a companion playbook, roadmap, crosswalk, video explainer, and various perspectives to support the framework.

The National Institute of Standards and Technology (NIST) Artificial Intelligence (AI) Risk Management Framework (AI RMF). Source: https://www.nist.gov/itl/ai-risk-management-framework

The NIST AI RMF is a guidance document that provides a comprehensive approach to managing the risks associated with AI systems. The framework is organized around four core functions: Govern, Map, Measure, and Manage.

  1. Govern: The Govern function includes establishing governance policies and procedures and creating an organization-wide culture of risk management. This includes identifying and managing the legal, ethical, and societal risks associated with AI, as well as ensuring compliance with regulations.
  2. Map: The Map function involves identifying, understanding, and documenting the different types of risks associated with AI systems, along with the systems, processes, and controls in place to manage those risks. This includes identifying the data, algorithms, and infrastructure used to develop and operate AI systems, as well as the people and organizations involved in their development and operation.
  3. Measure: The Measure function involves evaluating the effectiveness of the risk management processes and controls in place. This includes monitoring and assessing the performance of AI systems and the effectiveness of the controls used to manage their risks.
  4. Manage: The Manage function involves acting to mitigate or eliminate identified risks. This includes implementing and maintaining risk controls, conducting regular reviews to ensure those controls remain effective, and taking appropriate action in response to risks, such as modifying or shutting down a system.
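The four functions above can be made concrete as a simple risk register. The sketch below is a minimal illustration, not part of the NIST framework itself: the class names, severity scale, and triage threshold are all assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from enum import Enum


class RmfFunction(Enum):
    """The four core functions of the NIST AI RMF."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"


@dataclass
class AiRisk:
    """A single entry in a hypothetical AI risk register."""
    description: str
    function: RmfFunction          # which RMF function surfaced the risk
    severity: int                  # assumed scale: 1 (low) to 5 (critical)
    mitigations: list[str] = field(default_factory=list)


def triage(register: list[AiRisk], threshold: int = 4) -> list[AiRisk]:
    """Return unmitigated risks at or above the severity threshold."""
    return [r for r in register if r.severity >= threshold and not r.mitigations]


register = [
    AiRisk("Training data contains personal data without a lawful basis",
           RmfFunction.MAP, severity=5),
    AiRisk("Model accuracy degrades on under-represented groups",
           RmfFunction.MEASURE, severity=4,
           mitigations=["quarterly fairness audit"]),
]

for risk in triage(register):
    print(f"ESCALATE ({risk.function.value}): {risk.description}")
```

In this toy register, only the first risk is escalated, because the second already has a mitigation recorded; in practice the Manage function would track mitigations and their review dates in far more detail.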

The NIST AI RMF is intended to be flexible and adaptable to the specific needs of different organizations and industries, and can be used to guide the development and deployment of AI systems in a way that promotes transparency, explainability, and robustness while minimizing the risk of unintended consequences.

The GDPR principles. Source: https://www.planetcompliance.com/gdpr-challenges-opportunity

In contrast, the General Data Protection Regulation (GDPR) is a regulation by the European Union that came into effect on May 25, 2018. It replaced the 1995 EU Data Protection Directive and strengthens EU data protection laws. GDPR applies to any organization that processes personal data of EU residents, regardless of where the organization is located. The GDPR has six key principles that organizations must adhere to when processing personal data. These principles are:

  1. Lawfulness, fairness, and transparency: Organizations must have a legal basis for processing personal data and must be transparent about their data processing activities.
  2. Purpose limitation: Personal data must be collected for specific, explicit, and legitimate purposes and not further processed in a way that is incompatible with those purposes.
  3. Data minimization: Organizations must only collect the personal data that is necessary for the specific purpose it was collected for.
  4. Accuracy: Personal data must be accurate and kept up to date.
  5. Storage limitation: Personal data must be kept for no longer than is necessary for the purpose it was collected for.
  6. Integrity and confidentiality: Personal data must be processed in a way that ensures its security, including protecting it against unauthorized or unlawful processing, accidental loss, destruction or damage.

In the context of AI risk management, organizations must ensure that they comply with these principles and have appropriate technical and organizational measures in place to protect personal data. The GDPR also requires organizations to conduct regular risk assessments, such as Data Protection Impact Assessments (DPIAs), to identify and mitigate the potential risks of processing personal data, and to be transparent with data subjects about their data processing activities. Applied to AI, this process helps surface and mitigate the risks associated with AI systems that process personal data.

Adopting the NIST AI RMF and the GDPR together can help organizations strengthen their Responsible AI framework. Both frameworks provide guidance on managing the risks associated with AI and protecting personal data, respectively, and can be used together to ensure that organizations are effectively managing the risks associated with their AI systems while also complying with data protection regulations. Here are a few steps organizations can take to adopt the NIST AI RMF and GDPR together:

  1. Incorporate the NIST AI RMF into the organization's existing data protection framework. This can involve using the NIST AI RMF to identify and assess the risks associated with AI systems, and then using GDPR to implement controls and processes to mitigate those risks and protect personal data.
  2. Ensure that GDPR requirements are integrated into the organization's AI development and deployment processes. This can include obtaining explicit consent for the collection and processing of personal data, as well as providing individuals with the right to access, correct, or delete their personal data.
  3. Regularly review and update the organization's AI and data protection policies and procedures to ensure they are in line with both the NIST AI RMF and GDPR.
  4. Identify and document all data used in AI systems and ensure that it meets GDPR requirements, such as being accurate, up to date, and not excessive for the stated purpose.
  5. Implement transparency and explainability mechanisms in AI systems to ensure that individuals understand how their data is being used and that the systems can be audited.
  6. Conduct regular risk assessments to identify and assess the potential risks associated with the organization's AI systems, and then implement appropriate controls to manage those risks in line with both the NIST AI RMF and GDPR.
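The steps above amount to maintaining a crosswalk between the two frameworks. The mapping below is an illustrative sketch of what such a crosswalk might look like, not an official NIST or GDPR artifact; which GDPR obligations fall under which RMF function is a judgment call each organization must make for itself.

```python
# Hypothetical crosswalk: each NIST AI RMF function mapped to GDPR
# obligations an organization might address while performing it.
CROSSWALK = {
    "Govern": ["lawfulness, fairness and transparency", "accountability policies"],
    "Map": ["purpose limitation", "data minimization"],
    "Measure": ["accuracy", "regular risk assessments (DPIA)"],
    "Manage": ["storage limitation", "integrity and confidentiality"],
}


def review_checklist(function: str) -> list[str]:
    """GDPR items to verify when executing a given RMF function."""
    return CROSSWALK.get(function, [])


# Print a review checklist for the Map function.
for item in review_checklist("Map"):
    print(f"[ ] {item}")
```

Keeping the crosswalk as data rather than prose makes it easy to version alongside policies and to generate per-function review checklists during audits.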

By following these steps, organizations can effectively adopt the NIST AI RMF and GDPR together, and ensure that their AI systems are developed and deployed in a way that promotes transparency, explainability, and robustness while also protecting personal data and complying with data protection regulations.

The 17 United Nations Sustainable Development Goals (UN SDGs). Source: https://sdgs.un.org/goals

The 16th United Nations Sustainable Development Goal (UN SDG) - Peace, Justice, and Strong Institutions - can play a leading role in motivating and guiding the adoption of the NIST AI RMF and the GDPR for the governance, regulation, and compliance of AI.

The 16th United Nations Sustainable Development Goal (UN SDG) - Peace, Justice, and Strong Institutions. Source: https://sdgs.un.org/goals/goal16

This goal focuses on promoting peaceful and inclusive societies, providing access to justice for all, and building effective, accountable, and inclusive institutions at all levels. In the context of AI, it can help ensure that AI systems are developed and used in ways that promote peace and social inclusion and do not perpetuate discrimination or bias. This can be achieved by ensuring compliance with international human rights and ethical principles in AI development and deployment, building transparent and explainable systems, and establishing robust governance mechanisms such as clear policies and procedures and accountability for the actions of AI systems and their impact on individuals and society. Strong institutions are also needed for effective and accountable governance of AI. These include regulatory bodies that oversee the development and deployment of AI systems and ensure compliance with laws and regulations, as well as initiatives that promote civil society participation in AI governance and the development of international standards and guidelines for Responsible AI.

Nevertheless, beyond laws, compliance, and regulations, human ethics can also play a vital role in Responsible AI by setting guidelines for organizations to design fair, accountable, transparent, and trustworthy AI systems, thereby fostering trust in the technology. For instance, it is essential to prevent bias and discrimination in AI systems by making them transparent and explainable and by considering diverse perspectives. Organizations must also have proper governance measures, such as clear policies and procedures, and be open about their data processing, so that they take responsibility for the consequences of their AI systems on individuals and society. Transparency is likewise crucial for Responsible AI: it allows individuals to understand how AI systems make decisions and use data, promoting trust and alignment with societal values and interests.

Human ethics underpins the AI development and value creation process; leaders should therefore be aware of ethics in governance. Source: bit.ly/AIESG

Human augmentation and human ethics play a pivotal role in building Responsible AI by ensuring that the technology is developed and used in ways that align with societal values and promote the well-being of individuals and communities. Human augmentation, the use of technology to enhance human capabilities such as cognitive, physical, or emotional abilities, can be applied to design AI systems that work alongside and enhance human capabilities rather than replace them, avoiding unintended consequences or the displacement of human workers. Human ethics, the branch of ethics that deals with how human beings ought to treat one another, ensures that the technology respects human rights and promotes well-being through principles such as fairness, transparency, and accountability. Both are also important for sustainability and social impact: by promoting the well-being of individuals and communities and aligning with societal values, they help create AI systems that are not only technically advanced but also socially responsible, with a positive impact on society and the environment.

The KITE abstraction framework, presented at the United Nations World Data Forum to support the UN SDGs and Responsible AI. Source: https://unstats.un.org/unsd/undataforum/blog/KITE-an-abstraction-framework-for-reducing-complexity-in-ai-governance/

To succeed in a Responsible AI strategy, leaders can focus on the key dimensions of

  1. AI,
  2. Organization,
  3. Society,
  4. Sustainability,

to understand the stakeholders, strategy, social justice, and sustainable impact. As shown in the figure, the KITE abstraction framework analyses the synergy and social impact of AI from organizational, social, and sustainability perspectives. These interdependent perspectives enable the evaluation of motivations for AI ethics and good governance, AI for good, AI for sustainability, and social diversity and inclusion in AI strategies and initiatives. In our experience, this framework enables organizations to systematically engage with the community, volunteers, and partners to collaborate towards ethical and sustainable AI for social justice. It hides the application-specific complexities in AI and generalizes the key success factors (KSFs) of AI initiatives, so stakeholders can easily understand their responsibilities for sustainability and social justice. These key success factors include, but are not limited to, social DEI (Diversity, Equity, and Inclusion), the SDGs, strategy, ethics, and governance in AI. Moreover, the framework supports mitigating AI risks related to bias in various aspects, including bias in data, algorithms, and the people involved in AI. For the complete strategy, please refer to my submission (bit.ly/AIESG) in response to the Australian Government AI regulation consultation process.

AI Governance and Ethics Framework for Sustainable AI and Sustainability. A submission in response to the Australian Government AI regulation consultation process. Source: https://doi.org/10.48550/arXiv.2210.08984

In summary, the NIST AI RMF, GDPR, and UN SDGs provide guidance and regulations for Responsible AI development and use. Leaders, policymakers, and society all have a role to play in ensuring AI supports ESG initiatives, by setting regulations and guidelines and by holding organizations accountable for their use of AI.
