"How to" for Privacy Professionals: AI Governance, Responsible AI Principles, and AI Risk Assessments
10-minute read – you should read this if you are designing or implementing AI governance across your organization. This is the full version of three articles previously published.
There are clear parallels between AI governance today and privacy risk management in 2017
I started my privacy career – as did many others – in the run-up to the General Data Protection Regulation (GDPR) when I worked to design and implement enterprise-wide privacy risk control frameworks. The parallels between data privacy risk work in 2017 and artificial intelligence (AI) governance today are striking. The risks associated with the use of AI aren’t new, but the level of risk is increasing exponentially. A new European law is driving change. Use of AI is proliferating. Significant fines can be imposed on firms who get AI wrong. I believe that is why many privacy teams are at the forefront of AI governance and ethics within their organization, and why many AI risk teams are borrowing their approach from the world of privacy risk management.
To get started in AI governance and ethics, you must understand the EU AI Act (the Act), a European law with far-reaching global consequences. And to stay ahead of emerging regulatory regimes, you will need to implement ethical AI principles to guide the way your business uses AI. But the key to AI governance – as for data privacy – is to know what AI you have within your organization and to assess the risk of each individual use.
The EU AI Act: not all AI is created equal; its governance must reflect the varied risks and benefits it presents
The Act is a robust declaration of the EU's commitment to individual rights in the digital age. It confronts head-on the challenges AI poses to privacy, non-discrimination and data protection. It sets clear boundaries against uses of AI that could lead to unwarranted surveillance, biased decision-making or other forms of harm. The Act's provisions against certain AI practices, notably those involving invasive monitoring and profiling, are a direct response to these concerns.
This ground-breaking legislation is a response to the deepening entanglement of AI in our daily lives. It acknowledges the profound impact that AI can have on individual rights and societal norms. It represents a pivotal shift in policy, where the protection of individual dignity and privacy is placed at the forefront of technological progress.
Central to the Act is its nuanced, risk-based approach to AI regulation. By differentiating AI systems according to their potential impact, the Act applies more stringent controls where the stakes are highest in sectors like healthcare, law enforcement and essential public services. This underscores a key understanding: not all AI is created equal, and its governance must reflect the varied risks and benefits it presents.
In framing the future of AI, the Act serves a dual purpose. It is both a regulatory framework for a rapidly advancing technological field and a statement of values, asserting that the march of progress must not trample the rights and freedoms that form the bedrock of democratic societies. It marks a significant moment in the global discourse on technology and ethics and illustrates the need for organizations to set clear guiding principles as a starting point for their AI governance frameworks.
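The risk-based tiering described above can be sketched as a first-pass triage aid. This is a minimal, illustrative sketch and not legal advice: the tier names follow the Act's broad structure (prohibited practices, high-risk, transparency-only, minimal risk), but the `HIGH_RISK_AREAS` set and the `triage` helper are simplified assumptions for illustration; real classification requires legal review against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers under the EU AI Act (illustrative only)."""
    UNACCEPTABLE = "prohibited practices"
    HIGH = "stringent obligations (e.g. healthcare, law enforcement, essential services)"
    LIMITED = "transparency obligations (e.g. chatbots must disclose they are AI)"
    MINIMAL = "no additional obligations (e.g. spam filters)"

# Hypothetical shortlist of high-stakes areas, loosely based on the sectors
# named in this article; the Act's own annexes are the authoritative source.
HIGH_RISK_AREAS = {
    "healthcare", "law enforcement", "essential public services",
    "employment", "education", "credit scoring",
}

def triage(use_case_area: str) -> RiskTier:
    """Very rough first-pass triage by sector; defaults to MINIMAL."""
    if use_case_area.lower() in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(triage("healthcare").name)  # HIGH
```

A triage function like this is useful only as an intake filter for an AI catalogue; every "HIGH" hit should route to a full human-led assessment.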
Nine guiding "Responsible AI" principles to govern the use of AI within your organization
Because the legal regimes are still developing, robust AI governance must be designed to flex and align to emerging requirements. As with GDPR, that means developing principles to act as a guide for how to approach AI. The nine “responsible AI” principles set out below have been adopted by EY teams and are a good potential starting point. They encourage firms to look beyond compliance and to integrate ethical considerations into each stage of AI development and deployment.
1. Accountability: There is unambiguous ownership over AI systems, their impacts and resulting outputs across the AI lifecycle.
2. Data protection: Use of data in AI systems is consistent with permitted rights, maintains confidentiality of business and personal information and reflects ethical norms.
3. Reliability: AI systems are aligned with stakeholder expectations and continually perform at a desired level of precision and consistency.
4. Security: AI systems, their input and output data are secured from unauthorized access and resilient against corruption and adversarial attack.
5. Transparency: Appropriate levels of disclosure regarding the purpose, design and impact of AI systems are provided so that stakeholders, including end users, can understand, evaluate and correctly employ AI systems and their outputs.
6. Explainability: Appropriate levels of explanation are enabled so that the decision criteria and output of AI systems can be reasonably understood, challenged and validated by human operators.
7. Fairness: The needs of all impacted stakeholders are assessed with respect to the design and use of AI systems and their outputs to promote a positive and inclusive societal impact.
8. Compliance: The design, implementation and use of AI systems and outputs comply with relevant laws, regulations and professional standards.
9. Sustainability: Considerations of the impacts of technology are embedded throughout the AI lifecycle to promote physical, social, economic and planetary well-being.
There is, of course, no “right answer” when it comes to choosing guiding principles that match your own organization’s character and risk appetite.
After settling on a set of principles, you will need to embed them within priority areas of your business that are involved in the AI lifecycle. These business areas will help you to understand where AI is used, and what your organization uses it for.
Defining AI: Your AI catalogue starts with an understanding of what AI is – and isn’t
AI can be difficult to define clearly. The recent surge in public and regulatory interest in Generative AI (GenAI) has highlighted the inconsistencies in how AI is defined, emphasizing the need for clarity. The interchangeable use of AI concepts, processes and types is exacerbated by differing industry definitions among technology vendors, regulators and academia. The Organisation for Economic Co-operation and Development (OECD) dissects AI into distinct elements, each contributing to the broader understanding of the role and capabilities of AI in the modern world.
1. Machine-based system: This element forms the backbone of AI, emphasizing its reliance on advanced machine technology.
2. Explicit or implicit objectives: AI systems operate based on objectives that can be direct and clearly stated or inferred from their programming and learning processes. This dual nature of objectives is evident in systems ranging from autonomous vehicles to advanced language models.
3. Inferring from received input: Central to the operation of AI is its ability to infer or deduce outputs from inputs. This process is a testament to its logical and analytical prowess where it processes and interprets inputs, be they from humans or machines, to generate meaningful outcomes.
4. Generating outputs: Expanding AI's scope, this aspect includes the creation of content such as text, video or images, alongside making predictions, recommendations and decisions. This broadens AI's role from a mere decision-making tool to a creator of diverse digital content.
5. Influencing physical or virtual environments: AI's impact transcends the physical world and extends into virtual realms. This distinction underscores the technology's pervasive influence, whether in tangible, real-world scenarios or within digital landscapes.
6. Varying levels of autonomy and adaptiveness post-deployment: Highlighting AI's dynamic nature, this point reflects the technology's ability to evolve and adapt over time. Such adaptiveness is seen in systems that tailor their responses based on user preferences or specific interactions.
You can use this definition at the outset and also as a checkpoint when you are designing your AI risk assessment.
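The six OECD elements above can double as an intake checklist for your AI catalogue. Below is a minimal sketch under that assumption; the class name, field names and `is_ai` helper are illustrative inventions, not an OECD or regulatory artifact, and a real intake process would record evidence for each element rather than a bare boolean.

```python
from dataclasses import dataclass, fields

@dataclass
class OECDDefinitionCheck:
    """One checklist entry per candidate system, mirroring the six
    OECD elements; systems satisfying all elements likely belong
    in the AI catalogue."""
    machine_based: bool             # 1. runs on machine technology
    has_objectives: bool            # 2. explicit or implicit objectives
    infers_from_input: bool         # 3. deduces outputs from inputs
    generates_outputs: bool         # 4. content, predictions, recommendations, decisions
    influences_environment: bool    # 5. impacts physical or virtual environments
    adaptive_post_deployment: bool  # 6. autonomy/adaptiveness after deployment

    def is_ai(self) -> bool:
        """True only when every element is satisfied."""
        return all(getattr(self, f.name) for f in fields(self))

# A customer-service chatbot ticks every box; a fixed spreadsheet
# macro produces outputs but neither infers nor adapts.
chatbot = OECDDefinitionCheck(True, True, True, True, True, True)
spreadsheet_macro = OECDDefinitionCheck(True, True, False, True, False, False)
```

In practice, borderline results (some but not all elements satisfied) are the useful output: they flag the systems that need a human judgment call before entering or leaving the catalogue.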
Develop multi-stakeholder AI risk assessments to bring AI governance to life within your organization
After cataloguing where AI is being used within your organization, your next task will be to understand the inherent risks in the potential AI application and how effective your organizational controls are in managing that risk. The pace of change in AI means that AI risk assessment needs to move quickly from the theoretical and into the practical. And the complexity of the potential risks means you will need to account for multiple viewpoints. The following considerations are a useful starting point as you begin to work on your own risk assessments:
Parties and roles: Identify all stakeholders involved in the AI system's lifecycle and define their roles and responsibilities. This clarity is crucial for effective governance and accountability.
System characteristics: Detail the technical and functional aspects of the AI system, including architecture, algorithms and operational parameters. Understanding these characteristics is fundamental to assessing potential risks.
Data requirements: Examine the data inputs and outputs of the AI system, focusing on data sources, quality and processing practices. This section is key to evaluating risks related to data privacy and security.
Purpose and use: Articulate the intended use cases and objectives of the AI system, providing a basis for assessing alignment with ethical principles and regulatory requirements.
Individual impact: Evaluate the potential effects of the AI system on individuals and communities, considering factors like fairness, transparency and the risk of harm. This section underscores the commitment to your AI principles.
Safety protocols: Develop tools and guidelines for ongoing compliance monitoring, including checklists and templates aligned with current AI laws and regulations. This proactive approach facilitates continuous compliance and risk management.
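The six considerations above can be captured as a structured record, one per catalogued use case. This is a minimal sketch assuming free-text sections; the class name, field names and `open_sections` helper are hypothetical, and a production assessment would typically live in a GRC tool with workflow and sign-off rather than a dataclass.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskAssessment:
    """One assessment record per catalogued AI use case;
    section names mirror the considerations in this article."""
    system_name: str
    parties_and_roles: dict[str, str] = field(default_factory=dict)  # stakeholder -> responsibility
    system_characteristics: str = ""  # architecture, algorithms, operational parameters
    data_requirements: str = ""       # sources, quality, processing practices
    purpose_and_use: str = ""         # intended use cases and objectives
    individual_impact: str = ""       # fairness, transparency, risk of harm
    safety_protocols: list[str] = field(default_factory=list)  # monitoring checklists, templates

    def open_sections(self) -> list[str]:
        """Return the sections still left blank, so reviewers can
        chase completion before the assessment is signed off."""
        gaps = []
        if not self.parties_and_roles:
            gaps.append("parties_and_roles")
        for name in ("system_characteristics", "data_requirements",
                     "purpose_and_use", "individual_impact"):
            if not getattr(self, name):
                gaps.append(name)
        if not self.safety_protocols:
            gaps.append("safety_protocols")
        return gaps
```

Tracking blank sections explicitly is a simple way to operationalize the multi-stakeholder point: each gap can be assigned to the stakeholder best placed to fill it.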
I hope you found this article useful and usable as you embark on your AI journey. Please reach out to me directly to discuss the contents.
The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.