AI the right way: Responsible, Explainable and Compliant
Sohail Ahmed
Passionate about Digital Strategy & Ethical Innovation | IAPP-Certified AI Governance Professional (AIGP)
Disclaimer: This is an exploratory article that presents the author's views and research; it does not represent, promote or challenge the views of any organisation(s), and does not provide legal advice. All external information sources have been cited, and no AI has been used to produce this article except for proofreading purposes.
Artificial Intelligence (AI) can do great things. Like humans, it can continuously improve; it is also highly adaptive and scalable. This means there are great benefits for humans in using AI at work and in life. But how can we use AI, especially Generative AI (Gen AI), the ‘right way’? This article brings together, at a high level, key considerations for an enterprise or organisation's AI implementation journey under three tenets: AI should be responsible, explainable, and compliant.
Responsible AI
Responsible AI is, broadly, a set of principles and considerations for building AI that makes a positive impact. Its precise meaning varies with the organisation and the context of use, but the main considerations can generally be classified into input-level risks, output-level risks, and environmental impact & ESG commitments.
Explainable AI
Given the scale and impact that AI solutions can achieve, organisations building them must be accountable: able to explain how their solutions work and produce output, and in some cases, how that output informs decision-making or the design of products and services. Investing in explainability is key to building trust among consumers and, in turn, to the overall growth and profitability of AI solutions.
This entails preparing relevant frameworks and documentation to (1) monitor the functioning of an AI solution, and (2) explain how the solution works and arrives at an output or result. Explainability carries a unique importance on both the internal side (within an enterprise) and the external side (clients and end users).
Explainability cannot be just a checklist item for the technical or leadership teams. It requires collaboration at all levels of the business, from the teams developing the solution to those marketing and representing it, and the resulting documentation must be kept current with any updates to the solution. Leadership teams need to work closely with technical teams to understand in detail how their AI model works end to end: Input (how and what kind of data it ingests), Processing (which components process information and how they relate to each other), and Output (which factors are involved, and at what weight, in producing a result).
An important concept within explainability is the use of 'features' (distinct from raw input). Features are essentially transformations of raw input data that make it usable by the model. For example, if the input data is a list of transactions with amounts, features could include the number of transactions in the last 30 days and the average transaction value; for the model to be explainable, it should be made clear which features influence a prediction (Microsoft, 2024). For instance, if an AI solution on a bank’s website uses a set of algorithms and customer data to decide on a credit card application, appropriate documentation will need to be maintained explaining which features of the customer's data, such as credit score, current and past relationship with the bank, and income, were used, and at what weights, to approve or reject the application.
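As a minimal sketch of this idea (the transaction data, feature names, and weights below are all hypothetical, not drawn from any real bank's model), raw transactions can be transformed into named features, and the weight each feature carries in a score can be recorded explicitly so that a prediction can later be explained:

```python
from datetime import date, timedelta

# Hypothetical raw input: a customer's transaction history as (date, amount).
transactions = [
    (date(2024, 5, 1), 120.0),
    (date(2024, 5, 20), 80.0),
    (date(2024, 6, 3), 45.0),
    (date(2024, 6, 10), 200.0),
]

def build_features(txns, today):
    """Transform raw transactions into the two features described above."""
    recent = [amt for d, amt in txns if (today - d) <= timedelta(days=30)]
    return {
        "txn_count_30d": len(recent),
        "avg_txn_value": sum(amt for _, amt in txns) / len(txns),
    }

features = build_features(transactions, today=date(2024, 6, 15))
print(features)  # {'txn_count_30d': 3, 'avg_txn_value': 111.25}

# For explainability, the per-feature weights behind a score are kept
# explicit and documentable (illustrative weights, not a real model's).
weights = {"txn_count_30d": 0.4, "avg_txn_value": 0.01}
score = sum(weights[name] * value for name, value in features.items())
print(round(score, 4))  # 2.3125
```

Keeping features and weights as named, inspectable values like this is what makes it possible to document "which features, at what weights" for a given decision.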
Again, depending on the organisation and context of use, there are several potential tools and frameworks to operationalise explainable AI. For instance, Google Cloud's Vertex AI offers a holistic built-in framework to design and deploy interpretable AI, as well as to streamline model performance monitoring and training (Google, 2024). In some cases, the output also needs human review before further action can be taken on it.
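One way to sketch such a human-in-the-loop review step (the confidence threshold and routing labels here are illustrative assumptions, not any vendor's API) is a simple gate that holds low-confidence outputs for a reviewer:

```python
def release_output(prediction, confidence, threshold=0.9):
    """Route low-confidence model outputs to a human reviewer before any
    further action; confident outputs are released automatically.
    The 0.9 threshold is an illustrative policy choice, not a standard."""
    if confidence >= threshold:
        return ("auto_release", prediction)
    return ("human_review", prediction)

print(release_output("approve", 0.97))  # ('auto_release', 'approve')
print(release_output("approve", 0.55))  # ('human_review', 'approve')
```

The value of the gate is less the code than the policy it makes explicit: the threshold and the review queue are documented artefacts that can themselves be audited.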
Compliant AI
Governments and regulatory bodies at the international, national, and state/provincial levels are actively monitoring developments in AI. With growing interest in AI adoption, several new bills have been introduced or updated across jurisdictions, albeit at a pace generally slower than the technology itself.
At a high level, the following are key considerations for an organisation with regard to regulatory compliance.
Because AI is heavily reliant on the data that powers it, a robust data governance and privacy protection framework is a precursor to an effective AI Governance framework. With the increasing scale and complexity of AI solutions, many organisations are revamping their AI Governance & Compliance strategies, and those that invest in an AI Governance strategy early will have a substantial head start as legislation picks up pace. At the international level, the Organisation for Economic Co-operation and Development (OECD) has developed the OECD AI Principles, to which several OECD member and non-member countries adhere (OECD, 2023).
More recently, a landmark development in the world of AI regulation came when the European Union (EU) passed the EU AI Act, the first comprehensive legal framework on AI. The Act defines four levels of risk for AI systems: minimal risk, limited risk, high risk, and unacceptable risk. Penalties for non-compliance range from EUR 35 million or 7% of global annual turnover down to EUR 7.5 million or 1.5% of turnover, depending on the infringement and the size of the company.
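To illustrate the "fixed amount or percentage of turnover" structure of these fines (a sketch only: for most undertakings the applicable cap is whichever amount is higher, the exact tier depends on the infringement, and the final Act's text governs in any real case):

```python
def penalty_cap(fixed_eur, turnover_share, global_turnover_eur):
    """Upper bound of a fine expressed as 'up to X EUR or Y% of
    worldwide annual turnover, whichever is higher'."""
    return max(fixed_eur, turnover_share * global_turnover_eur)

# Top tier: up to EUR 35m or 7% of turnover. For a company with
# EUR 1bn turnover, the 7% figure (EUR 70m) exceeds the fixed amount.
print(penalty_cap(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For a smaller firm, the fixed amount dominates instead, which is why the Act scales its percentages and tiers to company size and infringement severity.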
In Canada, there have been multiple new pieces of AI and data privacy legislation, or updates to existing ones, at the federal level over the past year, including, notably, the proposed amendments to the Artificial Intelligence & Data Act (AIDA) in December 2023. Given that the provisions under AIDA would come into force no sooner than 2025, September 2023 saw the development of a voluntary code of conduct on the responsible development and management of advanced Gen AI systems, which has 22 signatories to date. Further, in December 2023, the Office of the Privacy Commissioner of Canada published key principles for responsible, trustworthy and privacy-protective generative AI technologies, to help organisations building, providing or using Gen AI solutions apply key Canadian privacy principles.
Given that driving value through AI is not just a technological process, enterprises should consider investing in granular research and knowledge development around the fast-changing world of AI Governance.
References:
Microsoft (2024) Copilot for Microsoft 365 – Microsoft Adoption. https://adoption.microsoft.com/en-us/copilot/
IBM (no date) The importance of data ingestion and integration for enterprise AI. https://www.ibm.com/blog/the-importance-of-data-ingestion-and-integration-for-enterprise-ai/
Microsoft (2024) Responsible AI dashboard | Microsoft AI Lab. https://www.microsoft.com/en-us/ai/ai-lab-responsible-ai-dashboard
Google AI (2024) Google Responsible AI Practices – Google AI. https://ai.google/responsibility/responsible-ai-practices/
IEA (2024) Data centres & networks. https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks#overview
Microsoft News Centre Europe (2022) As the world goes digital, datacenters that make the cloud work look to renewable energy sources. https://news.microsoft.com/europe/features/as-the-world-goes-digital-datacenters-that-make-the-cloud-work-look-to-renewable-energy-sources/
Microsoft Research (2022) Explainability. https://www.microsoft.com/en-us/research/group/dynamics-insights-apps-artificial-intelligence-machine-learning/articles/explainability/
Google Cloud (no date) Explainable AI. https://cloud.google.com/explainable-ai
OECD (no date) OECD legal instruments. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
ISED Canada (2023) The Artificial Intelligence and Data Act (AIDA) – Companion document. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document#s11
ISED Canada (2024) Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems
Office of the Privacy Commissioner of Canada (2024) Principles for responsible, trustworthy and privacy-protective generative AI technologies. https://www.priv.gc.ca/en/privacy-topics/technology/artificial-intelligence/gd_principles_ai/