AI the right way: Responsible, Explainable and Compliant

Disclaimer: This is an exploratory article that presents the author's views and research; it does not represent, promote or challenge the views of any organisation(s), and does not provide legal advice. All external information sources have been cited, and no AI has been used to produce this article except for proofreading purposes.

Artificial Intelligence (AI) can do great things. Like humans, it can continuously improve, and it is also highly adaptive and scalable. This means there are great benefits for humans in using AI at work and in life. But how can we use AI, especially Generative AI (Gen AI), the ‘right way’? This article brings together, at a high level, key considerations for an enterprise or organisation's AI implementation journey under three tenets: AI that is responsible, explainable, and compliant.

Responsible AI

Responsible AI is, broadly, a set of principles and considerations for building AI that makes a positive impact. Its precise meaning varies with the organisation and the context of use, but the main considerations can generally be classified into Input-level risks, Output-level risks, and Environmental Impact & ESG commitments.

  1. Input-level Risks: AI feeds on data, and the data an AI model ingests brings with it risks and privacy concerns in three main areas: Data Ownership & Intellectual Property (who owns the data and for what purposes it can be used), Data Quality & Interoperability (the cleanliness and suitability of the data for use in AI solutions), and Data Residency (the geography in which the data is housed). While data ownership and intellectual property is itself a large, complex area of concern (given that AI absorbs large amounts of data), research conducted by IBM suggests three key elements for addressing the other data-related risks: (1) good data governance (complying with data privacy laws and organisational policies), (2) data integration & preprocessing (combining disparate data sources into one secure location and formatting them to meet training requirements), and (3) a suitable data storage policy (running most data integration on internal servers and being cautious about sending sensitive information to external LLMs). Further, a data mapping exercise is often recommended, clarifying what data can be used where; for example, customer profile data can be used for order fulfilment, but can it be used for marketing or retargeting? A minimal preprocessing sketch appears after this list.
  2. Output-level Risks: Working with AI entails several risks beyond data-related ones, including but not limited to: Bias (e.g. an AI model that discriminates against job applicants based on ethnicity or gender), Safety (e.g. can AI models be safely used to give basic medical advice?), Accuracy (e.g. hallucinations and the need for human validation, given that AI systems are not always correct), Sensitivity (careful handling of confidential information), Toxicity (harmful, abusive or obscene content), and Breach of Contract (e.g. breach of website terms). Ideally, teams working with AI should carry out a scoping/risk assessment as the very first step in their AI development lifecycle, not later or at the end. While the specific items depend on the use case and the organisation, Microsoft’s Responsible AI Dashboard is one way of bringing them together for model review and decision-making during the development lifecycle. Given how AI models work, several rounds of tests, including unit testing, integration testing, interactive user testing, and quality checks, may be needed to ensure the AI system works as expected and is trustworthy (Google AI, 2024); a simple output-check sketch also follows after this list. Further, organisations may need to undertake external reviews/audits and promote responsible use of AI solutions at the end-user level through the right user education and support.
  3. Environmental Impact & ESG Commitments: Another important consideration in Responsible AI practice is the usage of computational resources, the resulting environmental impact, and whether and how it affects any ESG commitments. Data centres that power large AI models typically require significantly more energy and cooling resources (water) than traditional cloud-based solutions. Depending on the scale and complexity of an enterprise AI solution, it can consume enormous amounts of computational power and energy from data centres, and therefore carry a substantial carbon footprint; according to the International Energy Agency, data centres currently account for about 1% to 1.5% of global electricity use. While sustainable AI is a whole other space to look into, there are several feasible steps most organisations can take to use AI more responsibly. Coupled with accurate impact assessments on energy use, and depending on the service model of their AI solution, organisations should invest in being more environmentally friendly; for example, when choosing an infrastructure vendor for an AI solution, choose a provider with better energy efficiency based on internationally recognised standards, or make better-informed decisions on Power Purchase Agreements (PPAs) where applicable. One interesting case study is Microsoft's forecast that it will be able to power its data centres in Ireland entirely with renewable energy by 2025.
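
To make the data integration & preprocessing point above more concrete, the sketch below shows one way a team might restrict records to fields approved for a given purpose (the data mapping idea) and mask obvious personal identifiers before anything is sent to an external LLM. This is a minimal illustration only: the ALLOWED_FIELDS mapping, the field names, and the regular expressions are assumptions made for the example, not a complete or legally sufficient anonymisation scheme.

```python
import re

# Hypothetical data-mapping policy: which fields each downstream purpose may use.
ALLOWED_FIELDS = {
    "order_fulfilment": {"customer_id", "order_id", "shipping_region"},
    "marketing": {"shipping_region"},  # e.g. profile data not approved for retargeting
}

# Very rough patterns for common identifiers; real systems need far more robust PII detection.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Mask obvious email addresses and phone numbers in free-text fields."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)

def prepare_record(record: dict, purpose: str) -> dict:
    """Keep only fields approved for the given purpose, redacting free text."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {
        key: redact(value) if isinstance(value, str) else value
        for key, value in record.items()
        if key in allowed
    }

if __name__ == "__main__":
    raw = {
        "customer_id": "C-1042",
        "order_id": "O-9981",
        "shipping_region": "ON",
        "notes": "Call me at +1 416 555 0199 or jane@example.com",
    }
    print(prepare_record(raw, "marketing"))  # only the approved field survives
```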
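The output-level checks described above can likewise begin as ordinary automated tests run against a model's responses before release. The sketch below assumes a hypothetical generate_answer() wrapper standing in for whatever model or API the organisation actually uses, plus a crude blocklist; a production setup would rely on dedicated safety classifiers and evaluation suites such as those surfaced through tools like the Responsible AI Dashboard.

```python
# Minimal sketch of automated output checks, assuming a hypothetical generate_answer()
# that wraps whatever model or API the organisation actually uses.

BLOCKED_TERMS = {"guaranteed cure", "cannot lose"}  # illustrative safety/toxicity proxies

def generate_answer(prompt: str) -> str:
    # Placeholder standing in for a real model call.
    return "Please consult a qualified professional for medical advice."

def check_no_blocked_terms(answer: str) -> bool:
    """Fail if the answer contains any term from the blocklist."""
    return not any(term in answer.lower() for term in BLOCKED_TERMS)

def check_refuses_medical_diagnosis(answer: str) -> bool:
    """Crude heuristic: the answer should defer to a professional rather than diagnose."""
    lowered = answer.lower()
    return "consult" in lowered or "professional" in lowered

def test_medical_prompt():
    answer = generate_answer("I have chest pain, what illness do I have?")
    assert check_no_blocked_terms(answer)
    assert check_refuses_medical_diagnosis(answer)

if __name__ == "__main__":
    test_medical_prompt()
    print("basic output checks passed")
```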

Explainable AI

Given the scale and impact that AI solutions can achieve, organisations building them must be accountable and able to explain how their solutions work and produce output, and in some cases, how that output informs decision-making or the design of products & services. Investing in explainability is key to building trust among consumers and, in turn, to the overall growth and profitability of AI solutions.

This entails preparing relevant frameworks and documentation to, first, monitor the functioning of an AI solution and, second, explain how the solution works and arrives at an output or result. Of course, explainability carries a distinct importance both internally (within an enterprise) and externally (for clients and end-users).

Explainability cannot just be a checklist item for either the technical or leadership teams. It requires collaboration at all levels of the business, from the teams developing the solution to those marketing and representing it, and the resulting explanations must be kept current with any updates made to the solution. Leadership teams need to work closely with technical teams to build a detailed understanding of how their AI model works from start to end, i.e. Input (how and what kind of data it ingests), Processing (which components process the information and how they relate to each other), and Output (which factors are involved, and at what weightage, in producing an output or result).
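
As one illustration of what such end-to-end documentation might look like, the sketch below captures the Input/Processing/Output view as a simple, machine-readable record. The structure, field names, and values are assumptions made for this example, loosely modelled on common 'model card' practice rather than any specific standard.

```python
# Hypothetical, simplified "model card" capturing the Input/Processing/Output view
# described above, so technical and leadership teams review the same facts.
model_card = {
    "model_name": "credit-card-approval-v1",  # illustrative name
    "owner": "Retail Lending Analytics",
    "input": {
        "data_sources": ["credit bureau report", "internal transaction history"],
        "personal_data_used": True,
        "data_residency": "Canada",
    },
    "processing": {
        "components": ["feature pipeline", "gradient-boosted classifier", "policy rules"],
        "human_in_the_loop": "analyst review for borderline scores",
    },
    "output": {
        "decision": "approve / refer / decline",
        "top_features_reported": ["credit score", "income", "relationship length"],
        "explanation_method": "per-decision feature attributions",
    },
    "last_reviewed": "2024-03-01",
}

if __name__ == "__main__":
    import json
    print(json.dumps(model_card, indent=2))
```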

An important concept within explainability is the use of 'Features' (distinct from inputs). Features are essentially transformations of raw input data that make it usable by the model. For example, if the input data is a list of transactions with amounts, features could include the number of transactions in the last 30 days and the average transaction value; for the model to be explainable, it should be made clear which features influence a prediction (Microsoft, 2024). Likewise, if an AI solution on a bank’s website uses a set of algorithms and customer data to decide on a credit card application, appropriate documentation needs to be maintained explaining which features of the customer data, such as credit score, current/past relationship with the bank, and income, were used, and at what weights, to reach the approval or rejection decision.
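
A minimal sketch of the transaction example follows: raw transaction records are turned into two features, and a toy linear score reports how much each feature contributed to the decision. The weights, threshold, and field names are illustrative assumptions, not a real credit model.

```python
from datetime import date, timedelta

# Raw input: a list of transactions (date, amount). Features are derived from it.
transactions = [
    {"date": date(2024, 2, 20), "amount": 120.0},
    {"date": date(2024, 3, 2), "amount": 45.5},
    {"date": date(2024, 3, 10), "amount": 310.0},
]

def build_features(txns, as_of: date) -> dict:
    """Transform raw transactions into model features."""
    recent = [t for t in txns if as_of - t["date"] <= timedelta(days=30)]
    return {
        "txn_count_30d": len(recent),
        "avg_txn_value": sum(t["amount"] for t in txns) / len(txns) if txns else 0.0,
    }

# Illustrative weights: in an explainable setup, each feature's contribution is reported.
WEIGHTS = {"txn_count_30d": 0.8, "avg_txn_value": 0.01}
THRESHOLD = 3.0

def score_with_explanation(features: dict):
    """Return the decision plus the per-feature contributions behind it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

if __name__ == "__main__":
    feats = build_features(transactions, as_of=date(2024, 3, 15))
    approved, contribs = score_with_explanation(feats)
    print(feats, contribs, "approved" if approved else "declined")
```

Reporting the per-feature contributions alongside the decision is what makes an outcome such as the credit card example reviewable and documentable.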

Again, depending on the organisation and the context of use, there are several potential tools and frameworks for operationalising explainable AI. For instance, Google Cloud's Vertex AI offers a holistic, built-in framework to design and deploy interpretable AI, as well as to streamline model performance monitoring and training (Google, 2024). In some cases, the output also needs to be reviewed under human supervision before further action is taken, for the solution to be considered explainable.

Compliant AI

Governments and regulatory bodies at the international, national, and state/provincial levels are actively monitoring developments in AI, and with growing interest in AI adoption, several new bills have been introduced or updated across jurisdictions, albeit generally at a pace slower than technological developments.

At a high level, the general steps for an organisation to consider with regard to regulatory compliance are:

  1. Maintain a general understanding of the different pieces of legislation and the scope of each (for example, how privacy legislation differs from AI-specific laws, and how they may be interconnected depending on the solution).
  2. Identify the applicable legislation and regulations based on the nature of the AI solution.
  3. Prepare documentation and controls to address those legislative and regulatory requirements.

Because AI is heavily reliant on the data that powers it, a robust data governance and privacy protection framework is a prerequisite for an effective AI Governance framework. With the increasing scale and complexity of AI solutions, many organisations are revamping their AI Governance & Compliance strategies, and those that invest early in an AI Governance strategy will have a substantial head start as legislation picks up pace. At the international level, the Organisation for Economic Co-operation & Development (OECD) has developed the OECD AI Principles, to which several OECD member and non-member countries adhere (OECD, 2023).

More recently, a landmark development in AI regulation came when the European Union (EU) passed the EU AI Act, the first-ever formal legal framework on AI. The regulatory framework set out by the Act defines four levels of risk for AI systems: Minimal Risk, Limited Risk, High Risk and Unacceptable Risk. Penalties for non-compliance range from EUR 7.5 million or 1.5% of global revenue up to EUR 35 million or 7% of global revenue, depending on the infringement and the size of the company.

In Canada, there have been multiple new pieces of AI & data privacy legislation or updates at the federal level over the past year, including, notably, the proposed amendments to the Artificial Intelligence & Data Act (AIDA) in December 2023. Given that the provisions under AIDA would come into force no sooner than 2025, September 2023 saw the development of a voluntary code of conduct on the responsible development and management of advanced Gen AI systems, which has 22 signatories to date. Further, in December 2023, the Office of the Privacy Commissioner of Canada published key principles for responsible, trustworthy and privacy-protective generative AI technologies, to help organisations building, providing or using Gen AI solutions apply key Canadian privacy principles.


Given that driving value through AI is not just a technological process, enterprises should consider investing in granular research and knowledge development around the fast-changing world of AI Governance.

References:

Microsoft (2024) Copilot for Microsoft 365 – Microsoft Adoption. https://adoption.microsoft.com/en-us/copilot/

IBM (no date) The importance of data ingestion and integration for enterprise AI. https://www.ibm.com/blog/the-importance-of-data-ingestion-and-integration-for-enterprise-ai/

Microsoft (2024) Responsible AI dashboard | Microsoft AI Lab. https://www.microsoft.com/en-us/ai/ai-lab-responsible-ai-dashboard

Google AI (2024) Google Responsible AI Practices. https://ai.google/responsibility/responsible-ai-practices/

International Energy Agency (2024) Data centres & networks. https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks#overview

Microsoft News Centre Europe (2022) As the world goes digital, datacenters that make the cloud work look to renewable energy sources. https://news.microsoft.com/europe/features/as-the-world-goes-digital-datacenters-that-make-the-cloud-work-look-to-renewable-energy-sources/

Microsoft Research (2022) Explainability. https://www.microsoft.com/en-us/research/group/dynamics-insights-apps-artificial-intelligence-machine-learning/articles/explainability/

Google Cloud (no date) Explainable AI. https://cloud.google.com/explainable-ai

OECD (no date) OECD legal instruments. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

European Commission (2024) AI Act. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Innovation, Science and Economic Development Canada (2023) The Artificial Intelligence and Data Act (AIDA) – Companion document. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document#s11

Innovation, Science and Economic Development Canada (2024) Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems

Office of the Privacy Commissioner of Canada (2024) Principles for responsible, trustworthy and privacy-protective generative AI technologies. https://www.priv.gc.ca/en/privacy-topics/technology/artificial-intelligence/gd_principles_ai/
