What explainable AI can teach us about good policies
Imagine that you are in court having to make a case. Your license to continue operating as a business is on the line. You are painfully aware that there is a large, complex legal framework consisting of many (many) laws, regulations, policies and accepted practices that you need to navigate carefully. Your evidence is mixed, of various types, and has both relevant and irrelevant elements (and you are not sure which is which). Your interests and exposures span several jurisdictions. You want to do the right thing, and you believe you are doing the right thing, but now you need to prove it.
Other than losing sleep, what do you do?
Many would suggest that, along with calling a really good lawyer, you should press the magic AI button.
But this button brings concerns. The answer can’t be a “black box”: we will have to explain how we are making our case, what evidence we are using, and how that evidence supports our position in the context of the relevant regulations.
This type of example makes evident the need for “explainable AI”: an approach that aims to make the decision-making processes of AI systems understandable and transparent to humans.
Towards the end of 2024 we were working with a team of experts who were building a proof of concept that made good use of explainable AI for a client (and in fact, we hope, the rest of the world...). Along the way we used a version of this courtroom scenario to help explain how we were using AI, why it needed to provide a chain of evidence, and why the answers needed to be “explainable”.
The experts from PyxGlobal that we were working with are ahead of most on explainable AI. They understood that there are issues to do with IP, commercial confidentiality and rights to access. To continue our courtroom metaphor, we will have to prove that we have the right to use the evidence that we base our case on, that we have gathered that evidence legally, and (depending on the nature of our trial) we may even reserve the right not to disclose evidence that we have gathered.
This work (which I'll explain in another post, and which we are very much looking forward to continuing in 2025) taught me many things. One of them is that the rigor we demand of AI - that the answers it provides should be explainable, with a chain of evidence (citations and the like) that we can explore and test - applies to each and every “system” that reaches determinations and makes decisions.
I think we should expect “explainable” from every system where it isn’t immediately evident why a decision or conclusion has been reached. This is not limited to technology systems. We should also expect explainable policy decisions, explainable governance decisions, explainable regulatory decisions and explainable human decisions.
Good designs for all such systems, be they human, technology-assisted, or autonomous technology, should consider how their logic can be explained.
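To make that concrete, here is a minimal, purely illustrative sketch in Python of what a decision that carries its own explanation might look like. The structure and field names (Evidence, ExplainableDecision, rights_basis and so on) are my own invention for this post, not PyxGlobal’s design, the proof of concept we built, or any particular product’s API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Evidence:
    """A single item in the chain of evidence behind a decision."""
    source: str        # where the item came from (document, sensor, witness...)
    citation: str      # how a reviewer can locate and verify it
    rights_basis: str  # why we are allowed to use it (licence, consent, statute)
    relevant_to: str   # the specific claim this item supports


@dataclass
class ExplainableDecision:
    """A decision bundled with the reasoning and evidence behind it."""
    decision: str
    rationale: str  # the logic, in terms a human can test
    evidence: list[Evidence] = field(default_factory=list)
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def explain(self) -> str:
        """Render the decision, its rationale and its evidence for review."""
        lines = [f"Decision: {self.decision}", f"Rationale: {self.rationale}"]
        for i, item in enumerate(self.evidence, start=1):
            lines.append(
                f"  [{i}] {item.citation} (source: {item.source}; "
                f"rights: {item.rights_basis}; supports: {item.relevant_to})"
            )
        return "\n".join(lines)
```

The detail doesn’t matter; the shape does: the conclusion, the reasoning, and the provenance (and rights) behind each piece of evidence travel together, whichever kind of system - human, assisted or autonomous - produced them.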
Get real
Sometimes, of course, we might require the ability to explain after the event rather than before confirming the decision. I want my automatic braking system to brake, not to ask me whether I think it should brake based on the evidence of an impending crash that it presents to me in compelling logic on the dash. By all means (in fact, ‘please’) brake now, but give me the chance to ask questions and examine the evidence later.
Whether we demand explanations before confirming a decision, or as part of exploring a decision that has been made, we should run the “explainable” ruler across all manner of life-impacting decision systems.
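For the “brake now, explain later” case, the same kind of record can simply be appended to a log at the moment of action, so that the questions can come afterwards. Another illustrative sketch (again my own hypothetical names, not any real system’s API), reusing the ExplainableDecision structure above:

```python
import json
from pathlib import Path
from typing import Callable


def act_then_log(action: Callable[[], None],
                 decision: ExplainableDecision,
                 log_path: Path) -> None:
    """Act immediately, then append the full decision record for later review."""
    # 1. Take the action first: the explanation must never delay a safety response.
    action()

    # 2. Record what was decided, why, and on what evidence, for post-hoc audit.
    record = {
        "decision": decision.decision,
        "rationale": decision.rationale,
        "decided_at": decision.decided_at.isoformat(),
        "evidence": [vars(e) for e in decision.evidence],
    }
    with log_path.open("a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
```

Whether the later reviewer is a regulator, a judge or just the driver, the evidence is there to be examined after the fact.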
But you can’t handle the truth…
Then there is, of course, the reality that complex systems are too complex for most of us to understand. I think that’s true; in fact, I think it’s true of all systems and all of us. Think of how many PhDs and lifetimes you would need to “understand” the full explanation of an airplane autopilot, a car’s cruise control, or even a mobile phone’s notification system.
But we don’t need the “whole truth”, in the sense that we don’t need to understand everything everywhere, all at once. Explainability and understanding have the quality of sufficiency about them. We don’t need all knowledge, we need enough knowledge to satisfy our needs, and we need to know that more knowledge is available if we want to dig (and learn) further.
Another angle on "the truth" is that we should still expect that AI systems, indeed all systems, can make "mistakes". The fact that they can and should explain their logic doesn't make their logic right; it makes it explainable. Facts can be omitted or selectively chosen to explain an action. And as (at least) one judge memorably explained to a defendant, your defense explains your actions; it doesn't justify them.
So should we apply the rigor that we expect of AI - that it explain itself - to other systems? I certainly think so. What do you think?
#digitaltrust
Chief Solutions Architect, Public Sector at TrustGrid | Blockchain Practice Leader | Hyperledger and INATBA
There is a set of characteristics (explainable, verifiable, accurate, timely, etc.) that are core for any good system... AI is just another system that needs to meet those requirements.
Executive Director at Loading Growth | IT Growth Specialist | Business Development Leader
This is a compelling exploration of the importance of explainability, not just in AI, but across all systems that influence critical decisions. The concept of explainable AI is essential for building trust, ensuring accountability, and providing transparency in decision-making processes. As you aptly point out, this rigor should extend to policy, governance, and other human systems where understanding the rationale behind decisions is crucial. While we may not need to grasp every detail, having access to sufficient explanations allows us to question, learn, and ensure that decisions are made ethically and legally. Your courtroom analogy highlights the need for a transparent chain of evidence, which is vital in maintaining digital trust. Thanks for sharing these insights and prompting a broader conversation on the necessity of explainability in decision systems!
Senior Project/Program Manager/Principal Consultant
I don’t profess to understand that much about AI, but my view is that it is based on logic, albeit highly data-oriented, and as such needs to be sufficiently explainable. Otherwise there is a danger that we, the human race, lose control, which is where it gets very scary...
Founder and CEO, Digital Governance Institute
Great piece, John. The nugget here is: "we don’t need all knowledge, we need enough knowledge to satisfy our needs, and we need to know that more knowledge is available if we want to dig (and learn) further." However, this is always subjective. What we have learned about knowledge and trust is that it is the perceiver's perception of the amount of knowledge needed to make decisions, not an objective analysis of all knowledge. People make decisions on various degrees of information (a key differentiator in decision-making styles). AI systems need to understand that.
Independent Legal Consultant for Docusign, Adobe, HM Land Registry, Digidentity, OneID, ShareRing, Ascertia, CSC, IoM Govt Digital Agency, Scrive #eidas #esignature #digitalidentity #blockchain #aml #ageverification
I enjoyed this. A good read, John.