What explainable AI can teach us about good policies
Image generated by AI - at least that's my explanation...


Imagine that you are in court having to make a case. Your license to continue operating as a business is on the line. You are painfully aware that there is a large, complex legal framework consisting of many (many) laws, regulations, policies and accepted practices that you need to carefully navigate. Your evidence is mixed, of various types, and has both relevant and irrelevant elements (and you are not sure which is which). Your interests and exposures span several jurisdictions. You want to do the right thing, you believe you are doing the right thing, but now you need to prove it.

Other than losing sleep, what do you do?

Many would suggest that, along with calling a really good lawyer, you should press the magic AI button.

But this button brings concerns. The answer can’t be a “black box”: we will have to explain how we are making our case, what evidence we are using, and how that supports our position in the context of the relevant regulations.

This type of example makes evident the need for “explainable AI”: an approach that aims to make the decision-making processes of AI systems understandable and transparent to humans.

Towards the end of 2024 we were working with a team of experts who were building a proof of concept that made good use of explainable AI for a client (and in fact, we hope, the rest of the world...). Along the way we used a version of this courtroom scenario to help explain how we were using AI, why it needed to provide a chain of evidence, and why the answers needed to be “explainable”.

The experts from PyxGlobal that we were working with are ahead of most on explainable AI. They understood that there are issues to do with IP, commercial confidentiality and rights to access. To continue our courtroom metaphor: we will have to prove that we have the right to use the evidence that we base our case on, that we have gathered the evidence legally, and (depending on the nature of our trial) we may even reserve the right not to disclose evidence that we have gathered.

This work (which I'll explain in another post, and which we are very much looking forward to continuing with in 2025) taught me many things. One of them is that the rigor we demand of AI - that the answers it provides should be explainable, and have a chain of evidence (citations etc.) that we can explore and test - applies just as well to each and every “system” that reaches determinations and makes decisions.

I think we should expect “explainable” from every system where it isn’t immediately evident why a decision or conclusion was made. This is not limited to technology systems. We should also expect explainable policy decisions, explainable governance decisions, explainable regulatory decisions, and explainable human decisions.

Good designs for all such systems, be they human, technology assisted, or autonomous technology, should consider how their logic can be explained.
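To make that idea a little more concrete, here is a minimal sketch in Python of what a decision that carries its own chain of evidence might look like as a data structure. It is purely illustrative: the class names, fields and example content are my own invention for this post, not a description of how the proof of concept mentioned above actually works.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    """One item in the chain of evidence behind a decision (illustrative only)."""
    source: str             # where the evidence came from (document, regulation, dataset)
    citation: str           # how to find it again (clause, URL, record ID)
    relevant_because: str   # short statement of why it supports the conclusion

@dataclass
class Decision:
    """A decision that travels with its own explanation (illustrative only)."""
    conclusion: str
    evidence: List[Evidence] = field(default_factory=list)

    def explain(self) -> str:
        """Render a human-readable explanation of the decision."""
        lines = [f"Conclusion: {self.conclusion}"]
        for i, item in enumerate(self.evidence, start=1):
            lines.append(f"  {i}. {item.source} ({item.citation}): {item.relevant_because}")
        return "\n".join(lines)

# Hypothetical example: a licensing determination that can be questioned after the fact.
decision = Decision(
    conclusion="License renewal criteria appear to be met.",
    evidence=[
        Evidence(
            source="Operating license conditions",
            citation="Condition 4.2",
            relevant_because="Annual audit was lodged before the deadline.",
        ),
        Evidence(
            source="Independent audit report 2024",
            citation="Section 3, findings summary",
            relevant_because="No material non-compliance was identified.",
        ),
    ],
)
print(decision.explain())

The point is not these particular fields; it is that the conclusion never travels without the evidence that supports it, so the reasoning can be examined and tested later.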

Get real

Sometimes, of course, we might require the ability to explain after the event, not before confirming the decision. I want my automatic braking system to brake, not to ask me whether I think it should brake based on the evidence of an impending crash that it presents to me in compelling logic on the dash. By all means (in fact, ‘please’) brake now, but let me have the chance to ask questions and examine the evidence later.

Whether we demand explanations before confirming a decision, or as part of exploring a decision that has been made, we should run the “explainable” ruler across all manner of life-impacting decision systems. For example:

  • Licenses
  • Insurance
  • Loans
  • Citizenship
  • Policy and regulatory determinations

But you can’t handle the truth…

Then there is, of course, the reality that complex systems are too complex for most of us to understand. I think that’s true; in fact, I think it’s true for all systems and all of us. Think of how many PhDs and lifetimes you would need to “understand” the full explanation of an airplane autopilot, a car’s cruise control, or even a mobile phone notification system.

But we don’t need the “whole truth”, in the sense that we don’t need to understand everything everywhere, all at once. Explainability and understanding have the quality of sufficiency about them. We don’t need all knowledge, we need enough knowledge to satisfy our needs, and we need to know that more knowledge is available if we want to dig (and learn) further.

Another angle on "the truth" is that we should still expect that AI systems, indeed all systems, can make "mistakes". The fact that they can and should explain their logic doesn't make their logic right; it makes it explainable. Facts can be omitted or selectively chosen to explain an action. And as (at least) one judge memorably explained to a defendant: your defense explains your actions, it doesn't justify them.

So should we apply the rigor that we expect of AI to explain itself to other systems? I certainly think so. What do you think?

Sezoo

#digitaltrust

Jim Mason

Chief Solutions Architect, Public Sector at TrustGrid | Blockchain Practice Leader | Hyperledger and INATBA

1 month ago

There is a set of characteristics (explainable, verifiable, accurate, timely, etc.) that are core for any good system. AI is just another system that needs to meet those requirements.

Julia Markram

Executive Director at Loading Growth | IT Growth Specialist | Business Development Leader

1 month ago

This is a compelling exploration of the importance of explainability, not just in AI, but across all systems that influence critical decisions. The concept of explainable AI is essential for building trust, ensuring accountability, and providing transparency in decision-making processes. As you aptly point out, this rigor should extend to policy, governance, and other human systems where understanding the rationale behind decisions is crucial. While we may not need to grasp every detail, having access to sufficient explanations allows us to question, learn, and ensure that decisions are made ethically and legally. Your courtroom analogy highlights the need for a transparent chain of evidence, which is vital in maintaining digital trust. Thanks for sharing these insights and prompting a broader conversation on the necessity of explainability in decision systems!

Tony Rumble

Senior Project/Program Manager/Principal Consultant

1 month ago

I don’t profess to understand that much about AI, but my view is that it is based on logic, albeit highly data-oriented, and as such needs to be sufficiently explainable; otherwise there is a danger that we, the human race, lose control, which is where it gets very scary….

Scott Perry

Founder and CEO, Digital Governance Institute

1 month ago

Great piece, John. The nugget here is "we don’t need all knowledge, we need enough knowledge to satisfy our needs, and we need to know that more knowledge is available if we want to dig (and learn) further." However, this is always subjective. What we have learned about knowledge and trust is that it is the perceiver's perception of the amount of knowledge needed to make decisions, not an objective analysis of all knowledge. People make decisions on various degrees of information (a key differentiator in decision-making styles). AI systems need to understand that.

Richard Oliphant

Independent Legal Consultant for Docusign, Adobe, HM Land Registry, Digidentity, OneID, ShareRing, Ascertia, CSC, IoM Govt Digital Agency, Scrive #eidas #esignature #digitalidentity #blockchain #aml #ageverification

1 month ago

I enjoyed this. A good read, John.
