Go Digital: Please can you explain how your AI system made its decision?

If you are reading this in the UK and can recall the chaos that ensued when an algorithm determined students’ grades, you will also have seen what happened once the basis on which the results were derived was explained. Under the algorithm, ‘high-achieving students at under-performing schools, many in deprived areas, saw their marks downgraded, while students at above-average schools kept their predicted grades.’

The explanation provided for these unfair and discriminatory outcomes triggered a very public outcry and forced accountability on the part of the government, which resulted in ‘students in England receiving the grades estimated by their teachers, unless the ones generated by the algorithm were higher. Education authorities in Scotland, Wales and Northern Ireland made similar moves.’

Have you ever been on the receiving end of an algorithmic decision that did not go the way you expected, without being given any explanation of the basis on which that decision was made?

Interpretability vs Explainability

The terms interpretability and explainability are often used interchangeably; however, it is important to understand how they differ.

This article provides a simple definition of each. It states: ‘Interpretability is about the extent to which a cause and effect can be observed within a system. Or, to put it another way, it is the extent to which you are able to predict what is going to happen, given a change in input or algorithmic parameters. It’s being able to look at an algorithm and go yep, I can see what’s happening here.’

‘Explainability, meanwhile, is the extent to which the internal mechanics of a machine or deep learning system can be explained in human terms.’ I would go further and suggest that explainability needs to be about the outcomes from those systems.

I would also suggest that the intended audience for interpretability is those who are closely involved with the lifecycle of the AI system, specifically the data scientists and engineers, while the intended audience for explainability is anyone in society who will be impacted by its outcomes.
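To make the distinction concrete, here is a minimal sketch in Python, using scikit-learn and an entirely invented loan-approval example (the dataset, feature names and wording are illustrative assumptions, not a prescribed method). The model’s coefficients are what a data scientist would inspect for interpretability; the plain-language sentence assembled at the end is the kind of explanation the affected applicant would actually need.

```python
# Hypothetical illustration: interpretability (for practitioners) vs
# explainability (for the person affected by the outcome).
# All data, feature names and thresholds below are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy data: [income in thousands, years at current address, missed payments]
X = np.array([
    [45, 3, 0], [22, 1, 2], [60, 5, 0], [30, 2, 1],
    [80, 7, 0], [25, 1, 3], [55, 4, 0], [35, 2, 2],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = loan approved, 0 = declined
feature_names = ["income", "years at address", "missed payments"]

model = LogisticRegression().fit(X, y)

# Interpretability: a data scientist reads the model's internals directly and
# can predict how the output shifts when an input changes.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight {coef:+.3f}")

# Explainability: the applicant needs the outcome in plain language, not the
# weights. A (very simplified) template-based explanation of one decision:
applicant = np.array([[28, 1, 2]])
decision = "approved" if model.predict(applicant)[0] == 1 else "declined"
contributions = model.coef_[0] * applicant[0]
weakest = feature_names[int(np.argmin(contributions))]
print(f"Your application was {decision}. "
      f"The factor that counted most against you was: {weakest}.")
```

Real systems would, of course, need far more rigorous attribution methods and carefully reviewed wording, but the split between the two audiences is the same.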

The success criteria for explainability

For me, the fundamental measure of success for explainability should be the ease with which anyone in society can understand how the outcome that impacted them was derived, and from which dataset.

It is a communications exercise that requires simple, concise and considered language, used in a way that a layperson can relate to and, more importantly, understand.

This is where a robust, operationalised AI Ethics and Governance framework, featuring diverse inputs and multi-stakeholder feedback throughout the lifecycle of algorithmic, AI and autonomous systems, can contribute to high standards of explainability: explanations that are far more likely to be understood by the laypeople in society they are written for.

What do we care about?

If you have been impacted by the outcome of algorithmic, AI and autonomous systems, you are likely to want to see how your personal data featured in the explanation provided.

As we see more privacy-related lawsuits, such as the one brought by the Irish Council for Civil Liberties against a branch of the Interactive Advertising Bureau (IAB) over what was described as "the world's largest data breach", and regulatory fines, such as the one imposed by Luxembourg’s National Commission for Data Protection (CNPD) on Amazon over data misuse, could a clear explanation of how personal data will be used by algorithmic, AI and autonomous systems, provided upfront as well as after the fact, help mitigate these consequences? For example, suppose you are informed upfront of how your personal data was collected and how algorithms would use it to derive decisions affecting your loan application with a particular lender. If you then discovered that a subset of the personal data collected was neither what you had consented to nor accurate, you might decide not to proceed with that lender, since the application would likely result in an unfavourable outcome for you.

Most people are certainly not aware of how their personal data is used by algorithmic, AI and autonomous systems to arrive at decisions that impact them, as explainability is not something that is typically disclosed upfront. Awareness of the unconsented use of personal data collected through external parties is nevertheless growing, as explained by the cartoon in this article, which illustrates people’s reactions where zero, first, second and third-party data is used, while this article echoes the growing sentiment around giving people back control over where and how their personal data should be used.

Are explanations convincing?

Biometric data is another area of concern, and this is reflected in Annex 3 of the proposed EU AI Act. Yet biometric data continues to be collected, as this recent article highlights: ‘What’s Amazon doing with this data exactly? Your palm print on its own might not do much — though Amazon says it uses an unspecified “subset” of anonymous palm data to improve the technology. But by linking it to your Amazon account, Amazon can use the data it collects, like shopping history, to target ads, offers and recommendations to you over time.’ Initial explanations need to be examined alongside the underlying intent and the nature of the business that the organisations collecting biometric data are in.

You might have also seen the news this week about Facebook’s plans to ‘analyze encrypted data without decrypting it’. The technique is called Homomorphic Encryption, and its standardisation body states that it ‘provides the ability to compute on data while the data is encrypted. This ground-breaking technology has enabled industry and government to provide never-before enabled capabilities for outsourced computation securely.’ It also describes how ‘More complex application scenarios can involve multiple parties with private data that a third party can operate on, and return the result to one or more of the participants to be decrypted.’ From a privacy perspective, it raises the question: why should encrypted data not remain inaccessible to those who have no right to access it?
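To illustrate what ‘compute on data while the data is encrypted’ means, here is a toy sketch of the Paillier cryptosystem, one well-known additively homomorphic scheme. It is purely a teaching example under stated assumptions: the primes are tiny and hard-coded, there is no security hardening, and it is not a description of how Facebook’s system works.

```python
# Toy Paillier cryptosystem: additively homomorphic, so a third party can add
# encrypted values without ever decrypting them. Illustrative only -- the
# primes are tiny and fixed, which is completely insecure in practice.
# Requires Python 3.9+ (math.lcm and pow(x, -1, n) for modular inverse).
import math
import random

p, q = 293, 433                      # small fixed primes, for demonstration
n = p * q
n_sq = n * n
g = n + 1                            # standard choice of generator
lam = math.lcm(p - 1, q - 1)         # Carmichael function of n

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)  # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

# Two parties encrypt their private values...
c1, c2 = encrypt(42), encrypt(100)

# ...and an untrusted server computes on the ciphertexts alone:
# multiplying Paillier ciphertexts adds the underlying plaintexts.
c_sum = (c1 * c2) % n_sq

print(decrypt(c_sum))  # 142, recoverable only by the key holder
```

The point of the design is that the party doing the arithmetic never holds the decryption key, so it learns nothing about the values it is combining; whether that answers the privacy question above is exactly the debate worth having.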

Personalisation appears to be the intended use of most of the personal data collected directly or indirectly, yet we seldom see any explanation of how the outcomes are derived. Some of you may have seen the Wall Street Journal clip about their investigation into how TikTok’s algorithms figure out your deepest desires. If you haven’t, it is worth watching, as you will, like me, discover how the algorithm can seek out vulnerabilities in its audience to deliver highly personalised content for the sake of increasing engagement! Whilst TikTok claims its content is moderated by humans, the algorithms appear to have been designed to seek out those vulnerabilities. There is a clear need for these algorithms to be bounded by ethical guardrails and prevented from seeking out those vulnerabilities in the first place!

Personalisation should have its ethical limits. The question is: will regulations address this specifically, and, until they become law and enforceable, what should organisations using algorithms for personalisation do to understand which lines they should not cross?

Explainable AI

Many thought leaders have been promoting Explainable AI to build trust, increase transparency and force accountability on the part of its creators.

As we have seen with the UK students’ grades debacle, being able to explain how the outcomes came about did increase transparency and facilitate accountability, but perhaps did not achieve as much success on the trust factor. You could argue that the algorithm was poorly designed, that the requirements were flawed, or that the data used to train the model was biased.

In my opinion, the quality and standard of explainability need to mature to a level that is understandable, meaningful and believable to the layperson impacted by the outcomes. Explainability, together with other criteria such as reliability, accuracy, transparency, fairness, ethics, compliance, safety and security, can then form the basis for the trust that people and society can start embracing.

Earlier, I canvassed the idea of introducing explainability upfront to declare the scope, nature, context and purpose of algorithmic, AI and autonomous systems, much like the health warnings on cigarette packets or the disclaimers about the performance of investment funds. Explainability of outcomes would then cover any concept drift or variation in the performance of those systems. The primary benefit of upfront explainability is to inform recipients of the expected outcomes through transparency and effective communication. Operationally, continuous monitoring of system performance, supplemented by robust controls and mitigation mechanisms, can reduce unintended outcomes, provided they have been anticipated, considered and examined through risk and impact assessments that incorporate diverse inputs and multi-stakeholder feedback. This, in turn, narrows the variation between the before and after scenarios.
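As one concrete, hypothetical form such continuous monitoring could take, the sketch below computes the Population Stability Index (PSI), a widely used drift indicator, for a single input feature. The data, thresholds and alert wording are illustrative assumptions; the commonly quoted 0.10 and 0.25 cut-offs are rules of thumb, not a standard.

```python
# Hypothetical monitoring sketch: compare a feature's live distribution
# against the distribution the model was trained on, using the Population
# Stability Index (PSI) as a simple drift indicator.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a recent sample of one feature."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)

    # Bin edges come from the baseline (training-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Widen the outer edges so out-of-range live values still land in a bin.
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Avoid division by zero / log of zero for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
training_incomes = rng.normal(50, 10, 5_000)   # baseline seen at training time
live_incomes = rng.normal(58, 12, 1_000)       # what the system sees today

psi = population_stability_index(training_incomes, live_incomes)
if psi > 0.25:            # commonly quoted rule of thumb, not a standard
    print(f"PSI={psi:.2f}: significant drift - trigger review and mitigation")
elif psi > 0.10:
    print(f"PSI={psi:.2f}: moderate drift - monitor closely")
else:
    print(f"PSI={psi:.2f}: distribution stable")
```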

In this article, the National Institute of Standards and Technology (NIST) ‘wants to figure out exactly how much you trust AI’. The answer will depend on who is asked, the context and the use case. I would suggest that in algorithmic, AI and autonomous system applications where the scope is targeted and limited, the context is contained, the nature is ethical, and the purpose is for the benefit of humans, most people would trust the use of AI. As for the other applications, the debate continues, and there is much to do before humans can readily trust the use of AI in those scenarios.

Explainability is one of many capabilities in the AI Ethics and Governance framework that need to mature further, to the level described above, for humans to engage effectively with, and build towards trusting, algorithmic, AI and autonomous systems.

Fundamentally, if you cannot explain to society how your algorithmic, AI and autonomous systems arrived at the decisions that produced outcomes impacting humans, then those systems are simply not needed!

I look forward to hearing your thoughts. Feel free to contact me via LinkedIn to discuss and explore how I can help.

Ryan Carrier, FHCA

Executive Director at ForHumanity/ President at ForHumanity Europe

3y

I would take Explainability one step further - an educational element. It shows a genuine sense of care for those who need the explanation. Favorable results from systems rarely need to be explained; it is the unfavorable ones that generate the desire for explainability. Rejection from a job, failure to get a loan. If we translate explainability into education, then we can teach those with an unfavorable outcome how they may improve the next time around - to me that is human. That is For...Humanity

Jeff Jockisch

Partner @ ObscureIQ | Data Broker Expert | Privacy Recovery for VIPs

3y

Great piece, Chris Leong! You break up Explainability and Interpretability. I like this approach as it allows more nuanced thinking about the inherent problems. I'm quite concerned with the black box problem of machine learning, especially as it relates to credit decisions, for instance, when we throw a massive amount of data into a system and then a result pops out. We know those results using deep learning models and exhibiting higher accuracy are less understandable to the human brain, less explainable in DARPA parlance, and less interpretable using your syntax. In your model, if a system has low Interpretability, can it still be explainable to a consumer?

Great piece - thank you for articulating it so clearly, Chris.
