Responsible Innovation: Part 1 – Why don’t we start with Purpose?
Leong & Santacaterina

After attending the Regulation and Risk Management of AI in Financial Services Conference earlier this month in London, we reflected on the views expressed by the speakers and panellists about the benefits as well as the risks of using AI/ML in Financial Services Institutions. We heard views from those who believe that prescriptive AI regulation will stymie innovation. Conversely, others believe that we need more clarity and regulatory certainty around AI.

For the time being at least, there is no clear definition of AI or its appropriate uses, and many users, including senior managers, lack the statistical literacy required to make the best use of these computational tools. If statistics are misunderstood, or mathematical accuracy is mistaken for certainty (commercially available models typically perform to around 95% accuracy), the result can be misguided decision-making. Even with a high accuracy rate, the remaining error margin may have a detrimental impact on the human consumer.
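
To make the point concrete, the short sketch below uses purely illustrative numbers of our own to show how a model that is ‘95% accurate’ can still be wrong for most of the individuals it flags when the outcome it predicts is rare:

```python
# A minimal sketch (illustrative numbers, not drawn from any real model) showing
# why "95% accurate" is not the same as "95% certain" for an individual decision.
# Assume a screening model flags customers for an adverse outcome that affects
# only 2% of the population, with 95% sensitivity and 95% specificity.

population = 100_000
base_rate = 0.02          # 2% of customers genuinely at risk
sensitivity = 0.95        # true positives / actual positives
specificity = 0.95        # true negatives / actual negatives

actual_positives = population * base_rate
actual_negatives = population - actual_positives

true_positives = actual_positives * sensitivity
false_positives = actual_negatives * (1 - specificity)

# Probability that a flagged customer is genuinely at risk (precision).
precision = true_positives / (true_positives + false_positives)

print(f"Customers flagged: {true_positives + false_positives:.0f}")
print(f"Flagged in error:  {false_positives:.0f}")
print(f"Chance a flag is correct: {precision:.0%}")   # roughly 28%
```

In this illustration, fewer than a third of the flagged customers are genuinely at risk, which is why accuracy figures should never be read as certainty about any individual decision.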

Many of the regulators were represented too, and they reiterated the need to regulate the markets and protect the human consumers of digital services from adverse outcomes arising from the deployment of AI/ML. We heard confirmation that existing regulations do apply to the outcomes and impacts arising from the use of AI/ML. However, despite existing legislation, there appears to be little recourse for those who have suffered harm, short of engaging legal representatives, which can prove costly.

Interestingly, many of the conversations at the Conference centred on Model Risk Management (MRM) and use cases where the enterprise had direct control over the lifecycle of the AI/ML models. When use cases involving third-party systems, applications, solutions, platforms, chatbots, apps and services with embedded AI/ML were cited, such as in hiring, they were categorised as “Shadow IT.” Hence their existence fell outside the remit of those responsible for internal AI/ML models, meaning any scrutiny, risk management, oversight and controls of third-party systems and their performance would need to be verified separately.

The Financial Services Industry is highly regulated. In the absence of AI-specific regulation (for example, the EU AI Act due to be enacted in 2023), the industry is seeking guidance on the safe adoption of AI/ML systems. We believe that a broader and more holistic view of how these technologies should be adopted is needed, and this transcends the scientific disciplines and the governance, compliance and risk functions currently driving their adoption.

Whilst there appears to be robust control exercised by teams designing, developing and deploying fully engineered models to deliver financial benefits for the banks adopting AI/ML systems, these capabilities are very much an extension of the internal legacy models that have been in use over the past decades, rather than new capabilities deployed to better serve and create more positive impacts for people consuming digital services in B2C settings, such as retail banking, wealth management and insurance.

We refer to the systems in these settings as Socio-Technical Systems (STS) as they are not only designed to interact autonomously with employees, customers and consumers of digital services but also impact people directly and instantly through automated decision-making and profiling. STS are complex adaptive systems with an interdependent, interconnected and intricate set of components that transcend disciplines. You could liken these components to the organs and connective tissues in the human body. STS can appear as platforms, apps, smart devices, chatbots, recommendation systems or online assessment portals.

When we then add third-party systems, applications, solutions, platforms, chatbots, apps and services procured by the business functions that have AI/ML embedded within them, we start expanding the footprint of use cases that require constant scrutiny, continuous monitoring, risk management and oversight to ensure that the potential for adverse outcomes is mitigated successfully.

We reference the report from the Artificial Intelligence Public-Private Forum, jointly published earlier this year by the Bank of England and the UK’s Financial Conduct Authority, as ‘The Report’, to illustrate some crucial points we raised in our previous articles, which are reinforced within this context.

We also reference the MIT Sloan article on Responsible AI – Executives Are Coming to See RAI as More Than Just a Technology Issue, as ‘The Article’, in which a panel of experts provided their views on responsible AI governance, which extends beyond technology leadership.

We will anchor our discussions around the main image across three articles, starting with Purpose and working our way through to the outermost layer. In this part, we start with Purpose and explore the technology and data components of the STS.

We see that much more can be done if Financial Services Institutions are open to thinking differently and aspire to innovate responsibly, operate more efficiently and achieve beneficial outcomes for their human consumers.

Start with Purpose, Start with WHY

If you are familiar with Simon Sinek, you will know that ‘The Golden Circle’ provides “a framework upon which organizations can be built, movements can be led and people can be inspired.” All of this starts with WHY.

Everything that we do should originate with its Purpose.

However, Purpose needs to be considered both from the perspective of the organisation deploying the STS and from the perspective of those who will be impacted.

How will it impact employees, customers, partners, suppliers and shareholders, as well as communities, society and the environment?

At the Conference, we were pleased to hear caution expressed by the speakers about the repurposing of ML models.

We have also seen reported instances where an online personality assessment tool designed for hiring was repurposed for decisions on redundancies, as shown in the BBC documentary ‘Computer Says No.’

There is an urgent need to carefully consider the impact STS can have on society. We need a clear understanding of their purpose as well as the scope, context and nature of these systems, applications, solutions, platforms, chatbots, apps and services, within the context of their outcomes and potential impacts. This is business critical for the deploying organisation and for those accountable within it, specifically the CEO and Board members.

How do organisations ensure that useful and meaningful human interactions can be afforded by the STS deployed in the real world?

Your Inventory

The Report focused heavily on internal ML models, where risks are typically managed through Model Risk Management (MRM) practices. We expect larger and better-resourced Financial Services Institutions to have the capabilities to build their internal models, but those adopting Buy rather than Build strategies will have recourse to third-party systems, applications, solutions, platforms, chatbots, apps and services procured by the business functions, which have embedded AI/ML.

The Report pointed out that:

“Most AI applications currently used in financial services are static.”

“One example of the new challenges arises from the dynamic nature of some AI models - their ability to learn continuously from live data and generate outputs that change accordingly. While dynamic AI models could outperform static models by adapting to changing data inputs (data drift) or changes in the statistical properties of the data (concept drift), firms should align governance processes and assessment of model risk to the adaptation cycle.”

“MRM for AI models requires, among other things: greater understanding of the use of hyperparameters; understanding and managing issues involving explainability and reproducibility; and data privacy and bias risks, which can also flow through to models and algorithms. These issues become even more challenging when using third-party models.”

“However, while there are clear benefits to using third-party models and data, there are many associated challenges, particularly in ensuring full due diligence of the model that is being outsourced.”
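
The data drift and concept drift that The Report describes can be checked with relatively simple statistical monitoring. The sketch below is our own illustration (not drawn from The Report), using synthetic numbers and an assumed feature, to show how a live feature distribution can be compared against its training baseline:

```python
# A minimal sketch of data-drift monitoring for a tabular model; the feature
# and the alert threshold are illustrative assumptions, not prescriptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for one feature column at training time and in live operation.
training_income = rng.normal(loc=30_000, scale=8_000, size=5_000)
live_income = rng.normal(loc=34_000, scale=9_000, size=5_000)  # distribution has shifted

# Two-sample Kolmogorov-Smirnov test: has the live distribution drifted
# away from the training baseline?
result = ks_2samp(training_income, live_income)

if result.pvalue < 0.01:  # illustrative threshold; governance should set this per model
    print(f"Drift detected (KS statistic {result.statistic:.3f}); trigger model review.")
else:
    print("No material drift detected in this feature.")
```

Simple checks of this kind do not replace MRM, but they give the adaptation cycle something measurable for governance processes to act upon.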

Regardless of whether your organisation has elected to Build or Buy, or your enterprise does both, there should be a detailed inventory of all models, systems, applications, solutions, platforms, chatbots, apps and services in use with embedded AI/ML, whether built internally or procured from third-party providers. The inventory should be well maintained centrally and remain accessible across your group organisations.
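
To make this concrete, the sketch below shows what a minimal, centrally maintained inventory record might look like; the field names are our own assumptions for illustration rather than any prescribed standard:

```python
# A minimal sketch of an AI/ML inventory record; field names are assumptions
# for illustration, not a prescribed regulatory schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIInventoryEntry:
    name: str                      # system, model, app or service name
    owner: str                     # accountable business owner
    provider: str                  # "internal" or the third-party vendor
    purpose: str                   # the stated purpose of the system
    affected_parties: list[str]    # e.g. ["customers", "employees"]
    data_sources: list[str]        # provenance of training and input data
    automated_decisions: bool      # does it make decisions about people?
    last_reviewed: date            # date of last risk/oversight review

registry: list[AIInventoryEntry] = [
    AIInventoryEntry(
        name="Retail credit pre-screening model",
        owner="Head of Retail Lending",
        provider="internal",
        purpose="Rank applicants for manual underwriting",
        affected_parties=["customers"],
        data_sources=["core banking", "credit bureau"],
        automated_decisions=True,
        last_reviewed=date(2022, 9, 1),
    ),
]

# A simple central view: everything that makes automated decisions about people.
for entry in registry:
    if entry.automated_decisions:
        print(entry.name, "-", entry.owner)
```

Whatever form the inventory takes, the essential point is that purpose, ownership, provenance and review status are recorded for every system, whether built or bought.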

Data is the fuel

Data is the fuel for these models, from their inception through to their training and production in live operations. Every organisation aspiring to be digital first understands the importance and value of data, but they should not become overly reliant on data alone. There is such a thing as excess automation.

Whilst the quality and provenance of the data are of paramount importance, vast amounts of data are required to train these ML models. Where the organisation requires more data than it legally possesses to profile the customer or consumer of its digital services, the temptation to procure additional data is heightened by the ease of access through third-party data brokers.

The Report highlights:

“The use of alternative data by AI systems is one of the main ways that AI could exacerbate data quality issues. This is partly because these data are often sourced from third-party providers, which presents additional challenges relating to quality, provenance, and sometimes, legality.”

“There are also challenges in understanding the provenance and legal status of data sourced from vendors that scrape website data or collate it from a range of sources.”

“This can create risks for firms and may have implications for consumers. These include questions on what data customers are willing to give up for free (e.g., via social media sites or consent to cookies), and how those may be used both in financial services and other sectors.”

“A further concern is the potential unexpected outcomes from the use of data intended for different purposes.”

“Another challenge for firms is that AI can introduce non-financial risks that are less well understood, such as those involving data privacy and protection. This is important to consider as protected characteristics may be needed to measure or establish the fairness of AI predictions.”

As Data Controllers, large established organisations need to account for all data within the enterprise, as well as within the supply chain. They will also need to ensure that their obligations under the General Data Protection Regulation (GDPR) are always upheld, particularly when personal data is used with AI/ML.

Data minimisation obligations, as well as all rights afforded to Data Subjects by GDPR, including those that relate to automated decision-making and profiling, need to be complied with.

We previously highlighted issues surrounding digital profiles in this article, in light of the recent Class action against Oracle’s worldwide surveillance machine. CEOs and their Boards need to be aware of the potential regulatory, legal and ethical risks that accompany their organisation’s quest to implement personalisation capabilities, particularly when they rely on data that was not sourced directly, with informed consent from individual Data Subjects, for the specific purpose of a given digital product or service.

Whilst there is a focus on data quality by those developing internal ML models, they should not deprioritise other equally important regulatory, social and ethical requirements.

Explainability facilitating Transparency

Whether your ML system or application is:

· Descriptive, where it analyses data and outlines what has happened; or,

· Predictive, where it analyses data and infers what will happen; or,

· Prescriptive, where it analyses data and proposes several options for the next steps;

and whether it is trained using supervised, unsupervised or reinforcement learning techniques, your organisation must be able to explain how the outputs were derived, especially if they directly and instantly impact people consuming your digital services.
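
To illustrate what a model-level explanation can look like in practice, the sketch below is our own example on synthetic data, using scikit-learn and assumed feature names, to surface which inputs most influenced the model’s outputs:

```python
# A minimal sketch of model-level explainability, assuming a simple tabular
# classifier; feature names and data are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "credit_utilisation", "months_at_address"]  # assumed

# Synthetic training data standing in for a real, governed dataset.
X = rng.normal(size=(1_000, 3))
y = (X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Permutation importance: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in sorted(
    zip(feature_names, result.importances_mean), key=lambda item: -item[1]
):
    print(f"{name:>20}: {importance:.3f}")
```

A ranked list of this kind is not a consumer-facing explanation in itself, but it provides a traceable basis from which clear, meaningful communications can be crafted.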

Predictive algorithms are widely deployed in automated decision-making and profiling use cases across industries. If you are concerned about their potential adverse impacts and are open to mitigating their downside risks, this paper discusses ‘Why Predictive Algorithms are So Risky for Public Sector Bodies’.

The Report cites the following:

“Explainability, while still important, becomes part of a much broader requirement on firms to communicate decisions in meaningful and actionable ways. From this perspective, the focus is not just on model features and important parameters, but also on consumer engagement, and clear communications.”

We have seen tools that enable teams developing models with AI/ML to qualify data outputs with full traceability of the decision-making process, effectively bringing transparency into the models and, notably, turning the black box clear. This enables organisations to craft and communicate explanations that can be understood by the customer or the consumer of automated decisions and profiling.

As regulators demand greater transparency into how AI has impacted people, such tools will no doubt make a difference to organisations that are finding it challenging to explain their models, as The Report confirms:

“At the model level, there are challenges in explaining and documenting the workings and outputs of complex models, as well as ensuring appropriate governance around using such data and models.”

Since many organisations that develop their models are protective of their Intellectual Property (IP), there is a balance that needs to be struck to meet Transparency obligations. We are certainly not suggesting that IP be disclosed to meet transparency requirements. However, we believe that explainability can be communicated in ways that can be shared with external stakeholders and meet Transparency obligations, without disclosing any trade secrets or IP.

Security & Infrastructure

In a digital world that continues to expand, cyber threats remain a significant risk for any organisation operating within it. The interconnectedness of digital ecosystems and the reliance on shared infrastructure could increase the severity of any impact resulting from a cyberattack, whether for regulated Financial Services Institutions or for any organisation across sectors.

The Report noted this:

“Certain adaptive AI models need large amounts of data and are open to ‘adversarial attacks’, which can expose firms to even greater cyber risks”

Whilst organisations developing their models may have greater control over security matters, a question mark remains over how models, systems, applications, solutions, platforms, chatbots, apps and services with embedded AI/ML from third-party providers are scrutinised, managed and monitored for a variety of risks.

With greater complexity added through the introduction of non-deterministic and stochastic AI/ML models, trained on data potentially sourced from second- and third-party providers and introducing new adversarial attack vectors, are organisations viewing their security requirements differently?

The use of open-source assets is also a popular practice, as their reusability facilitates a shorter time to market for new products and services. The following articles provide some food for thought on potential areas of vulnerability:

· How Hackers Infiltrate Open Source Projects

· Mitigating a cybersecurity nightmare: avoid open source software

· Open-source security: It's too easy to upload 'devastating' malicious packages, warns Google

If your models, systems, applications, solutions, platforms, chatbots, apps and services include open-source assets, have they been scrutinised by your security teams?
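
One modest, concrete check, sketched below with an assumed artefact name and a placeholder for the published digest, is to verify the integrity of any open-source asset against its published checksum before it enters your build:

```python
# A minimal sketch of verifying a downloaded open-source artefact against its
# published SHA-256 checksum; the file path and expected digest are assumptions.
import hashlib
from pathlib import Path

ARTEFACT = Path("third_party/some-library-1.2.3.tar.gz")           # assumed path
EXPECTED_SHA256 = "<digest published by the project maintainers>"  # placeholder

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(ARTEFACT)
if actual != EXPECTED_SHA256:
    raise SystemExit(f"Checksum mismatch for {ARTEFACT}: refuse to use this artefact.")
print(f"{ARTEFACT} verified: {actual}")
```

Checksum verification is no substitute for the deeper scrutiny your security teams should apply, but it closes one common route through which tampered packages enter a build.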

We will cover the Socio-layers of our Responsible Innovation framework in the next part of this article.

Chris Leong?is a Fellow and Certified Auditor (FHCA) at ForHumanity and the Director of Leong Solutions Limited, a UK-based management consultancy and licensee of ForHumanity’s Independent Audit of AI Systems.

Maria Santacaterina?is a Fellow and Certified Auditor (FHCA) at ForHumanity, CEO and Founder of SANTACATERINA, a UK-based Global Strategic Leadership & Board Executive Advisory, helping organisations build a sustainable digital future.

Peter Gruben

People Performance Booster

Great summary Maria, I feel that AI can be used in a range of areas within the financial services industry. However, I also believe that this shouldn’t be a discussion about a 100% transition to AI. It is important to understand that we are at the very beginning of AI and that there are plenty of blurred regulatory lines, unknown risks and limitations depending on where it is deployed. When I think of purpose, I feel a good stakeholder analysis matching the purpose could be helpful. Great topic and so many perspectives.

James Scott Cardinal

Archaeologist and Data Scientist | Patterned Data is Information, but Patterned Information is Knowledge

It may seem oddly unrelated, but Dr. Jennifer Loughmiller-Cardinal and I published an article a couple years ago on the concept of materiality as a triangulation between use, purpose, and function as a holistic approach to understanding how the three relate and interact. Archaeological artifacts might be a long ways from innovation in ML/AI, but you might find the formulation interesting. There are some parallels to the questions you address here. https://doi.org/10.3390/heritage3030034
