Responsible Innovation: Part 3 – What are your Growth Catalysts?
Leong & Santacaterina


Growth Catalyst – Risk Management

Banks developing AI/ML models internally have adopted Model Risk Management (MRM) principles and frameworks. The Report noted:

“Some UK banks have used the SR11-7 principles as well as the PRA’s Model Risk Management Principles for Stress Testing as templates, to help with the lack of clarity in the UK guidance.”

“Complexity is the key challenge for managing the risks arising from AI models. This includes complexity of the inputs (multiple input layers and dimensions); relationships between variables; the intricacies of the models themselves (e.g. deep learning models); and the outputs, which may be actions, algorithms, unstructured (e.g. images or text), and/or quantitative.”

Whilst there is a focus on MRM for internally developed models, we see a variety of other risks related to the use of AI/ML-powered digital services that need to be managed, specifically when those services rely on third-party systems, applications, solutions, platforms, apps and services. If you are an established global organisation, chances are you are already using AI/ML-powered tools in your HR function for recruitment; if you have operations in New York City, those tools will be subject to the New York City Council’s Local Law No. 144 from 1st January 2023.
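For context, bias audits under Local Law No. 144 centre on selection rates and impact ratios for automated employment decision tools. The sketch below is a minimal illustration of that calculation only; the category names and numbers are hypothetical, and it is not a substitute for a compliant audit.

```python
# Minimal sketch of the impact-ratio calculation reported in bias audits
# under NYC Local Law No. 144. Category names and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a category who were selected."""
    return selected / applicants

# Hypothetical applicant and selection counts per demographic category.
outcomes = {
    "category_a": {"applicants": 400, "selected": 120},
    "category_b": {"applicants": 300, "selected": 60},
}

rates = {name: selection_rate(c["selected"], c["applicants"]) for name, c in outcomes.items()}
highest = max(rates.values())

# Impact ratio: a category's selection rate relative to the most-selected
# category; 1.0 indicates parity, lower values indicate disparity.
for name, rate in rates.items():
    print(f"{name}: selection rate {rate:.2f}, impact ratio {rate / highest:.2f}")
```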

We explored what these business risks might be in one of our previous articles, Have you unwittingly onboarded business risks through Socio-Technical Systems?

When we consider all the interdisciplinary intricacies, interconnectivity and interdependencies of the constituent parts of STS, it is crucial to understand how they need to be orchestrated to function optimally and beneficially for the recipient, namely the human consumer.

The risks that are associated with these constituent parts include:

·      operational risk – STS are operational systems, as they impact human consumers directly and instantly.

·      third-party risk – where STS are deployed through third-party providers, risks need to be managed as part of the enterprise and value chain, rather than outside it and delegated.

·      compliance risk, regulatory risk – existing and future regulations will impact existing STS within your enterprise and value chain. Breaches will be costly in many ways.

·      governance risk – the S-Y-S-T-E-M should be governed as a whole, not in silos.

·      legal risk, financial risk – with the increase in civil and class-action litigation resulting from adverse outcomes to human consumers, your Legal Counsel should be aware of your organisation’s exposures and your CFO involved in provisioning for potential losses. Planning ahead in these areas will also deliver more positive outcomes in the long run, rather than reacting post hoc.

·      people risk, transformation risk, change risk – see our previous article: Do organisations have the right Mindset, Talent and Culture?

·      reputation risk, competition risk – your brand reputation contributes to trustworthiness. In a competitive digital space, organisations need to differentiate through trustworthiness. Without trust there will be no engagement; without engagement there will be no first-party data. We all know how valuable first-party data and engagement are for digital growth.

·      cybersecurity risk – cyber threats still keep CISOs and CEOs awake at night. STS introduce additional attack vectors, so their vulnerabilities need to be scrutinised.

Collectively, they reflect organisational risk and represent business risk. How many of the above risks are managed as part of your STS deployment, including those by your third-party providers?
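One pragmatic way to answer that question is to maintain a risk register that spans all of the categories above, with a named owner and a managed/unmanaged status for each. The sketch below is a minimal, hypothetical example; the owners and statuses are illustrative assumptions, not a prescribed mapping.

```python
# Minimal sketch of an STS risk register covering the categories above.
# Owners and managed/unmanaged statuses are hypothetical.

from dataclasses import dataclass

@dataclass
class Risk:
    category: str
    owner: str     # accountable function (illustrative)
    managed: bool  # actively monitored, managed and mitigated?

register = [
    Risk("operational", "COO", True),
    Risk("third-party", "Procurement", False),
    Risk("compliance, regulatory", "Compliance", True),
    Risk("governance", "Board", False),
    Risk("legal, financial", "Legal Counsel / CFO", True),
    Risk("people, transformation, change", "CHRO", False),
    Risk("reputation, competition", "CMO", False),
    Risk("cybersecurity", "CISO", True),
]

# Surface the gaps: unmanaged categories are candidate business risks.
gaps = [r.category for r in register if not r.managed]
print(f"{len(register) - len(gaps)}/{len(register)} risk categories managed")
print("Gaps:", "; ".join(gaps))
```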

Where residual risks are accepted with accountability, they need to be monitored, managed and mitigated before they manifest into issues that can materially impact the business. Failure to manage these risks collectively can also impact the organisation’s performance and bottom line, hence we denote it in our diagram as one of the outer layers that form the suite of Operational Safeguards.

Risk Management is critical and, in the context of STS, needs to be operationalised holistically, orchestrated across all functions and multiple disciplines, and executed cohesively rather than in silos, given the systemic nature of responsible innovation.

Robust and effective risk management is a Catalyst for Growth.

Growth Catalyst – Trustworthiness

Rachel Botsman defines Trust as a “confident relationship with the unknown.” She also describes it as “a beautiful alchemy of expectations and vulnerability,” “a mixture of our hopes and fears, and why it hurts when it breaks down when people abuse our trust.”

Trust is contextual and subjective. We may even suspend disbelief in order to entertain new possibilities. More importantly, Trust is something we decide whether to give to others. To put it another way, trust needs to be earned by one human being from another. It develops over time, as confidence is consistently renewed in subsequent interactions between people. Intellectual honesty and reliability are fundamental to the exchange of ideas or information shared from one conversation to another.

But in today's world, we are constantly interacting with software systems or artificial agents, which have a limited view of our reality and do not have the capacity to synthesise past, present and future. Transparency is a necessary part of the process for organisations to begin restoring trust following the deployment of STS, where it may have been eroded by sub-optimal outcomes.

When the concept is applied to information about everything we do, improved disclosure around how algorithms work, and how organisations use them to process our data to infer potential outcomes (in the form of automated decisions and profiling), becomes a practical necessity. This is especially true when we interact with STS, so that we can have some assurance that the intentions of the deploying organisation are aligned with our own. All of this sits against the backdrop of a continuing increase in adverse outcomes impacting real people (‘natural persons’), not to mention the increasingly intrusive surveillance of our personal lives on a global scale.

The Report proposes:

“Auditing and certification of AI models could help to address many of the governance challenges, as well as address the wider questions around transparency and explainability that could help build broader trust in the technology.”

We outlined our concerns previously with regard to practitioners focusing only on the AI/ML models, where these are part of the STS. We also highlighted the need to scrutinise, manage and monitor a variety of risks relating to models, as well as to systems, applications, solutions, platforms, chatbots, apps and services with embedded AI/ML, whether built in-house or procured from third-party providers.

We then highlighted the key roles that the Socio-layers (Core Values & Ethics, Regulations & Policies, Processes & Procedures, People, Leadership and Culture) play in the development, procurement, deployment and operationalisation of STS that are engaging, inclusive, safe and trustworthy.

Therefore, we believe that auditing the AI/ML models in STS is necessary, but not sufficient, to ensure that Human Autonomy, Human Agency and Human-Centricity considerations are afforded to the human consumer.

It is true that different approaches and frameworks for auditing AI are emerging, and global standards have not yet been established. Some convergence may occur over time, driven by their efficacy in mitigating downside risks and creating the layers of trustworthiness that consumers of digital services expect from STS deployed by private and public entities.

Ryan Carrier, Executive Director of ForHumanity, explains that the Independent Audit of AI Systems (IAAIS) framework is “a systematic risk mitigation process to ensure that governance, oversight and accountability of AI, Algorithmic & Autonomous (AAA) Systems reflects the legal requirements, such as GDPR, the EU AI Act, the Children’s Code and the Digital Services Act; in addition to the best practices in managing the risk of harms to humans from Socio-Technical Systems.”

ForHumanity believes that a global, harmonised, binary (compliant/non-compliant) set of criteria, approved by governments and regulators and independently verified by certifying bodies, can create an Infrastructure of Trust for the public.

An Infrastructure of Trust enables segregation of duties, conducted by certified and trained experts, establishing a robust ecosystem that engenders trust for all citizens and protects those who have no power or control.

He further explains that “ForHumanity’s system is grounded on four core tenets:

1.      ForHumanity produces accessible, binary (compliant/not compliant) certification criteria that transparently and inclusively align with legal requirements, for example in the United Kingdom and other jurisdictions (e.g. GDPR, EU AI Act), and that embed compliance and performance in practice.

The certification criteria take into account corporate wisdom, but remain impervious to corporate dilution and undue influence, whilst being mindful of the regulatory burden, in order to optimise risk mitigations for humans.

2.      ForHumanity Certified Auditors (FHCAs) are specifically trained and accredited on the relevant certification criteria. They are held to a high standard of behaviour and professionalism as described in the ForHumanity Code of Ethics and Professional Conduct.

3.      Certification Bodies employ FHCAs to independently assure compliance with the relevant certification criteria on behalf of the public. These Certification Bodies are independent, robust organisations, licensed to perform IAAIS audits on behalf of the public and to ascertain that the audited systems comply with the certification criteria. They are held to the highest ethical standards and are subject to third-party oversight by entities such as national accreditation bodies (e.g. UKAS, DAkkS).

4.      Corporations can use the IAAIS audit criteria to operationalise governance, oversight and accountability for their AAA systems, helping them satisfy legal and regulatory requirements in a comprehensive manner. This enables organisations to leverage their governance, oversight and accountability structures to reduce the risks of negative outcomes for their stakeholders and deliver more sustainable profitability.”
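To illustrate what a binary (compliant/non-compliant) criteria set means in practice, consider the minimal sketch below. The criteria names and evaluation logic are hypothetical illustrations, not ForHumanity’s actual audit criteria.

```python
# Hypothetical sketch of binary certification criteria: each criterion is
# either compliant or non-compliant, with no partial credit.
# Criterion names are illustrative, not ForHumanity's actual criteria.

criteria = {
    "documented_risk_assessment": True,
    "data_protection_impact_assessment": True,
    "human_oversight_mechanism": False,
    "stakeholder_feedback_loop": True,
}

failed = [name for name, compliant in criteria.items() if not compliant]

# Certification is all-or-nothing: one failed criterion fails the audit.
if failed:
    print("NON-COMPLIANT. Failed criteria:", ", ".join(failed))
else:
    print("COMPLIANT with all certification criteria.")
```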

Having independence baked into each dimension of any External Audit Framework ecosystem can only increase the integrity and credibility of the process and practitioners, hence contributing towards trustworthiness.

We take a pragmatic approach, focusing on outcomes. We conduct independent verification of effective risk mitigations that also preserve Human Autonomy, Human Agency and Human-Centricity, so as to optimise the intended benefits from the STS.

Responsible Innovation needs to be demonstrable and independently verified through Human-Centric Audit Criteria.

While data is crucial for digital businesses, trust is critical for engagement. Without trust, there is no engagement. Without engagement, no data can be obtained and no business value can be realised.

Growth Catalyst – External Stakeholders

When it comes to stakeholders for any model, system, application, solution, platform, app and service, the list is likely to heavily feature shareholders, internal stakeholders and the end users.

When we consider STS, the list of stakeholders is wide-ranging. Organisations developing, procuring, deploying and operating STS should focus on the External Stakeholders (depending on the scope, context, nature and purpose of the STS), which include regulators, employees, customers, partners, suppliers and shareholders, as well as communities, society and the environment. The social implications of STS must not be underestimated, hence the systemic nature of responsible innovation.

A growing number of Financial Services Institutions aspire to possess the capabilities of Big Tech firms, as they see tremendous value in the data that can be harnessed. But is their Purpose balanced?

If you are a Bank deploying a retail-banking platform that is an STS for your customers, responsible innovation goes beyond the UX and CX design elements: you also need to understand the limitations and potential flaws of your AI/ML capabilities. You need to incorporate data ethics, algorithm ethics, data protection and privacy, and the related regulatory requirements. You also need to introduce operational safeguards and ensure that your customers are treated fairly and afforded agency and autonomy.

Similarly, this also applies to Investment firms with wealth management and/or financial services platforms with apps, or Insurance firms with platforms and apps for their products.

In the same way that businesses sustained enduring relationships with their customers through exceptional service over prolonged periods before our world was digitalised, deploying organisations should aspire to mature their STS (not just their AI models and systems) to a level where customers do not feel underserved, dissatisfied and frustrated.

In the meantime, organisations developing, procuring, deploying and operating STS should have feedback loops implemented to enable any External Stakeholder to provide feedback, which should then be collated, analysed and acted upon to deliver improvement systemically.
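As a minimal sketch of such a feedback loop, the example below collects feedback from any external stakeholder group and collates it by topic, so that recurring issues are acted upon systemically rather than ticket by ticket. The stakeholder groups, channels and topics are hypothetical.

```python
# Minimal sketch of an external-stakeholder feedback loop: collect,
# collate by topic, and route for systemic improvement.
# Stakeholder groups, channels and topics are hypothetical.

from collections import defaultdict

feedback = [
    {"stakeholder": "customer", "channel": "in-app", "topic": "fairness"},
    {"stakeholder": "regulator", "channel": "email", "topic": "disclosure"},
    {"stakeholder": "partner", "channel": "portal", "topic": "fairness"},
]

# Collate by topic so recurring issues surface across stakeholder groups.
by_topic = defaultdict(list)
for item in feedback:
    by_topic[item["topic"]].append(item["stakeholder"])

for topic, groups in by_topic.items():
    print(f"{topic}: raised by {len(groups)} stakeholder group(s) -> analyse and act")
```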

Having engaged, satisfied and loyal External Stakeholders that trust your digital services is crucial for growth in an interconnected digital world that values social interaction.

Dynamics of Change – Innovation Pathways & Communication

We have outlined that an STS can be developed and deployed by the organisation through a ‘build’ pathway, or procured and deployed via third-party providers through a ‘buy’ pathway.

Regardless of which Innovation Pathway is chosen by the organisation for a particular STS, everything starts with the Purpose and ends with Outcomes that impact all your Stakeholders.

When the Outcomes from STS impact real people, transparency, explainability and disclosures of residual risks can enable effective communication with the External Stakeholders, which also helps engender trust.

Therefore, when Purpose is discussed and considered, it must not only be from the perspective of the organisation deploying the STS, but also from the perspective of External Stakeholders. It is only when the organisations developing, procuring, deploying and operating STS consider all the different perspectives that they have a chance to innovate responsibly and deliver digital services that are engaging, inclusive, safe and trustworthy.

Most importantly, Human Autonomy, Human Agency and Human-Centricity considerations must be afforded to the human consumer through the consistent deployment of the organisation’s Ethics-based Principles and Core-Values, reflected in their Culture and evidenced in the outcomes for a sustainable digital future.

Communication is key to achieving purpose-led, ethics-based outcomes, incorporating human values.

Communication is key to delivering change and transforming any organisation into one that can innovate responsibly. The effectiveness of communication relies on the orchestrator’s understanding of the systemic nature of responsible innovation and the interdisciplinary intricacies, interconnectivity and interdependencies of the elements that make up the STS.

The success of communication is reflected in its execution.

Dynamics of Change – Diverse Inputs & Multi-Stakeholder Feedback

Obtaining inputs and feedback throughout the lifecycle of any STS from a diverse range of stakeholders is critical to ensure that the outcomes from the STS benefit those it is deployed for. We highlighted diversity of thought and lived experiences when we covered Core Values & Ethics.

We would also like to emphasise that Diverse Inputs and Multi-Stakeholder Feedback should be institutionalised in the other Socio-layers (Regulations & Policies, Processes & Procedures, People, Leadership and Culture), as well as throughout the lifecycle of the STS.

It is critical for a feedback loop to be established at the External Stakeholder layer so that Outcomes-based feedback can be aligned back to the Purpose for which the STS was commissioned.

The value of having feedback from multiple stakeholder groups with diversity of thought and lived experiences, internal as well as external, throughout the STS lifecycle should not be underestimated. All feedback, positive as well as negative, enables organisations to learn, improve and grow.

Diverse Inputs and Multi-Stakeholder Feedback provide validation of how the outcomes, and the journey towards them, align back to the Purpose of innovation. Operationalised well, they allow outside-in communication to prevail, complementing the inside-out communication exercised along the Innovation Pathways.

As a consumer, when you next engage with an STS, make a note of how easy it is to provide feedback other than a rating, or whether you are even asked to do so!

If you are an internal stakeholder, how often were you engaged to provide input and feedback during the lifecycle of the STS?

If you are driving a digital transformation programme featuring STS, how open are you to involving those who are not core to your transformation team in providing inputs and feedback?

Your Roadmap & Maturity Journey

Coincidentally, the MIT Sloan article we cited in Part 1 closed with these three recommendations:

“For organizations seeking to ensure that their C-suite views RAI as more than just a technology issue, we recommend the following:

1.      Bring diverse voices together. Executives have varying views of RAI, often based on their own backgrounds and expertise. It is critical to embrace genuine multi- and interdisciplinarity among those in charge of designing, implementing, and overseeing RAI programs.

2.      Embrace nontechnical solutions. Executives should understand that mature RAI requires going beyond technical solutions to challenges posed by technologies like AI. They should embrace both technical and nontechnical solutions, including a wide array of policies and structural changes, as part of their RAI program.

3.      Focus on culture. Ultimately, as Mekel-Bobrov explains, going beyond a narrow, technological view of RAI requires a “corporate culture that embeds RAI practices into the normal way of doing business.” Cultivate a culture of responsibility within your organization.”

We focus on STS, rather than purely on AI, for the simple reason that they are complex adaptive systems with an interdependent, interconnected and intricate set of components that transcend disciplines. It is only when these systems are understood that you can innovate responsibly, change your organisation and grow your business, while producing and operating engaging, inclusive, safe and trustworthy STS.

We have provided an outline of our Responsible Innovation Framework, where we simply described its systemic nature and the interdisciplinary intricacies, interconnectivity and interdependencies of the elements that make up any STS.

We noted the importance of Purpose, and why it matters. We traced the Innovation Pathways organisations can take, as we expanded the aperture for how innovation needs to be viewed beyond data and AI/ML models.

We outlined the different Socio-layers that make up the Accountability, Oversight and Governance capabilities, which play a key role in the Operational Safeguards you need to have in place as you deploy and operationalise your STS, covering Core Values & Ethics, Regulations & Policies, Processes & Procedures, People, and Leadership & Culture. These are encircled by Risk Management, a crucial layer that is often omitted from Digital Transformation diagrams and playbooks.

We concluded by outlining the two final layers that complete the Catalysts for Growth, Trustworthiness and External Stakeholders, before describing the Dynamics of Change, which comprise the outbound Communications channel from Purpose to Outcomes, complemented by the inbound Communications channel facilitating Diverse Inputs and Multi-Stakeholder Feedback loops throughout the lifecycle of the STS.

The guidelines above serve as an illustration of the digital transformation journey. We always start by asking questions, with a view to assessing your organisation’s level of maturity across our Responsible Innovation dimensions, in concert with your aspirations, ambitions, expectations and your organisation’s mission.

We first seek to understand the current state of play: how your organisation operates, its culture, and its challenges and risks. We then discuss the strategic direction you would like to take for your organisation.

If you would like to grow your business using technology to deliver beneficial outcomes for your stakeholders, please get in touch with Maria and Chris.

Chris Leong is a Fellow and Certified Auditor (FHCA) at ForHumanity and the Director of Leong Solutions Limited, a UK-based Management Consultancy and Licensee of ForHumanity’s Independent Audit of AI Systems.

Maria Santacaterina is a Fellow and Certified Auditor (FHCA) at ForHumanity, CEO and Founder of SANTACATERINA, a UK-based Global Strategic Leadership & Board Executive Advisory, helping you build a sustainable digital future.


