Responsible Innovation: Part 2 – How do we Socio-lise your STS?
Maria Santacaterina
CEO | SANTACATERINA | Transforming business with AI (Ambition and Imagination) for a sustainable digital future | Independent | Non-Executive Director | FTSE100 | Global | Strategy | Innovation | Luxury Retail & Fashion
Core Values & Ethics
We have found that the perception of ethics within the context of STS varies across organisations and across industries. We believe that the adoption of ethics in the innovation process will primarily need to be driven by the Board and the CEO. We often hear those executing corporate strategy suggest that ethics in innovation conflicts with their business goals, which often equate to incentives aligned with revenue growth targets and increased profits. Tensions can lead to trade-offs, in which case the resulting decisions need to be documented for full transparency and accountability.
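Where such trade-offs occur, a lightweight register can make transparency and accountability concrete. The sketch below is purely illustrative: the field names and structure are our own assumptions, not prescribed by the Report or by ForHumanity.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TradeOffDecision:
    """One documented trade-off between an ethical concern and a business goal."""
    decision_id: str
    tension: str                  # the ethical concern vs. business goal in conflict
    options_considered: list[str]
    chosen_option: str
    rationale: str                # why this option was chosen over the others
    accountable_owner: str        # a named individual, not a team
    decided_on: date
    review_due: date              # trade-offs should be revisited, not filed away

def record(register: list[TradeOffDecision], decision: TradeOffDecision) -> None:
    """Append to a register that auditors can inspect end to end."""
    register.append(decision)
```

The point of the structure is not the code itself but the discipline: every trade-off has a named owner, a documented rationale and a date on which it must be revisited.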
The Report highlighted the following:
“It could also be beneficial to develop a set of AI risk principles and map them to existing risk frameworks. This would enable firms to identify and focus on potential risks not already covered, including areas where staff need to be trained. This is particularly relevant as AI governance and issues like ethics will require a broader set of skills, experiences, and backgrounds rather than a narrow focus on technology, risk management, and compliance.”
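To make the Report's suggestion tangible, here is a minimal sketch of such a mapping exercise. The principles, framework entries and coverage mappings are invented for illustration; a real exercise would be driven by the firm's own risk taxonomy.

```python
# Hypothetical AI risk principles a firm has adopted.
AI_RISK_PRINCIPLES = {"fairness", "explainability", "robustness",
                      "privacy", "accountability"}

# Which principles each existing risk framework already addresses
# (entries and mappings are invented for illustration).
EXISTING_FRAMEWORK_COVERAGE = {
    "model_risk_management": {"robustness", "explainability"},
    "data_protection_policy": {"privacy"},
    "operational_risk_framework": {"accountability"},
}

def uncovered_principles(principles: set[str],
                         coverage: dict[str, set[str]]) -> set[str]:
    """Return the principles no existing framework covers: the gaps where
    new controls and staff training are needed."""
    covered = set().union(*coverage.values()) if coverage else set()
    return principles - covered

print(uncovered_principles(AI_RISK_PRINCIPLES, EXISTING_FRAMEWORK_COVERAGE))
# -> {'fairness'}
```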
We were encouraged to learn from the Conference that several Banks have introduced, or are in the process of introducing, Ethics Boards. How exactly these Ethics Boards are structured and represented, and where they fit into the overall governance structures within the Banks, remains to be seen. The outcomes from the contribution of these Ethics Boards will reflect their efficacy.
The Report suggested there were challenges around how existing governance structures accommodate the governance and oversight requirements for the use of AI/ML, which echoed our view that diversity of thought and lived experiences, as well as broader skills and competencies, are necessary to address the interdisciplinary attributes related to STS.
Any Ethics Board or Ethics Committee should be empowered by the Board of Directors to adjudicate on ethical choices that arise throughout the lifecycle of these systems, based on the organisation’s shared moral framework. The Ethics Board or Ethics Committee should comprise individuals who:
· can bring diversity in thought and lived experiences;
· can offer insights from having different industry experiences; and
· share similar core values to those representing external stakeholders.
The Ethics Board or Ethics Committee will also be an independent governance function responsible for conducting the?Ethical Risk Assessment, which is one of several key inputs to an enterprise-wide governance structure for data as well as models, systems, applications, solutions, platforms, chatbots, apps and services that have embedded AI/ML built or procured from third-party providers.
ForHumanity also proposes conducting a Necessity Assessment and a Proportionality Study, while taking into consideration the organisation’s Code of Ethics and Code of Data Ethics as part of the Ethical Risk Assessment process. It considers ethics in three dimensions - Ethics of Data, Ethics of Practices and Ethics of Algorithms - based on research by Luciano Floridi and Mariarosaria Taddeo.
A Necessity Assessment enables the organisation to determine whether AI/ML capabilities are the only or the best solution, considering a comprehensive set of stakeholders. This is assessed in the context of the lawful basis within the relevant jurisdictions, including analysis and determination of whether each Personal Datum collected and processed is vital to the stated purpose.
A Proportionality Study enables the organisation to assess the tensions and trade-offs between the risks to, and sacrifices of, the rights and freedoms of an individual or groups of individuals, balanced against the potential benefits and gains to an individual or group of individuals, in the context of the relevant legal frameworks.
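A minimal sketch of how these two gates might be expressed follows, with invented inputs and deliberately crude scoring. Real assessments are qualitative, documented and jurisdiction-specific; nothing below comes from ForHumanity's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class PersonalDatum:
    name: str
    lawful_basis: str | None   # e.g. "consent" or "contract"; None if not established
    vital: bool                # is collecting this field vital to the stated purpose?

def necessity_assessment(data: list[PersonalDatum],
                         adequate_non_ai_alternative: bool) -> bool:
    """Proceed with AI/ML only if no adequate alternative exists and every
    personal datum collected has a lawful basis and is vital to the purpose."""
    if adequate_non_ai_alternative:
        return False
    return all(d.lawful_basis is not None and d.vital for d in data)

def proportionality_study(risks_to_rights: float, expected_benefits: float) -> bool:
    """A deliberately crude scored balance; a real study weighs the rights and
    freedoms of individuals qualitatively, with the reasoning documented."""
    return expected_benefits > risks_to_rights
```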
In this recording, Professor Floridi said:
“Artificial agency generates potential problems as well as beneficial possibilities. After all, artificial agents have no intentions, motivations, mind and so on. Therefore, they are part of an ethical discussion centred upon the choices, made by humans, that occur as these systems are built and allowed to operate.”
“Overall, artificial agency can be a source of ethically good or ethically bad behaviour, but the ultimate responsibility is entirely, and will remain entirely, human.”
The core values of the organisation must be reflected in the way technology is adopted and deployed in the process of innovation, in order to deliver its mission. These core values, when examined, typically put people first.
Consequently, they should guide the innovation pathways undertaken by the organisation as it seeks to deploy technologies to process personal data that can impact its employees, customers and consumers of its digital services.
However, we have not seen evidence of core values in the outcomes from some of the organisations that have deployed AI/ML. EthicsGrade provides independent ESG ratings and, to our knowledge, it is the only rating organisation that assesses: the adoption of ethics in the organisation’s digital innovation and how its stakeholders are engaged; the social impact from the organisation’s deployment of AI and treatment of personal data; and the efficacy of the deploying organisation’s governance structure and capabilities. You can see the ratings of some organisations here.
Where we have had feedback from Technologists that ethics is difficult to quantify, and therefore to codify, the following quote provides a simple illustration of what it means:
“Just because you can, does not mean you should.”
For organisations that have curated and published their Code of Ethics and/or Code of Conduct:
“It is not what you say, but what you do and how you do it.”
We asked whether organisations have the right mindset, talent and culture in this article, where we looked at Meta’s Code of Conduct. This article in WIRED questions the effectiveness of Facebook’s Oversight Board. An internal oversight board may not be able to maintain the separation of powers necessary to avert conflicts of interest, or to resist internal pressures, while discharging its duties.
Regulation and Policies
We continue to see pro-innovation groups, mostly technologists, who regard regulation as friction. The exception is those in highly regulated industries such as Financial Services, who seek regulatory certainty so that they can innovate and be compliant by design.
The Report highlighted the following:
“It is therefore important that regulators continue to monitor, analyse and assess the evolution of AI to understand how best to support its safe adoption while identifying and helping to manage its risks.”
“Regulators should provide greater clarity on existing regulation and policy. Such clarification and any new guidance should not be overly prescriptive and should provide illustrative case studies. Alongside that, regulators should identify the most important and/or high-risk AI use-cases in financial services with the aim of developing mitigation strategies and/or policy initiatives.”
“More broadly, regulatory uncertainty is another significant barrier.”
At the Conference, we heard about the need for regulatory certainty related to the use of AI/ML in Financial Services. We also heard that Regulators are collaborating on how AI/ML and autonomous systems are used in digital markets, though there is more to be done. Financial Services Institutions that also operate in the EU will have the EU AI Act to comply with once it becomes law, not to mention AI/ML-related regulations in other jurisdictions in which they operate.
There will be new regulations introduced to address the risks inherent in STS and mitigate adverse outcomes for society. However, existing regulations will continue to apply.
GDPR applies to how AI/ML is used to process personal data. The Equality Act 2010 and the FCA’s principles of Treating Customers Fairly also apply to the outcomes from STS. Add to these the obligations under the FCA’s Senior Managers & Certification Regime (SM&CR), and Financial Services Institutions need to ensure that they comply, even if these obligations are not yet widely enforced.
The Report highlights how the SM&CR can be adopted to ensure that there is accountability at the right level for AI/ML-related innovation:
“One of the key challenges for AI governance is whether an organisation should centralise or decentralise responsibility for AI. A key question for all firms is who should ultimately be responsible for AI, including under the SM&CR, and whether this should be a single individual (e.g. Chief AI Officer) or shared between several senior managers (e.g. Chief Technology Officer, Chief Data Officer and Head of MRM). Whatever approach firms take, there should always be clear lines of accountability and responsibility for the use of AI at the senior managers and Board levels.”
“Reasonable steps is a key concept in the SM&CR and the FCA Code of Conduct, and could be extended to the use of AI. While assessment of what constitutes reasonable steps is a judgment-based process, it could include: having an ethics framework and training in place, maintaining documentation and ensuring auditability, embedding appropriate risk management and control frameworks, a culture of responsibility (ethics, governance, inclusion, diversity, and training), clear lines of oversight, reporting and accountability between AI teams etc.”
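The quoted elements lend themselves to being tracked as an evidence checklist against which a senior manager can be assessed. The encoding below is our own simplification for illustration; as the Report says, what constitutes reasonable steps remains a judgment-based process.

```python
# The checklist items paraphrase the Report's quote; the evaluation logic is
# an invented simplification.
REASONABLE_STEPS = [
    "ethics framework and training in place",
    "documentation maintained and auditability ensured",
    "risk management and control frameworks embedded",
    "culture of responsibility (ethics, governance, inclusion, diversity, training)",
    "clear lines of oversight, reporting and accountability between AI teams",
]

def unevidenced_steps(evidence: dict[str, str]) -> list[str]:
    """Return the steps for which no evidence reference has been recorded."""
    return [step for step in REASONABLE_STEPS if not evidence.get(step)]

# Example: two steps evidenced, so three gaps are flagged for the senior manager.
print(unevidenced_steps({
    "ethics framework and training in place": "training-records-2023",
    "documentation maintained and auditability ensured": "model-register-v4",
}))
```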
If Financial Services Institutions in the UK are waiting for AI-specific regulations to provide guidance and regulatory guardrails, they may not be focussing on the outcomes from the STS they deploy, which are covered by existing regulations. This also exposes crucial gaps between key internal stakeholders across functions who should be working in lockstep on such initiatives.
Policies established internally within organisations are tools that can be used to facilitate compliance through behaviour. Policies are often communicated to employees regularly through training. In essence, they help embed compliance into the culture. Policies could be adopted and enforced to drive behavioural change in the way the organisation innovates with AI/ML, which has a direct and immediate impact on people in B2C settings.
Established Financial Services Institutions spinning off digital-first organisations have the opportunity to embed compliance by design into their organisational structure, operating model, procedures, processes and culture, futureproofing themselves by incorporating principles of responsible innovation as anticipated in the new regulations.
Similarly, those embarking on digital business transformations also have the opportunity to ensure that their target organisational structure, operating model, procedures, processes and culture have the same attributes embedded. This is more efficient and effective than having to change and retrofit principles of responsible innovation into existing complicated structures and processes.
Processes & Procedures
Regulated Financial Services Institutions have established processes and procedures that are audited regularly.
Many of the processes across the various functions have been automated through the introduction of platforms with workflow automation capabilities in the past couple of decades. However, where multiple platforms exist within large organisations, they are typically not integrated. Hence any enterprise-wide consolidated view needs to be manually generated.
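As a purely illustrative sketch of that consolidation step (platform names and record fields are invented), the manual work amounts to something like the merge below, done by hand in spreadsheets rather than in code:

```python
# Each "platform" below is a stand-in returning its own extract.
def consolidated_view(platform_extracts: dict[str, list[dict]]) -> list[dict]:
    """Merge per-platform extracts into one list, tagging every record with
    its source platform so lineage is not lost in the consolidated view."""
    merged = []
    for platform, records in platform_extracts.items():
        for record in records:
            merged.append({**record, "source_platform": platform})
    return merged

view = consolidated_view({
    "workflow_tool_a": [{"case": "KYC-101", "status": "open"}],
    "workflow_tool_b": [{"case": "AML-202", "status": "closed"}],
})
print(view)
```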
Procedures are often formal documented processes that employees follow should incidents occur. They are vetted and signed off, providing clarity and certainty for those using them. Some of these procedures may be automated. However, there is usually some manual interaction and human intervention associated with them.
Procedures and processes often wrap around technical systems, since we do not live in a fully automated world. Depending on the scope, context, nature and purpose of AI/ML systems, the extent to which processes and procedures exist around them will differ.
If the interdisciplinary intricacies, interdependencies and interconnectivity of the components of the STS are like organs and connective tissues in a human body, then the procedures and processes would be the arteries and blood vessels that connect them.
It is crucial for those responsible for the technical systems to understand the importance of the processes and procedures that exist around STS, taking into consideration the impact on employees, customers and consumers of digital services that incorporate automated decision-making and profiling, particularly when limitations, flaws and risks are inherent in the related technologies and possibly in the data being processed.
When dealing with models, systems, applications, solutions, platforms, chatbots, apps and services with embedded AI/ML, built or procured from third-party providers, processes and procedures become even more important, as they also provide the controls that are key components of the Operational Safeguards in our framework.
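A minimal sketch of such a control gate follows, assuming an invented set of required controls; the real Operational Safeguards in our framework are broader than this illustration.

```python
# Required controls for a third-party system with embedded AI/ML; the value
# is a sign-off reference once the control is evidenced, None while outstanding.
required_controls: dict[str, str | None] = {
    "ethical_risk_assessment": None,
    "vendor_due_diligence": None,
    "data_protection_impact_assessment": None,
    "smcr_accountable_person_assigned": None,
}

def may_operate(controls: dict[str, str | None]) -> bool:
    """The system may go live only when every control carries a sign-off."""
    return all(reference is not None for reference in controls.values())

assert may_operate(required_controls) is False  # nothing signed off yet
```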
The rigour and consistency with which processes and procedures are applied can make a difference in how risks are managed. STS need to be exhaustively tested in real-world settings before they are released for public use.
Taking a broader view, organisations are effectively dealing with operational risks when STS are in operation, rather than technology risks alone.
People, Leadership and Culture
Our last article focused on people. We outlined key human capabilities required to produce and sustain inclusive STS:
“The two overarching human capabilities are Diversity of Lived Experiences and Diversity of Thought. You can find these in people who have achieved a certain level of maturity in their thinking, moulded and enriched by diverse experiences gained from a variety of settings. These attributes are invaluable when a multitude of interdisciplinary considerations are required to be examined. The mix of people with Diversity of Thought and Lived Experiences must be representative of the stakeholder community the STS will be deployed for.”
We also outlined other capabilities in the article that no organisation will find in any individual or team from a single function. Organisations aspiring to develop, procure, deploy and operate STS that are engaging, inclusive, safe and trustworthy should not treat them purely as technical systems. Viewing the world through one set of optics does not allow the richness of multi-disciplinary, human-first and social considerations to be incorporated throughout the lifecycle of STS.
Instead, leaders in organisations aspiring to grow their business in the digital world should ensure that their talent pool can provide capabilities that enable inclusive, engaging, safe and trustworthy STS to be developed, procured, deployed and operationalised with safeguards.
The Report also touched on culture, which is key to the operationalisation of ethics:
“Change management in organisations is an important aspect of fairness and bias considerations. Specifically, creating the right environment for conversations on ethics takes time and requires buy-in from senior leadership. The right skillsets are needed to discuss ethics and fairness. A multidisciplinary and diverse approach is helpful, including a culture that allows internal challenge.”
“Diversity of skills and background should be a key consideration when considering firm culture: greater diversity will lead to a richer set of questions, more effective challenge and oversight.”
Unless the CEO and Board of Directors aspire to have inclusive and ethical outcomes from STS embedded in business objectives, related efforts and intent could be seen as conflicting with the business goals linked to revenue growth and cost savings. Interestingly, The Report also suggests that:
“Accountable executives and senior managers should have an appropriate understanding of data, algorithms, models, and risks to fully consider any trade-offs. One of the main barriers here is the lack of necessary skills and buy-in from senior managers and business areas.”
Consequently, despite having curated statements on Corporate Social Responsibility (CSR) and Environmental, Social and Governance (ESG) commitments, the outcomes that emanate from the STS are likely to reflect the opposite.
It is therefore critical for the Board of Directors and the CEO to set the right strategic objectives and the tone for the right culture to be shaped and sustained for engaging, inclusive, safe and trustworthy STS to be developed, procured, deployed and operationalised with safeguards, where Human Autonomy, Human Agency and Human-Centricity considerations are also preserved in the outcomes.
Accountability, as discussed in the section on Regulation and Policies, together with effective Governance and Oversight, is a key component of the Operational Safeguards for STS.
Financial Services Institutions that have elected to procure third-party systems, applications, solutions, platforms, chatbots, apps and services with embedded AI/ML capabilities should also resolve who the accountable person(s) are under SM&CR. This will ensure that those systems are robustly governed with effective risk management.
Whilst it may be possible to incorporate the right level of Governance and Oversight for STS into existing governance structures, the interdisciplinary intricacies, interconnectivity and interdependencies of STS components will require an integrated and holistic approach.
We will conclude with the Catalysts for Growth and the Dynamics of Change in the next instalment.
Chris Leong?is a Fellow and Certified Auditor (FHCA) at ForHumanity and the Director of Leong Solutions Limited, a UK-based management consultancy and licensee of ForHumanity’s Independent Audit of AI Systems.
Maria Santacaterina?is a Fellow and Certified Auditor (FHCA) at ForHumanity, CEO and Founder of SANTACATERINA, a UK-based Global Strategic Leadership & Board Executive Advisory, helping organisations build a sustainable digital future.