Responsible Innovation: Have you unwittingly onboarded business risks through Socio-Technical Systems?
Chris Leong, FHCA
Director | Advisory & Delivery | Change & Transformation | GRC & Digital Ethics | All views my own
How do CEOs of organisations operating in the digital space steer their businesses in the right direction in a dynamically changing world that is also volatile, uncertain, complex and ambiguous?
The Eurasia Group identified the ‘Technopolar World’ as the #2 risk in its Top Risks 2022 report, released in January 2022. It predicted:
“It’s 2022. Your personal information will be hacked. Algorithms fed with biased data will make destructive decisions that affect how billions of people live, work, and love. Online mobs will create chaos, inciting violence and sparking runs on stocks. Tens of millions of people will be dragged down the rabbit holes of conspiracy theories. The one thing that all of these realities have in common is that they emanate from digital space, where a handful of big tech companies, not governments, are the main actors and enforcers.”
These and other related risks have been highlighted in many of the articles we write and read daily. Sadly, many people have already experienced these risks manifesting in adverse outcomes.
These risks remain present in our daily lives as more and more of our physical world is replicated in digital space. Increasingly, a dematerialised existence is being posited in the ‘metaverse,’ an arena where it will prove difficult to identify bad actors, the so-called ‘elves.’ Here is Louis Rosenberg’s caution:
‘The most dangerous part of the metaverse: agenda-driven artificial agents that look and act like other users but are actually simulated personas controlled by AI.’
The Eurasia report echoes these sentiments:
‘The biggest technology firms are designing, building, and managing an entirely new dimension of geopolitics. In this new digital space, their influence runs deep, down to the level of individual lines of code. They’re writing the algorithms that decide what people see and hear, determine their economic and social opportunities, and ultimately influence what they think. Individuals will spend more time in digital space in 2022, at work and at home.’
It is not just the biggest technology firms, by the way.
It is effectively every organisation aspiring to be like those big-tech firms, currently investing significant amounts of financial capital in digital transformation initiatives so as to acquire similar technical capabilities. In doing so, they surreptitiously onboard these and other related risks, which will cause harm if they are not fully understood and mitigated by design.
The unintended consequences of Digital Transformations
We acknowledge that cybersecurity risk remains a priority for many CEOs, their leadership teams and their Boards. Security and data breaches make headlines regularly. Last month, we saw: ‘Customers’ personal data stolen as Optus suffers massive cyber-attack.’
While the focus is on cybersecurity risk, digital transformation initiatives are also delivering a digital-first approach to business operations, designed to reshape the business model, customer experiences, operating model and technology infrastructure. What is not obvious is how such initiatives address the people, culture, ethics, governance, risk management, compliance, legal and financial considerations as the organisation evolves. These elements do not typically appear in the Digital Transformation Playbooks.
It may appear that technology is at the heart of the transformation, hence the perception that such initiatives should be driven through the Technology and Data lenses, and we often see a number of mechanisms being introduced on that basis.
Whenever Socio-Technical Systems are introduced, all outcomes matter, since they impact human consumers directly and instantly. Yet despite the potential harms and unintended consequences, proponents of Digital Transformation have focused primarily on the upsides attainable from these technologies, for example growing revenues.
Based on the increasing number of adverse outcomes reported from these systems, it appears that little or no attention has been paid to the limitations and flaws of these technologies. Consequently, inherent downside risks are unknowingly onboarded into the organisation.
Apart from the Cybersecurity risks that CISOs are focused on mitigating, there is a lack of awareness of the myriad hidden risks that are unwittingly onboarded through Digital Transformation initiatives. Effectively, these hidden risks exacerbate the Business Risks that the CRO needs to manage on behalf of the CEO and the Board.
So, what might these be?
The scope of hidden Business Risks from Socio-Technical Systems
Socio-Technical Systems (STS) are complex systems with an interdependent, interconnected and intricate set of elements that transcend disciplines. Examples include platforms, portals and apps with personalisation, profiling and automated decision-making capabilities, Automated Employment Decision Tools (AEDT), chatbots and personal digital assistants.
When we then consider that ‘AI’ technologies are non-deterministic by nature, the complexity is compounded.
STS are not just about Technology and Data.
Processes and Procedures in operation within the deploying organisation need to be considered, along with the Policies and Regulations that the organisation must adhere to.
Governance, Oversight and Accountability structures need to be underpinned by Multi-Stakeholder Feedback Loops that allow Diverse Inputs to be collected throughout the STS’s lifecycle.
Risk Management must be woven into the STS lifecycle, and Operational Safeguards designed and deployed across the enterprise network and its Value Chain to ensure that downside risks are managed, and the residual risks are mitigated.
Most importantly, Human Autonomy, Human Agency and Human-Centricity must be afforded to the human consumer through the consistent application of the organisation’s Ethics-based Principles, Values and Culture, and must be reflected in the outcomes, for a sustainable digital future.
When we consider all the interdisciplinary intricacies, interconnectivity and interdependencies of the constituent parts of a Socio-Technical System, it is crucial to understand how they need to be orchestrated to function optimally and beneficially for the recipient, namely the human consumer.
The risks associated with these constituent parts span the people, culture, ethics, governance, risk management, compliance, legal and financial considerations described above, alongside the Technology and Data risks themselves.
Collectively, they reflect organisational risk and represent business risk. How many of the above risks are managed as part of your STS deployment, including those by your third-party providers?
When the STS is deployed, failure to manage these risks collectively can impact the organisation’s performance and bottom line. Remember this when you hear people insist that the risks related to STS are all Technology Risks!
So, why is it not all Technology Risk?
The simple answer is two-fold:
Firstly, STS by their nature interact with humans directly. As they do, they capture data about the interaction, which serve as inputs to the algorithms embedded in the system. Any processing of personal data is governed by regulations currently in force across multiple jurisdictions. Furthermore, any impacts from automated decisions and the profiling of humans are not only subject to existing and future regulations, but also place liability on the deploying organisations, their CEOs and individual Board members in the event of adverse outcomes.
Secondly, the underlying family of ‘AI’ technologies powering STS is limited in capability and inherently flawed, yet these systems are deployed for purposes that Technology and Data alone cannot fulfil satisfactorily and consistently, given their non-deterministic nature.
When both parts of the answer are considered, we see why deploying organisations are exposed to the myriad risks outlined above.
The following quotes from this article in the LA Times sum it up nicely:
‘When news articles uncritically repeat PR statements, overuse images of robots, attribute agency to AI tools, or downplay their limitations, they mislead and misinform readers about the potential and limitations of AI.’
‘What is typically left out of much AI reporting is that the machines’ successes apply in only limited cases, or that the evidence of their accomplishments is dubious.’
‘AI hype is not only a hazard to laypersons’ understanding of the field but poses the danger of undermining the field itself. One key to human-machine interaction is trust, but if people begin to see a field having overpromised and underdelivered, the route to public acceptance will only grow longer.’
Another area where the deployment of STS has resulted in adverse outcomes is recruitment. This BBC News report recently covered the results of a study:
In 2020, the study notes, an international survey of 500 human-resources professionals suggested nearly a quarter were using AI for "talent acquisition, in the form of automation".
But using it to reduce bias is counter-productive and, University of Cambridge's Centre for Gender Studies post-doctoral researcher Dr Kerry Mackereth told BBC News, based on "a myth".
"These tools can't be trained to only identify job-related characteristics and strip out gender and race from the hiring process, because the kinds of attributes we think are essential for being a good employee are inherently bound up with gender and race," she said.
Organisations operating in New York City that use Automated Employment Decision Tools (AEDT) need to have them independently audited before they can continue using them from 1 January 2023.
If the introduction of NYC Local Law No.144 impacts your organisation and your organisation uses AEDT, who is ensuring that you can comply from that date?
More importantly, what steps are you taking to ensure that you are doing the right thing for your candidates when it comes to using AEDT for recruitment?
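To make the idea of a bias audit concrete, audits under rules like Local Law 144 centre on comparing selection rates across demographic groups. Here is a minimal sketch of that calculation; the data, function names and the 0.8 review threshold (borrowed from the US EEOC ‘four-fifths’ rule of thumb, which the law itself does not mandate) are illustrative assumptions, not requirements from the law or from this article.

```python
# Minimal sketch of one metric used in AEDT bias audits: the "impact
# ratio" compares each group's selection rate against the rate of the
# most-selected group. All data and names here are hypothetical.
from collections import Counter

def impact_ratios(candidates):
    """candidates: iterable of (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in candidates:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes from an automated tool:
# group A selected 40/100 times, group B selected 20/100 times.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)
for group, ratio in impact_ratios(outcomes).items():
    flag = "review" if ratio < 0.8 else "ok"  # EEOC four-fifths heuristic
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

A real audit must, of course, be independent, cover intersectional categories and run on the organisation’s actual historical data.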
Mitigating risk exposures
The CEO and the Board need to urgently increase their level of awareness of the business risks related to the STS deployed by their organisation, since the accountability for adverse outcomes rests with them.
Any business aspiring to operate and grow digitally needs the organisational and business risks unwittingly onboarded through its STS, as part of its digital transformation initiatives, to be understood, managed and mitigated, so that it is prepared for the unintended consequences.
The performance and operation of all STS, including those deployed by your third-party providers in an outsourced arrangement, need to be continuously monitored for the very reason that they impact human consumers directly and instantly. This article highlights the problem of concept drift:
“Coping with drift is a huge problem for organizations leveraging AI,” he said. “That’s what really stops us currently from this dream of AI revolutionizing everything.”
“Adding to the urgency for users to address the issue, the European Union plans to pass a new AI law as soon as next year requiring some monitoring. The White House this month in new AI guidelines also called for monitoring to ensure system performance does not fall below an acceptable level over time.”
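To make ‘monitoring’ concrete, here is a minimal sketch of one widely used drift signal, the Population Stability Index (PSI), which compares the distribution of a model input in production against its training baseline. The data, thresholds and names below are illustrative assumptions, not prescriptions from the article or the regulations it mentions.

```python
# Minimal sketch: Population Stability Index (PSI), a common drift signal.
# Values and thresholds below are illustrative, not from the article.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare a production feature's distribution to its training baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)    # data the model was trained on
production = rng.normal(0.4, 1.2, 10_000)  # what it sees today: drifted
score = psi(baseline, production)
# Rule-of-thumb bands often used in credit scoring: <0.1 stable,
# 0.1-0.25 worth investigating, >0.25 significant drift.
print(f"PSI = {score:.3f} -> {'alert: review/retrain' if score > 0.25 else 'monitor'}")
```

In practice, a check like this would run continuously across model inputs and outputs, with alerts routed into the organisation’s operational safeguards rather than a print statement.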
Achieving transparency when implementing an STS is crucial, not just for the human consumer who needs to understand how an automated decision was made, but also for the deploying organisation, in which hidden risks may be inherent, as this article highlights:
‘The Easylife fine shows how some companies may be engaged in profiling without realising.’
Disinformation is an ongoing and significant problem, especially when copious amounts of data are gathered and processed autonomously by machine learning algorithms in the real world. Granted, some disinformation and misinformation is produced by design; even so, the risk that your organisation’s STS can unwittingly produce disinformation through data poisoning should be a concern. Conversely, being aware of how disinformation is generated can help you ask the right questions. This useful ‘Disinformation Playbook’ outlines five methods used in disinformation campaigns.
‘To be clear: most companies don’t engage in disinformation. The deceptive practices that make up the Playbook are used by a small minority of companies—and yet, as we show, they are found across a broad range of industries, from fossil fuels to professional sports.’
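To illustrate how data poisoning works mechanically, here is a minimal sketch in which a handful of mislabelled training points cripple a toy classifier. The model, data and numbers are deliberately simplistic assumptions made for illustration; real poisoning attacks target far more complex pipelines.

```python
# Minimal sketch of data poisoning: a few mislabelled training points
# cripple a toy nearest-centroid classifier. Purely illustrative.
import numpy as np

def fit_centroids(X, y):
    """Class centroids learned from the training data."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict(centroids, X):
    c0, c1 = centroids
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)  # pick the nearer centroid's class

rng = np.random.default_rng(1)
def sample(n):  # two well-separated classes around (-1,-1) and (+1,+1)
    X = np.vstack([rng.normal(-1, 0.3, (n, 2)), rng.normal(+1, 0.3, (n, 2))])
    return X, np.array([0] * n + [1] * n)

X_train, y_train = sample(50)
X_test, y_test = sample(50)

clean = fit_centroids(X_train, y_train)
print("clean accuracy:", (predict(clean, X_test) == y_test).mean())

# Poison: 10 extreme points mislabelled as class 1 drag its centroid
# far from the true class, so genuine class-1 inputs get misclassified.
X_poison = np.vstack([X_train, rng.normal(-20, 0.3, (10, 2))])
y_poison = np.append(y_train, [1] * 10)
poisoned = fit_centroids(X_poison, y_poison)
print("poisoned accuracy:", (predict(poisoned, X_test) == y_test).mean())
```

The point of the sketch is that roughly 9% of corrupted training data is enough to halve this toy model’s accuracy, which is why provenance and validation of training data belong among the operational safeguards discussed above.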
Thinking differently
If gaps currently exist within your organisation in the areas described above, then we suggest the CEO, the Board and the CRO need to act and think differently.
Fundamentally, there has been a lack of Diverse Inputs and Multi-Stakeholder Feedback that would allow the diversity of thought and lived experience to be incorporated into the lifecycle of the STS.
Why were ethics-based principles not a priority?
This Curiosity Killed the HIPPO article highlights the fact that:
‘In our fast changing, complex world where no one person can be the expert in anything, this is increasingly a problem. Mental models that have served us well are no longer relevant.’
In his book Rebel Ideas, Matthew Syed talked about the value of having diverse teams so that more is known about problems before deciding on a course of action. “The more diverse the team, the more diverse the opinions and thought processes are and the better the decision making.”
Socio-Technical Systems are not just about Technology and Data. This is simply because you cannot automate and digitise everything; not at this juncture, when the underlying technologies remain limited and flawed.
This article states the obvious:
“For over a century, pundits have been trying to apply an engineering mindset to human affairs with the hope of taking a more “scientific approach.” So far, those efforts have failed. In reality, these ideas have less to do with science than denying the value of human agency and limiting the impact of human judgment. We need to stop making the same mistake.”
“The problem is that boiling down the success of an enterprise to the single variable of shareholder value avoids important questions. What do we mean by “value?” Is short term value more important than long-term value? Do owners value only share price or do they also value other things, like technological progress and a healthy environment?”
Consequently, we should all step back, slow down, pause, reflect and realign our purpose to how we use these technologies. Dumbing down human intelligence is not an option. Ultimately, it starts with Purpose.
So, why should you care?
CEOs and Boards of regulated businesses, and indeed of any organisation, ought to care about the impacts on human beings and the environment, including all the risks that have been unwittingly onboarded through STS. The manifestation of any of the risks mentioned above may result in adverse outcomes that disadvantage, discriminate against or harm human consumers of digital services.
Consequently, there needs to be greater awareness and a better understanding of the STS being deployed within the organisation, at Board level and across the whole business ecosystem. These systems have wide-ranging consequences for the business, the organisation and its operations, as well as for employees, customers, suppliers and shareholders; they also impact the reputation, performance, growth plans and trustworthiness of the organisation as a whole.
The truth is you need to care about each and every interaction a human being has with the STS you have deployed, so that the person will not experience any adverse outcomes. Building resilience requires adaptation in real-time.
In addition, you need to take care that investments in your digital transformation initiatives will deliver greater efficiencies, cost reductions and improved profit margins, as promised. More importantly, you need to ensure you have adequate operational safeguards in place, so that the Socio-Technical Systems deployed in your organisation are not going to cause you any problems in the future.
STS are complex adaptive S-Y-S-T-E-M-S!
Are you aware of the business risks you have unwittingly onboarded through your Socio-Technical Systems?