Responsible Innovation: Leaders need to act and think differently
Chris Leong, FHCA
Director | Advisory & Delivery | Change & Transformation | GRC & Digital Ethics | All views my own
Contrary to the popular belief that we are rational decision-makers, human beings rely on cognitive heuristics - mental shortcuts - which from an evolutionary perspective have proven most useful: they have helped us survive, activating the fight-or-flight response to escape imminent danger without thinking. It is an instinctive reaction rather than a rational response; the rational assessment comes later, after the action has been taken, to confirm it was the right one. Similarly, our worldview is shaped by our lived experiences and social interactions as well as our surroundings. We thereby form a set of beliefs, gain an understanding of the world, and adapt to our environment accordingly.
Clearly, in today’s information-rich world, there are more stimuli and influences which may impact our judgments and responses to a given situation. We have access to information 24/7, and depending on what we see, our assessment of a given situation may be influenced to varying degrees. Discerning though we might like to think we are, we need to admit that ‘stress-testing’ our assessment by socialising the problem is more than likely to lead to a better understanding of the situation at hand.
There is of course a temporal dimension to this: our views are likely to change over time, just as they may change when the context changes. Given a set of circumstances, we think and act differently based on our views (cultural norms, beliefs, behaviours) and our social interactions, which may lead to the formation of collective wisdom. Fundamentally, we grow to recognise a set of shared values and generally accepted beliefs that are delineated in our societal norms. When humans interact with each other, the outcome of that interaction may be influenced by heuristics - the quick initial response each person has to a given situation. When significant decisions need to be made that impact another person, prior deliberations made with a diversity of thought and experiences are more likely to deliver an outcome that could reasonably be regarded as fair.
Fundamental human traits such as empathy, intuition, care, reasoning, conscience, and intelligence are some of the mechanisms that provide the best chance for the right decisions to be made when those decisions impact another human being. We often hear the phrase, “Common sense will prevail.” Although there could be trade-offs or compromises made in the deliberation of any decision that impacts humans, the outcome is likely to be reached with the best interests of the human being in mind, taking into account the multiple facets of a given situation. AI, Algorithmic and Autonomous systems, by contrast, operate only on a narrowly defined set of rules, and very often little attention is paid to the quality of the datasets used to train the models. It is therefore possible that the systems in use do not adhere to the principles of representativeness or proportionality required for the task at hand. For example, an applicant may be refused a bank loan without a clearly explained reason: the ‘black box’ algorithm may simply be tuned to historical constraints which do not adequately represent the current situation and may adversely affect the person concerned.
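To make the loan example concrete, here is a minimal sketch in Python - assuming scikit-learn is available, with entirely invented features (an income band and a postcode group) and figures - of how a model tuned to historical decisions can silently reproduce past refusal patterns:

```python
# Hypothetical illustration: a 'black box' tuned to historical constraints.
from sklearn.linear_model import LogisticRegression

# Historical lending records: [income_band, postcode_group].
# In the past, applicants from postcode group 1 were routinely refused
# (1 = approved, 0 = refused), regardless of income.
X_hist = [[3, 0], [2, 0], [1, 0], [3, 0],
          [3, 1], [2, 1], [1, 1], [2, 1]]
y_hist = [1, 1, 1, 1,
          0, 0, 0, 0]

model = LogisticRegression().fit(X_hist, y_hist)

# A high-income applicant from postcode group 1 applies today.
applicant = [[3, 1]]
print(model.predict(applicant))        # expect [0] - refused
print(model.predict_proba(applicant))  # a probability, not an explanation
```

The output is a class label and a probability; nothing in it tells the applicant, or the customer services agent, why the refusal happened or whether the historical pattern still holds.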
The rapid adoption of socio-technical systems
When leaders and the Boards of commercial organisations decided to invest in solutions that are powered by AI, algorithmic or autonomous systems to process personal data, on the premise that such capabilities would allow them to gain competitive advantage or disrupt industries and further reduce costs through automation, were they also informed about their limitations, downside risks, responsibility, legal obligations, and accountability?
The computational power of these technologies enables large amounts of data to be processed rapidly. They identify patterns through statistical averages and probabilities and then provide outputs to support faster decision-making. However, they do not provide insights; it is for the human to understand and evaluate the outputs, though in reality this rarely happens. It is simply assumed that the machine output must be the right answer. There is an over-reliance on inference, which is then applied for targeted and automated engagement with consumers, customers, and users – fundamentally, they are all human beings. We all know data fuels AI, Algorithmic and Autonomous systems, but it should not be indiscriminately fed into the machines without going through the proper process of qualification, and that requires an ethical assessment. Have you asked the critical question, namely: how have all the data been collected, and were they collected with informed consent for the scope, context, nature, and purpose of your intended processing, in accordance with Article 14 of GDPR?
The problem with inference based on statistical averages is that it misconstrues individual needs, so the outcome of the model may not be proportionate to the input - in this case, the Data Subject. We also know that inference is not fact. The likelihood that something may happen is not the same as saying it will happen, and something that is likely to be true is not necessarily true.
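Here is a minimal sketch, with invented numbers, of how a likelihood hardens into a yes/no ‘fact’ at the point of decision:

```python
# Hypothetical illustration: an inference is not a fact.
import random

random.seed(0)
p_default = 0.3            # model inference: 30% likelihood of default
REFUSAL_THRESHOLD = 0.25   # invented policy threshold

# A hard business rule turns the likelihood into a yes/no decision.
decision = "refuse" if p_default > REFUSAL_THRESHOLD else "approve"
print(decision)  # "refuse" - for every applicant scored at 0.3

# Yet for 1,000 individuals who all carry that same average-based score,
# most would in fact have repaid.
would_repay = sum(random.random() >= p_default for _ in range(1000))
print(would_repay)  # roughly 700 - the inference was never a fact
```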
The forms of engagement can vary depending on the application, and examples include: socio-technical systems recommending content based on personal data to individual consumers – thereby influencing their actions and decisions; automated employment decision tools that could exclude strong candidates from being considered; and/or creditworthiness checks that could deny creditworthy individuals the opportunity to secure funds to support their financial needs.
Having invested significantly in such technologies, organisations can find it easy to hide behind them when adverse outcomes occur. There are many instances where the recipients of automated decisions are not afforded their legal rights through provisions under GDPR. The law sets out eight rights for natural persons as Data Subjects - human beings - including the right to erasure (the ‘right to be forgotten’), and it separately provides the right to withdraw consent.
Did the projected return on investment (ROI) from these systems include provisions for increased investment in risk management, development of human capabilities, compliance (for those in regulated industries), governance, oversight and independent audits? If not, were provisions made for regulatory fines and/or potential litigation costs and pay-outs when their stakeholders - including employees, customers, suppliers and/or members of the public - are adversely impacted?
Perhaps not, since the leaders and Boards of established organisations with reputations to protect are typically focused on legal and regulatory compliance. They are also typically risk-averse, so investments in such technologies would likely not be approved unless the operationalisation strategy was to de-risk by design. This would mean incorporating all necessary elements - bias mitigation, privacy, ethics, cybersecurity, risk management, compliance and governance - into the way the business operates, as well as into the fabric of the organisational culture, from Board level down, throughout the transformation journey.
The vendors that market and sell AI, Algorithmic or Autonomous systems that process personal data, and the other entities that promote them, have given little attention to their downside risks and limitations. As a result, the rapid adoption of these systems by organisations over the past few years has increased the risk of adverse impacts for people on the receiving end of machine-generated outcomes. At the same time, it has also increased systemic and structural risk for the organisations deploying them.
There may be an absence of adequate controls, insufficient due diligence and risk management practices, and inflexible governance structures and procedures, combined with a lack of transparency and accountability. None of this will position the company in a favourable light if the law should need to be enforced. Moreover, it will not protect the organisation against ensuing reputational damage should system errors or unintended consequences in data processing emerge. Technology, just like humans, is not without its weaknesses, and may expose the organisation to unforeseen risks resulting from the unintended consequences of machine outcomes when processing personal data, particularly in high-risk situations.
The fallacy that socio-technical systems are mature enough to replace humans in decision-making is starting to be revealed across the many industries that have adopted AI, Algorithmic or Autonomous systems to process personal data. Whilst the ambition is to reduce operating costs and provide a superior customer experience, in actual fact the costs may be higher than expected, and it is not a given that the customer experience is much improved. Consider, for instance, a customer services agent upholding an algorithmic decision imposed by the bank’s automated CRM when a customer is unhappy with the outcome.
Furthermore, many of us have encountered challenges with unintelligent chatbots when all we really needed was an opportunity to engage with a human being at the nearest branch who could help us with a query and/or actually resolve an issue. Funnily enough, customers like to have human-to-human experiences. They are not so keen on KPIs and efficiency metrics; rather, they need effective answers to their problems.
You can outsource responsibility but not accountability
The mere fact that the outcomes from these non-deterministic socio-technical systems can impact humans in ways that are disadvantageous, discriminatory or harmful places great responsibility on the organisations and, more specifically, on the leaders who chose to deploy them. Are they making every effort to ensure the right decisions are made throughout the lifecycle of these socio-technical systems? Are they being held responsible and accountable for their decisions and, ultimately, for those machine-generated outcomes?
While a handful of organisations have the financial resources to design, develop and deploy AI, Algorithmic and Autonomous systems, most will end up procuring solutions from third-party solution providers; and/or they will leverage such capabilities through outsourced service providers.
Regardless of the approach taken by the organisation and its business leaders to leverage the power of AI, Algorithmic and Autonomous systems processing personal data, these socio-technical systems may not deliver the desired benefits, and accountability for all outcomes - intended or not - rests with those leaders.
It’s time to bust the biggest myth of all time: technology is not the solution for everything. It is, however, an amazing tool. AI, Algorithmic or Autonomous systems today are among the most sophisticated tools we have ever had. But they are narrowly defined by a set of rules or constraints based on mathematical laws - statistical averages - and can only provide probabilistic outcomes. Accuracy measured on poor-quality datasets is not going to provide a valuable outcome, and may even generate unnecessary risks.
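As a hedged illustration of that last point - with invented data, and assuming scikit-learn - the sketch below shows a model that is ‘perfectly accurate’ on an unrepresentative training set and therefore unreliable for everyone the data does not represent:

```python
# Hypothetical illustration: "accuracy" on poor-quality data.
from sklearn.neighbors import KNeighborsClassifier

# Training sample drawn from one narrow segment only (ages 20-29),
# where the recorded outcome happened to be uniform ("approve").
X_train = [[age] for age in range(20, 30)]
y_train = [1] * 10

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.score(X_train, y_train))  # 1.0 - looks "perfectly accurate"

# Applicants from unrepresented segments receive the only answer the
# data contains, whatever their true circumstances may be.
print(clf.predict([[55], [70]]))  # -> [1 1]
```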
Companies should think twice about the data being collected, the data being used, and the purpose for which an automated system may or may not be needed. Just because data can be collected does not mean that it should be. Myopic deployment of these sophisticated computational systems in the interests of efficiency may not adequately solve the problem, and may create residual risks that have not been considered, owing to the unintended consequences intrinsic to complex systems when operationalised. This article sums up brilliantly the hype and limitations of AI, Algorithmic and Autonomous systems that have been deployed into the real world.
Consequently, the same leaders need to consider carefully how and where these socio-technical systems are deployed, and at a minimum ensure that basic human rights are protected, privacy is preserved, human beings are not adversely impacted in ways that are harmful, safety is assured, and data subject rights are afforded under the relevant legal frameworks and Data Protection regulations.
To fulfil the promise of this technology, we need to recognise that it is a powerful tool. It can augment human intelligence, but it still requires human cognition, emotional intelligence, and ethical guidelines in order to attain desirable impacts - for example, helping us clean up the oceans, optimise energy consumption, and so on. With great power comes great responsibility.
Why were these not thought through before?
There could be several reasons why organisations that have deployed socio-technical systems on humans have not considered the associated downside risks beforehand - let alone asked whether the use of AI, algorithmic or autonomous systems to process personal data was actually necessary.
Lack of awareness at the Board and C-suite level has featured prominently in the various surveys we have seen, as outlined in a previous article. The Board and the CEO need to take leadership roles in ensuring the organisation is equipped to implement the required ethical training - a body of knowledge that evolves over time, developed through lived and shared experiences. Everyone in the organisation becomes responsible and accountable over time, both within their individual roles and through collective working practices. It is an evolutionary dynamic critical for the organisation’s survival and future longevity. It is not a nice-to-have. It is a critical business imperative.
Where those deploying these socio-technical systems were aware of such downside risks, perhaps the organisation and business leadership did not give the right focus to prioritising their mitigation. Mitigation requires resources to be allocated in a timely manner, and prevention is always better than cure.
However, selling the ‘need’ upstream can be challenging for senior operational managers when additional investments are required and none were provisioned. Often in such cases, behaviours are driven by rewards and incentives set at Board level and operationalised top-down through the C-suite across the organisation. Moreover, KPIs that reference only quantitative alignment with the strategy will miss the mark; if they do not also entertain qualitative attainment, the end game will be less likely to generate beneficial outcomes for humanity.
Any form of bias that may exist throughout the organisation will prevail. Take a look at this for the different types of commonly known human biases. Some researchers have even asked, “What do we do about the biases in AI?”
Furthermore, human heuristics may find their way into AI, Algorithmic or Autonomous systems processing personal data and are then amplified through the automated decisions that humans are subjected to - especially when these socio-technical systems were designed, developed, and deployed in silos. Siloed thinking may induce errors and increase the likelihood of biased or adverse outcomes. A natural ‘data subject’ may suffer life-changing harm (whether physical, mental, or moral); a malfunctioning machine merely needs to be switched off.
Whilst numerous initiatives prescribing tools for delivering responsible AI have been launched across industries, we believe these will not be enough to transform an organisation into one that can innovate responsibly and be accountable for the outcomes.
Established organisations will have set approaches to innovating, shaped by the mindset of the organisation and business leaders who build like-minded teams. If these organisations have not demonstrated responsible innovation, it is highly unlikely they will do so if left to their own devices. This would be an opportunity for the Board to mandate differently. There will always be conflicting priorities that place revenues and profits ahead of the likelihood of their socio-technical systems adversely impacting segments of society that interact with their digital services. Often these become someone else’s problem after the completion of their digital transformation programme has been celebrated.
An organisation’s ability to innovate responsibly can be linked to the maturity of its digital ethics. For us, any digital transformation needs to incorporate elements of social as well as environmental impact, with the “S” in ESG fast becoming a priority on every Board’s agenda. Considering that data, including personal data, fuels digital organisations, ethics, bias remediation, privacy, cybersecurity, and trustworthiness are key ingredients that need to be baked into every digital transformation programme. EthicsGrade provides a good indication of an organisation’s maturity from a digital ethics perspective. Look up your favourite organisation to see how it is progressing.
Diversity of thought
Socio-technical systems are not like traditional deterministic systems; there is nothing deterministic about probability. Probability means this is the likely outcome, but it is not necessarily the only outcome. There are other possibilities, and from a strategic perspective this requires human capabilities to understand, interpret, discuss, and debate what the ‘signals’ mean. The machines do not provide insights. Human beings do, through a diligent process of critical thinking, discernment, observation, and open debate to arrive at the ground ‘truth’ relevant to the context, the situation, the group of people, and so on. And of course, the decision may need to change as circumstances change.
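A minimal sketch, with invented numbers, of why a probability is a signal for humans to interpret rather than a verdict:

```python
# Hypothetical illustration: the likely outcome is not the only outcome.
import random

random.seed(42)
p_likely = 0.7   # model's probability for its predicted outcome
trials = 10_000  # simulate the same prediction many times

other = sum(random.random() >= p_likely for _ in range(trials))
print(f"'unlikely' outcome occurred {other / trials:.1%} of the time")
# -> roughly 30% of the time, reality diverges from the prediction
```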
Organisations also require a different perspective to examine their downside risks, consider their potential impacts, deploy controls and mitigants, and publicly document their residual risks and limitations for consumers of the related digital services. This different perspective is best summed up in this post announcing a paper published by ForHumanity on Diverse Inputs and Multi-Stakeholder Feedback.
Understanding downside risks related to any AI, algorithmic or autonomous system processing personal data requires a multi-disciplinary set of skills, experiences and perspectives that are best realised with a diversity of thought and lived experiences in the groups of people providing them.
If we consider the complexity of the demographics in the dynamic, real-world environments in which these socio-technical systems are deployed, it can be overwhelming for any group of people within those organisations to try to foresee all possible adverse outcomes that these non-deterministic socio-technical systems can generate.
Nevertheless, organisations and business leaders need to understand what those downside risks are, deploy risk controls and mitigants to manage them, and be accountable for the adverse outcomes that could manifest from the residual risks that they accept.
Boards can certainly contribute by mandating the right governance structures, as outlined in ForHumanity’s IAAIS.
A human-centric organisation culture
Having the right organisational culture is critical for responsible innovation. An organisation with a human-centric culture will put the well-being of humans first and ensure the outcomes of socio-technical systems deployed do not adversely impact them.
All decision-makers must be well aware of all adverse outcomes that can manifest from unmitigated downside risks related to these socio-technical systems.
Robust controls backed by an effective governance and oversight structure with accountability and transparency must be operationalised.
Ethics must be embedded, codified, and serve as a beacon as well as a reference point through the lifecycle of AI, algorithmic and autonomous systems, whether they are designed, developed, deployed or procured from third-party providers.
The privacy of data subjects must be respected through compliance with respective data protection regulations. Compliance here is not just a box-ticking or regulatory reporting exercise, but a demonstrable operationalisation of compliance that can be proven through an independent audit, such as ForHumanity’s IAAIS.
Biases must be recognised and mitigated. All residual risks must be documented and disclosed to the user or consumer of digital services that leverage automated decision-making by AI, algorithmic or autonomous systems processing personal data. Fairness in outcomes from automated decision-making must be ensured, and the appropriate rights must be seamlessly afforded to users or consumers who are subjected to inferred automated decisions, as provisioned by the relevant legal frameworks.
Organisation and business leaders must recognise the value of diverse inputs and multi-stakeholder feedback: sourcing inputs externally enables diversity of thought and experience, while obtaining feedback from a variety of stakeholders throughout the lifecycle of these socio-technical systems allows continuous engagement, drives continuous improvement and facilitates trustworthiness. Diversity of thought from a multi-disciplinary community also enables preconceptions and internal biases to be challenged, examined, and reconsidered.
Business objectives and organisation values must incorporate human-centric values. The values of your third-party providers within your digital ecosystem need to be aligned with your organisation's values since the digital world is interconnected and any adverse impact is amplified and immediate.
Organisation and business leaders who have decided on deploying AI, algorithmic or autonomous systems that impact humans in the real world must embrace the concept of disclosure.
All outcomes from socio-technical systems must first protect the fundamental rights of the human being. A human-centric organisation culture is a mandatory requirement for any organisation embarking on the design, development, deployment and/or procurement of AI, algorithmic or autonomous systems that process personal data and infer automated decisions that impact humans.
Want to differentiate?
As digital services are provided by socio-technical systems, the Board and organisation leaders must understand the implications of getting the ‘socio’ part of the equation wrong. The power of the social dimension is underrated among the leaders of established businesses, as reflected by their lack of focus on, and prioritisation of, managing the downside risks from these systems.
Trust is a fundamental ingredient, and the trustworthiness of the organisation or business deploying these socio-technical systems is the key to unlocking and enabling social engagement. This article sums it up: trust is a must for doing good business, and it is ethical values that generate value over the long term. That is in the best interests of your shareholders, employees, customers, suppliers, the communities you serve, society and humanity as a whole.
As more and more organisation and business leaders blindly adopt and deploy socio-technical systems powered by AI, algorithmic or autonomous systems - without considering their downside risks and without operationalising innovation responsibly - the glossy promise of gaining competitive advantage, disrupting industries and further reducing costs can easily be offset by regulatory fines, civil litigation, reputational damage and, ultimately, loss of trust.
Considering that the digital world is fast evolving and the rate of change is accelerating, it is necessary to act now. The tech giants have already experienced the burden of heavy fines imposed by the EU and there will be more instances as legislators and regulators catch up with the technological advancements.
We look forward to hearing your thoughts. If you are interested in advancing your organisation for long-term success, please get in touch with Maria and Chris to explore how we can help you innovate responsibly.
Chris Leong is a Fellow at ForHumanity and the Director of Leong Solutions Limited, a UK-based management consultancy and licensee of ForHumanity’s Independent Audit of AI Systems.
Maria Santacaterina is a Fellow at ForHumanity, CEO and Founder of SANTACATERINA, a UK-based Global Strategic Leadership & Board Executive Advisory, helping organisations build a sustainable digital future.