Responsible Innovation: Why Inclusion of Civil Society is Important for Trust
Chris Leong, FHCA
Director | Advisory & Delivery | Change & Transformation | GRC & Digital Ethics | All views my own
Socio-Technical Systems (STS), powered by non-deterministic algorithmic technologies processing our personal data, are so pervasive that we interact with them without realising that we are engaging with autonomous agents that influence our behaviours.
Virtual personal assistants such as Siri and Alexa, recommendation engines in apps such as Amazon, Netflix and Spotify, social media apps, price comparison apps and websites, search engines and banking platforms are all examples of STS that provide the digital services of our everyday lives, offering us a limited set of choices to help us achieve our goals.
In the above examples, STS present you with options based on inferences drawn from your digital profile, while collecting additional personal data during your engagement with the platform. The more you use them, the more personal data is collected. Users unwittingly place their trust in the providers of these STS to keep their data safe, secure and private.
Then we have STS that you are forced to interact with, which use your personal data against you: to profile you, autonomously infer and make decisions about you that affect you directly and instantly, without any explanation of how those decisions were made. These types of STS have been classified as “high-risk” (by the EU, with the US seemingly set to follow suit) and can be found in areas such as employment, education, credit scoring and access to essential public services.
These non-deterministic algorithmic technologies are optimised based on the datasets used to train their models. Recipients of automated decision-making and profiling whose characteristics or attributes are outliers to those datasets are very likely to be disadvantaged, discriminated against, or harmed, without any explanation or opportunity to engage with the deploying organisation for feedback or redress.
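To make this concrete, here is a minimal, purely illustrative sketch (not drawn from any deployed system): a toy credit model is trained only on historical records from a majority population, then applied to an applicant whose profile is an outlier to that training data. The variable names, numbers and relationships are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical historical records: applicants with long credit histories,
# where history length happened to correlate with repayment.
history_years = rng.normal(10, 3, size=2000).clip(min=0)
p_repaid = np.clip(0.05 + 0.08 * history_years, 0, 1)
repaid = rng.binomial(1, p_repaid)

model = LogisticRegression().fit(history_years.reshape(-1, 1), repaid)

# An applicant new to the country has only 2 years of history, an outlier
# relative to the training data. The model extrapolates a pattern that may
# not hold for them and scores them poorly, with no explanation offered.
newcomer = np.array([[2.0]])
print("approval probability:", round(model.predict_proba(newcomer)[0, 1], 2))
```

The point of the sketch is not the specific numbers but the mechanism: the model can only reproduce relationships present in its training data, and recipients who sit outside that data are scored by extrapolation.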
Unless, of course, the organisations and Corporations that deployed these high-risk STS have innovated responsibly and implemented operational safeguards that preserve fundamental human rights, human dignity, human integrity, human agency and autonomy, while mitigating the potential risks, such as discrimination, bias, social stereotyping, erosion or denial of privacy rights, cybersecurity threats, safety concerns, environmental damage and adverse effects on mental health, and while ensuring compliance with the relevant legal and regulatory frameworks.
Each one of us is unique and it is entirely possible we will find ourselves in situations where we do not fit the curve, and so we will be excluded and forever remain among the ‘outliers.’
This means we may be unable to access vital services such as health care, be denied a bank loan, lose out on employment opportunities, forgo further education or be refused support from social services if our circumstances change, for no reason other than that our ‘digital profile’ does not fit the algorithm’s set of rules, determined by its ‘unknown’ creator.
We all need to realise this now and ensure that we, as part of Civil Society, are included in the considerations of the other stakeholders in the value chain, especially when the interconnectedness of the digital ecosystem ultimately impacts humanity and the environment.
A Key Stakeholder group in the value chain
We previously wrote that three stakeholder groups (Civil Society, Corporations and Regulators) can together influence change towards responsible use and accountability when these powerful non-deterministic algorithmic technologies are deployed in STS.
Civil Society ultimately either benefits from or is harmed by the outcomes of the inferences generated by non-deterministic algorithmic technologies within the STS deployed by Corporations.
It would make sense for Corporations with ethical ambitions to innovate responsibly and include Civil Society considerations within their Purpose, enacted by their innovation culture, throughout the lifecycle of their STS.
Our previous article discussed how Boards and CEOs of Corporations can embrace Responsible Innovation and lead their organisation on their journey towards trustworthiness.
Civil Society is also the stakeholder group that Regulators should be protecting within their jurisdictions, through laws and regulations. It would therefore be reasonable to expect Civil Society to be at the front and centre of regulatory initiatives and interventions designed to mitigate social and societal harms from these emerging, yet powerful and non-deterministic algorithmic technologies.
Policymakers in Governments should examine the impact of automation on Civil Society. Where Corporations have deployed non-deterministic algorithmic technologies with automated decision-making and profiling capabilities in their hiring processes yet struggle to fill open positions, we also find a growing number of citizens seeking work but unable to secure it. Over-reliance on, and dependency upon, these STS is likely to be a key contributor to this problem.
Similarly, countries suffering from over-automation have experienced falling productivity, a key indicator of future economic growth. This serves neither governments nor Civil Society: it means fewer employment opportunities, and reduced consumption in turn depresses production and income.
This discussion paper by the Ada Lovelace Institute proposes three strategies for EU policymakers to expand Civil Society participation:
We suggest that non-experts should also be added to the aforementioned fora, to broaden perspectives and deepen the understanding of risks that are foreseeable, or unlikely but plausible.
The rapid progress of these non-deterministic algorithmic technologies and the harms arising from their deployment in STS suggest that Civil Society is not as integrated with the other stakeholder groups as it should be within the value chain. Consequently, we see the erosion of trust and the increasing concern within Civil Society about the potential adverse systemic societal impacts.
The Nature and Impact of Non-Deterministic Algorithms
Algorithms that are non-deterministic in nature and optimised to infer outcomes statistically, based on historical data that may not be representative of their recipients, often leave under-represented, underserved, vulnerable and marginalised members of Civil Society excluded.
Further, those individuals may be disadvantaged, discriminated against and/or harmed perpetually unless the algorithm is changed or they have access to a means of redress. This typically occurs when autonomous agents in STS deliver automated decisions and profiling to recipients who have no agency and are not afforded any opportunity for redress by the deploying organisation.
In such instances, decision-makers in the deploying organisations are unlikely to have understood or anticipated the unintended consequences arising from the technical limitations and downside risks associated with STS. They are likely to have assumed, incorrectly, that the outcomes of non-deterministic algorithmic inferences are consistent, predictable and accurate. Consequently, no operational safeguards were provisioned, and no alternative procedures were implemented to let recipients engage with employees, as a matter of urgency, to seek redress when unintended consequences occur.
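As a purely hypothetical illustration of what one such operational safeguard could look like, the sketch below routes an automated decision to a human reviewer whenever the model's own confidence is low or the recipient contests the outcome. The names and the 0.75 threshold are assumptions, not a prescription from this article.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    recipient_id: str
    outcome: str          # e.g. "approve" / "decline"
    confidence: float     # the model's own score, between 0.0 and 1.0
    contested: bool = False

def requires_human_review(decision: Decision, threshold: float = 0.75) -> bool:
    """Escalate when the model is uncertain or the recipient seeks redress."""
    return decision.confidence < threshold or decision.contested

# A contested decline is escalated to an employee regardless of model confidence.
d = Decision("applicant-001", "decline", confidence=0.92, contested=True)
print(requires_human_review(d))  # True: routed to a human for redress
```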
Boards and CEOs of Corporations also need to understand that the technology deployed may not be fit for purpose in circumstances where the scope, nature, context and purpose may have changed or are no longer aligned. Focusing on technology alone is not sufficient to enable their organisation to comply with the raft of regulations applicable to the outcomes produced by STS, for which the organisation and responsible parties are fully accountable.
Human risks must be incorporated in the Corporation’s risk registers when STS are deployed for automated decision-making and profiling.
Perhaps we should consider the precedent set in the financial services industry regarding forecasts of the future performance of investment funds, which are typically derived by applying certain assumptions to the statistical processing of historical data, and for which regulated entities are required by law to disclose disclaimers. For example, Chapter 4 of the FCA Handbook/Article 44(6) of the MiFID Org Regulation states:
“Where the information contains information on future performance, investment firms shall ensure that the following conditions are satisfied:
(a) the information is not based on or does not refer to simulated past performance;
(b) the information is based on reasonable assumptions supported by objective data;
(c) where the information is based on gross performance, the effect of commissions, fees or other charges is disclosed;
(d) the information is based on performance scenarios in different market conditions (both negative and positive scenarios), and reflects the nature and risks of the specific types of instruments included in the analysis;
(e) the information contains a prominent warning that such forecasts are not a reliable indicator of future performance.”
Through the communication of these disclaimers, recipients of investment forecasts are informed of, and hence made aware of, the risks and the nature of the predicted future outcomes.
Perhaps Regulators could take a similar approach, specifically with reference to (e), and mandate that Corporations deploying non-deterministic algorithmic technologies in their STS disclose the risks relating to the automated decision-making and profiling outcomes that recipients are subjected to in high-risk use cases, such as those listed by the ICO and in the EU AI Act.
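As an illustration only, a deploying organisation might attach such a prominent warning to every automated outcome it communicates. The wording and function below are hypothetical and are not drawn from any regulation or from this article.

```python
# Hypothetical, MiFID-style prominent warning attached to automated outcomes.
DISCLAIMER = (
    "This outcome was produced by a statistical, non-deterministic system. "
    "It is an inference, not a fact, and may be inaccurate for your "
    "circumstances. You have the right to request a human review."
)

def communicate_decision(outcome: str, recipient: str) -> str:
    """Wrap the automated outcome with the prominent risk warning."""
    return f"Dear {recipient},\n\nDecision: {outcome}\n\n{DISCLAIMER}"

print(communicate_decision("Application declined", "A. Citizen"))
```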
Examples of harms
The nature of non-deterministic algorithmic autonomous agents embedded within STS is such that they are targeted in their actions, and optimised to achieve intended outcomes. Any inherent biases in their training data are perpetuated in their inferences, profiling and decision-making at scale, as soon as they are deployed.
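One simple way a deploying organisation (or an auditor) might detect such perpetuated bias is to compare selection rates across groups in the system's outcomes. The sketch below uses the common four-fifths rule of thumb as its threshold; it is illustrative only, with made-up data, and is not presented here as a legal standard.

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group label, selected?) pairs from deployed decisions."""
    totals, selected = Counter(), Counter()
    for group, chosen in outcomes:
        totals[group] += 1
        selected[group] += int(chosen)
    return {group: selected[group] / totals[group] for group in totals}

def adverse_impact(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag when any group's rate falls below threshold x the highest rate."""
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())

decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 30 + [("B", False)] * 70)
rates = selection_rates(decisions)
print(rates, "adverse impact:", adverse_impact(rates))  # B: 0.30 vs A: 0.60 -> True
```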
Harms arising from the use of non-deterministic algorithmic technologies continue to rise in the absence of effective new regulations and enforcement of existing ones. We provide a small list of examples below, but you can also cross-reference repositories and websites such as the AI Incidents Database, AlgorithmWatch, the AIAAIC Repository and the GDPR Enforcement Tracker, which record adverse outcomes and regulatory fines resulting from algorithmic systems in use.
What can you do when adversely impacted by STS?
Any STS deployed for automated decision-making and profiling will be processing your personal data, which is governed by the General Data Protection Regulation (GDPR) if you are in the UK or the EU.
Under the GDPR (UK and EU), individuals in these jurisdictions are afforded data protection rights. If you are subjected to automated decision-making and profiling by an STS:
Articles 13 & 14 give you the right to be informed of the existence of automated decision-making, including profiling, and to be given meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for you.
Article 15 gives you the right of access to the personal data being processed about you, together with the same information about any automated decision-making, including profiling.
Article 21 gives you the right to object to processing of your personal data, specifically including profiling, in certain circumstances.
Article 22 gives you the right not to be subject to a solely automated decision producing legal or similarly significant effects. There are some exceptions, and in those cases organisations are obliged to adopt suitable measures to safeguard you, including the right to obtain human intervention, to express your point of view and to contest the decision.
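Below is a minimal, hypothetical sketch of the record a deploying organisation could keep for each automated decision so that this information, and the Article 21 and 22 safeguards, can actually be provided on request. All field names are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionRecord:
    recipient_id: str
    decision: str
    logic_summary: str          # meaningful information about the logic involved
    significance: str           # significance and envisaged consequences
    data_categories: list[str]  # categories of personal data used
    human_review_requested: bool = False   # Article 22 safeguard
    objection_raised: bool = False         # Article 21 objection to profiling

    def request_human_intervention(self) -> None:
        """Record that the recipient has exercised the Article 22 safeguard."""
        self.human_review_requested = True

record = AutomatedDecisionRecord(
    recipient_id="subject-042",
    decision="loan declined",
    logic_summary="credit-risk score fell below the approval threshold",
    significance="the applicant cannot access this credit product",
    data_categories=["income", "credit history"],
)
record.request_human_intervention()
print(record.human_review_requested)  # True
```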
In the UK, the Equality Act 2010 also applies, and the ICO has provided the following guidance for organisations and Corporations deploying STS:
“If you are using an AI system in your decision-making process, you need to ensure, and be able to show, that this does not result in discrimination that:
You can exercise your rights by contacting the deploying organisations or Corporations in the first instance.
If you are not satisfied with their responses, you can file a complaint with the Data Protection regulator within your jurisdiction, or the ICO if you are in the UK.
The Financial Conduct Authority (FCA) in the UK has outlined the regulatory responsibilities of regulated financial services institutions for ensuring that their customers are treated fairly:
“Our principles (PRIN) include explicit and implicit guidance on the fair treatment of customers. Principle 6 says: ‘A firm must pay due regard to the interests of its customers and treat them fairly’, but other principles also apply to this area of business behaviour.
These principles apply even for firms that do not have direct contact with retail customers. Risks and poor conduct can be carried from wholesale to retail markets.”
We wrote in a previous article about the potential impact of STS on a deploying organisation's ability to treat its customers fairly if it focuses solely on the technology. Instead, the deploying organisation should also focus on its internal capabilities, to ensure that the technical limitations and downside risks associated with non-deterministic algorithmic technologies are mitigated and that the recipients of automated decision-making and profiling from these STS experience beneficial outcomes.
The incoming suite of EU risk-based regulations governing the use of ‘AI’, including the EU Digital Services Act, EU Digital Markets Act, EU AI Act, EU AI Liability Directive and EU Product Liability Directive, will raise the bar for impacted organisations and Corporations deploying STS with embedded non-deterministic algorithmic technologies, offering further protections for EU citizens.
The European Parliament has also recently voted to adopt its position on the EU AI Act, which includes provisions addressing the use of ‘generative AI’ tools.
How should we trust our digital interactions?
In our Responsible Innovation Framework, we start with Purpose, in which the human consumer is embedded.
Ensuring that Civil Society is at the front and centre of the Corporation’s Purpose can be the starting point from which its Board and CEO can lead the organisation on its Responsible Innovation journey towards trustworthiness.
Civil Society inclusion in the lifecycle of STS enables the diversity of thought and diversity of lived experiences to be encapsulated through diverse inputs and multistakeholder feedback.
As consumers of digital services from STS, we see many curated public statements from organisations and Corporations about their commitment towards “safe,” “trusted” and “ethical” ‘AI’, and yet we continue to see outcomes from many that are anything but.
The release of ‘generative AI’ capabilities by the Big Tech firms to the general public and Corporations has in itself created a number of challenges for all key stakeholders in the value chain. While some Corporations have banned the use of these tools, others appear to embrace the capabilities they offer.
We have seen a long list of reports from researchers and citizens about the adverse behaviours being experienced, owing to embedded features and the datasets used to train ‘generative AI’ models.
Some regulators have issued warnings about how these tools should not be used, while an increasing number of lawsuits are reaching the courts following misuse, whether intentional or not.
There are 4 significant risks associated with the use of ‘generative AI’ capabilities:
Those who understand the nature of generative large language models will be aware of their technical limitations, unreliability, inaccuracy and risks, and will be able to decide if they are fit for purpose.
It is crucial for Civil Society to be aware and informed of the same.
When we add the challenges and risks from these emerging ‘generative AI’ technologies to those from the other non-deterministic algorithmic technologies, we should ask ourselves,
“Why should we trust the organisations that deploy STS and how can we trust their outputs?”
We propose the following four stages of communication to facilitate the building of trust between organisations deploying STS and the recipients of automated decision-making and profiling:
1. Say: Corporations publicise their commitment and strategy towards enacting Responsible Innovation, including public disclosures of their Code of Ethics and Code of Data Ethics.
2. Do: Corporations demonstrate that they have operationalised their commitment to Responsible Innovation. Consumers of their digital services, including the recipients of automated decision-making and profiling from their STS, are treated fairly, equally and equitably. This means transparency about the company's obligations and the consumer's corresponding rights, seamlessly afforded during and after engagement with the digital service or product. The Corporation's core values are reflected in the outcomes from their STS.
3. Verify: Corporations deploying STS with automated decision-making and profiling voluntarily submit their STS for an Independent (Compliance) Audit against the applicable regulatory framework(s), leveraging Independent Audit criteria in which the rights of Civil Society and their respective best interests are embedded.
4. Trust: The outcome(s) of the Independent (Compliance) Audit(s) are transparent and verifiable, enabling consumers to trust the integrity of the independently audited STS.
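As a rough, conceptual illustration of this progression, the sketch below models the four stages as a simple state machine in which Trust can only be reached once an independent audit has been passed. It is an aid to reading the stages, not an implementation of any ForHumanity scheme.

```python
from enum import Enum, auto

class TrustStage(Enum):
    SAY = auto()      # public commitment to Responsible Innovation
    DO = auto()       # commitment operationalised across the STS lifecycle
    VERIFY = auto()   # independent (compliance) audit against the criteria
    TRUST = auto()    # audit outcome transparent and verifiable

def next_stage(stage: TrustStage, audit_passed: bool = False) -> TrustStage:
    order = [TrustStage.SAY, TrustStage.DO, TrustStage.VERIFY, TrustStage.TRUST]
    if stage is TrustStage.VERIFY and not audit_passed:
        return TrustStage.VERIFY          # cannot claim trust without verification
    return order[min(order.index(stage) + 1, len(order) - 1)]

stage = TrustStage.SAY
for passed in (False, False, True):        # the audit passes on the final check
    stage = next_stage(stage, audit_passed=passed)
print(stage)  # TrustStage.TRUST
```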
Since trust is the currency of engagement for Civil Society in the digital space, it is critical for Corporations deploying STS for automated decision-making and profiling to be in a position to have their trustworthiness verified through Independent External Audits that are not only mapped to the applicable legal frameworks and regulations, but are also designed around the core interests and fundamental rights of Civil Society.
Verifiable trust
ForHumanity has compiled Independent Audit Certification Schemes aligned to the major legal and regulatory frameworks applicable to the use of personal data processed by AI, Algorithmic and Autonomous Systems, such as the GDPR (UK & EU), the EU AI Act, the EU Digital Services Act, the CCPA, New York City Local Law No. 144 for AEDTs and the UK Children’s Code. It has done so through crowdsourcing, which allows the interests of Civil Society to be embedded in each audit criterion.
It recently celebrated its third anniversary, and its dedicated community now counts over 1,400 citizens from a multi-disciplinary mix of professions and diverse backgrounds, representing 86 countries. Fellows and contributors bring cultural richness and depth, diversity of thought and diversity of lived experience to enrich the many discussions and debates. This is part of the meticulous process ForHumanity has established to create and enhance audit criteria that mitigate the downside risks for everyone.
ForHumanity’s Independent Audit Certification Schemes also incorporate controls for risk management, governance, oversight and accountability, while addressing the many challenges in privacy, ethics, bias, cybersecurity and trust.
When a ForHumanity Certified Auditor conducts an Independent Audit using any of the ForHumanity Certification Schemes, the audit is carried out for all citizens and for Civil Society as a whole, rather than solely for Corporations. The independence of both the Auditors and Audit Schemes is a unique characteristic, which amplifies the value of this verification for all stakeholders across the value chain.
Corporations choosing to submit their STS for a ForHumanity Independent Audit will therefore seek to earn verifiable trust from Civil Society.
It is transformative on both sides
On the eve of new regulations and publicised interventions by regulators imposing greater transparency obligations and accountability, it is not surprising to see fewer established, large, complex organisations disclosing or publicly promoting the use of ‘AI’ in their operations and B2C platforms.
There is no denying that the power of these non-deterministic algorithmic technologies is transformative. But, with ‘great power comes great responsibility and accountability.’
Boards and CEOs of Corporations need to think differently and understand the true value of trust in the digital world.
Deploying STS with automated decision-making and profiling solely to increase shareholder value, without considering the unintended consequences and harms from the embedded non-deterministic algorithmic technologies, will not be sustainable.
It is through effective and meaningful Civil Society inclusion across the value chain that the fundamental rights of citizens can be protected and preserved by regulation.
Civil Society needs to figure in a Corporation’s Purpose and core values when it decides to deploy STS with automated decision-making and profiling. Going forward, trust can only be earned through Responsible Innovation, since all outcomes matter. A different mindset and critical thinking are required.
Consumers have an increasing array of choices. Where reputation-based trust has taken years, even decades, to build through repeated, exceptional customer experiences, unsatisfactory digital interactions can instantly erode that trust and undermine customer engagement.
The inclusion of Civil Society in Board level decisions is crucial for the successful onboarding of ‘AI’. It’s a transformative choice for Boards and CEOs of Corporations to make.
Chris Leong is a Fellow, Certified Auditor (FHCA) and a Member of the Ethics Committee at ForHumanity, and the Director of Leong Solutions Limited, a UK-based management consultancy and licensee of the ForHumanity Independent Audit of AI Systems, helping you succeed in your digital business transformation through Responsible Innovation and Differentiate Through Trustworthiness.
Maria Santacaterina is a Fellow, Certified Auditor (FHCA) and a Member of the Ethics Committee at ForHumanity, and the CEO and Founder of SANTACATERINA, a UK-based Global Strategic Leadership & Board Executive Advisory, helping you revitalise your Core Business Strategy, Create Enduring Value and Build a Sustainable Digital Future.