Responsible Innovation: How ‘good’ are your Checks and Balances? – Part 2

In Part 1, we looked at some of the challenges and risks Boards are facing in the current turbulence. We continue with strategies to help organisations get back on track.

Start with Purpose, and manage all the risks?

Every established business and organisation publishes its vision, purpose and/or mission on its website. The strategies adopted by business leaders to achieve their mission and purpose are likely to change over time, in response to the world around them.

In an increasingly digitalised world, every organisation has adopted new approaches to its strategic objectives, involving digital transformations built on the promise of what these technologies can help achieve.

The pace of technological progress has disrupted many of the societal norms that had hitherto established the trust structures underpinning how we live our lives. This has caused societies to adapt to a digital world shaped largely, and often crudely, by those who created these algorithmic technologies.

The balance of power and control has very quickly shifted from governments to those who possess and control these digital platforms, technical capabilities and data, including through the indiscriminate collection of citizens’ personal data.

This report from Citi GPS proposes the need for a “holistic digital policy that both: (1) realizes the full economic potential of digital, and (2) ensures the equitable distribution of benefits. If properly designed, digital policy would complement monetary and fiscal policy, and provide a third tool for governments to help manage economic growth. It would give stakeholders — from big tech companies down to individuals — much needed longer-term visibility to help them pursue their objectives.”

This would be a far more democratic process, implying that fairness is the primary objective being pursued. Regrettably, the prevailing asymmetry of information demonstrates we are a far cry from that today: only a ‘privileged few’ are privy to the means of sharing knowledge and information in the appropriate manner.

We continue to hear digital transformation leaders refer to “the need to be data-driven,” but do Board members and CEOs understand the material and non-material consequences of how automated decision-making and profiling are currently impacting their employees, customers, suppliers and communities through their use of Socio-Technical Systems (STS)? Are they paying attention to the quality and provenance of the data?

An organisation’s purpose is usually aligned with its core values. Given the limitations and downside risks of non-deterministic algorithmic technologies processing personal data, effective risk management is necessary to ensure the organisation’s core values remain aligned with its purpose throughout its ongoing digital transformation journey.

Non-deterministic algorithmic technologies are just tools and enablers. Human beings decide how these tools and enablers are used. Furthermore, due to the inherent technical limitations and downside risks of these tools, their continued performance and the social, societal and environmental outcomes cannot be guaranteed.
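
To make ‘non-deterministic’ concrete, here is a toy sketch of sampling-based generation, the mechanism behind most generative systems. It is ours and purely illustrative; no real model or vendor API is involved, and all names and numbers are invented:

```python
import math
import random

# Toy illustration of non-deterministic generation: a model assigns
# scores (logits) to candidate outputs and the output is *sampled* from
# the resulting probabilities, so identical inputs can legitimately
# yield different outputs on different runs.

CANDIDATES = ["approve", "refer", "decline"]
LOGITS = [2.0, 1.2, 0.6]  # pretend model scores for one fixed input


def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def generate():
    # Sample one candidate according to the model's probabilities.
    return random.choices(CANDIDATES, weights=softmax(LOGITS), k=1)[0]


# Five runs on the *same* input: the output is not guaranteed to repeat.
print([generate() for _ in range(5)])
```

Under sampling, variability is a design property rather than a defect, which is precisely why operational safeguards, not hopes of deterministic behaviour, are required.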

This paper outlines the risks and harms of Large Language Models (LLMs), of which ChatGPT is one. Gary Marcus reminded us in his recent blog post:

“I said the problem wasn’t math per se, but that large language models didn’t represent robust world models of how events fold over time; I stand by this 100% even today.”

Yann LeCun has expressed similar views in this tweet.

More often than not, organisations deploying these tools have not fully implemented operational safeguards to address technical limitations and mitigate foreseeable downside risks. Consequently, often unknowingly, they allow adverse social, societal and environmental outcomes to materialise that impact citizens directly and instantly.

The Board of Directors and the CEO have a moral obligation to ensure that their organisation’s core values are reflected in the digital services delivered by their STS.

Regulatory obligations

In our previous article, we listed some of the new regulations that will affect organisations using non-deterministic algorithmic technologies to process personal data.

Few organisations currently recognise that existing regulations, such as the GDPR, already apply not just from a data protection perspective, but also to automated decision-making and profiling. The ICO in the UK recently reminded us of what this means in this post:

“Automated individual decision-making refers to decisions made without any human involvement, for example:

  • an online decision after you have applied for credit; or
  • a recruitment aptitude test using pre-programmed algorithms and criteria.

Profiling means your personal data is used to analyse or predict such things as:

  • your performance at work;
  • your economic situation; or
  • your health, personal preferences and interests.”

Every UK and EU citizen is afforded the following rights:

“You have the right:

  • not to be subject to a decision that is based solely on automated processing if the decision affects your legal rights or other equally important matters (e.g. automatic refusal of an online credit application, and recruiting practices without human intervention)
  • to understand the reasons behind decisions made about you by automated processing and the possible consequences of the decisions, and
  • to object to profiling in certain situations, including for direct marketing.”

Even if you have consented to be subjected to automated decision-making and profiling, the organisation “should offer simple ways for you to:

  • express your view on the decision
  • get an explanation of the decision
  • request human intervention in the decision-making process, and
  • challenge a decision.

It must also tell you about the circumstances in which you can object to profiling.

If you have asked an organisation not to make an automated decision, it should tell you in writing whether or not it agrees with you and give reasons.”
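
To see how these safeguards could be honoured operationally, consider a minimal, hypothetical sketch in Python. Every class, field and function name below is our own illustrative assumption, not a reference implementation and certainly not legal advice; the point is simply that each right maps naturally onto a hook in the decision-making workflow:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of GDPR Article 22-style safeguards wrapped around
# an automated decision record. Names and logic are illustrative only.

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                                      # e.g. "declined"
    reasons: List[str] = field(default_factory=list)  # plain-language reason codes
    reviewed_by: Optional[str] = None
    challenge_log: List[str] = field(default_factory=list)

    def explain(self) -> str:
        # Right to an explanation: return the reasons recorded at decision time.
        return f"Decision '{self.outcome}' based on: {'; '.join(self.reasons)}"

    def request_human_review(self, reviewer: str,
                             new_outcome: Optional[str] = None) -> None:
        # Right to human intervention: a person re-examines the case
        # and may overturn the automated outcome.
        self.reviewed_by = reviewer
        if new_outcome is not None:
            self.outcome = new_outcome

    def challenge(self, grounds: str) -> None:
        # Right to contest: log the challenge for a written response.
        self.challenge_log.append(grounds)


decision = AutomatedDecision("applicant-42", "declined",
                             reasons=["income below affordability threshold"])
print(decision.explain())
decision.request_human_review(reviewer="credit-officer-7", new_outcome="approved")
decision.challenge("threshold does not reflect my current income")
```

Even a sketch this small shows the essential design choice: the decision record carries its reasons from the moment it is made, so explanation, human review and challenge remain possible after the fact.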

Generally speaking, this is not adhered to in practice. Rarely do large organisations afford individual citizens any of the rights explained above by the regulator, or implement ‘best practice’, particularly where automated decision-making concerns the ‘employability’ of the individual concerned.

An algorithm assessing ‘human performance’ can hardly provide ‘fair and equal treatment’ or an impartial, ethics-based value judgment, let alone an informed decision about the individual being ‘categorised’ against existing ‘historical’ data classifications, which may be inherently flawed and riddled with human and non-human biases.

Most large global organisations have already implemented e-recruitment services provided by third-party platform providers. Are their Boards of Directors aware that many of these have embedded “pre-programmed algorithms and criteria” that are subject to GDPR’s Article 22 cited above, as well as being captured as one of the High-Risk AI Systems in Annex III of the upcoming EU AI Act, and now also subject to New York City’s Local Law No. 144?

If you, as a candidate, have been subjected to “a recruitment aptitude test using pre-programmed algorithms and criteria,” perhaps you can ask for an explanation of the automated decision and see if it makes sense.
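
What would a meaningful explanation look like? As a hedged illustration, suppose the “pre-programmed algorithms and criteria” reduce to a simple weighted score; real systems are rarely this transparent, which is part of the problem. The features, weights and threshold below are invented:

```python
# Hypothetical, deliberately transparent scoring model, used only to
# illustrate what an interrogable explanation could look like.

WEIGHTS = {"years_experience": 2.0, "test_score": 1.5, "gaps_in_cv": -3.0}
THRESHOLD = 10.0


def score_with_reasons(candidate: dict):
    # Per-feature contributions double as plain reason codes that the
    # candidate could actually check against their own record.
    contributions = {f: WEIGHTS[f] * candidate[f] for f in WEIGHTS}
    total = sum(contributions.values())
    outcome = "progress" if total >= THRESHOLD else "reject"
    return outcome, total, sorted(contributions.items(), key=lambda kv: kv[1])


outcome, total, reasons = score_with_reasons(
    {"years_experience": 4, "test_score": 6, "gaps_in_cv": 1})
print(outcome, total, reasons)
# progress 14.0 [('gaps_in_cv', -3.0), ('years_experience', 8.0), ('test_score', 9.0)]
```

With a transparent model, each feature’s contribution is an answer the candidate can interrogate; with an opaque one, even the deploying organisation may struggle to produce an explanation that ‘makes sense’.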

In the Credit Scoring Approaches Guidelines produced by the World Bank Group, the use of non-deterministic algorithmic technologies processing personal data, recognised as ‘innovative methods,’ was examined:

“The use of innovative methods for credit scoring, however, also raises concerns about data privacy, fairness and potential for discrimination against minorities, interpretability of the models, and potential for unintended consequences because the models developed on historical data may learn and perpetuate historical bias.”
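
A common first check for the discrimination concern raised here is the ‘four-fifths rule’ heuristic: if any group’s selection rate falls below 80% of the most favoured group’s rate, the outcome warrants investigation. The New York City bias audits mentioned above rest on similar impact ratios. A minimal sketch, with invented numbers; a real audit requires proper statistical treatment:

```python
# Minimal disparate-impact check on scoring outcomes using the widely
# cited "four-fifths rule" heuristic. Groups and counts are invented.

outcomes = {
    # group: (number approved, number of applicants)
    "group_a": (80, 100),
    "group_b": (55, 100),
}

rates = {g: approved / total for g, (approved, total) in outcomes.items()}
benchmark = max(rates.values())  # selection rate of the most favoured group

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "INVESTIGATE" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} [{flag}]")
```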

We will address the downside risks and the social and societal impacts relating to these ‘innovative methods’ in a future article.

The report also notes that:

“A summary of key regulations related to credit scoring models includes those of the FSB, Basel Committee on Banking Supervision (BCBS), European Banking Authority (EBA), European Data Protection Board (EDPB), European Securities and Markets Authority (ESMA), and the U.S. Federal Reserve System (the FED).”

These credit-scoring systems, as used to determine the creditworthiness of individuals, are also included in the list of High-Risk AI Systems in Annex III of the upcoming EU AI Act.

Not only are the moral obligations unfulfilled, but Directors will now face legal and regulatory compliance challenges, given the complexities and overlapping nature of multiple legal frameworks.

This begs the question: how can large multinationals transparently and cost-effectively demonstrate that they are actually complying with these regulations?

Meanwhile, we continue to see regulators across jurisdictions enforcing existing regulations relating to the use of non-deterministic algorithmic technologies, such as this fine by the FTC, this finding by Italy's Data Protection Agency and this fine by the ICO.

The incoming EU Digital Services Act will impact organisations providing services to EU citizens through Intermediary services, Hosting services, Online Platforms and Very Large Online Platforms. This will apply to any provider of these services located inside or outside the EU.

Effective governance and the value of trust

Whilst organisations may have different perspectives on what good governance looks like, we believe that the systemic nature, interdisciplinary intricacies, interconnectivity and interdependencies of the constituent parts of a Socio-Technical System require the following eight principles to be adopted, as outlined by the United Nations:

1. Participatory

“refers to the opportunity for active involvement by all sectors of society in the decision-making process regarding all issues of interest.”

2. Consistent with the rule of law

“a principle of governance in which all persons, institutions and entities, public and private, including the State itself, are accountable to laws that are publicly promulgated, equally enforced and independently adjudicated, and which are consistent with international human rights norms and standards.”

3. Transparent

“exists where the process of decision-making by those in power can be scrutinized by concerned members of society.”

4. Responsive

“exists where institutions and processes readily serve all stakeholders in a prompt and appropriate manner so that the interests of all citizens are protected.”

5. Consensus-oriented

“ensures that the existing systems serve the best interests of society.”

6. Equitable and inclusive

“exist where everyone has opportunities to improve or maintain their well-being.”

7. Effective and efficient

“exist where processes and institutions make the best use of resources to produce results that meet the needs of society.”

8. Accountable (Rothstein and Teorell, 2008; UN, 2009)

“is based on the principle that every person or group is responsible for their actions, especially when their acts affect the public interest.”

Since the outcomes from STS impact citizens directly and instantly, there is public interest in the level and effectiveness of governance within organisations deploying them.

Increasingly, the focus is on how organisations are meeting their obligations to uphold human rights. Boards of Directors will need to make this a strategic priority, particularly in view of mandatory regulatory disclosures.

We can all see, and may also experience, the adverse social, societal and environmental impacts from many organisations deploying STS. Organisations that elect to deploy non-deterministic algorithmic technologies with limitations and downside risks to process personal data without operational safeguards not only take on business risks, but also increase the level of distrust in their brand.

Society is becoming more informed and motives are being questioned. Enforcement of existing and more stringent incoming regulations will also impact the organisation’s reputation and trustworthiness.

We are also rapidly approaching the point at which disinformation and misinformation are very much part of what we see and hear through digital services.

Should we already assume that the information served to us digitally may not be true until verified?

If so, who do we trust to verify the authenticity of the information served to us digitally?

The Chair, Board of Directors and CEOs, along with their Private and Institutional Investors, have a unique opportunity to think and act differently. They can choose to embrace responsible innovation, review their governance models, processes and structures, embed diverse inputs and multi-stakeholder feedback in their decision-making processes and differentiate through trustworthiness.

They also have a crucial role to play in ensuring that citizens are protected from discriminatory outcomes arising from automated decision-making and profiling, while encouraging creativity in responsible innovation.

Implementation of ‘AI Governance’ must not focus only on mitigating financial risk; it should also address all the other business risks previously outlined.

Let’s consider what Responsible Innovation means

Organisations seeking to transform their businesses successfully, and to achieve sustainability over the long term through the use of STS, should embrace and operationalise responsible innovation.

Organisations wishing to harness insights from data, gain operational efficiencies from automation and grow their business through digital channels need to adopt an ethics-based approach, particularly when using non-deterministic algorithmic technologies processing personal data.

The adoption of our principles of responsible innovation will require leaders to think differently. More thoughtful processes are needed to realign the organisation’s purpose with human-centric values and thereby embrace change towards beneficial outcomes.

Our Responsible Innovation Framework describes the systemic nature, interdisciplinary intricacies, interconnectivity and interdependencies of the constituent parts of a Socio-Technical System.

It promotes accountability by design, alignment with industry standards and facilitates differentiation through trustworthiness, when embedded in the organisational culture.

Organisations choosing to adopt our Responsible Innovation Framework can expect that it:

  • reduces the cost of compliance;
  • enhances and matures your risk management and adaptation capabilities;
  • improves collaboration and social cohesion within your organisation;
  • aligns your corporate purpose with human values;
  • facilitates the execution of your strategy through effective communication and leadership;
  • prepares your organisation for independent scrutiny, allowing it to differentiate its competitiveness in international markets through trustworthiness; and
  • creates enduring and sustainable value for all your stakeholders.

Our Responsible Innovation Framework enables organisations to continually develop and enhance their risk management capabilities in response to global challenges. Further, it enables organisations to evolve their governance structures and meet their compliance requirements more efficiently and effectively.

Technology may not be the answer for everything, and ‘AI’ is sadly not the solution for all our problems. However, algorithmic technologies are powerful tools and can be enablers towards finding better solutions; but humans must always remain in control of these tools.

"If one would build AI models correctly, there might be no need for governance ... the reality however shows that there might be biases, inaccuracies, and more. One has to catch these, no matter if one manufactures the models in an internal team, via a consultancy, or purchases it embedded in an application." - Bj?rn Preu? , Chief Data Scientist at 2021.ai

Validation of trustworthiness

Organisations at a suitable level of maturity that voluntarily submit themselves to external scrutiny by independent third-party auditors, such as ForHumanity, which represents citizens’ interests, are far more likely to be trusted.

In the digital world, the increase in disinformation, misinformation, and manipulative and exploitative content, amplified through the use of non-deterministic algorithmic technologies, necessitates external, independent validation of trustworthiness.

Ethical organisations are also far more likely to survive and thrive. Simply by differentiating through trustworthiness, they will attract and retain talent far more easily and retain customer loyalty.

Moreover, organisations with online platforms need to comply with the EU Digital Services Act and EU Digital Markets Act. They are expected to manage a broad range of risks; thus they should prepare to be independently audited by third-party auditors, as this article explains.

Drawing on best practices from the financial audit industry, ForHumanity is building an infrastructure of trust around an ecosystem of participants, as follows:

  • The creation of third-party audit criteria and maintenance (ForHumanity)
  • The certification of individuals for External Independent Audit (ForHumanity)
  • Entities licensed to use ForHumanity Audit Criteria act as External Independent Auditors and award Certification in accordance with the relevant Certification Scheme to the Auditee (Auditors)
  • Certified Practitioners for each Certification Scheme (ForHumanity Certified Auditors – FHCA)
  • Government-appointed Accreditation Services
  • Target of Evaluation specified by the Auditee being certified
  • Pre-Audit Service Providers
  • Public (Citizens)

Questions to ask yourselves

If you are a Chair or Senior Non-Executive Director or a Member of a Board of an established organisation, or a Private Investor, Venture Capitalist or Private Equity firm investing in a company deploying STS, or an Institutional Investor with investment(s) in organisations with ESG and CSR agendas:

  • Are you asking the right questions so that you are aware of and fully understand the breadth of limitations and downside risks related to the use of non-deterministic algorithmic technologies by your organisation?
  • Is there adequate governance and oversight over the determination of ethical choices, compliance with regulations and acceptance of associated residual risks – together with explainability and interpretability of the models in use, in order to fulfil transparency and accountability requirements, fiduciary duties towards the stakeholders and reporting obligations?
  • Is accountability reasonably assigned for automated decisions, given that the outcomes from the STS in use impact citizens directly and instantly?
  • Do you fully understand your own liabilities and those of your organisation in the event there are regulatory breaches and/or outcomes that adversely impact your organisation’s stakeholders, including citizens?

Responsible Innovation, Accountability by Design, Operational Safeguards and Differentiation through Trustworthiness are essential pillars of strength. If you would like us to help your organisation get back on track, please get in touch with Maria and Chris.

Chris Leong is a Fellow and Certified Auditor (FHCA) at ForHumanity and the Director of Leong Solutions Limited, a UK-based Management Consultancy and Licensee of ForHumanity’s Independent Audit of AI Systems.

Maria Santacaterina is a Fellow and Certified Auditor (FHCA) at ForHumanity, CEO and Founder of SANTACATERINA, a UK-based Global Strategic Leadership & Board Executive Advisory, helping you create enduring value to build a sustainable digital future.

