Responsible Innovation: The Uncertainty of ‘AI’ in Socio-Technical Systems
Image by Gerd Altmann from Pixabay

If there is one silver lining to the unintended outcomes of ChatGPT, Bard, and Sydney, it is the very public revelation of their flaws, limitations and downside risks.

We have consistently raised awareness in our previous articles about the limitations and downside risks of non-deterministic algorithmic technologies, often referred to in the mainstream media as ‘AI’. We share our concerns in the hope of rebalancing public awareness regarding the capabilities of these emerging technologies.

While proponents of these non-deterministic technologies continue in their quest to anthropomorphise ‘AI’, the scale of the potential adverse impacts on citizens, societies and the environment can no longer be ignored.

The fact that these emerging technologies are non-deterministic means unsupervised interactions with humans in real-world settings can lead to unpredictable and potentially dangerous outcomes, particularly if ethics have not been considered and operational safeguards have not been implemented prior to deployment, as these articles highlight:

The uncertainty of the outputs delivered by these chatbots remains, despite reported efforts by Microsoft to embed guardrails in Sydney.

There are even suggestions that these generative large language models can be updated through edits, but how effective would this approach be? As the world continues to change around them, the editor will constantly be reacting to gaps that other people have reported, with the potential for further biases or inaccuracies to become embedded in the system.
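To illustrate why this reactive approach struggles to keep pace, here is a minimal, purely hypothetical sketch in Python; the patch table and function names are our own illustration, not any vendor’s actual editing mechanism:

```python
# A toy illustration of reactive "editing": each reported error is patched
# with an override, while the underlying model remains unchanged.
# All names here are hypothetical; real model-editing techniques operate
# on model weights, but the reactive dynamic is similar.

patches: dict[str, str] = {}  # reported prompt -> manually corrected answer

def base_model(prompt: str) -> str:
    # Stand-in for the frozen generative model.
    return f"(possibly wrong answer to: {prompt})"

def patched_model(prompt: str) -> str:
    # Overrides only cover prompts someone has already reported.
    return patches.get(prompt, base_model(prompt))

# An editor reacts to a reported gap...
patches["Who is the CEO of Acme Corp?"] = "Jane Doe (as of 2023)"

# ...but the world keeps changing: unreported or newly outdated prompts
# still fall through to the unedited base model.
print(patched_model("Who is the CEO of Acme Corp?"))  # patched
print(patched_model("Who is the CFO of Acme Corp?"))  # unpatched
```

Each patch fixes one reported gap while every unreported gap still falls through to the unedited model, and a mistaken patch simply embeds a new inaccuracy.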

After all, chatbots are fundamentally computer programmes or software systems. They do not have the capacity of human intelligence. They are not sentient beings with consciousness, nor are they able to contextualise and analyse meaning beyond stored memory. They do not possess common sense, instinct or intuition. They do not know how to behave respectfully in society, or how to uphold laws and the human rights of other people. They cannot apply human-level reasoning, adapt to changes in the real world as quickly as humans do, or solve complex new problems creatively, let alone spontaneously engage with another human being unprompted. Nor can software or chatbots be expected to act meaningfully in the real world without human assistance.

Powerful: Yes, but Performance is Uncertain

These non-deterministic algorithmic technologies are fundamentally software programmes that statistically analyse significant amounts of data to infer outputs.
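To make ‘non-deterministic’ concrete, here is a minimal sketch (the toy vocabulary and probabilities are our own assumptions) of the temperature-based sampling such models typically use to choose the next token:

```python
import random

# Toy next-token distribution, statistically inferred from training data.
# The vocabulary and probabilities are illustrative assumptions.
candidates = ["reliable", "uncertain", "dangerous", "helpful"]
weights = [0.40, 0.30, 0.20, 0.10]

def next_token(temperature: float = 1.0) -> str:
    # Higher temperature flattens the distribution, increasing randomness;
    # lower temperature sharpens it towards the most likely token.
    adjusted = [w ** (1.0 / temperature) for w in weights]
    return random.choices(candidates, weights=adjusted, k=1)[0]

# The same prompt can yield different outputs on different runs:
print([next_token(temperature=0.7) for _ in range(5)])
```

Because each output is sampled from a probability distribution rather than computed deterministically, identical inputs can yield different outputs on different runs.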

In the case of ChatGPT, this article explains the approximate nature of its outputs.

Inferences are not facts. Consequently, when inferred automated decisions and profiling are applied directly to people, they matter: the consequences may be discriminatory and harmful to individuals or vulnerable groups who do not fit the datasets and parameters used to train the model.

Non-deterministic algorithmic technologies are often deployed in Socio-Technical Systems (STS) without operational safeguards, effectively allowing unintended consequences to materialise.

Business leaders should recognise the concomitant risks from the use of ‘AI’ in situations that directly and instantly impact people, whether they are employees, customers, or consumers of their digital services.

CEOs, their Boards of Directors, and Investors should take the time to consider their corporate values and purpose against the potential unintended consequences of these emerging non-deterministic algorithmic technologies, especially as more and more established businesses seek to rapidly transform themselves into technology-driven companies.

The power of ‘AI’, together with its inherent technical limitations and downside risks, needs to be managed effectively.

Whilst proponents who introduce these tools into businesses through digital transformation initiatives promote their upsides and potential benefits, few are fully aware of the implications and business risks that have been onboarded.

Like the mythical genie in the bottle, its purpose can be uncertain, depending on the circumstances. Where this genie is let loose to interact directly with humans through STS deployed within the organisation, without operational safeguards implemented through people, policies and processes, multiple business risks invariably escalate to the Board of Directors, who must then resolve the issues.

Consequently, the organisation’s mission and purpose could easily be derailed unintentionally by those uncertain outcomes resulting from the unmanaged ‘power,’ technical limitations, and downside risks of ‘AI.’

The key stakeholders in governance functions within deploying organisations must look for ways to exert effective oversight and governance over the use of these emerging non-deterministic algorithmic technologies, as well as understand how the STS process personal data.

Within large, complex, established organisations, siloed operating models present significant challenges that must be overcome for responsible innovation to succeed, if that is the aspiration of their leaders.

Within SMEs and startups, the onus rests with the founders and investors, who need to take a strategic view and be aware of the inherent risks that are onboarded when their business is underpinned by these emerging non-deterministic algorithmic technologies processing personal data.

The hidden risks in third-party supply chains

Third-party supply chains also need to be scrutinised more heavily for their use of emerging non-deterministic algorithmic technologies and how personal data is processed. There are inherent challenges, including ethics, privacy, regulatory and cybersecurity risks, that need to be managed and owned by the organisation sourcing third-party software, solutions, services and platforms. Vulnerabilities in machine learning algorithms should be assessed by cybersecurity teams, as these articles highlight:

The interconnected digital world has allowed data to be transferred easily via the Internet. Multi-billion-dollar businesses have been built on the trading of personal data, often without the person’s (‘Data Subject’ under GDPR) knowledge, awareness and informed consent, since the makeup of our digital profiles is key to how digital platforms interact with people.

Data protection laws and privacy regulations were introduced to offer protection to citizens and ensure that basic human rights are preserved in the digital world. However, the intent of these regulations can only be realised through enforcement. In the interim, we continue to see instances of sensitive personal data being traded as this article outlines, while personal data attributes of Virtual Reality (VR) participants can be de-anonymised to identify individuals according to this article.

While some instances of unconsented trading of personal data are being challenged by regulators and privacy campaigners in law courts, privacy concerns have been raised around the data collected and processed through these generative large language chatbots.

The increased scrutiny on third-party software, solution, service, and platform providers will require greater transparency and risk management to be exercised across the value chain, specifically in the case of STS.

This can be challenging for third-party providers who have not invested in and accounted for such requirements, let alone the need to explain how the outcomes arising from their STS are derived without giving away their intellectual property.

The Perfect Storm is on the horizon

Whilst in some cases it may be easier to innovate with these emerging non-deterministic algorithmic technologies, doing so requires a different set of competencies, skillsets, mindsets and experiences, and crucially the willingness to consider technical limitations and downside risks.

How often have we heard these or similar sayings:

“Regulation stifles innovation.”

“What does ethics mean?”

Getting an ‘AI’-powered digital platform or service out quickly appears to be the underlying mission that aspiring digital businesses, from small to large, are aiming to achieve. Consequently, speed, usage and a limited view of what success looks like often define those journeys.

Given the growing number of adverse outcomes related to STS in recent years, regulators around the world have had to act.

If you have not been following the upcoming regulatory initiatives, here’s a sample:

Existing laws and regulations continue to apply, specifically the UK GDPR and EU GDPR, as well as CCPA.

For any organisation operating across these and other jurisdictions, there is a myriad of regulations that they need to comply with when their STS are in operation.

It would be advisable for their CFOs to provision for financial penalties and liabilities based on the residual risks the CEO and leadership teams decide to underwrite. However, this requires all the associated risks from the use of these emerging non-deterministic algorithmic technologies processing personal data to be managed effectively.
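As a purely illustrative aid, such a provision might be reasoned about as an expected-loss calculation over the underwritten residual risks; all figures and risk categories below are hypothetical assumptions, not benchmarks:

```python
# Hypothetical residual risks: (annual likelihood, potential penalty in GBP).
# Both the categories and the numbers are invented for illustration.
residual_risks = {
    "GDPR infringement": (0.05, 10_000_000),
    "Discriminatory automated decision": (0.10, 2_000_000),
    "Class action settlement": (0.02, 25_000_000),
}

# Expected annual exposure = sum of likelihood x penalty across risks.
provision = sum(p * cost for p, cost in residual_risks.values())
print(f"Indicative annual provision: £{provision:,.0f}")
# -> Indicative annual provision: £1,200,000
```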

A further exposure that organisations deploying STS face is the risk of civil and class-action litigation. Such cases are increasing as a result of direct adverse impacts that often include, but are not limited to, the loss of privacy, discrimination and other harms citizens experience where they are protected by relevant laws.

Organisations where the STS are found to have direct adverse impacts on citizens will suffer reputational damage and lose their trustworthiness in society. They can also expect engagement on their STS to diminish and experience an erosion of their shareholder and market value over time.

Here are some related news items and articles that might be of interest:

Considering the significant amounts of data processed by algorithms within organisations, and that machine outputs directly impact society and the environment in ways that reflect the effectiveness of governance structures, Boards with ESG and CSR agendas should also review the impacts of the STS being used within their organisations.

Some policy makers may choose to encourage innovation through the introduction of light touch regulation. Nevertheless, it is crucial to recognise where responsibilities and accountabilities lie within organisations that deploy these STS, so that the likelihood of adverse outcomes can be reduced and potential liabilities mitigated.

More Early Warning Signs

What if a product, solution, or service offering that you released was found to produce responses that did not meet the expectations of your target audience?

According to this article, Google executives knew about the issues surrounding Bard, yet Google decided to release it.

If you read about users’ experiences with Sydney, a.k.a. Bing, you might also be interested in reading this opinion piece. Gary Marcus also asked, ‘Is it time to hit the pause button on AI?’

Investors need only look at the events surrounding the launch of ChatGPT, Bard and Bing to understand the nature of these non-deterministic algorithmic technologies. It is also interesting to note how the cumulative loss of public trust that stemmed from user feedback was reflected in the erosion of market value for these organisations. While we recognise that market sentiment drives share prices, these organisations now face an uphill task towards earning back trust from those who question their decision to release tools for public consumption despite being aware of their limitations, downside risks and potential dangers.

Another area impacted by the power of these generative large language models is copyright and the threat to intellectual property. When any published work on the internet can be indiscriminately scraped, copied and used to derive machine-generated content without consent, control over your own work can all too easily be taken away from you by the organisations that deploy these generative large language models. Neil Turkewitz wrote in his recent article,

“This shifting of wealth from the public to powerful companies has now come into crystal clear vision with the emergence of generative AI models.”

Meanwhile, the law courts are deciding on the validity of ‘AI’ generated works.

There are also concerns with generative large language chatbots being used in social media to spread misinformation and disinformation, and with the potential to deceive.

It will be interesting to see if investors funding generative technologies take note of the associated limitations and downside risks of these generative large language models driving their investments, along with the expected headwinds from regulations, lawsuits and public backlash.

They can of course choose to think differently and embrace responsible innovation.

There needs to be a better way forward

Effective risk management, oversight and governance of STS that organisations deploy are not simply box ticking and self-assessment exercises.

Prior to any STS being deployed internally or externally, the accountable person(s) needs to be assured that all known risks are managed, all residual risks are disclosed, the organisation is compliant with all relevant regulations, and ethical considerations have been deliberated and the decisions well documented, so that the rights of individuals and citizens are protected.

The accountable person(s) should be the CEO and Chair of the Board / Chair of the Ethics Committee.

In our view, the uncertainty isn’t about the regulations, which many within industry have cited as a faux challenge impeding innovation. We believe there is far less uncertainty about the regulatory landscape than about the effectiveness of these tools in certain settings, particularly where the organisation is seeking to achieve its ROI quickly.

The uncertainty is also about the behaviour and outcomes of these STS when they are “optimised to achieve specific objectives, but lack the common sense that (most) humans possess,” as this article explains. Fundamentally, it’s about the attributes of non-deterministic algorithmic technologies that seek patterns in data, look for averages and leave outliers for decision makers to resolve when issues arise. Data quality, data provenance and data legitimacy (consent) issues are also significant contributing factors.

If the data that purports to describe you happens to be an outlier relative to the datasets used to train these algorithmic models, and outliers have not been catered for in the design of the STS, there is a very good chance that any automated decision or profiling served to you will not be in your favour, as this article highlights.
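A minimal sketch (the data, rule and threshold are invented for illustration) of how a decision rule built around the average of its training data treats a legitimate outlier:

```python
from statistics import mean, stdev

# Hypothetical training data, e.g. annual incomes the model was fitted on.
training_incomes = [28_000, 31_000, 30_500, 29_000, 32_000, 30_000]

mu, sigma = mean(training_incomes), stdev(training_incomes)

def automated_decision(applicant_income: float) -> str:
    # Anything far from the training average is treated as anomalous and
    # rejected by default -- the outlier is left for humans to resolve.
    if abs(applicant_income - mu) > 2 * sigma:
        return "refer / decline (outlier)"
    return "approve"

print(automated_decision(30_000))  # approve: fits the training data
print(automated_decision(95_000))  # refer / decline: a legitimate outlier
```

The rule performs well on data resembling its training set, yet systematically disadvantages anyone it was never designed to represent.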

Does anyone within your organisation conduct impact assessments from a human rights perspective before deploying STS internally or externally?

This paper by the Danish Institute for Human Rights provides a good reason why you should.

Managing the uncertainty you can control

What do CEOs and/or their Board of Directors, or Investors of organisations deploying STS need to do differently to navigate through the external forces of that perfect storm on the horizon, while progressing their business goals?

Here are 5 fundamental questions to start with:

1. Are they willing to think differently?

2. Is there a will to embed ethics in their innovation culture?

3. Is there an appetite to manage the risks associated with the use of these emerging non-deterministic algorithmic technologies, especially when processing personal data?

4. Are the CEO and the Board of Directors open to reviewing their organisation’s purpose and core values along with their digital strategy?

5. Is the Board of Directors open to the commitment of submitting their STS to external scrutiny, such as an Independent Audit?

Most of us are aware of the statistic that “70% of digital transformations fail” to meet expectations. When we review the publicly documented limitations and downside risks of these emerging non-deterministic algorithmic technologies processing personal data, and overlay the regulatory initiatives as well as the threat of civil and class-action litigation, the stakes have never been higher for those leading digital transformations using ‘AI’ in organisations today.

So, how do large complex established organisations with siloed operating models recalibrate their business model and operating model to incorporate the right governance structures?

How can all the associated business risks be effectively and efficiently managed?

How can leaders realign their talent pool, hire and empower the right people, and seek external diverse inputs and multistakeholder feedback to effectively transform their digital business into one that is truly ethical, human-centric, legally compliant and accountable for all outcomes from their STS, while strengthening their trustworthiness?

Hint: Leaders need to fully understand the intricacies and interconnectivity of business functions, as well as the power dynamics and cultures within their complex organisations that may support or hinder performance, change and transformation.

Where your organisation has deployed STS with embedded ‘AI’ to replace employees, how much additional time and resources are expended in addressing and remedying issues arising from those transformation initiatives? Were these tracked and linked back to the ROI metrics, or were they treated as Business-As-Usual (BAU) activities and funded by operating costs?

This article discusses the myth of efficiency and concludes that,

“Automation may empower some people, but in the process, it's making things a lot harder for the hidden workers keeping everything moving.”

If you are a third-party solution and/or services provider and ‘AI’ is embedded in your offerings:

  • How would you prepare for scrutiny and demonstrate that you comply with all relevant regulations in the jurisdictions your customers are in?
  • Are you prepared to disclose the residual risks inherent in your software, solutions and service offerings that have ‘AI’ embedded?

CEOs and their Board of Directors should invest in strengthening risk management, governance and oversight functions to optimise the organisation’s chances of realising the ROI from their digital business transformation initiatives.

How these non-deterministic algorithmic technologies are deployed within, as well as by, your organisation is within your control. Consequently, you also bear the responsibility and accountability for all outcomes from the systems powered by these technologies.

The CEO and the Board of Directors should ultimately be making informed decisions, with full knowledge and understanding of these technologies’ capabilities, limitations, downside risks, risk mitigations, residual risks and further exposure to externalities, both for the business and its shareholders. We asked the question, ‘How good are your Checks and Balances?’

Our Responsible Innovation Framework describes the systemic nature, interdisciplinary intricacies, interconnectivity and interdependencies of the constituent parts of a Socio-Technical System.

It promotes accountability by design and alignment with industry standards, and facilitates differentiation through trustworthiness when embedded in the organisational culture and governance frameworks.

Organisations choosing to adopt our Responsible Innovation Framework can expect it to:

  • reduce the cost of compliance;
  • enhance and mature your risk management and adaptation capabilities;
  • improve collaboration and social cohesion within your organisation;
  • align your corporate purpose with human values;
  • facilitate the execution of your strategy through effective communication and leadership;
  • prepare your organisation for independent scrutiny, allowing it to differentiate its competitiveness in international markets through trustworthiness; and
  • create enduring and sustainable value for all your stakeholders.

Our Responsible Innovation Framework enables organisations to continually develop and enhance their risk management capabilities in response to global challenges, and address the uncertainty about the behaviour and outcomes of these STS. Further, it enables organisations to evolve their core business strategy and governance structures to meet their compliance requirements more effectively and efficiently.

Chris Leong?is a Fellow and Certified Auditor (FHCA) at ForHumanity and the Director of Leong Solutions Limited, a UK-based Management Consultancy and Licensee of ForHumanity’s Independent Audit of AI Systems, helping you succeed in your digital business change and transformation through Responsible Innovation so that you can Differentiate Through Trustworthiness.

Maria Santacaterina?is a Fellow and Certified Auditor (FHCA) at ForHumanity, CEO and Founder of SANTACATERINA, a UK-based Global Strategic Leadership & Board Executive Advisory, helping you revitalise your core business strategy, create enduring value and build a sustainable digital future.
