Responsible Innovation: Do organisations have the right Mindset, Talent and Culture?

In the past week, we read about the inquest into the sad death of Molly Russell - a 14-year-old who, according to the coroner, “died from an act of self-harm while suffering from depression and the negative effects of online content.”

At the inquest, it was reported that “a senior executive at Instagram’s owner has apologised after admitting that the platform had shown Molly Russell content that violated its policies before she died.”

Meta, which owns Instagram, has a publicly accessible Code of Conduct, headlined “Keep Building Better: The Meta Code of Conduct. A foundation for making a positive impact.” On page 28, it sets out what it means to ‘Innovate responsibly’:

[Image: the ‘Innovate responsibly’ commitments from Meta’s Code of Conduct]

All of the above aspirations are admirable, except that there is no reference to addressing the ‘AI’ algorithms that determine which content is served to human consumers. Effective content moderation is difficult to achieve, given the vast amount and dynamic nature of the content published on the Internet, but how the algorithms behave and are deployed is within the organisation’s control.

Furthermore, ask yourself this: of all the aforementioned policy statements to innovate responsibly, how many of them are demonstrable, based on your observations and experiences of the outcomes?

Such policies and published aspirations are easy to curate. They echo much of what many others propose in order to innovate responsibly. However, organisations are discovering that they are not easily executed and are difficult to substantiate. So, what might be the reasons?

Why are organisations not walking the talk?

Scrutiny & Regulation

The level of scrutiny has been gradually increasing, since the harms that have been reported and proven to be linked to the use of ‘AI’ technologies impact human consumers directly and instantly.

In the case of online harm to children, the UK’s Children’s Code was introduced in September 2021 to provide a “code of practice for online services, such as apps, online games, web and social media sites, likely to be accessed by children.”

A draft Online Safety Bill is also progressing through UK Parliament to “put an end to harmful practices, while ushering in a new era of accountability and protections for democratic debate” as this press release from the UK Government announced.

In the US, the California Age-Appropriate Design Code Act has just become law. It “requires online platforms to consider the best interest of child users and to default to privacy and safety settings that protect children’s mental and physical health and wellbeing.”

More generally, a raft of new regulations is being introduced to supplement existing regulations that already apply to the deployment of ‘AI’ technologies which process personal data and directly and instantly impact human consumers.

This includes the EU’s “New liability rules on products and AI to protect consumers and foster innovation,” which recognise liability on the part of an organisation whose deployed ‘AI’ contributed to the harm.

Most of these regulations require greater transparency and explainability for the automated decision-making and profiling inherent in ‘AI’ technologies.

How ready are organisations to disclose fundamental information (not Intellectual Property) that is sufficient to account for the choices that were made throughout the lifecycle of these systems?

‘AI’ is NOT intelligent

‘AI’ algorithms fundamentally analyse large amounts of historical data (past events), look for regularities in the patterns, and make automated decisions based on inferences derived from that analysis. The ‘AI’ deployed to date does not perceive the real world. It does not understand human language, let alone warrant being trusted to make the right decisions for individual human consumers.

Academics researching the field of ‘AI’ increasingly recognise and acknowledge that the approaches used thus far work only in some instances, where the scope, context, nature and purpose are narrowly defined, and are not fit for purpose in others.

Yann LeCun, recipient of the 2018 Turing Award together with Yoshua Bengio and Geoffrey Hinton for their work on deep learning, acknowledged the current limitations in this article, with the following observations:

"I think AI systems need to be able to reason."

"You have to take a step back and say, Okay, we built this ladder, but we want to go to the moon, and there's no way this ladder is going to get us there."

Reinforcement learning will also never be enough, he maintains. Researchers such as David Silver of DeepMind, who developed the AlphaZero program that mastered Chess, Shogi and Go, are focusing on programs that are "very action-based," observes LeCun, but "most of the learning we do, we don't do it by actually taking actions, we do it by observing."

"We're not to the point where our intelligent machines have as much common sense as a cat," observes Lecun. "So, why don't we start there?"

The purely statistical approach is intractable, he says. "It's too much to ask for a world model to be completely probabilistic; we don't know how to do it."

Gary Marcus, the award-winning professor of psychology and director of the NYU Center for Language and Music, where he studies evolution, language and cognitive development, has been highlighting these flaws in his articles. In this recent Nautilus article, he shares his insights:

Deep learning, which is fundamentally a technique for recognizing patterns, is at its best when all we need are rough-ready results, where stakes are low and perfect results optional.

Deep-learning systems are particularly problematic when it comes to “outliers” that differ substantially from the things on which they are trained. Not long ago, for example, a Tesla in so-called “Full Self Driving Mode” encountered a person holding up a stop sign in the middle of a road. The car failed to recognize the person (partly obscured by the stop sign) and the stop sign (out of its usual context on the side of a road); the human driver had to take over. The scene was far enough outside of the training database that the system had no idea what to do.

Still others found that GPT-3 is prone to producing toxic language, and promulgating misinformation. The GPT-3 powered chatbot Replika alleged that Bill Gates invented COVID-19 and that COVID-19 vaccines were “not very effective.”

As AI researchers Emily Bender, Timnit Gebru, and colleagues have put it, deep-learning-powered large language models are like “stochastic parrots,” repeating a lot, understanding little.

Brian J Ford, the award-winning research biologist, broadcaster, lecturer and author, shares his insights in an article titled ‘AI: Artificial, Yes. Intelligent, Not’:

Today's digital automation and machine learning are wonderfully efficient, but they pale in comparison to the genuine intelligence and complicated mechanisms of the living cell.

Machine learning is wonderfully efficient, but it exists in a sterile environment detached from intellectual rigor or social insight. The best of today’s digital automation is extraordinary, but it isn’t intelligent. It’s incredibly crude compared to the simplest of living organisms, which have been around for more than 4 billion years.

Forget what the gurus and visionaries proclaim about AI. It’s the microbes that perform the miracles. Always have, always will.

We recall the hype that accompanied the release of GPT-3. Now that this and other Large Language Models have been scrutinised, it is clear that they have flaws. They are limited in capability and fraught with risks, as this article reveals.

If your organisation is still thinking of deploying ‘AI’ models, are the Board of Directors and CEO aware of the risks and potential adverse impacts?

The missing ‘Socio’ element

A Socio-Technical System (STS) comprises hardware, software, the human consumer and society. Because it can directly impact humans, its design must involve stakeholders from the communities being impacted, as well as those within the deploying organisation. Its design must also take into account the social structures, the roles and rights of the human consumers, and the impact on society of all possible outcomes.

All systems that are powered by ‘AI’ that impact human consumers directly and instantly through automated decision-making and profiling are Socio-Technical Systems. Examples include platforms and portals with personalisation features, Automated Employment Decision Tools (AEDT), chatbots and personal digital assistants.

Yet most of them lack the ‘Socio’ characteristics in their capabilities. As a result, organisations deploying STS have failed to interact and engage with their human consumers with dignity and fairness, let alone provide a satisfactory user experience when the algorithms encounter situations that fall outside their scope, nature, context and purpose.

Which stakeholders were involved when the STS was designed and/or deployed?

Were ethics-based principles being used to evaluate potential adverse outcomes from the automated decision-making and profiling, and were unintended consequences considered before deployment?

Were the ethics-based principles monitored throughout the lifecycle of the STS? If so, who was making those ethical choices? And if ethics were not considered, why not?

Interdisciplinary Intricacies, Interconnectivity and Interdependencies of Socio-Technical Systems

Socio-Technical Systems are complex systems with an interdependent, interconnected and intricate set of elements that transcend disciplines. When we then consider that ‘AI’ technologies are non-deterministic by nature, the complexity is compounded.

The ‘Socio’ considerations cover:

  • Human Autonomy, Human Agency and Human-Centricity, as afforded to the human consumer
  • Values and Culture, as reflected in the outcomes
  • Ethical Principles and Sustainable futures, as reflected in the outcomes

The ‘Technical’ considerations cover:

  • how ‘AI’ technologies in the Value Chain are used
  • how first, second and third-party data are sourced and used for processing, then managed

The ‘System’ considerations cover:

  • Processes and Procedures in operation within the deploying organisation
  • Policies and Regulations that need to be adhered to by the deploying organisation
  • Governance, Oversight and Accountability structures within the deploying organisation

These considerations are underpinned by Multi-Stakeholder Feedback Loops, which allow Diverse Inputs to be collected throughout the STS’s lifecycle.

Risk Management is woven into its lifecycle, and Operational Safeguards are designed and deployed across the enterprise network and its Value Chain to ensure that downside risks are managed and the residual risks are mitigated.

All the abovementioned considerations come from interdisciplinary areas that are interdependent and interconnected. Therefore, they need to be intricately orchestrated for responsible innovation and other related outcomes to be realised.

So, who or which function within your organisation does this?

The gaps in organisations

Technology-driven change, particularly with deterministic technologies, has delivered operational efficiencies through process automation and supported rapid, effective decision-making by humans.

All forms of technology are first and foremost tools and enablers for humans to achieve the desired outcomes. The assumption that more can be achieved simply by using non-deterministic emerging technologies is flawed, as the growing evidence of adverse outcomes has shown.

So, why have some Boards and CEOs not realised this? Let’s explore some possible reasons:

Aspirations to follow the success of the Big Tech firms

  • If the Big Tech firms have done it, so can we!

The hype and ease of access

  • The technology is readily available and accessible, and everyone’s getting in on the act!

The mantra of ‘Fail Fast, Fail Often’

  • Often lumped together with references to ‘Agile’ and ‘Lean’. This article is a must-read.

Lack of awareness of the limitations and flaws in the ‘AI’ technologies

  • Often a result of being blinded by the hype. Associated limitations and flaws are rarely disclosed upfront by vendors. Upsides provide the premise for business cases and investment, while the risks of potential adverse outcomes are unknowingly absorbed into the organisation.

Ethics is not a priority

  • This has been echoed time and time again within organisations whose initiatives are driven by technology.

Risk management, Compliance, Internal Audit and Regulation are impediments to innovation

  • These are typically secondary considerations after technology and data, carry additional costs and require effort and time.

Siloed mentality and organisation dynamics

  • Who is driving the digital transformation agenda?
  • How closely aligned are the business and cross-functional stakeholders to those at the helm of technology-driven initiatives?

Lack of diverse inputs and multi-stakeholder feedback

  • Which stakeholders have been involved in providing inputs and feedback throughout the STS lifecycle?
  • Which stakeholders are sitting at the decision-making table, throughout the digital transformation journey, to make decisions on ethical choices and risk appetites?

Ineffective cross-functional and stakeholder involvement

  • While business stakeholders are expected to be closely involved throughout the transformation journey, the reality is often otherwise, because of the competing need for them to perform their day jobs.

A void in the understanding of Socio-Technical Systems to drive successful change and transformation

  • Do your change and transformation leaders understand what it takes to deliver STS that help rather than hinder; benefit rather than adversely impact; engage rather than disengage; and are trusted rather than shunned by human consumers, while complying with a myriad of existing and planned regulations?

We keep reading about the challenges organisations face with their digital business transformations and their failure rates. We know it is not a technology problem.

So, what’s missing in the playbooks they follow?

Ethics & Culture

We have heard technologists mention that ethics is hard to define.

Olivia Gambelin, Founder of Ethical Intelligence and AI Ethicist, offered the following two simple definitions in her class at ForHumanity University:

  • Human context: A reflexive tool that enables reflection and determination of one’s actions as right or wrong given the context
  • Technological context: A decision-making tool for risk mitigation and innovation in accordance with the use case and/or technology

Ethics does not get prioritised for consideration throughout the STS lifecycle, either because it is not understood within the organisation or because it is wrongly perceived to cause friction during the technology-driven transformation journey. This is not dissimilar to the prevailing perception of having the risk management, compliance, legal, information security and internal audit functions intertwined throughout the same journey.

Consequently, if the Board and the CEO of an organisation decide to prioritise ethics alongside their growth objectives, investment and effort are required to bring on board different talent, consider alternative perspectives and adopt a different mindset. This added investment is crucial to delivering the human-first and human-centric outcomes from STS that can engender trust, which is so critical to the engagement a digital-driven business needs to grow and thrive.

The risks and stakes are too high to ignore.

It would be wise for the purpose of the STS and its ethics to be considered a priori, as well as throughout the lifecycle of any STS that will impact human consumers directly and instantly. The cost of not doing so, for the Board, its shareholders and the CEO, will be exceedingly high should any of a myriad of risks manifest in adverse outcomes for the recipients of the automated decisions and profiling produced by the embedded ‘AI’ algorithms.

There is nothing more effective than having ethics and the values for responsible innovation woven into the fabric of the organisation, starting from the Board and the CEO and running through its culture, to ensure that any STS impacting their end consumers is fit for purpose from the human’s perspective.

Change is needed

Effective change in any organisation starts with the Board, its CEO and its people.

Inside-out views need to be enriched by outside-in views. Diversity of thought and of lived experience, along with multi-stakeholder feedback, are key to accommodating meaningful and relevant ‘Socio’ characteristics in STS.

Processes and Procedures within the ‘System’, along with the other considerations, will need to be reviewed frequently and changed to bridge the gaps and coordinate activities across silos, together with the implementation and operationalisation of oversight, governance and accountability structures.

The use of non-deterministic technologies and all associated personal data will need to undergo scrutiny, to ensure that mechanisms exist to involve humans within the organisation in making the necessary decisions on ethical choices throughout the STS lifecycle. The aim is to optimise the outcomes while mitigating the downside risks of the technologies’ assumed capabilities. This applies whether your organisation produces or procures STS; the intent remains the same, although how the scrutiny is executed may differ.

The gold standard for organisations to aspire to is to have their STS independently audited by certified independent auditors, using independently curated, industry-agnostic audit criteria aligned to the relevant regulatory frameworks.

The Boards and CEOs of organisations deploying STS that impact end users or human consumers directly and instantly have the accountability, responsibility and duty of care to ensure that every such STS is fit for purpose from the human’s perspective. They are ultimately liable for all outcomes that adversely impact humans.

All regulations must be complied with when these STS are deployed.

Embedding responsible innovation throughout the organisation, starting from the Board and the CEO, can only be done through a culture change. Effective communication is required to ensure engagement, so that these values are lived daily by all leaders, employees and stakeholders in the dependent value chain.

Technology is only a tool and enabler for Human-Centric outcomes that preserve Human Dignity, Human Autonomy and Human Agency.

The Board and the CEO need to act and think differently before they embark on any change and digital business transformation involving STS, since all outcomes matter. Perhaps now is a good time to slow down, pause, reflect, realign, and reset. ‘Socio’ considerations cannot be ignored when an STS is deployed.

They will also need a different mindset and talent that understand the interdisciplinary intricacies, interconnectivity, and interdependencies of the STS considerations to orchestrate effective change in an integrated manner, including cultural change.

The goal for any organisation deploying STS must be to differentiate through trustworthiness. It is through trust that they can achieve their business objectives in the digital world.

If you are the Chair, a member of the Board, or the CEO, we’d like to propose that you answer the following on a regular basis:

  • Where are the hidden risks in your organisation?
  • How are you operationalising ethics-based principles in your organisation?
  • Does your culture reflect your true values?
  • How are you developing your people?

Do we have the right capabilities to build a sustainable digital future? Or are we heading for a train crash?

If you would like any help making ‘good’ change happen, while raising performance, productivity and profitability levels for your organisation, please get in touch with Chris and Maria.

Niël Malan

Helping Boards and Executive teams drive Reinvention with their most valuable resources. Board Member | CEO | MD | CDIO | Startups | Implementor | Digital Transformation PE, FMCG, Energy, Supply Chain, Start-ups

2y

Thanks for sharing Chris Leong, FHCA. This terrible case of a young, under-age person’s death also requires asking questions about the responsibility of parents and adults, for that matter. Take, for example, the sale of alcoholic beverages. Although it is no foolproof safeguard, legislation aims to prevent under-age buying and consumption. In a more perfect world, it implies that parents and adults need to set the right example, rules and values to avoid breaking the law. The companies producing and selling alcoholic beverages are expected to comply. Is this any different from social media abuse? The same should apply to access to, and use of, social media, but in my view this is not taking place, at best. Applications and their AI capabilities should help prevent under-age ‘use4abuse’ and help parents to act if any of this is picked up. In conclusion, it is not just one party’s responsibility! Happy to hear any thoughts.

Gregory Esau

The Richest Man in Babylon

2y

Thank you for continuing to shine a light on the consequences of algorithms on people’s lives and their deleterious effects on the human condition, Chris Leong, FHCA

Alan Cooney

Helping CIOs Deliver Exceptional Experiences | Head of Enterprise Sales at Voxxify

2y

Chris, I take it from this article that you don't subscribe to the Silicon Valley mantra of "Move fast and break things" then? The outcomes for the end users need to matter?
