Responsible Innovation: A Journey and A Choice
Background Image: Matt Howard | Unsplash; Door Image: Paint 3D

When the UK Government’s White Paper on AI Regulation was published, it acknowledged the risks associated with these non-deterministic algorithmic technologies: “These risks could include anything from physical harm, an undermining of national security, as well as risks to mental health. The development and deployment of AI can also present ethical challenges which do not always have clear answers. Unless we act, household consumers, public services and businesses will not trust the technology and will be nervous about adopting it. Unless we build public trust, we will miss out on many of the benefits on offer.”

The UK Government signalled that it did not intend to introduce new legislation, stating: “By rushing to legislate too early, we would risk placing undue burdens on businesses. But alongside empowering regulators to take a lead, we are also setting expectations.”

Instead, the onus is placed on existing regulators to ensure that organisations innovate responsibly, in line with the five principles proposed in the white paper.

Navigating the Mixed Bag of Regulatory Approaches

Despite the common mission expressed by most regulators to ensure that ‘AI’ risks do not harm their citizens, regulators across jurisdictions are taking markedly different approaches to regulating the use of these non-deterministic algorithmic technologies.

The lack of a consistent approach reflects the varying degrees of urgency and the priorities of governing bodies in those jurisdictions. Collectively, it sends mixed signals to industries seeking regulatory guardrails to innovate against, and creates confusion about how each corporation deploying Socio-Technical Systems (STS) should comply with the diverse regulations across all the jurisdictions it operates in.

Corporations operating and deploying STS across multiple jurisdictions need to be aware of the most stringent set of regulations they must legally adhere to. It is operationally more efficient to ensure compliance with the most stringent regulations and laws for all deployed STS, rather than attempt compliance jurisdiction by jurisdiction only to find that some requirements have slipped through the cracks.
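As a concrete illustration of this ‘comply with the strictest’ approach, consider the minimal sketch below. It is not a compliance tool: the jurisdictions, obligation names and stringency scores are entirely hypothetical, and in practice comparing legal obligations on a common scale requires legal judgment. The sketch simply shows the mechanics of merging per-jurisdiction requirements into a single baseline that meets or exceeds every individual regime.

# Minimal sketch (Python), with hypothetical jurisdictions and made-up
# stringency scores: derive one baseline by keeping, for each obligation,
# the most stringent level found in any jurisdiction.

REQUIREMENTS = {
    "UK": {"impact_assessment": 2, "human_oversight": 2, "public_disclosure": 1},
    "EU": {"impact_assessment": 3, "human_oversight": 3, "public_disclosure": 2},
    "SG": {"impact_assessment": 1, "human_oversight": 2, "public_disclosure": 2},
}

def strictest_baseline(requirements):
    """For each obligation, keep the highest stringency level found anywhere."""
    baseline = {}
    for obligations in requirements.values():
        for name, level in obligations.items():
            baseline[name] = max(baseline.get(name, 0), level)
    return baseline

print(strictest_baseline(REQUIREMENTS))
# {'impact_assessment': 3, 'human_oversight': 3, 'public_disclosure': 2}

Any STS engineered to this merged baseline will, by construction, meet or exceed the scored obligations of each individual jurisdiction, which is the operational point being made above.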

Regulations matter, and Corporations are legally bound to comply with them when deploying their STS in the respective jurisdictions.

What are the incentives?

Any Board or CEO of an organisation leveraging non-deterministic technologies in its Socio-Technical Systems who has a well-rounded understanding of ‘responsible innovation’ will structure the organisation in a way that facilitates deploying STS responsibly and demonstrably.

It’s not just about marketing and company PR statements, but rather about demonstrating that adherence to the highest ethical standards has been independently verified.

For those who haven’t, what incentive do they have to change course and invest in responsible innovation? What guidelines have regulators put in place to ensure that companies do so?

We are confident there is broad agreement that this change needs to happen now, as a matter of urgency.

The short-term promise of commercial gains may be alluring, but investment in serious risk management, ethics and good governance must be prioritised over the race to adopt the newest and most powerful technology tools, which remain unproven and are likely to cause social and societal harms, impacting the business across all time horizons.

Let’s look at what we mean by Responsible Innovation. It is not just about the technology and the data processed.

The Main Characteristics of Outcomes

We have always said that all outcomes matter.

We list the main characteristics of the beneficial outcomes derived from Responsible Innovation. These characteristics will be evident in the STS that citizens and society engage with:

  • There is alignment with human values and ethics-based principles, and the STS reflects the deploying organisation’s publicly available Code of Ethics and Code of Data Ethics.
  • Organisations deploying them are transparent, with the necessary disclosures including the provenance of training data, model architectures, ethical decisions made during the STS lifecycle and the residual risks inherent in these systems.
  • They are oriented towards preserving privacy; all data protection and privacy rights are respected, and demonstrably so.
  • They are fair, and all automated decisions are easily explainable.
  • They are reliable and robust, and the STS operate within the published scope, context, nature and purpose.
  • They are provided with clearly disclosed operational safeguards, including feedback mechanisms through which recipients can raise challenges, and a demonstrable willingness by the deploying organisations to engage with citizens to redress any unintended harms that may be caused.
  • They are backed by clearly disclosed organisational accountability and named representatives.
  • They are derived from verified data and informed consent, by algorithms operating in cyber-secure environments.
  • The STS, their outcomes and the organisations that deploy them are compliant with all relevant legal frameworks, verifiable through public disclosure of Independent External Audits.

While STS deployed by some human-centred, mission-driven organisations can demonstrate many of these outcome characteristics, few STS deployed by private or public organisations will be able to demonstrate all of them.

Hence, adverse outcomes continue to occur, as we outlined in our last article, overshadowing the benefits that fair, transparent and equitable STS may bring to consumers. This rising number of adverse incidents has triggered societal calls for enforceable regulations, which should not overburden companies administratively but rather provide clear guidelines for the actions required to mitigate potential social and societal harms.

Responsible Innovation starts with Purpose

If organisations are currently not innovating responsibly, why should they start doing so?

Boards and their CEOs, along with Institutional Investors and Shareholders, can start thinking critically and holistically in order to gain a better understanding of these emerging non-deterministic algorithmic technologies, as we have highlighted throughout our articles in the Responsible Innovation Newsletter. The limitations and downside risks associated with these technologies need to be considered alongside the potential commercial and organisational benefits.

Once Boards and their CEOs recognise the value of trust and decide to embrace Responsible Innovation, their journey can begin. It is easy for organisations to curate marketing and PR messages claiming they subscribe to or embrace certain principles and ESG, but doing so in actuality requires full commitment, strong conviction and radical culture change, especially in established and complex organisations with entrenched beliefs or working practices that contradict what is actually required to operationalise responsible innovation.

Embracing and operationalising Responsible Innovation requires real change and will lead to the business transformations necessary to maintain competitiveness and relevance in the marketplace.

It is a journey through which the Board and the CEO will lead their organisation, culminating in the beneficial outcomes we previously outlined. And before you ask: every organisation must extend the scope of its own unique ‘responsible innovation’ paradigm to the environment, working to achieve success by taking a comprehensive view of the structural and process changes required, cognisant of their respective social and societal impacts.

It starts with ensuring that the organisation’s Purpose supports the change and transformation required, with verifiable trustworthiness as a constant throughout the journey.

Every organisation’s journey will be different, depending on its current level of organisational maturity. Where some of the well-known ‘trustworthy AI’ ethics-based principles have already been adopted, Boards and CEOs can move on to address the wider spectrum of consequences produced by the adoption of STS.

Our Responsible Innovation Framework is industry-agnostic. Thus, we work with organisational leaders to derive maximum benefit during their change and transformation journey, based on the organisation’s capabilities and level of maturity.

Strategic Benefits to Corporations

Once the Boards and their CEOs understand the strategic value of trust in the digital world, the benefits of Responsible Innovation to Corporations become clear.

We list the benefits of adopting our Responsible Innovation Framework as follows:

  • It reduces the cost of compliance;
  • It enhances and matures your risk management and adaptation capabilities;
  • It improves collaboration and social cohesion within your organisation;
  • It aligns your corporate purpose with human values;
  • It facilitates the execution of your strategy through effective communication and leadership;
  • It prepares your organisation for independent scrutiny, allowing it to differentiate its competitiveness in international markets through trustworthiness; and
  • It creates enduring and sustainable value for all your stakeholders.

Our Responsible Innovation Framework enables organisations to continually develop and enhance their overall corporate strategy and risk management capabilities in response to global challenges, while addressing the uncertainty about the behaviour and outcomes of STS. Further, it enables organisations to evolve governance structures to meet incoming compliance requirements more effectively and efficiently.

Responsible Innovation Is a Choice

With few exceptions, Corporations have adopted non-deterministic algorithmic technologies in their own STS or deployed third-party STS with embedded automated algorithms. They have leveraged industry playbooks and approaches that prioritise speed to market rather than taking a longer-term view to assure company success and secure shareholder and stakeholder value.

Again with few exceptions, it seems many have not yet updated their organisation’s purpose and core values in alignment with their fiduciary duties and duty of care, to ensure that human agency, autonomy, integrity, dignity and fundamental human rights are front and centre of the decision-making process. Thus, it is unlikely that the potential benefits of the STS have been realised, either for the organisation or for the consumers of its automated decision-making and profiling.

Although there are many ways in which Corporations can meet compliance requirements, being compliant does not necessarily amount to Responsible Innovation when it comes to the human experience of engaging with the STS.

Boards and their CEOs need to understand the difference here.

Trust is the new currency for engagement in the digital space. Corporations will need to differentiate through trustworthiness to stay ahead of the curve and remain competitive over the long term.

Responsible Innovation is a choice for Boards and CEOs. In fact, it is the only ethical choice practically conceivable, given the high stakes of deploying these STS.


Chris Leong is a Fellow and Certified Auditor (FHCA) at ForHumanity and the Director of Leong Solutions Limited, a UK-based Management Consultancy and Licensee of ForHumanity’s Independent Audit of AI Systems, helping you succeed in your digital business change and transformation through Responsible Innovation so that you can Differentiate Through Trustworthiness.

Maria Santacaterina is a Fellow and Certified Auditor (FHCA) at ForHumanity, CEO and Founder of SANTACATERINA, a UK-based Global Strategic Leadership & Board Executive Advisory, helping you revitalise your core business strategy, create enduring value and build a sustainable digital future.

