Go Digital: Why are diverse inputs and multi-stakeholder feedback so important for trust?

I was inspired to write about diverse inputs and multi-stakeholder feedback this week, and to share additional thoughts on why they are so important in AI Ethics Governance, after the insightful and thought-provoking presentation within ForHumanity by Katrina Ingram, Dorothea Baur, Joshua Bucheli, and Tristi Tanaka.

Their presentation encapsulated context, definitions, considerations, an example use case, and the limitations and challenges faced by organisations – all carefully considered and outlined for discussion. I was privileged to be part of those discussions, which kicked off a process of debate, feedback and refinement through future iterations.

We live in an interconnected digital world that has benefitted from the upsides of innovation and suffered the downsides of unintended consequences.

Organisations that are transforming to be digital-first and innovating to grow also need to do so responsibly and ethically. Their ability to ensure diversity in inputs, and to incorporate feedback from all their stakeholders, is critical to embedding ethics into the design and operation of their algorithmic, AI and autonomous systems – assuming they build them in-house.

If organisations procure information, services or solutions from third-party providers that leverage algorithmic, AI and autonomous systems, they need to understand if and how diversity in inputs and feedback from multiple stakeholders have been incorporated into those providers' processes.

Diversity, Equity and Inclusion Policies

Nowadays, almost every organisation has a Diversity, Equity and Inclusion (DEI) policy. However, these policies typically address the organisation's workforce and culture, and are unlikely to have been designed to apply to, or be embedded within, the design of algorithmic, AI and autonomous systems – unless perhaps the organisation is a technology company in the life sciences industry, where ethics is a core consideration.

Some may argue that organisation-level DEI policies, if implemented, would be sufficient to ensure their outcomes apply at the level where algorithmic, AI and autonomous systems are designed and deployed, or sourced from external third-party providers. However, I do not believe that assumption holds.

Based on the surveys I have cited in my previous articles, it would appear that organisations leveraging algorithmic, AI and autonomous systems are generally not considering potential ethical issues during their design, deployment, operations or sourcing phases. They are also unlikely to have AI Ethics Governance structures or frameworks in place. Therefore, they are unlikely to be soliciting diverse inputs and multi-stakeholder feedback during risk and impact assessment processes throughout their AI/ML lifecycle.

For the purpose of assessing risks and potential downsides of algorithmic, AI and autonomous systems, the presentation’s proposed definition makes sense: ‘Diverse inputs and multi-stakeholder feedback refers directly to the collective of risk assessors and whether those assessors have the ability to represent the risk to a wide range of impacted persons.’

When is diversity diverse enough?

One of the key questions posed in the session was: when is diversity diverse enough?

Several dimensions of diversity were suggested and discussed: diversity in characteristics (such as, 'but not limited to race, ethnicity, sex, gender identity, sexual orientation, age, social class, physical ability or attributes, religious or ethical value system, national origin, and political beliefs'); diversity of thought; diversity of experience; and diversity of perspectives.

These dimensions, and perhaps more, when reflected in the datasets used to train algorithmic, AI and autonomous systems, as well as in the stakeholder groups providing feedback, can all contribute to ensuring equity, fairness, reasonableness, proportionality, accountability and explainability in the outcomes consumed by consumers, customers and society in general. These are also key ingredients for responsible and ethical innovation, and having people within your organisation who can offer them will make a world of difference. One simple dataset-level check is sketched below.
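To make the dataset side of this concrete, here is a minimal sketch of one way under-representation could be flagged before training. The function name, the reference shares and the 0.8 minimum ratio (loosely echoing the 'four-fifths' rule of thumb) are my own illustrative assumptions, not part of the presentation's proposal.

```python
from collections import Counter

def underrepresented_groups(records: list[dict], attribute: str,
                            reference_shares: dict[str, float],
                            min_ratio: float = 0.8) -> list[str]:
    """Flag groups whose share of the data falls well below a reference share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    flagged = []
    for group, expected in reference_shares.items():
        observed = (counts.get(group, 0) / total) if total else 0.0
        if expected > 0 and observed / expected < min_ratio:
            flagged.append(group)
    return flagged

# Hypothetical training data, heavily skewed towards one age band.
data = ([{"age_band": "18-34"}] * 70
        + [{"age_band": "35-54"}] * 25
        + [{"age_band": "55+"}] * 5)
print(underrepresented_groups(data, "age_band",
                              {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}))
# -> ['35-54', '55+']  (both appear well below their reference share)
```

A check like this covers only representation in the data; diversity of thought, experience and perspective still has to come from the people involved in the assessment.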

I agree with the consensus that it is not possible to consult all stakeholders representing all the dimensions of diversity when seeking inputs and feedback.

It should, however, be practical for the Ethics Officer or the Ethics Committee to determine, within the context of the situation or use case in question, the extent of diversity required in the stakeholder community engaged to provide the necessary risk and impact assessments. This was also proposed under the sub-heading of 'reasonableness'.

In any case, the decisions made in this process should be documented, along with the outcomes of the risk and impact assessment, for future reference, contributing to the explainability and transparency of the algorithmic, AI and autonomous systems.
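As a minimal sketch of what such documentation could look like in practice, the record below captures who was engaged, who was not and why, and what was found. Every field name and value is a hypothetical illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskImpactAssessmentRecord:
    """One documented engagement decision plus its assessment outcome."""
    use_case: str                      # the situation or use case in question
    assessed_on: date
    diversity_dimensions: list[str]    # dimensions deemed relevant to this use case
    stakeholders_engaged: list[str]    # groups actually consulted
    stakeholders_excluded: dict[str, str] = field(default_factory=dict)  # group -> rationale
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    approved_by: str = ""              # e.g. Ethics Officer or Ethics Committee

record = RiskImpactAssessmentRecord(
    use_case="Credit-scoring model refresh",
    assessed_on=date(2022, 3, 1),
    diversity_dimensions=["characteristics", "thought", "experience", "perspectives"],
    stakeholders_engaged=["data science", "GRC", "Ethics Committee", "customer advocacy"],
    stakeholders_excluded={"external consumer panel": "engaged post-deployment via monitoring"},
    identified_risks=["possible proxy discrimination via postcode feature"],
    mitigations=["remove postcode feature; re-test outcomes across age bands"],
    approved_by="Ethics Committee",
)
print(record.use_case, "approved by", record.approved_by)
```

The point of the rationale field for excluded stakeholders is exactly the 'reasonableness' test above: the record shows not just who was consulted, but why that scope was judged sufficient.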

Who should be the stakeholders?

This was debated during the Q&A session after the presentation. The view I presented applies to algorithmic, AI and autonomous systems designed, deployed and operated internally within an organisation. The stakeholders would be anyone within the organisation who can help optimise the delivery of positive outcomes from those systems, in ways that allow unintended consequences – negative outcomes or downside risks – to be foreseen as much as possible upfront through diverse inputs and multi-stakeholder feedback, and then managed and mitigated.

The key step here is extending the population of stakeholders to the GRC community and the Ethics Committee within the organisation, who can risk- and impact-assess potential negative outcomes in conjunction with the data scientists and developers creating the solutions. Once in operation, additional feedback can be captured from the consumers of the outcomes – who are effectively also stakeholders – and fed back via the monitoring process to ensure that models are performing as intended. A minimal sketch of such a feedback loop follows below.
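Here is one way the consumer-feedback side of that monitoring loop could work, sketched under stated assumptions: the event shape and the 5% escalation threshold are purely illustrative choices of mine, not anything proposed in the presentation.

```python
from collections import Counter

# Assumed escalation threshold: purely illustrative.
REVIEW_THRESHOLD = 0.05  # escalate if more than 5% of outcomes draw complaints

def needs_ethics_review(feedback_events: list[dict]) -> bool:
    """feedback_events: [{'outcome_id': ..., 'flag': 'complaint' or 'ok'}, ...]"""
    if not feedback_events:
        return False
    counts = Counter(event["flag"] for event in feedback_events)
    complaint_rate = counts["complaint"] / len(feedback_events)
    return complaint_rate > REVIEW_THRESHOLD

# Hypothetical month of consumer feedback: every tenth outcome drew a complaint.
events = [{"outcome_id": i, "flag": "complaint" if i % 10 == 0 else "ok"}
          for i in range(100)]
if needs_ethics_review(events):
    print("Escalate to the GRC community / Ethics Committee for re-assessment")
```

However simple the trigger, the design choice matters: consumers of the outcomes get a standing route back into the risk and impact assessment process, rather than being heard only after an incident.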

Proactive mitigation of downside risks is preferable to reactive management of the negative customer impact that arises when those risks go unmitigated.

Third-Party Information, Service or Solution Providers

What about algorithmic, AI and autonomous systems leveraged by third-party information, service or solution providers?

For organisations where many functions and capabilities are outsourced, this will be an area that requires significant focus, especially if they do not currently have AI Ethics Governance structures or frameworks in place and are hence unlikely to be scrutinising the downside risks from the use of algorithmic, AI and autonomous systems – internally or externally.

Once AI Ethics Governance structures and frameworks are established, their third-party risk, vendor management and procurement teams will need to enhance their knowledge, mature their capabilities, and collaborate with their GRC community and the Ethics Committee to obtain diverse inputs and multi-stakeholder feedback. This will allow them to monitor and assess related downside risks in their digital supply chain, beyond what they currently do around cybersecurity, and to establish trust that downside risks related to algorithmic, AI and autonomous systems are managed. In essence, your organisation is a stakeholder of any third-party provider that leverages algorithmic, AI and autonomous systems, and should be consulted accordingly as part of that provider's AI Ethics Governance process. A sketch of how such due-diligence questions might be tracked follows below.
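As a minimal sketch, a procurement team could track these questions alongside its existing cybersecurity due diligence. The questions and the pass/fail tracking below are my own illustrative assumptions; a real questionnaire would come from the organisation's GRC community and Ethics Committee.

```python
# Illustrative due-diligence questions only, not a published standard.
DUE_DILIGENCE_QUESTIONS = [
    "Does the provider have an AI Ethics Governance structure or framework?",
    "Were diverse inputs and multi-stakeholder feedback used in their risk and impact assessments?",
    "Can the provider explain how downside risks were identified, managed and mitigated?",
    "Is our organisation consulted as a stakeholder in the provider's governance process?",
    "Does a monitoring process feed consumer feedback back into the models?",
]

def vendor_gap_report(answers: dict[str, bool]) -> list[str]:
    """Return the questions answered 'no' or not answered at all."""
    return [q for q in DUE_DILIGENCE_QUESTIONS if not answers.get(q, False)]

answers = {DUE_DILIGENCE_QUESTIONS[0]: True, DUE_DILIGENCE_QUESTIONS[1]: False}
for gap in vendor_gap_report(answers):
    print("Follow up:", gap)
```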

In the article I wrote about the need to marry innovation with risk management, I noted that 'The scope of the risk management function within organisations needs to urgently be expanded to include emerging risks associated with innovation and related regulations. Whilst IT risks have been managed separately from other risks within organisations, predominantly by the way the organisation is structured, the interdependencies between technology, data, security, people and processes when AI solutions are deployed for consumption via a supply chain of third-party providers requires a re-think of how AI-related risks are identified and managed.'

I do not believe any organisation – especially one that is regulated – or its leadership would want to be dealing with a public issue involving harm or a discriminatory outcome experienced by their customer(s), caused by a decision made by an algorithmic, AI or autonomous system used by one of their third-party information, service or solution providers.

The benefits of diversity in collaboration

Involving a diverse community of stakeholders, as defined above, at various junctures of the algorithmic, AI and autonomous systems lifecycle – whether internally within the organisation or externally with third-party providers – to conduct risk and impact assessments is effectively a collaboration exercise.

Those of you who understand the value of collaboration will appreciate that diversity in a community of collaborators who are all aligned on the same goal can enrich the outcomes.

I experience this within ForHumanity every time I collaborate with co-contributors, who collectively bring such a diverse spectrum of competencies, skills, experience, perspectives and disciplines to our discussions and debates. Because all contributors have the opportunity to collaborate, we are able to source diverse inputs and multi-stakeholder feedback, resulting in enriched outcomes that we deliver collectively.

This is a rewarding model for leaders to adopt when building their AI Ethics Governance structure and framework, which will no doubt also transform and prepare their organisation to design, deploy and operate their algorithmic, AI and autonomous systems ethically in the digital world.

I look forward to hearing your thoughts. Feel free to contact me via LinkedIn to discuss and explore how I can help.

