Responsible Innovation: How humane is your AI?

It wasn’t too long ago that enduring business relationships were built through interactive customer engagements and sustained with trust earned over time, through the peaks of success and the troughs of challenge, by humans interacting with each other. That way of doing business is thankfully still practised in 2022, albeit in much rarer settings and between far fewer people.

It was only in the past decade, with the emergence of powerful technologies that can rapidly process large amounts of data to derive insights and infer decisions, that commercial organisations have started to deploy AI, algorithmic and autonomous systems to automate many of the customer-facing business processes and reduce operating costs by displacing humans in those processes.

Those of us who have been subjected to these technologies, more appropriately known as socio-technical systems, are unlikely to give glowing reviews of our experiences. How many happy and satisfactory engagements have you had with chatbots? If you applied for jobs and were forced to engage with automated hiring processes, how many of them kept you informed throughout and provided you with meaningful and timely updates? If you had to undertake an online assessment, were you informed of the use of algorithms and subsequently provided an explanation of how they were used in the decision-making process? If your only means of engaging with your service provider was through an app, website or a telephone bot, how easy was it to contact a customer services representative if you needed to speak with another human being?

Personalisation

Personalisation is fundamentally about ensuring that the customer you are serving gets the service that is relevant to them. In the non-digital world, this requires detailed knowledge of that customer.

Imagine that you have been frequenting a café regularly over the past few months. In that time, the proprietor has got to know you very well – which table you like to sit at, the type of coffee you regularly order, perhaps accompanied by a particular Danish on certain days of the week. The next time you walk in, that table will be available for you, and just as you are making yourself comfortable, your cup of coffee will be made and brought over to you, with your Danish if it is the right day of the week. In return, you feel valued through the personalised service you receive. The intimacy of this environment ensures your next return. You trust that you will always be looked after every time you step into that café, and that trust turns into loyalty. A personalised service that keeps customers happy is a differentiator.

Personalisation in the digital world aspires to replicate this human-led personalised service at scale, through AI, algorithmic and autonomous systems processing large amounts of data about the consumer of digital services, but it is challenging to get right, according to this article. “Get it right, and it can provide customers with a seamless experience, and one that they will come back to.” Given that “getting it right” requires a significant amount of relevant and accurate data, platforms that you regularly frequent are likely to stand a better chance of doing so than those you rarely use, assuming the data collected about you is data you consented to provide.
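
To make that dependency on relevant, accurate and consented data concrete, here is a minimal sketch of the café example in code, assuming the only data used is first-party order history the customer knowingly provided; the function and field names are hypothetical.

```python
from collections import Counter
from datetime import date

def suggest_usual_order(order_history: list[dict], today: date) -> dict:
    """Suggest the customer's most frequent order for today's weekday.

    order_history is assumed to be consented, first-party data, e.g.
    [{"weekday": 2, "coffee": "flat white", "pastry": "Danish"}, ...]
    """
    todays_orders = [o for o in order_history if o["weekday"] == today.weekday()]
    if not todays_orders:
        return {}  # no relevant history: better to offer nothing than to guess

    suggestion = {"coffee": Counter(o["coffee"] for o in todays_orders).most_common(1)[0][0]}
    pastries = Counter(o["pastry"] for o in todays_orders if o.get("pastry"))
    if pastries:
        suggestion["pastry"] = pastries.most_common(1)[0][0]
    return suggestion
```

The point of the sketch is the constraint, not the algorithm: the suggestion degrades gracefully to nothing when the data is not relevant, rather than guessing from data the customer never agreed to share.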

Depending on the types of personalised services the digital service provider intends to offer you, the amount of data they need to collect about you can vary, but in any case it will be significant. There is also the question of where these digital service providers are legitimately getting your data from. Are all the data they have about you accurate? Are they valid? Are they relevant? What inferences can they derive from the collective data they hold about you? Can you verify their authenticity and accuracy? Is their provenance disclosed to you?

Privacy issues

I don’t mind if Amazon recommends relevant choices to me while I am searching for something, or reminds me that I need to reorder a particular item that it determines, from my order history, has likely run out. What I do mind is if I suddenly get ads on a social media platform about something that I searched for using Google, as I would not have consciously consented to my search data being sold and distributed to third parties so that they can show me ads relating to what I was searching for. Similarly, I would not expect to hear of a private conversation I had with a friend being played back by someone else I didn’t know.

Privacy issues remain unresolved in the digital world despite the presence of data protection regulations such as the GDPR, because they are not robustly enforced across all industries operating in the digital world. What we have been seeing instead is action from privacy groups such as the Irish Council for Civil Liberties (ICCL), Panoptykon Foundation and Ligue des Droits Humains to address the way our internet data is freely distributed and monetised by the ad-tech industry through the Real-Time Bidding (RTB) mechanism, which broadcasts internet users’ behaviour and their locations to companies billions of times each day, as explained by Dr Johnny Ryan in this video clip.
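
To see why this matters, below is a simplified, invented illustration of the kind of fields an OpenRTB-style bid request can carry; it is not a faithful copy of the specification, and every value is hypothetical.

```python
# Illustrative only: loosely modelled on the structure of an OpenRTB-style bid request.
# Real requests vary by exchange and version; all values below are invented.
bid_request = {
    "id": "auction-7f3a",                                    # one of billions of daily auctions
    "site": {"page": "https://example.com/health/anxiety"},  # what the person is reading
    "device": {
        "ua": "Mozilla/5.0 ...",                             # browser/device fingerprinting surface
        "geo": {"lat": 51.50, "lon": -0.12},                 # approximate location
    },
    "user": {
        "id": "9d2c-...",                                     # pseudonymous ID that still singles you out
        "data": [{"segment": [{"id": "interested-in-debt-relief"}]}],  # inferred interest segments
    },
}
# A payload like this is broadcast to many bidders before the page finishes loading,
# which is how behaviour and location data reach companies the user has never heard of.
```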

In this announcement, we learned that in February 2022: ‘28 EU data protection authorities, led by the Belgian Data Protection Authority as the leading supervisory authority in the GDPR’s one-stop-mechanism, found that the online advertising industry’s trade body “IAB Europe” committed multiple violations of the GDPR in its processing of personal data in the context of its “Transparency and Consent Framework” (TCF) and the Real-Time Bidding (RTB) system.

The consent popup system known as the “Transparency & Consent Framework” (TCF) is on 80% of the European internet. The tracking industry claimed it was a measure to comply with the GDPR. Today, GDPR enforcers ruled that this consent spam has, in fact, deprived hundreds of millions of Europeans of their fundamental rights.’

Specifically, ‘The TCF consent system was found to infringe the GDPR in the following ways:

- TCF fails to ensure personal data are kept secure and confidential (Article 5(1)f, and 32 GDPR)

- TCF fails to properly request consent, and relies on a lawful basis (legitimate interest) that is not permissible because of the severe risk posed by online tracking-based "Real-Time Bidding" advertising (Article 5(1)a, and Article 6 GDPR)

- TCF fails to provide transparency about what will happen to people’s data (Article 12, 13, and 14 GDPR)

- TCF fails to implement measures to ensure that data processing is performed in accordance with the GDPR (Article 24 GDPR)

- TCF fails to respect the requirement for data protection by design (Article 25 GDPR)

- International transfers of the data do not provide adequate protection (Article 44, Article 45, Article 46, Article 47, Article 48, Article 49)’

My data, Your data, Our data

If you watched the video clip explaining how RTB works, you are likely to be shocked to learn about the extent of the data collected by the data brokers about you, all of which could be collated to describe you in great detail, based on your previous interactions with the internet.

It is the obligation of those digital service providers to preserve our privacy and safeguard our personal data. It is also their obligation to observe and afford us our rights as data subjects, as provided for by data protection regulations such as the GDPR. But are they preserving our privacy and safeguarding our personal data?

Regulators need to do more to enforce the existing regulations to deter non-compliance, as the unethical collection, procurement and collation of our most sensitive data can easily be used through AI, algorithmic and autonomous systems to target and/or exploit data subjects. Dr Ryan, in the video clip referenced earlier, warned: “These secret dossiers about you – based on what you think is private – could prompt an algorithm to remove you from the shortlist for your dream job. A retailer might use the data to single you out for a higher price online. A political group might micro-target you with personalised disinformation.”

Meanwhile, savvy internet users have switched to privacy-preserving browsers to prevent their actions from being tracked online. In 2021, Apple announced that it would provide the tools for its users to restrict online tracking, which in effect also disrupted Facebook’s business model of leveraging users’ tracking data to enable advertising. Unfortunately, the majority of society is unaware of the extent of this data privacy crisis and remains a victim of this abuse.

Profiling

Profiling is specifically referenced in the GDPR alongside automated decision-making, and both are classified as high-risk data processing activities. UK GDPR Article 4(4), as cited by the ICO, defines profiling as ‘any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.’ Effectively, the outcome of digital profiling is an inference based on past information collected and collated.

Since an inference is not a fact, the accuracy of profiling can be questionable. When the consequences of an automated decision made on an inferred profile disadvantage, discriminate against or harm that person, those outcomes also impact the organisations that deploy those socio-technical systems, as the informative BBC documentary ‘Computer Says No’ explains.
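
The problem is easiest to see in code. The following is a hypothetical sketch (all names, values and the decision rule are invented) of an automated decision that treats a stale, unverified inference as if it were a fact:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InferredAttribute:
    name: str
    value: str
    inferred_on: date            # when the inference was made
    source: str                  # where the underlying data came from
    validated_by_subject: bool   # has the person concerned ever confirmed it?

def credit_offer_decision(profile: list[InferredAttribute]) -> str:
    """Toy automated decision: the inference is treated as fact, which is the problem."""
    for attr in profile:
        if attr.name == "financial_distress" and attr.value == "likely":
            return "decline"     # the person never sees why, nor can they correct it
    return "approve"

profile = [
    InferredAttribute("financial_distress", "likely",
                      inferred_on=date(2019, 3, 1),          # years out of date
                      source="third-party data broker",
                      validated_by_subject=False),
]
print(credit_offer_decision(profile))  # "decline", driven by a stale, unverified inference
```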

Whilst there is likely to be at least one inferred digital profile for every human, generated by algorithms for use by automated decision systems, I wonder how many of these profiles have been reviewed and validated by their corresponding data subject (the human) to affirm their accuracy. I am not referring to the profile on every digital account you have, which you can complete yourself, but to one composed about you from information obtained from sources other than yourself. Do digital service providers afford you the opportunity to learn what additional information about you was sourced from third parties?

Unless your inferred profile can be validated by you and attested to be accurate, there will always be the risk that an automated decision generated from your inferred profile is inaccurate, irrelevant or invalid. Transparency about data provenance is as important as transparency about how those automated decisions were made. The assumption by organisations deploying socio-technical systems that the AI, algorithmic and autonomous systems are always right is flawed and exposes them to regulatory fines, civil litigation, reputational damage and loss of trust in their brand.
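
One way to operationalise that transparency is to record provenance and subject validation alongside every inferred attribute, and to refuse to use attributes that fail either test. This is a minimal sketch with invented field names, not a prescribed schema:

```python
# Hypothetical provenance record per inferred attribute; field names are invented.
inferred_profile = {
    "home_owner": {
        "value": True,
        "source": "third-party data broker",  # provenance disclosed
        "inferred_on": "2020-06-14",
        "validated_by_subject": False,        # never reviewed by the person concerned
    },
}

def attributes_safe_to_use(profile: dict) -> list[str]:
    """Only attributes with disclosed provenance AND subject validation pass."""
    return [
        name for name, record in profile.items()
        if record.get("source") not in (None, "unknown") and record.get("validated_by_subject")
    ]

print(attributes_safe_to_use(inferred_profile))  # [] -> nothing here should drive an automated decision
```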

Nudges, dark nudges, sludges

Digital profiles are usually targeted by organisations via their AI, algorithmic and autonomous systems to achieve certain outcomes, ranging from selling products and services, to social engineering using curated information as well as disinformation, through nudges, dark nudges and/or sludges.

So, what is a nudge, dark nudge or sludge? This article describes a nudge simply as a ‘change in the decision-making environment that works with those making a decision to facilitate choices that are in their best interest.’ A dark nudge is ‘more than cunning designs that curtail behaviour change by adding (or not removing) friction’, while a sludge is a ‘type of change adjacent to dark patterns, which are essentially nudging for bad and involve customers or beneficiaries of policies being nudged towards making choices that could actively harm them.’
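
The sludge end of that spectrum is easy to express as asymmetric friction in a consent dialog. The sketch below is an invented illustration (names and numbers are hypothetical), not a reference to any specific product:

```python
# Two invented consent-dialog configurations: a fair one and a sludge-like one.
fair_dialog = {
    "accept_all": {"clicks_required": 1},
    "reject_all": {"clicks_required": 1},   # refusing is as easy as accepting
    "preselected": [],                      # nothing ticked on the user's behalf
}

sludge_dialog = {
    "accept_all": {"clicks_required": 1},
    "reject_all": {"clicks_required": 7},   # refusal buried behind layers of settings
    "preselected": ["analytics", "ad_personalisation"],  # 'choices' made for the user
}

def refusal_friction(dialog: dict) -> int:
    """Extra effort needed to refuse versus to accept; anything above zero deserves scrutiny."""
    return dialog["reject_all"]["clicks_required"] - dialog["accept_all"]["clicks_required"]

print(refusal_friction(fair_dialog), refusal_friction(sludge_dialog))  # 0 6
```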

A well-publicised case study where algorithms were deployed to target profiles with personalised nudging is that of Cambridge Analytica and the role it played in the UK Brexit Referendum and the US Presidential Election in 2016. A detailed analysis is outlined in this paper.

Our digital existence

Organisations have rushed to embrace digital transformation on the assumption that they must establish a presence in the digital world to compete with technology companies that have disrupted established businesses. There is no denying the impact some of the notable digital disruptions have had on incumbents such as Kodak, Blockbuster and Nokia. The successes of the pioneering technology companies have paved the way for the race to transform businesses into ones that can operate effectively in the digital world. Many of those technology companies have become the Big Tech companies dominating the digital world with their platforms and ecosystems, through which most of our data flow.

Additionally, the lack of regulation within these industries has allowed innovation to thrive, with market domination as the primary purpose and focus of the leaders and shareholders of these companies.

In the past couple of decades, the evolution of computing power and high-speed internet connectivity have enabled simple applications and digital services to mature into complex applications. Large amounts of data can now be processed rapidly by AI, algorithmic and autonomous systems, delivering inferred automated decision-making socio-technical systems that are deployed to directly engage with humans under the guise of intelligent automation to improve efficiencies and reduce operating costs.

While AI, algorithmic and autonomous systems have delivered benefits in narrowly defined settings, we have seen, and continue to see, adverse outcomes from these socio-technical systems that have disadvantaged, discriminated against and harmed humans across industries, demographics and geographies. Capabilities developed in a vacuum that appeared to work were packaged, marketed, promoted and sold as effective solutions to organisations desperate to join the digital club, in a burgeoning industry that continues to grow annually. Since all outcomes matter, society needs to be aware of the downside risks that can manifest as adverse outcomes, while the positive outcomes continue to be promoted by the proponents of these technologies.

Earlier in this article, we explored the misuse of our personal data, captured by ad-tech capabilities as we interacted with the internet and then unlawfully sold by data brokers without our consent. Given that any digital profile inferred about us is unverified and likely to be inaccurate, irrelevant and invalid for the scope, context, nature and purpose of the subsequent processing by AI, algorithmic and autonomous systems, the risk of those unverified personal datasets contributing to an adverse outcome from a socio-technical system deployed by any organisation we engage with is high, unanticipated and unmitigated.

Consequently, we face scenarios where socio-technical systems (deployed by humans in organisations into the real world) are engaging with inferred digital profiles which may not accurately represent the humans those organisations are supposed to be engaging with. This podcast, ‘The AI Placed You at the Crime Scene, but You Weren’t There’, discusses one use case where you might not be who the algorithm thought you were.

Gary N Smith highlighted the limitations of the highly promoted GPT-3 and other pre-trained language model chatbots in this article, while Steven D Marlow, in his recent article, wrote: ‘History will show that the Deep Learning hill was just a landfill; the composting of human culture and social cohesion in failed effort to understand what it even means to be human.’

Maria Santacaterina and I wrote in our article: ‘The fallacy that socio-technical systems are mature enough to replace humans in decision-making is starting to be revealed across many industries that have adopted AI, Algorithmic or Autonomous systems to process personal data.’

It is very rare to hear promoters of AI, algorithmic and autonomous systems talking about how downside risks are managed and mitigated so that they don’t manifest as adverse outcomes that disadvantage, discriminate and harm humans. Even when we hear about the need for AI governance, there is little or no reference to risk management. So, whose responsibility is it to manage downside risks from AI, algorithmic and autonomous systems?

When we combine the limitations of socio-technical systems that cannot sufficiently and meaningfully engage with humans, with the inherent issues of the collated data that supposedly represents us, it is not surprising that we have a crisis in reliability, robustness, validity, accuracy, resilience and crucially, trust between humans and machines.

When you then add in the lack of accountability by organisations that deploy these socio-technical systems when adverse outcomes occur, it is not surprising that we also see an erosion of trust by society in those organisations and their brand.

Why do Boards and leaders need to be aware and care?

Most Boards and leaders of established organisations that are digitally transforming and leveraging AI, algorithmic and/or autonomous systems to deploy socio-technical systems are unlikely to be aware of how dependent digital personalisation, profiling and nudging are on vast amounts of personal data to function effectively, let alone where all that data might have been sourced from and whether it was lawfully obtained. Therefore, they are also unlikely to be aware of the ethical, legal, and social issues that are inherent in these practices.

Here is why: the majority of organisations that have deployed socio-technical systems have not taken into consideration all the outcomes for society that AI, algorithmic and autonomous systems can cause. The end-users typically become the focal point when these systems are designed. We often hear about the need to get the UX (user experience) and CX (customer experience) right, so that the objectives of those user or customer engagements can be realised. Lisa Talia Moretti, in this clip, talks about “how the focus on the end-user consequently dehumanises the people technologists try to establish empathy with. Additionally, the narrow focus on end-users rather than the community or society filters out the need to impact-assess possible changes to assumptions”. Finally, she highlights the unintended consequences of innovating at speed, where the drive to get the product to market in the shortest possible time de-prioritises contextual considerations surrounding the person engaging with that digital service. “If we are not considering the contexts in which humanity live their lives”, she argues, we “cannot have a real conversation about ethics, because ethics is about how we want to live.”

Many established organisations have likely deployed AI, algorithmic or autonomous systems in their hiring processes. Katrina Ingram and I wrote in our article: ‘Organisations that adopt vendor solutions incorporating AI, algorithmic or autonomous systems processing personal data need to be aware of the limitations and downside risks of these solutions and take the necessary steps to monitor their performance and implement robust controls and risk mitigations’. A candidate’s experience of the recruitment process with any organisation provides valuable insights into that organisation’s culture. The impressions that those automated hiring processes leave with candidates may not be the ones that the Boards and leaders of those organisations would want to leave.
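
As one concrete illustration of the kind of monitoring that passage calls for (and only an illustration, not the specific controls described in that article), here is a minimal sketch of the widely used “four-fifths” adverse-impact ratio check; the group labels and numbers are invented:

```python
# Minimal sketch of the "four-fifths" adverse-impact check often applied to hiring funnels.
# outcomes maps group -> (candidates selected, candidates who applied); values are invented.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    return {group: selected / applied for group, (selected, applied) in outcomes.items()}

def adverse_impact_flags(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate falls below `threshold` of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items() if rate / best < threshold}

outcomes = {"group_a": (40, 100), "group_b": (20, 100)}
print(adverse_impact_flags(outcomes))  # {'group_b': 0.5} -> investigate before relying on the tool
```

A failing ratio does not prove discrimination on its own, but it is exactly the kind of signal that should trigger the monitoring, controls and mitigations the quoted passage asks for.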

Given the ease with which socio-technical systems can be designed, developed and deployed, with limitations and downside risks baked in, care and a human-centric focus need to be embedded into any digital transformation initiative.

The significant gap in capabilities between these socio-technical systems and humans means that automated decision-making based on inferences from historical personal datasets (which may not be representative of the human they are applied to) cannot be assumed to be accurate, relevant or valid.

Ethics must carry an equal or higher priority than new revenue streams and profits. Humans and their well-being must be at the heart of all decisions when Boards and leaders decide to invest in and deploy socio-technical systems.

Accountability through governance and oversight must be introduced, with diverse inputs and multi-stakeholder feedback operationalised when introducing AI, algorithmic and autonomous systems processing personal data. Diversity of thought and lived experiences must be leveraged throughout the lifecycle of these socio-technical systems.

The entire organisation, starting with the leadership, needs to embrace responsible innovation, which must be woven into the fabric of its DNA and organisational culture.

Boards and leaders need to seriously reconsider the implications of the technology barriers introduced by their digital transformations and the impact on their brand from degradations in customer service and trust. Introduce human-led feedback mechanisms to enable your customers to interact with employees seamlessly when required.

Introduce transparency, explainability, oversight, governance and be accountable for the decisions as well as the outcomes from socio-technical systems. Disclose all known residual risks related to your socio-technical systems.

ForHumanity’s blueprint for an infrastructure of trust and its Risk Management Framework are ideal starting points for organisations looking to mitigate the downside risks from AI, algorithmic and autonomous systems.

Since AI, algorithmic and autonomous systems are far from humane, and most organisations deploying them to deliver automated decision-making are not mitigating the downside risks from these socio-technical systems, human interaction with customers and candidates needs to be reinstated for effective and meaningful engagement.

Trustworthiness needs to be earned through human-to-human engagement.

I look forward to hearing your thoughts. Please get in touch with me to explore how I can help you innovate responsibly.



Chris Leong is a Fellow at ForHumanity and the Director of Leong Solutions Limited, a UK-based management consultancy and licensee of ForHumanity’s Independent Audit of AI Systems (IAAIS).

Jean-Marc Dompietrini

Analytics Translator | Global Marketing Performance, Technology and Automation at Dell Technologies


Thanks Chris for this insightful post. Digital Personalization: “getting it right” requires a significant amount of relevant and accurate data … that you consented to provide. As the only way to earn trustworthiness is through human-to-human engagement, it is critical to focus on and govern first-party data in our internal Sales and MarTech capabilities.

Maria Santacaterina

CEO | SANTACATERINA |Transforming business with AI (Ambition and Imagination) for a sustainable digital future | Independent | Non-Executive Director | FTSE100 | Global | Strategy | Innovation | Luxury Retail & Fashion


Brilliant article Chris Leong! There is a long way to go towards making AI, algorithmic, automated systems humane. The first step is to recognise that they may not be!

Lukas Madl, FHCA

CEO of innovethic | The Tech Ethics company | building technology that people can trust | boosting business with good conscience | AI and bioethics | ForHumanity fellow and certified auditor


Chris, thank you for this very comprehensive and insightful article. It helps a lot to get to the point of the risks we have to be aware of.

Dr. Cari Miller

AI Governance | AI Procurement | 100 Brilliant Women in AI Ethics | Certified AI Auditor | Certified Change Manager | Vice Chair IEEE P3119 | Executive Board Member at ForHumanity


Brilliant summary of all the issues in one cogent article! Problems need solutions and solutions need doers. I hope we can attract LOTS of doers. There certainly is plenty of work to go around. #forhumanity
