Responsible Innovation: Why We Need Digital Trust
Image by Tumisu from Pixabay

While the hype around ‘Artificial Intelligence’, driven by ‘Generative AI’ tools such as ChatGPT, Bing, and Bard, continues to dominate media headlines, there is increasing awareness in civil society of both the opportunities and the associated risks. In particular, the negative impacts on individual privacy, data ownership, data sovereignty, the future of work, civil liberties, discrimination, bias, and physical and psychological harm, including perceived existential threats to humanity, are polarising public opinion and undermining democratic values and beliefs.

Those who see progress in this field of technology are benefitting from the speculative ‘financial bubble’ and possible revenue opportunities, while those who are concerned about the uses to which these technologies are put, and how they have been deployed without the requisite ‘safety tests’, are perplexed that they continue to be developed in the absence of regulations. Consequently, existing laws are being weakened and the unintended consequences of ‘brittle’ and unreliable systems are proving harmful to global citizens.

Our last article, Regulations Will Not Stifle Innovation, saw opinions divided between those who agree with the need for regulation, including the enforcement of existing regulations and the introduction of new ones, and those who are opposed to regulation, especially those whose interests would be impacted. Subsequently, we observed Sam Altman, who had previously called for regulations to be introduced, threaten to withdraw OpenAI’s services from the EU, and then announce that it would continue to operate in the EU. The EU is, after all, a significant market of 460 million citizens eager to engage with OpenAI’s impressive offerings.

Meanwhile, various polls have been conducted in recent years to measure sentiment about trust in ‘AI’ among the sampled population groups.

Many in civil society would have experienced first-hand the adverse outcomes from automated decision-making and profiling by non-deterministic algorithms deployed in Socio-Technical Systems (STS) in areas such as:

  • hiring and human capital management;
  • access to public services, health care, benefits, products, or opportunities;
  • identification of natural persons based on biometric data;
  • monitoring and surveillance.

The problem with the media using ‘Artificial Intelligence’ as an umbrella term for the family of machine learning, algorithmic, and autonomous systems is that every use case within the spectrum is tarred with the same brush in the ‘hype cycle.’ Positive and beneficial use cases, where these technologies are deployed with a ‘human-in-the-loop’ and within a human-specified narrow scope, context, nature, and purpose, are therefore tarnished by use cases where STS have either caused harm or are likely to directly and adversely impact citizens and civil society through automated decision-making and profiling, often without recourse or any meaningful option for redress of the harms caused.

Most of the findings from the surveys portray different pictures depending on the geo-political landscape and socio-economic circumstances. It would be interesting to see updates of some of the more comprehensive surveys conducted prior to the release of the ‘Generative AI’ tools.

Nevertheless, the family of technologies that encompasses machine learning, algorithmic, generative, and autonomous systems exists to analyse data. While data analytics has been a scientific discipline and a part of organisational work for the past few decades, it is the emergence of powerful, accessible, and affordable capabilities over the last three to five years that gives rise to concern. As organisations deploying STS can effectively and efficiently analyse and derive inferences from personal data for automated decision-making and profiling, citizens and civil society increasingly have legitimate cause for concern.

It is really about the data – (y)our data

The Centre for Data Ethics and Innovation (CDEI) in the UK published their independent report on Public attitudes to data and AI: Tracker survey (Wave 2). Although it was published prior to the launch of ChatGPT and the other ‘Generative AI’ tools, it cited the following five areas of concern within its seven key findings:

  • “Data security and privacy are the top concerns, reflecting the most commonly recalled news stories.”
  • “Trust in data actors is strongly related to overall trust in those organisations.”
  • “UK adults do not want to be identifiable in shared data - but will share personal data in the interests of protecting fairness.”
  • “The UK adult population prefers experts to be involved in the review process for how their data is managed.”
  • “People are positive about the added conveniences of AI, but expect strong governance in higher risk scenarios.”

As the digitalisation of our physical world continues, more of what we do and how we interact with digital services is captured, processed, monetised, shared, sold, or traded, with and without our informed consent. Inferences are then drawn from our personal data by computer software, namely the algorithms embedded within the STS deployed by various organisations, mostly to influence how we are engaged through ‘personalisation.’

When ‘Generative AI’ technologies, with their ability to mimic the natural use of language, are deployed to enhance how these STS engage with us, those interactions can become intimate, as Yuval Noah Harari explained towards the end of his recent presentation at the Frontiers Forum.

These generative large language models, which are at the core of popular and publicly available ‘Generative AI’ tools, have been trained on data scraped from the internet as well as data introduced by their users, and this is likely to include accessible personal data used without the data subject’s knowledge, awareness, or consent. OpenAI confirms on their website that “ChatGPT and our other services are developed using (1) information that is publicly available on the Internet, (2) information that we license from third parties and (3) information that our users or human trainers provide.” Consequently, the quality, legitimacy, and accuracy of generated outputs have been less than optimal.

The likelihood of personal data being included in generated outputs prompted the Italian Data Protection Regulator to ban ChatGPT in March 2023. Since then, OpenAI has made provisions to comply with GDPR, such as:

“Individuals in certain jurisdictions can object to the processing of their personal information by our models by filling out this form. Individuals also may have the right to access, correct, restrict, delete or transfer their personal information that may be included in our training information. You can exercise these rights by reaching out to [email protected].”

Where the personal data of a data subject has been used to train a model, it remains to be seen how OpenAI, or the provider of a similar generative large language model, can give effect to that data subject’s right to have their data deleted and demonstrate that they have done so.

This article expands on the privacy issues that are at the heart of large language models:

“Many of the issues raised by the Italian regulator are likely to cut to the core of all development of machine learning and generative AI systems, experts say. The EU is developing AI regulations, but so far there has been comparatively little action taken against the development of machine learning systems when it comes to privacy.
“There is this rot at the very foundations of the building blocks of this technology - and I think that’s going to be very hard to cure,” says Elizabeth Renieris, senior research associate at Oxford’s Institute for Ethics in AI and author on data practices. She points out that many data sets used for training machine learning systems have existed for years, and it is likely there were few privacy considerations when they were being put together.
“There’s this layering and this complex supply chain of how that data ultimately makes its way into something like GPT-4,” Renieris says. “There’s never really been any type of data protection by design or default.”

Since ChatGPT and other similar publicly available ‘Generative AI’ tools were released, data provided by users have continued to enhance their respective foundation models. But the privacy, accuracy and quality issues remain.

We saw the headline that “Apple Blocks ChatGPT”, which follows similar decisions made by other large corporations. However, the rapid adoption and embedding of these ‘Generative AI’ tools by third-party providers into their existing applications, software services, and platforms has led to these tools being introduced into the workplace indirectly and at scale.

Has robust security due diligence been conducted beyond the privacy policies and contracts?

Do you know whether corporate and personal data stay within the organisation’s firewall when used with these embedded ‘Generative AI’ tools?

The capabilities afforded by ‘Generative AI’ tools to generate content from data that is readily available on the Internet have spawned ideas for uses in many areas, as described in this article:

“It’s also important that this phenomenon – whether we call it a heist or an opportunity - also speaks to the growing call for data dignity. How do we grant rights, royalties and recognition to the humans creating the value that AI mines into this conversation?
It’s very clearly a Wild West as people look to fence off the value of things in light of the rapid innovation in large language models. We need to think about a practical and just way to fence this new landscape before we quietly accept all the forms of our unrecognized assets have slipped behind new paywalls.”

All examples listed in the article referenced above assume that these use cases produce intended outcomes consistently, but do they?

Have the foreseeable risks of unintended consequences been anticipated, isolated, and mitigated by organisations deploying these tools?

When Personal Data Is Collected and Mined

The digitalisation of our physical world has been progressing at an increasing pace over the past decade. This has resulted in an accumulation of data captured through every interaction and transaction.

The introduction of the iPhone and similar smartphones in 2007 accelerated the digitalisation of our personal worlds. If we look back at the last 16 years and chart the expansion of data that is generated, collected, processed, traded, and leveraged by STS, the curve is exponential and it is set to expand further with increased velocity.

This article describes how personal data is collected by Amazon’s Alexa, a ‘voice-activated smart assistant’ and used to train their large language model. It also outlines privacy concerns which users should be aware of.

We know publicly accessible ‘Generative AI’ tools such as ChatGPT were trained with personal data. Where training datasets contain personal data, was that data always obtained with informed consent for the purpose of training machine learning models?

The introduction of GDPR in 2018 afforded data protection rights to EU and UK citizens. The introduction of similar Data Protection and Data Privacy Regulations in other jurisdictions afforded their citizens respective data protection rights.

Nevertheless, technologies developed and deployed by organisations processing personal data are often not built with privacy-by-design or compliance-by-design in mind. The perception held by many that any data available on the Internet is ‘freely’ accessible has given rise to the popularity of these ‘Generative AI’ tools, notwithstanding the resulting legal challenges that are underway.

The capabilities of the underlying technologies have matured to the stage where the resulting outputs – in written language, imagery, and voice formats – have introduced new risks to organisations, societies, and democracies, as these articles suggest:

In some cases, these risks could be systemic, as this warning from SEC Chair Gary Gensler suggests.

In this video interview on “How artificial intelligence is being used to create ‘deepfakes’ online”, Jack Stubbs, VP of Intelligence at Graphika, concluded by noting:

“But we are accelerating in terms of the speed at which we’re heading towards this situation that some people refer to as zero trust, you know, this environment, particularly online, where it is impossible to ascertain what is true or what is false. It is not just being presented with something that never happened to be convinced it’s real. But on the flip side, where there can be perfectly real world legitimate, authentic events, but it is impossible to verify that’s the case.”

What does our Future hold?

Every new gadget, device, app, and digital service we use not only immerses us further in the digital world, but also enables more data points about who we are, what we do, and how we behave to be captured, collected, and processed by algorithms generating inferences about our digital ‘personas.’ Just review the apps on your smartphone and examine what data is shared and with which entities.

Proximity data, collected alongside our personal data, is used by data brokers to enrich our digital profiles through algorithmic inferences that may or may not be accurate. Dr Johnny Ryan explains how this works in this short video clip.
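To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python of how a broker-style system might turn proximity data into inferred profile attributes. The rules, thresholds, and labels are invented for illustration only; real brokers rely on far larger datasets and opaque models, which is precisely why the resulting inferences may or may not be accurate.

```python
# Hypothetical illustration of broker-style profile enrichment from proximity data.
# All names and rules here are invented for illustration; they do not describe any
# specific company's system.

from dataclasses import dataclass, field


@dataclass
class Profile:
    person_id: str
    inferred_attributes: set = field(default_factory=set)


def enrich(profile: Profile, proximity_events: list) -> Profile:
    """proximity_events: list of (venue_category, visits_in_last_30_days) tuples."""
    for venue, visits in proximity_events:
        if venue == "gym" and visits >= 8:
            # Inference, not fact: frequent proximity to a gym becomes a
            # 'fitness enthusiast' label, even if the person merely parks nearby.
            profile.inferred_attributes.add("fitness enthusiast")
        if venue == "payday lender" and visits >= 2:
            # A potentially harmful label if the inference is wrong.
            profile.inferred_attributes.add("financially stressed")
    return profile


if __name__ == "__main__":
    profile = enrich(Profile("person-42"), [("gym", 12), ("payday lender", 2)])
    print(profile.inferred_attributes)
```

The point of the sketch is that the labels are generated entirely from circumstantial signals, yet they can follow a person into decisions about advertising, credit, or insurance without any opportunity to verify or correct them.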

It would be interesting to validate what our digital profiles say about us. Yet, our digital profiles are leveraged by organisations deploying STS to profile us and influence how we interact with their digital services.

If you are following the progress of Neurotech, you might have seen this article citing feedback from the ICO about the possibility that “Companies in the future may use brain-monitoring technology to watch or hire workers.” It also warned that “there is a real danger of discrimination if ‘neurotech’ is not developed and used properly.”

Take a look at this short video clip, written and produced by Dr Louis Rosenberg, which gives us a peek into the future of mixed reality. Consider the implications of having your thoughts and sentiments interpreted by algorithms, transmitted to others, and subjected to manipulation. Then ask yourself if this is the future you would like to be living in.

Zero Trust

“Zero Trust” refers to a security framework that requires all user access to an organisation’s network – whether from within or from outside – to be authenticated, authorised, and continuously validated before being granted. This includes user access to applications and data within the enterprise.
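As a rough illustration of that principle, the Python sketch below (with hypothetical names such as Request, validate_token, and is_authorised) authenticates and authorises every single request against an explicit policy, and deliberately ignores whether the request originates inside the corporate network, because location confers no trust. A real deployment would delegate these checks to an identity provider and a policy engine rather than an in-memory table.

```python
# Minimal zero-trust sketch: every request is authenticated and authorised,
# and network location is never treated as a proxy for trust.

from dataclasses import dataclass
import time


@dataclass
class Request:
    user_id: str
    token: str
    token_expiry: float          # Unix timestamp when the token stops being valid
    resource: str                # e.g. "payroll-db"
    from_internal_network: bool  # deliberately ignored by handle()


# Hypothetical policy table: which users may access which resources.
ACCESS_POLICY = {
    ("alice", "payroll-db"): True,
    ("bob", "payroll-db"): False,
}


def validate_token(req: Request) -> bool:
    """Authenticate: the token must be present and unexpired on every request."""
    return bool(req.token) and req.token_expiry > time.time()


def is_authorised(req: Request) -> bool:
    """Authorise: only an explicit policy entry grants access; default is deny."""
    return ACCESS_POLICY.get((req.user_id, req.resource), False)


def handle(req: Request) -> str:
    # Note: req.from_internal_network is never consulted.
    # Being inside the firewall grants no implicit trust.
    if not validate_token(req):
        return "401 Unauthorized: authentication failed"
    if not is_authorised(req):
        return "403 Forbidden: no explicit policy grants this access"
    return f"200 OK: {req.user_id} may access {req.resource}"


if __name__ == "__main__":
    print(handle(Request("alice", "tok-123", time.time() + 3600, "payroll-db", True)))
    print(handle(Request("bob", "tok-456", time.time() + 3600, "payroll-db", True)))
    print(handle(Request("mallory", "", 0.0, "payroll-db", True)))
```

The design choice worth noticing is the default-deny policy lookup: access exists only where it has been explicitly granted, which is the opposite of the traditional perimeter model in which anything inside the network is trusted by default.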

When we consider the risks and consequences of all potential outputs from generative technologies and the likelihood of adverse outcomes arising from those and from the automated decisions and profiling deployed through STS, should we ask ourselves if we can trust what we read, hear, see, and receive?

More generally, we submit our personal data during each digital transaction and in exchange for the digital services we receive and interact with. If we are in jurisdictions governed by Data Protection and Data Privacy laws, we are led to believe that our personal data is protected by the organisations deploying these technologies, as they need to comply with the regulations.

Where consent is the lawful basis under GDPR, we expect to be informed about how our personal data is used and processed, and to be afforded the rights provided through Articles 5, 6, 13-18, 20, 21 and 22 where automated decision-making and profiling are in use.

In the US recently, the FTC fined Amazon for alleged privacy breaches relating to “voice recordings, transcriptions and precise location data collected from children via the Alexa voice assistant even after parents requested their removal.” Additionally, according to the FTC, Amazon’s Ring employees and contractors “were given unrestricted access to view videos taken at users’ homes” through their video doorbells and security cameras.

As more smart, IoT, and wearable devices are introduced into our environments and our lives, more data is captured and collected about us. How much do you trust the organisations that deploy them? Will they keep your data protected and secured in accordance with data protection and data privacy laws?

Organisations also source our personal data through third-party data brokers, who collect it through a variety of sources and methods. We wrote about this in a previous article titled ‘Are you aware of your digital profile?’

Do you know what your digital profile looks like?

Have you consented to the collection of your personal data by third-party data brokers?

Do you trust that all the information collected and inferred about you in your digital profile is credible and accurate?

This article proposes the adoption of the zero-trust paradigm for privacy and ‘AI.’

Trust in technology emerges when we build a digital ecosystem where consumer expectations fit reality, where widespread harms are mitigated and where consumers can accurately measure any differences in the riskiness of systems they choose to use.
In this way, trust emerges when it is questioned rather than assumed. Baseline trust in a technology may require government assurance. Above this, consumers must be given the tools they need to make informed decisions about the products and services they use. This is as true for privacy as it is for cars. The more we establish independent and verifiable indicators of data protection practices, the more we empower consumers to put their trust only where it is deserved.
But learning from the security field, we should also embrace a culture of zero trust — demanding transparency, validation, and metrics at every opportunity, and working together to build mechanisms that can demonstrate trustworthy privacy practices.

Enforced regulations will drive change

The lack of enforcement of existing regulations applicable to adverse outcomes from the deployment of automated decision-making and profiling, and to the misuse of ‘Generative AI’ tools, has resulted and will continue to result in new risks to organisations, society, and democracy that will be challenging to mitigate.

We need enforced regulations – current as well as new – to provide the necessary guidance and operational safeguards within which these emerging technologies are deployed to process personal and corporate data.

Enforced regulations can drive the mitigation of existing risks from the deployment of automated decision-making and profiling in STS used in high-risk use cases, such as those listed by the ICO and in the EU AI Act.

The incoming EU Digital Services Act will apply to online platforms. It will require organisations deploying online platforms “to be more transparent about their algorithms, beef up processes to block the spread of harmful posts and ban targeted advertising based on sensitive data such as religion or sexual orientation”, according to this article. We highlighted some of the amendments to the EU AI Act in our last article, which will impact the deployment of ‘Generative AI’ tools.

Enforced regulations are a critical component of ensuring that our personal data is protected and that foreseeable risks of adverse outcomes from automated decision-making and profiling by non-deterministic algorithms deployed in Socio-Technical Systems (STS) are mitigated with accountability.

What can organisations do to earn digital trust?

If citizens and civil society adopt and embrace zero-trust principles when engaging with STS that provide digital services, deploying organisations cannot afford to maintain the status quo and still expect the engagement that is critical for their business survival, let alone business growth.

The uncertainty citizens and civil society will have about how their personal data is and will be treated, and about how automated decisions and profiling will impact them, will likely result in no engagement if they cannot trust what they read, hear, see, and receive. We saw this outcome a couple of years ago, when more than a million people opted out of NHS data-sharing in one month in a huge backlash against government plans to make patient data available to private companies. Since then, a class-action lawsuit has been brought against Google over patient data shared by the Royal Free NHS Trust in London with DeepMind, without the patients’ knowledge or consent.

If you are a Board member or the CEO of a deploying organisation, you will need to think and act differently, embrace the principles of responsible innovation, and operationalise safeguards.

Recognising that Data Governance is a key organisational capability, and abiding by the Data Protection and Data Privacy regulations within your jurisdiction that govern the use and treatment of personal data, is fundamental to any organisation with aspirations to operate successfully in the digital world.

Achieving compliance with regulations is mandatory.

Our Responsible Innovation Framework describes the foundations upon which organisations can build public trust while embracing change and embarking on a smooth digital transformation journey. It starts with Purpose, with the human consumer embedded throughout, addressing the fundamental rights of citizens and the outcomes for civil society.

Organisations deploying STS that process personal data need to be able to demonstrate that they do what they say. This will be reflected in the outcomes from the STS and experienced directly by the consumer of digital services, but it can also be externally validated through a third-party independent audit that leverages audit criteria incorporating the rights and interests of civil society, such as ForHumanity’s Independent Audit of AI Systems.

Being transparent, responsible, and accountable is critical to earning trust.

Transparency provides Clarity

If you are a Board member or the CEO of a deploying organisation, can you:

  • be certain that all data within your enterprise is governed, protected, secured, and accounted for?
  • trust that the datasets containing personal data on which your STS were trained for their automated decision-making and profiling capabilities were obtained with informed consent and are accurate, fair, and compliant with the respective regulations?
  • trust that your algorithms designed to deliver automated decision-making and profiling in your STS that impact citizens and civil society can consistently produce decisions that are ethical?
  • trust that your organisation can explain all outcomes from your non-deterministic algorithms deployed in Socio-Technical Systems (STS)?
  • trust that citizens impacted by your STS and whose personal data your organisation holds can exercise their fundamental rights as provisioned by the Data Protection and Data Privacy regulations that your organisation must comply with?

It may be a significant gap to fill for some organisations that have deployed STS without prioritising and investing in data governance, compliance, risk management, ethics, and governance by design. But the gap needs to be addressed as a matter of priority and here’s why:

  • If Boards and CEOs decide to do nothing, hoping that adverse outcomes do not eventuate and that the regulations will never be enforced, they should be prepared for the mounting costs when they do.

Is this a risk that your shareholders or investors are prepared to take?

  • If adverse outcomes from your STS impact citizens, your organisation will not only be liable for regulatory fines and potentially face lawsuits, with subsequent loss of reputation and brand value, but will ultimately also lose the trust of your consumers, customers, and civil society.

Is this a risk your employees are prepared to take?

  • If trust in your brand and digital services is lost, your organisation will find it very challenging to attain the level of engagement required to realise the growth your Board expected when they approved the budget for your digital business transformation programme.

Is this a risk that your Board is prepared to take?

Each stakeholder outlined above should be aware of the risks associated with any STS your organisation has deployed that processes personal data with automated decision-making and profiling capabilities. This includes the use of ‘Generative AI’ tools.

Simply focusing on the upsides of deploying these technologies, which also process personal data, without understanding and mitigating the associated downside risks is no longer a viable strategy.

Citizens and civil society will adopt zero trust principles when deciding whether they will engage with your digital services. It makes sense to do so. In anticipation of this adoption, Boards and CEOs will need to think and act differently as a matter of urgency.


Chris Leong is a Fellow, Certified Auditor (FHCA) and a Member of the Ethics Committee at ForHumanity and the Director of Leong Solutions Limited, a UK-based Management Consultancy and Licensee of the ForHumanity Independent Audit of AI Systems, helping you succeed in your digital business transformation through Responsible Innovation and Differentiate Through Trustworthiness.

Maria Santacaterina is a Fellow, Certified Auditor (FHCA) and a Member of the Ethics Committee at ForHumanity, CEO and Founder of SANTACATERINA, a UK-based Global Strategic Leadership & Board Executive Advisory, helping you revitalise your Core Business Strategy, Create Enduring Value and Build a Sustainable Digital Future.


Alexander Bagg

OmniFuturist | Media Tech Comms Innovation and Analysis | Advanced UI Design | Composer | Audio Visual Synthesist

Oh my goodness! There's so much here that I'm not sure where to begin. But much of the problems raised, and discussed in the comments, are really all about the same thing. I'm surprised it's still taking so many, so long to figure it out. Though it does require radically shifting your focus and perspective on how the internet should really work as an information and communication tool. But you have to start from scratch. The real key to it all is the word broadcast! I'll be back in a little while, after cogitating on it and deciding on how to best respond to some of the main concerns.

Niël Malan

Helping Boards and Executive teams drive Reinvention with their most valuable resources. Board Member | CEO | MD | CDIO | Startups | Implementor | Digital Transformation PE, FMCG, Energy, Supply Chain, Start-ups

Thanks Chris, once again an important topic to be taken seriously. Trust has migrated from a handshake, a signature and an ID document to the digital era of multiple additional layers beyond the capability of an individual’s control. This is where legislation, governance, standards, rules and responsibilities, and policing, to name but a few, are needed to protect everyone.
