Go Digital: Duty of Care
Chris Leong, FHCA
Director | Advisory & Delivery | Change & Transformation | GRC & Digital Ethics | All views my own
Over the past decade, the introduction of 4G mobile wireless technology, supplemented by the mass adoption of broadband and faster CPUs in computing devices, has accelerated the evolution of 4IR technologies and catapulted our society into the digital world. Our lives have since been transformed. Smartphones have become our appendages. We have been able to do more, achieve more, and access more through these devices – not just Gen-Z, but also Gen-Y/Millennials, Gen-X, and, dare I say it, Baby Boomers as well. Information and an increasing number of online services are almost instantly available at your fingertips – literally – or at the end of your voice command for those who prefer Siri, Google Assistant, or Jarvis (if you are a diehard Iron Man fan). This article from The Guardian discusses the findings from an embedded report about the impact of smartphones on our lives.
According to this Intel article, 5G mobile wireless technology promises to further transform our digital world. Like it or not, our lives will be transformed further: “Smart homes and cities will also take a giant leap forward in the future of 5G. Using more connected devices than ever, AI will be taken to places it has never been before with edge computing. From houses that give personalized energy-saving suggestions that maximize environmental impact to traffic lights that change their patterns based on traffic flow, 5G applications relying on added network capacity will impact nearly everyone.”
It’s all about our data
Digital is all about data, and a significant amount of data fuels the AI systems which are becoming more prevalent in the apps and websites humans interact with. 5G, faster internet, and more powerful processors will no doubt enable more data to be processed at scale and speed by AI systems. More and more of our physical world will be connected digitally through the introduction of IoT devices – capturing data from the points they are attached to. We cannot escape the fact that more data about what we do, in addition to who we are and what we like, will be captured, stored, processed, used, shared (with permission, I hope), and possibly sold to third parties. Unless, of course, you choose to escape to a far-flung island that is remote and off-grid.
When we interact with the digital world, we entrust our data to the providers we choose to interact with. They use our data to provide us with a personalised user experience, which in essence can only be delivered by harnessing a variety of information about ourselves to determine (through inference) what matters, in order to engage us. They also have the responsibility to use our data ethically, fairly, purposefully, accurately, transparently, securely, and lawfully. When service providers then add AI to the mix, they need to ensure that their AI/ML algorithms are subject to the same standards and scrutiny. This article discusses, from the EU GDPR’s perspective, the wider implications of automated decision making – citing Article 22(1) GDPR, which states that any “data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling”. Interestingly, earlier this week the BBC reported that most cookie banners “do not comply with the requirements of the GDPR”. If you have ever wondered what the UK political parties do with your data, this article and the embedded report by the ICO provide the insights.
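To make this concrete, here is a minimal sketch of what honouring Article 22(1) could look like inside a service: a gate that refuses to act on a solely automated, legally significant decision without human review. All the names below (the Decision fields, the hypothetical loan-decline outcome) are illustrative assumptions for this article, not a reference implementation of any regulator’s guidance.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str                # e.g. "approve" or "decline"
    solely_automated: bool      # no meaningful human involvement in the decision
    legally_significant: bool   # produces legal or similarly significant effects


def route_decision(decision: Decision) -> str:
    """Route a decision for human review when Article 22(1)-style
    conditions are met: solely automated processing with legal or
    similarly significant effects on the data subject."""
    if decision.solely_automated and decision.legally_significant:
        # Do not act on the automated outcome alone; involve a human reviewer.
        return "escalate_to_human_review"
    return decision.outcome


# Hypothetical usage: an automated loan decline is not acted upon directly.
print(route_decision(Decision("decline", solely_automated=True, legally_significant=True)))
# -> escalate_to_human_review
```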
What is innovation without trust?
We have a situation where innovation has rapidly progressed without the downside risks of unintended negative consequences being thought through carefully and ethically by those leveraging powerful transformative digital technologies such as AI/ML and our data. The increasing evidence of negative societal impact over the past few years has resulted in regulators stepping in with proposed new regulations, while applicable current laws remain enforceable. We will see more macro-level change as regulators in other countries follow suit and introduce their own regulations to govern the use of AI within their jurisdictions. Earlier this week, Eric Schmidt criticised the transparency rules in the EU's proposed AI regulation, arguing that the regulation "requires that the system would be able to explain itself. But machine learning systems cannot fully explain how they make their decisions." My view is simple – if you cannot explain how your AI system made its decision, it should not be used or adopted in situations that impact humans. It should not pass necessity assessments. Being able to explain how your AI system made its decision is part of your accountability to your customers, consumers, and society.
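Explainability is not an unsolved problem for every model. As one illustration of what “being able to explain” can mean in practice, the sketch below applies model-agnostic permutation importance from scikit-learn to a model trained on synthetic data, surfacing which inputs the model’s decisions actually depend on. The data and model are placeholders, and this is one technique among many, not a compliance recipe.

```python
# A minimal sketch: model-agnostic explanation via permutation importance.
# The dataset and model here are synthetic stand-ins for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy: the
# features whose shuffling hurts most are those the decisions rely on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```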
There is the suggestion that regulation such as that proposed by the EU stifles innovation. On the contrary, I believe the lack of trust will be the factor that stifles the adoption of innovation in the digital world. Trust is key to engagement in the digital world, and the lack of it will be the headwind for businesses aspiring to grow there, where customers also have choices. There is too much at stake when AI/ML algorithms cannot be explained and people could be disadvantaged, hurt, or discriminated against as a result of decisions taken by these AI systems. Regulators such as the EU and the FTC have recognised the value of introducing regulations to enable outcomes such as transparency, accountability, explainability, fairness, and trust to be derived as part of products, services, and solutions that leverage AI systems. Change is happening at the macro level.
Another key concern at the macro level relates to the increasing malicious use of AI as reported in this article. Awareness is critical for customers to understand the related security risks.
It’s all about the culture
Change will also need to happen at the micro level within organisations using, buying, or selling products, services, or solutions with embedded AI systems. Regulated organisations that have adopted AI need to urgently understand and quantify the gaps in their AI and Data ethics and governance capabilities, as we explored in my last article.
This article in WIRED talks about the “Techlash”, which sums up the “crisis of trust with the public”. Technology companies dominate the digital world, so traditional companies strive to become technology companies. Reducing the time to market for products has been one of the main drivers for many digital-first technology companies. “The priority for most engineers is to ship their products fast. That is how they are primarily evaluated, and that is what the culture prizes in most tech companies.” So we have DevOps being introduced to deliver Continuous Integration/Continuous Deployment capabilities that release in the shortest possible timeframes. “It’s all about the culture”, the article notes. It is therefore imperative that the culture that drives these organisations also embeds the principles and practices of robust AI and Data ethics and governance, so that the outcomes from their products, solutions, and services are ethical, fair, accurate, transparent, secure, safe, and lawful for their customers, consumers, and society. Organisations that use AI systems directly or indirectly via their third-party digital supply chain (which I will cover in further depth in a future article) have a duty of care to their customers, consumers, and society.
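If ethics and governance are to live inside the same culture that prizes CI/CD, one way to operationalise them is to gate releases on automated checks, just as builds are gated on unit tests. The sketch below is a deliberately simplified illustration: the metric (a demographic parity gap across groups), the 0.10 threshold, and every name in it are assumptions made for this example, not a prescribed standard.

```python
# A deliberately simplified CI gate: fail the pipeline if the model's
# positive-decision rates differ too much across groups.
from collections import defaultdict

MAX_PARITY_GAP = 0.10  # hypothetical policy threshold, set by governance


def selection_rates(groups: list[str], decisions: list[int]) -> dict[str, float]:
    """Proportion of positive decisions (1s) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}


def fairness_gate(groups: list[str], decisions: list[int]) -> None:
    rates = selection_rates(groups, decisions)
    gap = max(rates.values()) - min(rates.values())
    # Raising makes the CI job exit non-zero, blocking the release.
    assert gap <= MAX_PARITY_GAP, f"Parity gap {gap:.2f} exceeds {MAX_PARITY_GAP}"


# Example against hypothetical evaluation output: a gap of ~0.17
# exceeds the threshold, so the release would be blocked.
try:
    fairness_gate(["a", "a", "b", "b", "b"], [1, 0, 1, 1, 0])
except AssertionError as err:
    print(f"Release blocked: {err}")
```

The point is not this particular metric; it is that ethics and governance checks can run continuously in the same pipeline, and at the same cadence, as the engineering culture already values.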
Differentiation
In the digital world, there is tangible value for any organisation using AI systems in being able to continually assure their customers, consumers, and society, and give them the added confidence and trust to engage, by demonstrating that they have:
· robust AI and Data ethics and governance structures and capabilities in place – putting humans at the heart of all AI-driven decision-making outcomes;
· assurance by their internal audit function that their AI systems comply with all relevant regulations in the jurisdictions they operate in, independently and externally verified using third-party audit rules such as those defined by ForHumanity;
· been able to obtain a favourable rating from an independent ethics rating agency such as EthicsGrade.
If you are a financial services organisation using AI systems, or a technology company using AI/ML technologies within the products, services, and solutions that you supply to financial services organisations, this is certainly an area where you can differentiate yourself from your competitors in a digital world that is at the cusp of transforming more of the physical world through further advancements in technologies such as 5G and IoT.
So, who wants to lead?
I look forward to hearing your thoughts. Feel free to contact me via LinkedIn to discuss and explore how I can help.
Board Member & Advisor | Conduct, Risk & Governance | FinTech | RegTech | Expert Witness
Great article Chris Leong – thank you for sharing. As you note, ‘being able to explain how your AI system made its decision is part of your accountability to your customers, consumers, and society.’ This is at the core of the regulators’ focus on treating customers fairly, with an intersection point with regulatory accountability regimes.
Great article and insights Chris Leong. In addition to Continuous Integration/Continuous Deployment, leading firms are now deploying Continuous Compliance platforms to ensure adherence to robust controls.