Go Digital: What does Quality mean?

Those of us familiar with software understand that no software is bug-free.

A CEO of a software company once joked to his audience at a Christmas drinks event that if software companies made airplanes the way they made software, many of them would be falling from the sky. Those of us who have implemented software systems can certainly attest to the quality issues discovered during the testing phase, some projects more than others; the lack of quality has frustrated many project team members and embarrassed the software vendors responsible.

Does quality matter?

Quality in software has mattered more to some than to others. The scenarios in which software was used were typically risk-assessed against the criticality and impact of failure for the business, and many argued that the recovery measures committed by third-party providers in their SLAs were adequate to cover eventualities of failure. Yet the issues experienced by TSB in 2018, following its IT system upgrade, impacted 1.9 million customers for over four weeks, severely denting the bank’s reputation in the marketplace as well as incurring additional costs to the tune of several hundred million pounds.

In a world where failure within the interconnected digital supply chain on the internet can have a significant impact, the integrity and security of software deployed within the digital ecosystem are paramount, as the SolarWinds breach demonstrated. Earlier this week, issues at the cloud computing provider Fastly resulted in an outage, lasting about an hour, that impacted many major websites and, consequently, the users, businesses, and organisations that rely on them.

Whilst cybersecurity has been headlining concerns in the digital supply chain, another area we must be aware of relates to the downside risks and unintended negative consequences of AI/ML systems, as an increasing number of them are used in decision-making situations that impact humans. This 2018 interview with Alexander Nix, CEO of Cambridge Analytica, provides a reality check on the seismic outcomes of personalisation without guardrails. It is interesting to note his references to morals, ethics, and laws when many would argue these were lacking, as evidenced by the outcomes. There are lessons to be learned, but the question remains whether the same mistakes can be avoided.

Current gaps

In my article about Gaps, Trust and Accountability, I referenced the recent FICO survey of 100 AI-focused leaders from the financial services sector, with 20 executives from each of the US, Latin America, Europe, the Middle East and Africa, and the Asia Pacific regions, which revealed that “22% say their enterprise has an AI ethics board to consider questions on AI ethics and fairness. The 78% that don’t are poorly equipped to ensure the ethical implications of using new AI systems are considered properly”.

My collaboration with the ForHumanity community has reassured me that organisations leveraging powerful, transformative digital technologies such as AI/ML, whether directly or indirectly through third-party service providers, can do a great deal to implement an infrastructure of trust: one that delivers accountability through the transparency, governance, and oversight needed to assure the customers and the wider society consuming their digital services.

Furthermore, a great deal needs to be done in the sourcing, vendor risk, and third-party risk management areas to help build that macro-level infrastructure of trust, one that can provide a robust and resilient digital supply chain from which innovation within platforms and ecosystems can thrive. While digital is all about data, trust is the key enabler of engagement, which digital businesses rely on to grow.

So, what does all of this have to do with quality?

Quality makes sense

In this article, Alan Winfield proposed addressing the question of ethical governance for robotics and AI with the five pillars of Total Quality Management. If we then ask why airplanes do not fall from the sky in the way we experience issues with the software systems we implement in our organisations, his research paper, published in 2018, offers an answer: “In general, technology is trusted if it brings benefits while also safe, well regulated and, when accidents happen, subject to robust investigation. One of the reasons we trust airliners, for example, is that we know that they are part of a highly regulated industry with an outstanding safety record. The reason commercial aircraft are so safe is not just good design, it is also the tough safety certification processes and when things do go wrong, robust and publicly visible processes of air accident investigation.”

This notion of adding a quality dimension to the governance framework of any organisation that leverages AI/ML systems, directly or indirectly through third-party providers, where personal data is used to make decisions that impact humans, makes sense.

It’s not what you say, but what you do

Every organisation that leverages the power of AI systems should, to begin with, have a Code of AI Ethics and a Code of Data Ethics. Its entire AI governance structure and capabilities, which put humans at the heart of all AI-driven decision-making outcomes, should then be operationalised, assured, and independently verified to comply with the relevant regulations in the jurisdictions it operates in, providing a baseline target operating model. Diverse and inclusive multi-stakeholder engagement, drawing on those within the organisation as well as customers, consumers, and third-party organisations, is also a key part of operationalising these codes of ethics. Leaders will need to ensure that a diverse and inclusive set of stakeholders is engaged along the entire lifecycle of the AI-driven decision-making process, to uphold what they say in their Codes of AI and Data Ethics.

Since the behaviour of AI systems will change by their very nature, they will need to be continually monitored and re-verified. Having a quality dimension, backed by best practices in quality management, can therefore help organisations demonstrate to their customers, consumers, and society that performance is constantly measured against the risk thresholds and ethical guardrails defined by each organisation’s Ethics Committee.
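To make this concrete, here is a minimal sketch of what such continuous guardrail monitoring could look like in practice. It is illustrative only: the threshold value, the metric (a simple demographic-parity gap in approval rates), and all names are my assumptions, not any specific organisation’s framework or toolset.

```python
# Minimal sketch of continuous guardrail monitoring for an AI system.
# The threshold and data fields below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    group: str       # a protected attribute, e.g. a demographic group
    approved: bool   # the outcome of the AI-driven decision

# Hypothetical guardrail set by an Ethics Committee: approval rates
# between groups must not differ by more than 5 percentage points.
MAX_APPROVAL_RATE_GAP = 0.05

def approval_rate(decisions: list[Decision], group: str) -> float:
    """Share of approvals among decisions affecting one group."""
    members = [d for d in decisions if d.group == group]
    if not members:
        return 0.0
    return sum(d.approved for d in members) / len(members)

def check_fairness_guardrail(decisions: list[Decision]) -> bool:
    """Return True if the approval-rate gap stays within the threshold."""
    groups = {d.group for d in decisions}
    rates = [approval_rate(decisions, g) for g in groups]
    gap = max(rates) - min(rates)
    if gap > MAX_APPROVAL_RATE_GAP:
        print(f"ALERT: approval-rate gap {gap:.2%} breaches the guardrail")
        return False
    return True

# Re-run this check on every fresh batch of decisions: model behaviour
# drifts over time, so yesterday's pass is no guarantee for today.
batch = [Decision("A", True), Decision("A", True),
         Decision("B", True), Decision("B", False)]
check_fairness_guardrail(batch)
```

The point of the sketch is not the metric itself but the operating model: the guardrail is an explicit, versioned artefact of governance, and breaching it produces a visible, auditable signal rather than a quiet degradation.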

A high standard of quality can only be achieved when everyone across the organisation believes in it, lives it, and does it continuously, day in and day out. Quality should be part of the organisation’s DNA and embedded in its culture, reflected in its values and purpose.

For us to trust AI systems that use our data to make decisions that can impact our lives, we should expect those decisions to be ethical, fair, explainable, accurate, safe, and secure – all of which are tangible outcomes of quality.

Addressing the gaps

The FICO survey referenced earlier clearly shows that organisations are far from achieving the level of maturity needed to demonstrate that their AI systems are trustworthy against the attributes stated above, let alone to comply with the relevant laws that regulated companies are subject to within their jurisdictions.

So, what do we do about such a compliance, or ‘expectation’, gap? This leads me to my collaboration with the growing team of advisors and industry experts at Change Gap, who are working on some of the biggest gaps faced by the financial services industry, gaps that share the common threads of risk and regulation. Sarah Sinclair, Co-Founder of Change Gap, added: “Whether a business is concerned about operational resilience, regulatory reporting, or AI Ethics & Governance, dealing with any gap is made harder by the lack of simplicity and clarity in terms of ‘what’ needs to be done and ‘how’. This is difficult for large firms, given the size and nature of their business and having to navigate silos and overlapping priorities. It is equally challenging for smaller firms, who have to comply with the same amount of regulation yet don’t have the resources or budget to hire armies of consultants.

“Change Gap is offering services and products to deal with these very challenges, but doing it differently. C-Suite leaders, NEDs, and Board members face personal accountabilities, but do they feel they have the tools and advice available to gain impactful insights into the myriad of topics within their remits? They need to provide assurance and be in control, but where can they access trusted advice, on-demand insights, and counsel? Look out for details of a new service we are co-creating, designed to answer this very question.”

I look forward to hearing your thoughts. Feel free to contact me via LinkedIn to discuss and explore how I can help.

