Go Digital: What does your dashboard tell you?
Chris Leong, FHCA
Director | Advisory & Delivery | Change & Transformation | GRC & Digital Ethics | All views my own
There is a good reason why airline pilots rely on the information on their cockpit dashboard to fly a planeload of passengers safely from one destination to the next. There is an equally good reason why surgical teams rely on the information from the monitoring devices attached to their patients while performing surgery.
In my article on Measurements, I wrote about the benefits and value that can be derived from data on entities that can be measured from both the performance and the risk standpoints. Measurements provide feedback that validates our performance or initiates further action. We cannot ignore the measurement of risks if we want to consistently achieve our performance targets.
Risks first need to be identified, assessed, quantified, mitigated through controls and monitored against thresholds, so that appropriate decisions can be made before they are realised as outcomes that could adversely impact humans.
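That lifecycle can be sketched as a minimal risk-register entry. This is an illustrative sketch only: the class, field names, scales and numbers are hypothetical and do not reflect any particular GRC product or methodology.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a simple risk register (all names and scales illustrative)."""
    name: str
    likelihood: float      # assessed probability of the risk event, 0..1
    impact: float          # quantified impact if it occurs, e.g. monetary units
    control_effect: float  # fraction of exposure mitigated by controls, 0..1
    threshold: float       # monitoring threshold for residual exposure

    @property
    def residual_exposure(self) -> float:
        # Exposure remaining after mitigating controls are applied
        return self.likelihood * self.impact * (1 - self.control_effect)

    def breaches_threshold(self) -> bool:
        # Monitored against the threshold so decisions can be made in time
        return self.residual_exposure >= self.threshold

risk = Risk("Model drift in credit scoring", 0.3, 100_000, 0.6, 15_000)
print(round(risk.residual_exposure))   # 12000
print(risk.breaches_threshold())       # False
```

Even a toy structure like this makes the point: without the quantified fields and an explicit threshold, there is nothing for a dashboard to monitor.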
The need to know
I recall the days during the mid-to-late ’90s when Business Intelligence (BI) software, which evolved from earlier incarnations of Decision Support Systems, was de rigueur. Organisations wanted to deploy it to provide distilled information for senior management consumption, but soon found challenges in gathering adequate and relevant data the moment its scope was extended beyond the initial areas of application into other areas of the enterprise. It took a tremendous amount of time and effort for teams to collate and align data from various repositories into the appropriate formats to produce verifiable BI reports. When visualisation tools arrived a couple of decades later to supplement data analytics software, the same data challenges remained for most established large organisations.
Most applications of these tools charted performance against business metrics such as Key Performance Indicators (KPIs) and past performance data.
Chief Risk Officers who understood the benefits and value of an integrated Governance, Risk and Compliance (GRC) strategy set about deploying GRC platforms that underpinned their organisation’s capability to integrate risk management, compliance management and internal audit with strategy, performance, governance and oversight. The promise of such capabilities is an integrated, unified view of all the risks relevant to the business. That promise, however, is heavily reliant on the availability and quality of the underlying data, and can only be realised if the entire organisation subscribes and contributes to generating the sets of data these platforms consume.
Emerging Risks
Many organisations have highlighted emerging risks in the Risk Register of their Annual Report as risks they need to manage as a result of their digital transformation initiatives. How they trade off the pace of innovation against the need to innovate responsibly is a reflection of the organisation’s culture.
Unless your organisation is led by someone who understands how to innovate responsibly, with privacy, ethics, accountability, cybersecurity, reliability, explainability and compliance built into the fabric of its culture before any AI or autonomous system is designed, developed and deployed, it will need to resolve the tensions that arise between functions and their respective stakeholders in order to implement an effective AI and Ethics Governance framework and related capabilities. Emerging risks related to AI and autonomous systems can only be comprehensively identified, examined and analysed when relevant stakeholders within and external to the organisation are engaged to provide different perspectives, through diverse inputs and multi-stakeholder feedback, throughout the lifecycle of these systems. Once those risks are identified, enterprise-wide governance structures need to be in place and operationalised to provide oversight and assign accountability for their outcomes.
Managing emerging risks is the collective responsibility of many from all functions across the organisation. Having the right culture in place is fundamental in ensuring that emerging risks from innovation are mitigated with accountability.
GRC Vendor landscape
Responsible innovation needs to be demonstrable. Organisations deploying AI and autonomous systems can start by leveraging their GRC capabilities to ensure that activities related to those deployments are captured within a central governance structure. That structure then needs to be extended to accommodate the scope, context, nature and purpose of each deployment, with the diverse inputs and multi-stakeholder feedback necessary to identify, examine and analyse all risks, including emerging risks, so that mitigating controls can be deployed and residual risks disclosed along with any other information required under the relevant legal frameworks.
It was very interesting to read The Forrester Wave™: Governance, Risk, And Compliance Platforms, Q3 2021 report, which recognises the need to mitigate emerging risks from digital innovation and ESG. If you have been reading my articles, you will have noted that I believe GRC frameworks can be the platforms from which AI and Ethics governance is operationalised.
It is promising to see some of the vendors in the report list data governance, privacy, ESG and ethics among the new use cases their GRC platforms can accommodate. I would love to see how these are intended to be operationalised.
Did I mention ESG?
Yes, everyone’s talking about ESG, including Boards. You might wonder what ESG has to do with AI and Ethics Governance.
The mere fact that AI and autonomous systems impact society, and fundamentally humans, places their use at the heart of an organisation’s ESG credentials. Furthermore, how effectively organisations govern the use of AI and autonomous systems that process personal data also contributes to those credentials. As for the environmental impact, have you read about the amount of energy that went into training OpenAI’s GPT-3 model?
EthicsGrade provides a valuable independent service that assesses the maturity of AI and Ethics governance from publicly available information, using a methodology ‘developed from Ethical by Design (Radclyffe & Nodell, 2020) which sets out how digital ethics questions can be measured and managed as ESG issues’.
I cannot see how any organisation that deploys AI and autonomous systems can claim to have sound ESG credentials if they cannot demonstrate responsible innovation related to their use of such technologies.
Back to GRC platforms
I have been exploring and discussing within the ForHumanity community what AI and Ethics governance means, how it can be operationalised within organisations deploying AI and autonomous systems, and how those systems should be audited.
When I then consider the capabilities of GRC platforms, I see the need for them to evolve further, to be enhanced and to be better aligned to support the operationalisation of an AI and Ethics governance framework that covers the entire lifecycle of data, AI and autonomous systems. These platforms can also subsequently help facilitate an external independent audit. Although the Forrester Wave report identified areas of improvement for many of the vendors, there is an opportunity for GRC platform vendors to step up and be relevant to organisations that are ready to demonstrate that they are innovating responsibly.
There are many new entrants offering point solutions that address specific aspects of managing risks in AI systems, such as model risk management and explainability. These point solutions can certainly be used, but their outputs (including any risk-related metrics) need to be integrated with all of the other risk metrics in the GRC platform to provide that unified view of all risks that could impact the business.
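The integration step above is essentially a normalisation problem: each point solution reports in its own units, and the GRC platform needs a common schema before it can present one view. A minimal sketch of that idea follows; the tool names, field names and scales are entirely hypothetical, not the APIs of any real product.

```python
# Hedged sketch: folding heterogeneous point-solution outputs into one
# unified risk view. Sources, fields and scales are hypothetical.

def normalise(source: str, raw: dict) -> dict:
    """Map each tool's native output onto a common 0..1 risk-score schema."""
    if source == "model_risk":
        # e.g. a model-risk tool reporting drift on a 0..100 scale
        score = raw["drift"] / 100
    elif source == "explainability":
        # e.g. a tool reporting explanation coverage 0..1 (higher is better)
        score = 1 - raw["coverage"]
    else:
        raise ValueError(f"unknown source: {source}")
    return {"source": source, "risk_score": score}

feeds = [
    ("model_risk", {"drift": 42}),
    ("explainability", {"coverage": 0.9}),
]
unified = [normalise(s, r) for s, r in feeds]

# With everything on one scale, the platform can rank and surface the worst risk
worst = max(unified, key=lambda e: e["risk_score"])
print(worst["source"])  # model_risk
```

The point is not the arithmetic but the design choice: without a shared schema, the "unified view" degenerates into side-by-side silos of incomparable numbers.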
Why is a holistic view of risks necessary?
The impact of unmitigated downside risks from AI and autonomous systems manifesting in adverse outcomes for humans is immediate and amplified. Consequently, the results of continuous monitoring of their performance against approved thresholds need to be fed back to the governing entities within these organisations before those thresholds are breached. Equally, these AI models should be allowed to continue learning about variations within their approved scope, context, nature and purpose so as to improve their accuracy and reliability. It is the responsibility of the Algorithmic Risk Committees within these organisations to proactively prevent adverse outcomes from occurring. When this is achieved, we will see fewer negative media headlines, as well as a reduction in incidents recorded by organisations such as the AI, Algorithmic and Automation Incident and Controversy repository, AlgorithmWatch and the AI Incident Database.
The propensity of organisations to manage emerging risks from transformative technologies such as AI and autonomous systems separately from their other non-financial risks means silos remain, which further increases the likelihood of issues arising. Managing the risks that emerge from the use of AI and autonomous systems is not the responsibility of the IT Risk function alone. The focus needs to extend beyond technical and data matters to people, culture and outcomes.
AI and Ethics governance is only effective if operationalised centrally for the entire organisation. Accountability and oversight are critical to safeguarding humans, as well as to the organisation’s regulatory and legal obligations and its reputation. Leaders need to be able to challenge the desire to use AI and autonomous systems to solve business problems and ask two initial questions: “Is it necessary?” and “Should we use it, just because we can?”
What should the ideal dashboard look like?
In the ideal scenario, where you are a digital-first organisation designed with privacy, compliance, cybersecurity, ethics, reliability, auditability, explainability, accountability, trust and human-centricity in mind, your dashboard will enable you to monitor business performance against your KPIs while alerting you when your AI systems are performing close to the risk thresholds your Algorithmic Risk Committee has set. Crucially, it will enable accountable persons within your organisation to prevent adverse outcomes from occurring.
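Alerting "close to" a threshold, rather than at it, is what gives accountable persons time to act. One simple way to sketch this is a warning band below the committee-approved ceiling; the metric, threshold and margin below are hypothetical examples, not prescribed values.

```python
# Illustrative only: an alert that fires before a risk threshold is breached,
# so escalation happens while there is still time to intervene.

def alert_status(metric: float, threshold: float, warn_margin: float = 0.9) -> str:
    """Return OK / WARN / BREACH for a metric monitored against a threshold."""
    if metric >= threshold:
        return "BREACH"          # threshold crossed: adverse outcomes likely
    if metric >= threshold * warn_margin:
        return "WARN"            # inside the warning band: escalate now
    return "OK"

# e.g. a false-positive rate monitored against a committee-approved ceiling of 5%
print(alert_status(0.047, 0.05))  # WARN
print(alert_status(0.02, 0.05))   # OK
```

A real dashboard would route WARN to the accountable owner and BREACH to the Algorithmic Risk Committee, but the principle is the same: the feedback loop must close before the threshold does.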
You can see the number of decisions made by your Ethics Committee when presented with ethical choices by your innovation teams. Explanations for your AI models, the choices made for your datasets, their composition and their status relative to biases will have been documented and will be accessible when required.
The metadata for all data used by your AI and autonomous systems will be available to support your organisation’s obligations around data protection, privacy, Data Subject Access Requests (DSARs) and potential withdrawals of consent, so that you consistently comply with the relevant legal frameworks.
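What does that metadata need to capture in practice? At minimum, enough to answer a DSAR and to act on a withdrawn consent. The sketch below shows one shape such a record might take; every field and function name here is a hypothetical illustration, not a reference to any data-catalogue product or legal standard.

```python
# Hedged sketch: per-record metadata that would let an organisation answer a
# DSAR or act on a withdrawn consent. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DataRecordMetadata:
    record_id: str
    data_subject_id: str
    lawful_basis: str                  # e.g. "consent", "contract"
    purposes: list[str]                # purposes the data may be used for
    consent_withdrawn: bool = False
    used_by_models: list[str] = field(default_factory=list)

def records_for_subject(catalog: list[DataRecordMetadata], subject_id: str):
    """Everything needed to answer a DSAR for one data subject."""
    return [m for m in catalog if m.data_subject_id == subject_id]

def must_stop_processing(catalog: list[DataRecordMetadata]):
    """Records whose consent basis has been withdrawn."""
    return [m for m in catalog
            if m.lawful_basis == "consent" and m.consent_withdrawn]

catalog = [
    DataRecordMetadata("r1", "subject-42", "consent", ["marketing"],
                       consent_withdrawn=True, used_by_models=["churn-v2"]),
    DataRecordMetadata("r2", "subject-7", "contract", ["billing"]),
]
print([m.record_id for m in must_stop_processing(catalog)])  # ['r1']
```

Note the `used_by_models` field: linking records to the models trained on them is what turns a consent withdrawal from a database delete into an actionable governance event.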
You can be confident of providing clear, easy-to-understand explanations to your customers and users, when asked, about how automated decisions were made by your AI and autonomous systems. You can transparently account for how personal data was lawfully used, in privacy-preserving ways, to derive those automated decisions. You will know that all relevant disclosures are in place and accessible to your customers and users.
Your dashboard is informative, engaging, valuable and available on demand, keeping leaders aware of the health of their operations, organisation and business. The benefits of being a digital-first organisation are significant, but so are the responsibilities that come with the hyperconnectivity of the digital world.
If you are a software vendor leveraging AI and autonomous systems, the maturity of your AI and Ethics governance will enable the level of transparency required for effective third-party risk management by your customers, specifically around the scrutiny expected of AI and autonomous systems.
If you are an established and regulated organisation, implementing operational and effective human-centric AI and Ethics governance for your AI and autonomous systems will contribute towards establishing trust with your customers and users. Obtaining validation from an external rating agency such as EthicsGrade, as well as a successful external independent audit of your AI and autonomous systems, will further elevate your organisation’s trustworthiness.
CEOs shouldn’t be flying blind
Being a digital-first organisation means that all the data fuelling the running of the business, like blood flowing through the body to carry oxygen to its organs, is available for monitoring. Your dashboard provides an up-to-date view of operations and, as in the airliner scenario presented earlier, you can be assured (or not) that all operating systems are functioning before you fly your passengers to the intended destination.
What will your dashboard look like and what will it tell you about the risks in your organisation, especially the emerging risks related to the use of AI and autonomous systems?
I look forward to hearing your thoughts. Feel free to contact me via LinkedIn to discuss and explore how I can help.