Weekend Reading: AI and Trust in Corporate Reporting — The Human Element Remains Key
By: Erich Hoefer, Co-Founder & COO of Starling
Late last year, consultants at EY published the results of a global survey of financial leaders. The results show a broad lack of trust in non-financial reporting, including disclosures related to ESG.
The survey found that 55% of respondents felt “sustainability reporting in their industry risks being perceived as including elements of greenwashing.” This trust deficit comes at a time when investors are increasingly demanding information beyond financial disclosures to gain a better understanding of the ways in which companies make money and whether these are likely to be sustainable over the long term.
Financial reporting often fails to reflect these factors accurately, and a number of companies have found themselves in crisis as a result of an exclusive focus on financial results in their reporting.
Boeing offers one such example. Had the aerospace giant been required to report on its safety culture and non-financial risk governance practices as rigorously as it did on its financial results, perhaps its leadership would have discovered earlier that its reported financial successes masked serious operational vulnerabilities.
Lies, damned lies, and non-financial reporting
Unfortunately, nearly all of the financial leaders surveyed by EY indicated they struggled with reporting challenges, including problems with data formats and other inconsistencies. Given the relatively recent focus on non-financial reporting, this shouldn’t be a complete surprise.
Even after decades of standardization efforts and the development of numerous control and auditing safeguards, financial reporting is still subject to manipulation. Lacking this historical infrastructure, non-financial reporting is far more vulnerable to claims of ‘greenwashing,’ among other complaints that the metrics offered may serve only to mislead.
Meanwhile, the US Department of Justice (DOJ) issued a revised Evaluation of Corporate Compliance Programs (ECCP) last year, raising expectations for how firms must demonstrate the efficacy of their compliance programs. The Department highlights the fact that “Even a well-designed compliance program may be unsuccessful in practice if implementation is lax, under-resourced, or otherwise ineffective.”
In their evaluation of firms gone astray, therefore, prosecutors are now instructed to look for evidence that programs are kept current and are subject to periodic reviews. Further, prosecutors will now assess whether these reviews are limited to a “snapshot” in time or based upon continuous access to operational data and information across functions.
Heightened scrutiny of reporting of non-financial data, among investors and regulators alike, means that many firms will have to invest more time and resources to meet these changed expectations. Fortunately, AI changes the equation by dramatically reducing the cost of capturing and generating accurate and transparent reporting across the organization.
New AI-powered capabilities also make it possible to leverage datasets not traditionally recognized as ‘risk and compliance’ data — including operational, organizational, and communications data sets that capture information about how work actually gets done. Indeed, in the above-referenced survey, EY explored the accuracy of both financial and non-financial disclosures, validating those disclosures against third-party sources and assessing alternative data.
As we have demonstrated at Starling, such ‘latent data’ can be harvested to generate Predictive Behavioral Analytics that offer a basis for sound culture risk governance.
These capabilities are timely: AI is increasingly being leveraged by investors and other stakeholders to determine whether a company’s reporting provides a reliably accurate representation of its business activities. Investors, in particular, are exploring various ways to use AI to analyze or validate corporate reporting.
Companies that fail to invest in AI to achieve more reliable reporting will increasingly find that either their regulators or their shareholders will be using AI to discover this for themselves.
As companies turn to AI to help improve corporate reporting in order to build trust among stakeholders, interesting questions arise, particularly given that many struggle to trust the use of AI itself.
In the EY survey, for instance, 29% of leaders indicated that they were “hesitant” to use AI for corporate reporting until the risks of such uses were better understood. Questions remain regarding the accuracy of AI, the risk of “hallucinations,” and what steps may be necessary to assure compliance with regulatory obligations.
The human in the machine
But when weighing such risks, it is equally important to consider the potential harm that may follow from a failure to employ AI-driven capabilities.
Human intelligence comes with its own set of risks. People have more difficulty processing massive amounts of information than machine tools do, and they are susceptible to any number of cognitive biases that impact judgment in ways that are subtle and hard to detect.
More troubling yet, humans are frequently found to pursue objectives that are out of keeping with C-suite intentions — much less the interests of external stakeholders. As such, integrating AI into financial and non-financial reporting is better than continued reliance on humans alone.
At the same time, this means that AI risk must be evaluated in the context of the human-designed and human-run systems that make use of AI tools. If leaders hope to develop greater comfort in seeing AI tools at work in their organizations, they will need to find ways to engender trust in Man and Machine alike.
In this connection, the Georgetown Center for Security and Emerging Technology has warned of “automation bias,” whereby human users may become overly reliant upon automated systems, decreasing their vigilance in monitoring both the system and its outputs. AI users may put too much trust in AI-generated recommendations and delegate more decision-making responsibility to AI systems than they are designed to handle.
The Georgetown researchers go on to describe how automation bias tends to manifest in different ways. One is an error of omission, where a human fails to take action because the AI failed to provide relevant information or trigger an alert. The other is an error of commission, whereby a human fails to question incorrect information or direction received from an AI system.
Recognizing these biases reinforces the view that AI should not be seen as replacing human judgment, but rather as a tool by which to augment it.
By combining AI's processing power with human experience and intuition, we can build reporting systems that are both more comprehensive and more trustworthy.
At the same time, we must design and implement effective culture risk governance structures to assure that employees are properly equipped to oversee these AI systems. Success depends not only on implementing the right policies and processes, but on assuring that the right people, presumptions, and practices are in place, as my Starling co-founder Stephen Scott has argued in past Weekend Readings.
Companies looking to build trust with stakeholders through more effective reporting must embrace AI and establish strong culture risk governance safeguards to ensure that those tools are used effectively. We mustn’t consider AI risks or opportunities absent attention to the ‘humans in the loop.’ If we want to reap the benefits of safer and more effective AI systems, then the effective governance of human-driven risks is the place to start.
This piece first appeared in Starling Insights' newsletter on January 12, 2025. If you are interested in receiving our thrice-weekly newsletter, among many other benefits, please consider signing up as a Member of Starling Insights.