Weekend Reading: Mirror Manipulation — How Culture Compromises Controls
By: Erich Hoefer, Co-Founder & COO of Starling
This piece first appeared in Starling Insights' newsletter on March 2, 2025. If you are interested in receiving our newsletter, among many other benefits, please consider signing up as a Member of Starling Insights.
In their quest to manage their organizations, company leaders rely on a growing arsenal of management tools: employee surveys, surveillance systems, project management software, risk and compliance controls, and employee feedback and whistleblowing systems. The investment in these interventions is substantial, both financially and in terms of organizational attention.
Yet given how often they fail to detect, much less prevent, conduct issues, it's worth asking whether all of this investment is worthwhile.
The problem is not that these management tools are inherently ineffective (though some certainly are). Rather, it's that the vast majority are deployed with little thought about how they will perform when they collide with an organization's culture.
That's not to say that organizations don't think about culture. The market for corporate surveys, behavior assessments, and related tools certainly attests to this. Industry estimates suggest that US organizations spend over $1 billion annually on culture assessment tools alone. But, too often, leaders think about culture as something that operates in the background, separate from formal management structures erected and managed by choice.
Culture is taken to operate as some distinct, ephemeral force: a mystery to be pondered rather than an asset (or liability) to be actively managed. As a result, most management tools are implemented under the assumption that they will function as intended, with little attention to the cultural context in which they are expected to operate successfully.
All turned around
This was brought to mind when I came across a column written for FT Alphaville late last year on the use of 360-degree reviews in banking — a fixture at many organizations deployed as a means to generate a complete view of an individual's strengths and weaknesses.
As the name suggests, 360-degree reviews capture feedback not only from bosses but also from peers, direct reports, and even customers. The expectation is that, with so much feedback coming from so many sources, an objective, well-rounded view of the employee can be generated. There is literally nowhere to hide, proponents of the method would argue.
And yet, as the author of the referenced article points out, the process is ripe for exploitation by those sufficiently motivated to get ahead. After all, what better way to present an unvarnished, objective representation of oneself than to carefully manipulate … er … stage-manage the process? In some cases, employees can choose which reviewers to include, an obvious opportunity for mutual backscratching. And when reviewers are instead selected by management, this creates opportunities for unscrupulous peers to torpedo rivals in a bid to get ahead.
The result is that, if your organization has a culture that emphasizes competitive, winner-takes-all performance combined with a lax approach to rules and ethics, these cultural proclivities are very likely to intrude into a mechanism like the 360-degree review, biasing the results accordingly.
Financial institutions aren't blind to these issues. Many have implemented countermeasures: filtering out extreme ratings, cross-referencing feedback with other performance metrics, or supplementing reviews with direct observation. Yet these adjustments often treat the symptoms rather than the underlying cultural causes.
Feedback loops
This is not unique to 360-degree reviews, of course, but tends to show up in all manner of tools designed to capture employee feedback. Employee surveys, for example, are used for everything from measuring engagement and satisfaction to administering personality assessments like Myers-Briggs. But such tools are prone to manipulation at virtually every point.
I was talking recently with a colleague who shared an experience that highlights this. He was working on a team frustrated by slow decision-making and a lack of organizational support, and this was reflected strongly in the results of their engagement surveys. Recognizing that this did not reflect well on the organization, his supervisor immediately sprang into action, organizing a series of time-consuming but low-impact interventions to turn things around.
Following a series of group exercises and a multi-day offsite retreat, some of which intruded into evenings and the weekend, the team was so frustrated that it agreed to 'fix' the problem. The next time the survey came around, the team was unanimous in its praise for the supervisor and the company. Nothing had actually changed, of course, and the team was just as dissatisfied as it had been before. But its members had collaborated well enough to 'fix' the survey that had caused so much distraction, so that they could at least get back to work.
This anecdote illustrates a phenomenon that occurs with alarming frequency in organizations: feedback mechanisms are subverted by the very dynamics they are designed to measure.
In cultures where appearance is valued over substance, or where management is more concerned with metrics than underlying realities, employees quickly learn that providing honest feedback creates more problems than it solves. The intended feedback loop becomes instead a theater-piece, with participants providing the responses they believe will result in the least disruption to their work lives.
Surveys, in other words, reflect a firm's culture rather than revealing it.
Organizations invest in these tools precisely because they want to understand their employees' authentic experiences. Yet the implementation of these tools often creates incentives that make authentic feedback less likely. In environments where trust is low, or where previous feedback efforts have resulted in superficial changes, cynicism becomes the default response.
This dynamic creates a particularly troubling blind spot for leadership. When survey results improve, executives may believe their past interventions have been effective, unaware that the 'measured' improvement in fact reflects adaptation to the survey process rather than any real change. The organization may thus operate on faulty information, potentially making decisions that further alienate employees, all while believing that their concerns are being addressed.
Whistleblowing and the illusion of safety
Surely there are other tools that are resistant to cultural biases? Take whistleblowing systems, for example. These are widely seen as a key element of organizational monitoring, allowing employees to speak up anonymously when they see that something has gone wrong. But even they are vulnerable to cultural drivers.
In her book Bully Market: My Story of Money and Misogyny at Goldman Sachs, Jamie Fiore Higgins, a former Goldman Sachs managing director, shared her harrowing experience of bringing information to HR leaders after an incident involving one of her senior colleagues. Higgins believed her complaint to HR would be handled with the confidentiality promised. Instead, she was subsequently confronted by her boss for "going outside the family."
One might argue that the true problem here was the lack of an independent, third-party whistleblowing system. But even when systems are designed with safeguards like anonymity and confidentiality, they still operate within a human ecosystem governed by relationships, incentives, and power dynamics. In organizations where career advancement depends on relationships and reputation, the risks associated with speaking up often outweigh the potential benefits, regardless of how well-designed the reporting system may be.
Research by Harvard Business School professor Amy Edmondson shows that "psychological safety" — the belief that one can speak up without risk of punishment or humiliation — is essential for organizational learning and innovation. Yet this quality emerges from consistent patterns of interaction and leadership behavior, not from formal systems or policies. In organizations lacking in psychological safety, a whistleblowing hotline operates about as effectively as an advanced security system in a house with open doors — underlying vulnerabilities remain unaddressed.
When failure is not an option
The tools upon which firms rely to identify misconduct among managers and employees — tools like surveillance systems, controls, and whistleblowing hotlines — are all highly vulnerable to the kinds of behaviors that a deeply ingrained and "toxic" culture can perpetuate.
To this end, it is instructive to look to other industries that have found ways to minimize the intrusion of insidious culture into critical systems and processes. Our 2022 Compendium, for example, featured an interview with Charles McMillan, former Director of the Los Alamos National Laboratory, which oversees the US nuclear weapons stockpile. In such an environment, what might be seen as relatively minor mishaps in any other context can result in catastrophic outcomes.
In our Compendium, McMillan describes the concept of 'engineered systems': systems intentionally designed such that mistakes cannot happen. It's an approach that demands multiple redundancies and precise controls, which essentially prevent behavior and, by extension, culture from intruding into and potentially disrupting a critical process.
As McMillan explained: "Whenever we can, we try to engineer our systems so that if something isn't right, the system physically prevents you from taking an action that could lead to something unsafe." He offered the example of working with high explosives in a containment tank: "If I fire the explosive with the door open, it's a bad day for everyone … So, we put switches on the doors — and often multiple switches — so that in order to be able to even push the button that will fire the explosive, that tank door must be closed and locked. That's an engineered system."
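To make the idea concrete, here is a minimal sketch, purely illustrative and not drawn from Los Alamos practice, of what such an interlock looks like in software terms: the firing action is simply unavailable unless every independent switch reports that the door is closed and locked, regardless of the operator's intent or habits. All names in the sketch are hypothetical.

```python
# A minimal, hypothetical sketch of an "engineered system" in software terms.
# The unsafe action is not discouraged or policed after the fact; it is made
# impossible unless every independent interlock reports a safe state.

from dataclasses import dataclass

@dataclass
class DoorSwitch:
    """One of several redundant switches on the containment-tank door."""
    name: str
    closed_and_locked: bool

def fire_explosive(switches: list[DoorSwitch]) -> str:
    """Allow firing only if every switch confirms the door is secure."""
    unsafe = [s.name for s in switches if not s.closed_and_locked]
    if unsafe:
        # The system blocks the action outright rather than relying on judgment.
        raise RuntimeError(f"Interlock engaged; door not secure at: {unsafe}")
    return "Fired with containment verified."

# With any one switch open, the action cannot be taken at all.
interlocks = [DoorSwitch("primary", True), DoorSwitch("secondary", False)]
try:
    print(fire_explosive(interlocks))
except RuntimeError as err:
    print(err)
```

The point of the analogy is that the control does not depend on the surrounding culture behaving well; most of the management tools discussed above enjoy no such protection.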
Notably, McMillan emphasizes that it is the culture of the Los Alamos lab and its leaders that prompts the development of these engineered systems.
This approach represents a fundamentally different philosophy than that which we typically see at work in financial institutions. Rather than relying on cultural alignment or individual judgment, high-reliability organizations like Los Alamos design systems that function correctly regardless of human fallibility or adverse cultural pressures. Controls are not merely procedural but physical — establishing operating environments where certain types of errors or misconduct simply cannot occur.
Navigating cultural cross-currents
Of course, most work environments do not demand such rigor, nor could they operate effectively under such constraints. Processes rarely have physical parameters that can be controlled so strictly, and there is no room at Los Alamos for the beneficial risk-taking that lies at the heart of banking. But there is a lesson to be learned nevertheless.
If management fails to design its processes with an understanding of culture, and the risks it can bring, then it cannot be confident that its processes will work when they must.
In previous Weekend Readings, we have discussed the Culture Risk Governance framework through which Policy and Process Inputs interact with what we refer to as cultural Throughputs — People, Presumptions and Practices — to produce operational Performance outcomes and the various Problems that may be seen to manifest.
This diagnostic approach to assessing operational 'flow' can help management to identify specifically where their Policies and Processes are vulnerable to cultural influences, so that inputs and throughputs alike can be made more resilient.
Such an analysis may highlight existing tools and processes that are operating sub-optimally, so these may be improved, and identify others that are simply ineffective, so they can be done away with entirely. In other cases, this diagnostic approach may prompt risk governance leaders to design smarter approaches to challenges — approaches that work with the 'grain' of the firm's culture rather than against it.
Management interventions cannot be designed or evaluated in isolation from the cultures in which they operate. The most successful organizations approach this challenge holistically, recognizing that tools and processes both shape and are shaped by the cultural context. They design interventions with cultural dynamics in mind, creating systems that either work with existing cultural strengths or that are sufficiently robust to withstand cultural cross-currents.
The UK's Prudential Regulation Authority has rightly argued that culture can operate either to undergird or to undermine a firm's risk control environment. So too can it support or scupper management's ability to peer into the workings of organizational culture itself.
As financial institutions face increasing pressure to strengthen conduct risk management and to promote healthier cultures, the more nuanced understanding of the relationship between formal systems and cultural dynamics articulated here is essential. The organizations that thrive will be those that recognize a simple truth: it is culture that determines management effectiveness.