Why the Typical Performance Review Is Overwhelmingly Biased

By David Rock and Beth Jones

The traditional performance review is a confidential, closed-door meeting between no more than two people. Research suggests it is also totally misguided.

Though we may think we’re making accurate, objective assessments during a performance review, the social and brain sciences have shown that bias is still baked into the brain.

Studies, for instance, have indicated that as much as 62% of a rater’s judgment of an employee is a reflection of the rater, not the person getting reviewed. Despite this, survey data from a recent summit we hosted on performance management indicated that 57% of companies weren’t doing anything to remove bias from their performance reviews. It’s no wonder companies that prize traditional reviews are quickly becoming dinosaurs.

Yet these evaluation decisions are among the most consequential a manager makes, and they can make or break not only an employee's contributions to a team, but a career. Digging into the research, it's clear that a smarter review process needs a less-biased approach: one based on crowdsourcing.

What the science says

The traditional review structure assumes that leaders who have tracked an employee’s behavior over a certain period of time are the best authorities to judge whether the employee has missed, achieved, or surpassed his or her goals. This assumption itself is biased.

It’s experience bias, or the all-too-human tendency to believe our own interpretations of the world constitute the whole, objective truth. In reality, people perceive the world differently from one another, and no one interpretation is objectively correct.

Experience biases include the false consensus effect, in which we assume more people agree with our beliefs than actually do; the blind-spot bias, in which we can pick out biases in other people but not in ourselves; and many others. All of them demonstrate that isolated experiences aren't enough to get at a full picture of the truth.

Biases like these get us into trouble for a couple of reasons. The first is that they operate outside conscious awareness, making them difficult to address on our own. The second is that they compel us to reject the beliefs of people who see things differently, since we conflate different with wrong.

How to conduct a less-biased review

To conduct smarter reviews, managers should solicit the perspectives of other people. Ideally, these will be people who don’t work in the same capacity as the leader or think along the same lines. 

Bias is built into brain function, which means it can be hedged against, but not erased. Everybody has their own subjective biases, but by surveying across people, you can get closer to an objective, more rigorous version of the truth. A manager can look for patterns in feedback rather than relying on his or her own singular, biased view.

Granted, even collective feedback will have an element of bias in it, but if five of seven colleagues notice X about Tom, then it's a reasonable bet that Tom should address X. It's like seeking multiple opinions for a medical diagnosis, or how a reporter interviews a range of sources to get to the facts that matter for a story.
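To make "looking for patterns" concrete, here is a minimal sketch of tallying crowdsourced feedback and keeping only the themes a majority of colleagues flagged. It is our illustration rather than anything from the article; the raters, themes, and 0.5 majority threshold are all hypothetical.

```python
# Minimal sketch: surface only the feedback themes a majority of raters flagged.
# All raters, themes, and the 0.5 threshold below are hypothetical.
from collections import Counter

def majority_themes(feedback_by_rater, threshold=0.5):
    """Return themes mentioned by more than `threshold` of the raters."""
    counts = Counter()
    for themes in feedback_by_rater.values():
        counts.update(set(themes))  # count each rater at most once per theme
    cutoff = threshold * len(feedback_by_rater)
    return {theme: n for theme, n in counts.items() if n > cutoff}

# Hypothetical feedback about "Tom" from seven colleagues
feedback = {
    "rater_1": ["misses deadlines", "mentors juniors"],
    "rater_2": ["misses deadlines"],
    "rater_3": ["misses deadlines", "strong client rapport"],
    "rater_4": ["misses deadlines", "mentors juniors"],
    "rater_5": ["misses deadlines"],
    "rater_6": ["strong client rapport"],
    "rater_7": ["mentors juniors"],
}

print(majority_themes(feedback))  # {'misses deadlines': 5}
```

In this made-up data, only "misses deadlines" clears the five-of-seven bar, which is the kind of pattern a manager could act on without leaning on any single rater's view.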

Leaders who crowdsource reviews can ask about a person's value outside of the typical "key" metrics, in order to get a fuller, qualitative understanding of the roles the person plays. Perhaps a team member who missed his sales target actually helped several other people hit theirs. A manager would miss this insight unless he or she asked around.

On its face, this may feel like a breach of privacy (performance reviews are supposed to be confidential, aren't they?), but in fact, publicizing the data-gathering works in everyone's favor.

The long-term benefits of asking around

The most immediate benefit of crowdsourced reviews is that employees feel more comfortable knowing their good work will be seen. Having more advocates generally translates to greater praise. But people can also take solace in the entire team being held accountable in the long run. Asking around gives leaders extra intelligence that may reveal that seemingly competent employees are actually underperforming.

For crowdsourced reviews to be effective, it’s essential for managers to let their teams know prior to review cycles that others will be asked about performance. This creates a positive social pressure to do well, in addition to boosting transparency across an organization.

Beth Jones is Lead Consultant and David Rock is Director at the NeuroLeadership Institute.

This article originally appeared on Quartz.

Aria Smith

Workday Expert & Business Transformation Consultant

6 years

Not only is the evaluator biased, the process is biased, too:
1. Scoring: Who decided the score structure? It could be artificially skewed. Sometimes it involves bad math.
2. Weighting: Who decided the section weights? This could be skewing the overall score, and it is another place for bad math to appear (see the sketch after this list).
3. Eligibility: Who decided which score earns eligibility for a raise, promotion, or perk?
4. Curves: Were evaluators asked to grade on a curve? Do they have to use "calibration" to slide scores around? This skews some people up and down artificially.
5. Targets: If a certain percentage of your team hits a target, do you lose or gain resources? This pressures evaluators to skew scores to avoid a loss.
6. Change management: If an employee has always scored high on evaluations but the method of evaluation has changed, is the evaluator willing to give a historically high performer a lower rating?
7. Influential non-evaluators: Does a business leader or other influential person insist someone receive a particular rating, even if it is unearned?
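To make the weighting point concrete, here is a minimal, hypothetical illustration (our numbers, not Aria's): the same three section ratings land on noticeably different overall scores depending purely on who chose the weights.

```python
# Hypothetical illustration of point 2: identical section ratings,
# different section weights, different overall score.
ratings = {"delivery": 4, "collaboration": 2, "innovation": 3}  # 1-5 scale, made up

def overall(ratings, weights):
    # Weighted average of section ratings; assumes the weights sum to 1.0
    return sum(ratings[section] * w for section, w in weights.items())

weights_a = {"delivery": 0.6, "collaboration": 0.2, "innovation": 0.2}
weights_b = {"delivery": 0.2, "collaboration": 0.6, "innovation": 0.2}

print(round(overall(ratings, weights_a), 2))  # 3.4 with a delivery-heavy weighting
print(round(overall(ratings, weights_b), 2))  # 2.6 with a collaboration-heavy weighting
```

If eligibility for a raise were cut off at, say, 3.0, the choice of weights alone would decide which side of the line this employee falls on.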

Susan Bash Van Vleet

President at Susan Van Vleet Consultants, Inc. & V2 Consulting, Inc., and Author.

6 years

Good one

Michele Armstrong

We Train Professional Coaches & Support the Growth of Coaching Cultures in Organisations. Accredited Coach / Supervisor / Delivering our AC Accredited Course for 21 years.

6 years

Thanks for this, David. Raising awareness of this issue, as well as of our cognitive ability to notice it and choose to change what we then say, is so important.

John Hale

Motivational Speaker | Strategic Advisor | Author

6 years

Nice stat. Yes, I agree, and I believe we see the world not as it is, but as we are. The more unconscious we are, the higher the number. David Rock, a question for you: is your researcher's bias under-reporting this number? From a higher level of consciousness, I see that rating another is a flawed concept. When the experience of judgement, preferences, and rating falls away, what is left...? Just unconditional acceptance of the other, perhaps? When the experience of the other falls away, maybe the number approaches 100% and 0%. Then the realisation (and paradox) that we literally create our own reality becomes possible.
