How to fix performance evaluations (Part II of III): new ideas... Same old problems?
Welcome to the second part of my deep dive / musings on the challenges leaders (or anyone managing people) face when it comes to conducting meaningful performance reviews. In case you missed it, you can read part I here.
Before we move on, reader beware: I am not a human resources professional; the research below and my applied process, to be discussed in the last instalment of this series, are based on lived experience leading teams. If you can, please consult a Human Resources professional before implementing any new performance management process.
Now, back to the topic. Many organizations and management / human resources experts have recognized that the one-way performance review, where the supervisor "grades" staff, is rife with problems: there is no staff input, explicit and implicit bias can skew ratings, and for some the process is either too bureaucratic or simply ignored.
To address those issues, there has been a constant stream of new approaches, developed and implemented across companies big and small. Generally speaking, these new performance review strategies are trying to:
In this post we will take a deep dive into some of the new strategies I have become familiar with:
Reader beware, again: the list above is biased toward the way evaluations are conducted in North America. I am ill-equipped to extrapolate these trends to the rest of the world.
1) Performance conversations
There are two versions of performance conversations, one is basically old school with new wording, and the other is a more meaningful attempt to improve the review process.
The old school approach is to simply re-name the performance review and call it performance conversation. Yes, this happens!
The logic here is that the name "review" is intimidating and hearkens back to school, so replacing the word will make the process better. And while staff may be encouraged to respond to the evaluation, the power dynamic and the lopsidedness remain.
The second version of the performance conversation approach is more nuanced, and has three key parts. It begins at the start of the year (or fiscal year), with supervisor and supervised agreeing on a set of key goals and what success will look like. These can be called key performance indicators (KPIs), objectives and key results (OKRs), or simply "goals".
Then, halfway through the year, there is a check-in where individual goals are discussed and revised if needed. When the actual performance evaluation happens at the end of the year, the employee begins by presenting the results and commenting on them, with the supervisor then adding their take to complete the evaluation.
This three-step process (goal setting, check-in, final co-written evaluation) is based on several assumptions:
a) Goals are based on organizational priorities and genuinely co-created by both parties, not imposed unilaterally by management;
b) There is flexibility at the mid-point to amend goals based on the realities of work (for example, a new priority has taken up a lot of time, so a previously written goal needs to be removed or re-written), and a commitment to hold those touch-point meetings; and
c) The relationship between staff and supervisor is such that both can be open and honest as they discuss whether the agreed-upon goals have been achieved, partially achieved or not completed. This is a lot tougher than it seems.
Unless the culture of the organization allows for open discussion, where employee rank matters less than facts and measurable goals, you may end up no better off than with the traditional model: more meetings, and possibly more annoyance, since there is a pretense of dialogue while in reality the boss remains in full control.
Not to mention that the risk of supervisory bias is still there. Which leads us to the next approach:
2) Peer Reviews
We all have our own built-in biases, some of them so ingrained that we do not even know they are there. To address this, the peer review model adds voices to the performance management process. Each review requires feedback from the direct supervisor plus two more "experts", individuals with good knowledge of the work the person is expected to do. By adding expertise to the evaluation, the risk of bias is, in theory, greatly reduced.
The way peer reviews are implemented varies. In some cases the other reviewers are co-workers within the same work unit; in others they may be managers of other areas with a good knowledge of the type of work the person is assigned; or they can be managers or staff from areas that need to coordinate their work with the person being evaluated. In some cases the evaluators are known, and in other instances they are not disclosed to the staff being evaluated.
The challenge I see with this approach is that we are replacing individual bias with social and interpersonal biases. Someone who is well liked and fits within the office / factory floor culture is likely to receive more positive scores than a more reserved individual or someone who is a poor "office politics" player.
And to add one more challenge to the peer review approach, we now live in a remote / hybrid workplace for many administrative, financial, customer service and professional roles. Many organizations allow for full remote work most of the year, while a solid majority in North America seem to favour the hybrid model, with 2-3 days mandated in the office and the remaining working days completed remotely.
This reality makes the selection of peers more challenging and the risk of unequal assessments based on employee location very real. It seems like any effort to reduce bias while maintaining an evaluation-ranking platform is doomed to fail. A more radical approach may have to eliminate the supervisory rating altogether...
3) Self assessments
What if we let employees be the judges of their work and their achievements and challenges?
That is what the self-assessment approach does, sometimes as the sole performance evaluation and in other implementations combined with a more traditional supervisor-led review. A good primer on self-assessments can be found here.
The self-assessment is usually a questionnaire, or a list of prompts for the employee to follow: greatest achievements, missed opportunities, strengths the person brought to the team, and weaknesses or areas where the individual thinks they may need additional training and support.
There is still supervisory input in the form of goals and objectives, which should be co-created in the same manner as in the previously discussed performance conversation approach.
While this performance evaluation model removes the bias of a supervisor or the social bias of peers, one can argue that it may exacerbate unfairness. If you have some experience managing a team, you know that individuals tend to see their work and their outcomes very differently. One team member may always see their work as perfectible, or their achievements as not noteworthy, while other individuals doing the same or similar work may think their achievements are always notable and their success incredible, and therefore a huge raise and promotion must be granted.
In other words, two individuals working on the very same project may produce very different evaluations based on how they perceive the quality of the work and their contributions... How is that fair to either of them?
One way to address this is to bring back the supervisor to "equalize" reviews by adding comments and their perspective, but as the reader may have guessed, the power then shifts back to the supervisor: at the end of the day, no matter how the employee perceives their work, it is being graded by the boss.
Perhaps, then, if the power imbalance is always there and staff will be ranked or graded no matter which system we pursue, we need to stop looking back at achievements and challenges and concentrate our performance management efforts on looking forward. That is what some organizations, mostly in the technology space, are taking to the next level.
4) FeedForward
Imagine a world where:
This is feedforward (one word), a new model that companies like Microsoft, AstraZeneca and Booking.com are utilizing, according to the Wall Street Journal (a subscription may be required to see the full article); another free and very comprehensive source can be found here (I cannot verify the quality of the second link, FYI).
The underpinnings of this approach are based on human psychology and the unarguable fact that you cannot change the past, but you can affect the future.
People dread feedback, no matter how you label it. In the case of Microsoft, they have ditched the anonymous peer review model and rely on feedforward instead. Based on the above-linked articles, the process involves:
All the literature I've seen cautions leaders that feedforward cannot be implemented simply by re-labelling (using "perspectives" instead of "feedback", for example); it requires a system-wide commitment, a healthy budget for staff training, and a real focus on the future.
While some of you may find this new process intriguing, I am on the fence. Feedback and postmortem analysis are, in my view, very important for any organization. We learn from our mistakes and we become better at our jobs when we receive constructive comments that are honest and aimed at improving our work.
Also, and I guess more controversially, in our quest to reduce stressors and become a truly positive workplace we seem to be avoiding tough conversations, or admitting failure. The "everybody is a winner" approach may shield us from stress and negativity, but at the same time it misses opportunities for self-growth and for honesty with each other.
One of the things I repeat to my teammates constantly when we are discussing an idea or doing a postmortem analysis of a completed project is: "tell me why I am wrong". Honest criticism is a gift, if our aim is to continuously improve.
So far, we have been reviewing new ideas on performance management, but we keep colliding with the same reality: humans are biased, and we have emotions and feelings. We cannot have a perfectly neutral, unbiased view or opinion of our work or the work of others, because that is not the way we are wired. What if we outsource performance management to machines?
5) Algorithmic Performance Management
Humans cannot be trusted. We are not objective, and our "gut instinct" is at best right half of the time (here is a link to an article explaining why algorithms reduce biases).
To make the argument in favour of replacing us supervisors with algorithms, here is what perplexity.ai wrote:
Algorithmic performance management offers benefits such as improved decision-making, efficiency, and objectivity. It can help in tracking and evaluating employee progress, providing insights, and recommending training programs, thus enhancing employee well-being and motivation [1] [2] .
In essence, algorithmic performance evaluation gamifies the review process, and it does not need weekly, quarterly, bi-annual or annual touch-base meetings: the software tracks all the relevant metrics for a role and provides instant feedback to the employee. You could earn badges or gift cards, or be recognized on a leaderboard, by meeting or exceeding the targets / goals established by the algorithm. It will push each of us to do a bit better every day, and provide feedback and encouragement in the process.
Daniel Kahneman, Cass Sunstein and Olivier Sibony are among the key proponents of using algorithms not only to manage performance, but in broader applications as well. In a podcast, Sunstein posits that the same judge, for example, can sentence two very similar cases with similar or identical defendants in very different ways.
In theory, if we have a bias it should appear consistently every time we encounter the same type of person or situation. But data on hiring decisions at work and on sentencing by judges show that there is no such consistency, even when the bias is there. That unfairness is called noise, defined as the large variance in judgment or opinion even when the facts are all the same. Algorithms can eliminate both bias and noise, these researchers say.
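To make the bias/noise distinction concrete, here is a small, purely illustrative Python simulation. The raters, offsets and variances are invented for the example: a "biased" rater shifts every score in the same direction, while a "noisy" rater scatters scores around the true value.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

TRUE_SCORE = 70  # hypothetical "true" performance score out of 100

def biased_rater(true_score: float) -> float:
    """A rater with systematic bias: consistently scores ~5 points low."""
    return true_score - 5 + random.gauss(0, 1)  # large offset, small scatter

def noisy_rater(true_score: float) -> float:
    """A rater with no systematic bias but high noise (variance)."""
    return true_score + random.gauss(0, 10)  # zero offset, large scatter

biased = [biased_rater(TRUE_SCORE) for _ in range(1000)]
noisy = [noisy_rater(TRUE_SCORE) for _ in range(1000)]

# Bias shows up as a shifted mean; noise shows up as a large spread,
# even though the noisy rater's average lands near the true score.
print(f"biased rater: mean={statistics.mean(biased):.1f}, stdev={statistics.stdev(biased):.1f}")
print(f"noisy rater:  mean={statistics.mean(noisy):.1f}, stdev={statistics.stdev(noisy):.1f}")
```

The takeaway: averaging many judgments from the noisy rater gets you close to the truth, but no amount of averaging fixes the biased rater, which is why the two problems need different remedies.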
Before we all jump on the algorithmic bandwagon, let's look at the very human cost of implementing this approach. In order to provide constant feedback and nudge people to improve or change, the software must have full access to all your activities: emails, web searches, memos, etc. There is a complete loss of privacy, with a set of computer eyes always recording your actions. How do you feel about that?
The other technical aspect to consider is that algorithms are trained on existing data. Even if you feed the algorithm every single performance evaluation ever done in your organization, all the implicit and explicit biases in those evaluations will be replicated. Any company trying to enrich its workforce through equity, diversity and inclusion programs may find that the algorithm works against those efforts.
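A minimal sketch of how this replication happens, using invented data: if historical reviewers systematically scored one group lower for the same quality of work, even the simplest "algorithm" trained on that history (here, just predicting each group's historical average) reproduces the gap.

```python
import statistics

# Hypothetical historical ratings (invented for illustration): reviewers
# systematically scored group_a lower for comparable work.
history = {
    "group_a": [60, 62, 61, 59, 63],
    "group_b": [70, 71, 69, 72, 68],
}

# A naive "model" trained on this history: predict the historical mean.
model = {group: statistics.mean(scores) for group, scores in history.items()}

# The learned predictions carry the historical gap forward unchanged.
gap = model["group_b"] - model["group_a"]
print(f"learned rating gap between groups: {gap:.1f} points")
```

Real performance-management systems use far more sophisticated models, but the principle is the same: a model has no way to know which patterns in its training data are merit and which are bias.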
Aside from surveillance and replicating existing biases against certain types of employees or future employees, delegating hiring, coaching and performance management to an algorithm deprives managers and staff of human interaction. Computers do not have bodies, and their memory and algorithms do not work the way our brains do. Humans are social creatures, and we communicate and connect with other humans in a way that has so far eluded computers. Perhaps algorithms will assist in identifying biases and eliminating noise/unfairness, but I do not know many individuals who would trade their privacy and the ability to connect one-on-one with another human for an Artificial Intelligence bot with a super-duper algorithm and a gamification plan.
We have now discussed the challenges with traditional performance evaluations, and the newest trends currently being tested or used to address issues of fairness, bias and the anxiety of being judged, and to make the performance review process a two-way street. The last post of this series will show you my (imperfect) approach to performance evaluation, and why I think it may be worth your consideration.
As always, feel free to message, post a comment, or let me know what you think. Until next time,
Javier