Beware of Benchmarking: Bad Data Is Worse Than No Data
Benchmarking against the competition has grown in popularity over the last twenty years. Beware - if the wrong operational metrics are benchmarked, the benchmark is viewed in a vacuum, or the benchmarking methodology is flawed, bad data is worse than no data. Defective benchmark data leads the organization to become complacent, pursue the wrong goals or, at minimum, waste lots of money. For successful and accurate benchmarking, compare your efforts and results to the eight concerns described below.
Eight Pitfalls in Benchmarking
· Focusing only on a limited set of operational metrics
A company decided to focus on the “best” (lowest) reported values for average speed of answer (ASA) and talk time. The company allocated headcount to answering quickly but then rushed customers’ calls to reduce talk time. Further, the use of mechanistic responses resulted in incomplete answers, many repeat calls, frustrated employees and higher employee turnover. The turnover led to pressure to throw partially trained employees into the breach, which resulted in even less effective answers. Closer scrutiny of the data found that the companies that were benchmarked had fewer complex calls due to a different mix of products and a more effective approach to welcoming and onboarding new customers.
· Benchmarking only within your industry
Consumers gain their perception of good service from their last best service experience, often outside your particular industry. Some industries, like the cable and airline industries, have significantly lower mean satisfaction levels than others. For instance, a major telecom company compared itself only to those in its own industry. The company found that it was in the top quartile on first call resolution and customer satisfaction. Its high ranking led to complacency even though it still had significant voluntary customer attrition. Even the top company in its industry had over 12% annual attrition. You can be the best of a poor lot.
· Failing to verify that benchmarked companies have a comparable workload mix and sell to similar markets
Issue complexity is a key driver of time to handle a contact. Out of warranty calls take three times as long as in-warranty maintenance calls. Educational calls for technology products take longer than toy assembly. Technophobic seniors will take more time than a Gen X geek for the same issue. Make sure the workload profile of the benchmarked company is similar to yours or make allowances. Also, a “value” company that provides only limited services will have different metrics than a premium company in the same market because customers have very different expectations when paying twice as much for the same basic product.
· Including averages without awareness of the underlying distribution
Averages mask a myriad of problems. An ASA of 50 seconds can still include 10 percent of calls waiting over two minutes if there are also a lot of calls answered quickly. In addition to the average, you must find the size and shape of the tail of the distribution. The more skewed the call arrival pattern or talk time distribution, the more misleading the average score.
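The tail effect described above can be sketched numerically. This is an illustrative simulation, not real call-center data: the two exponential wait-time pools and their mean waits (20 and 320 seconds) are assumptions chosen so the blended ASA lands near 50 seconds while a sizable share of callers still waits past two minutes.

```python
import random

random.seed(42)

# Hypothetical wait times, in seconds: most calls are answered quickly,
# but a minority of calls sits in a long tail.
waits = [random.expovariate(1 / 20) for _ in range(9000)]    # quick answers
waits += [random.expovariate(1 / 320) for _ in range(1000)]  # the long tail

waits.sort()
asa = sum(waits) / len(waits)                    # average speed of answer
p95 = waits[int(0.95 * len(waits))]              # 95th percentile wait
over_2min = sum(w > 120 for w in waits) / len(waits)

print(f"ASA: {asa:.0f}s  p95: {p95:.0f}s  share waiting >2 min: {over_2min:.0%}")
```

Even though the average looks healthy, the 95th-percentile wait runs to several minutes - exactly the kind of gap between the average and the tail that a single benchmark number hides.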
· Failing to scrutinize the number of cases associated with each company and the confidence intervals around each score in syndicated satisfaction and NPS benchmarking studies
When rankings are published, confidence intervals should be made clear and only significant differences flagged. If several companies fall within the same confidence interval, they should be shown as tied. In the 2020 version of a well-known syndicated Insurance Shopping Survey, except for USAA, the highest ranked company with a satisfaction rating of 900, the next 8 companies were all clustered with ratings of 847-867 – about a 2% difference between #2 and #9. Worse, #7 was only 0.2% higher than #9. Even with over 10,000 respondents, there was no statistical difference between #7 and #9 and probably no significant difference between #5 at 858 and #9 at 847, a 1.1% difference. For these two, the confidence intervals most likely overlapped. My point is that bragging about being in the top three is almost meaningless because #3 was basically the same as #9. Any of those eight companies was no worse than any of the rest – not a stellar victory.
Even worse, in some benchmarking studies, the rankings reported cover smaller brands that are represented by a much smaller number of consumer responses. The confidence interval for these brands could easily be plus or minus five percent or more, making the rankings basically meaningless.
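A rough sketch shows why those rankings collapse. The scores below come from the article, but the standard deviation (120 points on the 1,000-point scale) and the per-brand sample sizes are assumptions for illustration - the syndicated studies rarely publish them.

```python
import math

def ci_95(mean, sd, n):
    """95% confidence interval for a survey mean (normal approximation)."""
    margin = 1.96 * sd / math.sqrt(n)
    return (mean - margin, mean + margin)

def overlaps(a, b):
    """True if two intervals overlap, i.e. the scores may not really differ."""
    return a[0] <= b[1] and b[0] <= a[1]

# Assumed sd and sample sizes; scores for ranks #5 and #9 from the article.
rank_5 = ci_95(858, sd=120, n=1200)   # large brand, ~1,200 respondents
rank_9 = ci_95(847, sd=120, n=1200)   # large brand, ~1,200 respondents
small  = ci_95(852, sd=120, n=150)    # small brand, far fewer respondents

print(rank_5, rank_9, overlaps(rank_5, rank_9))
```

Under these assumptions the intervals for #5 and #9 overlap, so the 11-point gap is not a statistically meaningful ranking, and the small brand's interval is roughly three times wider - its rank is close to noise.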
· Focusing only on outcomes and not understanding the processes to achieve those outcomes
High satisfaction and Net Promoter Scores (NPS) often come from well trained, empowered Customer Service Reps who have access to continuously updated Knowledge Management Systems (KMS). If you are skimping on empowerment and the KMS, you cannot expect to achieve the same levels of satisfaction.
· Failing to examine the underlying source of the data including consistency of definitions used and mix of respondents used for reporting
Benchmarking organizations often fail to confirm definitions used by reporting companies or the comparability of companies responding. Costs and staffing levels can be allocated many different ways. For example, IT costs can be allocated from a central IT function or each unit can have its own IT personnel. Satisfaction and productivity metrics often have different definitions.
· Benchmarking shows you are a leader in your industry - so you declare victory
I’ve seen companies find that they are the best in their industry with an 85 percent top-two-box satisfaction rating on a five-point scale and declare victory. On the other hand, leaders in the fast food, insurance and motorcycle industries have all been in the low to mid-90s, and each CEO asked, “What can we do to get better? Are there no-brainers or easy fixes in that last 6 percent?” In almost every case they found them.
I realize that some of these concerns are actually contradictory - I say look at comparable companies but also be sure you look outside your industry, and focus on outcome metrics but also on the processes behind them. Each issue is a concern. You just have to be aware of all the weaknesses so you do not adopt a target that takes you in the wrong direction.
Recommended Actions
1. Examine the details of benchmarking data – who are the respondents, how big are the samples, and are the sample sizes and definitions consistent across all companies?
2. Understand how the company got to the benchmark level - remember outcome metrics do not reveal the details of the process used to get there.
3. Examine all averages – what are the major types of contacts that are included in the average or does the abandon rate average bad Mondays with great Thursdays?
4. Ask if the best is good enough and is it possible to be better than the current best – be sure to understand the causes of dissatisfaction and poor quality in your own company.