Measuring Customer Support
How should Customer Support be measured? Everywhere you look, there’s an alphabet soup of acronyms and a long list of metrics. The internet is full of “x customer support metrics you should track”, with x being anywhere from 10 to 15.
That’s way too many.
Attend any Customer Support (CS) review meeting, and you come back none the wiser, despite all those metrics. There are far too many operational metrics, with little tying them together into a common thread, so you never get a clear sense of how things actually stand.
Given this, when companies get serious about improving support, they find themselves looking at the CS ecosystem blind – the metrics are mostly contact-center focused, inward-looking, and often incorrect.
This leads to a long period of frustration and challenges, as people try to discover the right metrics, put in place measurements, and start driving improvements.
In the absence of meaningful metrics, even significant improvements are invisible – because the founder or CEO goes by what they see on their social media handle or what they hear from customers. If the CEO can’t understand or trust your metric, they will form an opinion based on what makes sense to them.
What I find odd is that there isn’t great literature out there on the best metrics for CS. Whatever is available mostly rehashes contact-center operational metrics. Perhaps that’s because the world of computers, multiple channels, and self-serve is relatively recent? It’s not that recent, though!
Over the last few years, I’ve tried to look at CS holistically and come up with a small set of metrics that start at the CEO/Board level, providing a good picture at a glance. We can then double-click into those metrics to get Level 2 and Level 3 metrics for the CS ecosystem.
This article talks about these metrics.
Effectiveness and Affordability
Right at the top, the two most important things to measure are effectiveness and affordability.
I’ve deliberately chosen these two terms, instead of experience and efficiency.
The term experience evokes the process of getting support, and makes that the prime factor. (Hence the CSAT question in most organisations: “How was your experience on the last call?”) But the primary question should instead be the effectiveness question: did we do what was needed?
Efficiency, technically, can be applied at different levels, including at a P&L level. But the typical connotations of efficiency in a customer support context are around agent AHT (average handle time), agent cost, productivity, and so on. That’s a very small part of the overall cost context of customer support. Hence I’m using a broader term: affordability.
So the two top-level metrics I recommend are effectiveness of support and affordability of support.
The Board or CEO should really look at only these two metrics to evaluate the overall health of the support system. Everything else is noise for them. (Either because it’s incorrect, or because it has too much detail.)
In this article, I’ll pick up effectiveness; affordability will follow in a subsequent article.
Effectiveness, and the NPS impact of support
How well is your support ecosystem serving your customers? The answer cannot come from any of the existing CS measurements, because they are all stage-level measures, and so present an incomplete view.
The north star support experience metric needs to come from a brand measure. My recommendation is to have a robust NPS program, and your north star metric is “NPS Impact of Support”.
What’s that?
This topic deserves its own separate post, but here it is in brief:
After the core NPS question, ask users the reasons for detraction by offering them options – each option is a stage of the customer journey. The percentage of people selecting each stage gives you the share of detraction. And from there, you can get the NPS impact of fixing that detraction completely.
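To make the arithmetic concrete, here’s a minimal sketch in Python. The 0–10 scoring thresholds are the standard NPS ones; the assumption that a fully fixed detractor becomes at least a passive – so the recoverable NPS points equal that stage’s share of respondents – is mine, not something the survey itself guarantees.

```python
from collections import Counter

def nps_impact_by_stage(responses):
    """responses: list of (score, stage) tuples, where stage is the journey
    stage a detractor blamed (e.g. "support"), or None otherwise.
    Returns (current_nps, {stage: NPS points recoverable by fixing it})."""
    n = len(responses)
    promoters = sum(1 for score, _ in responses if score >= 9)
    detractor_stages = [stage for score, stage in responses if score <= 6]
    nps = 100.0 * (promoters - len(detractor_stages)) / n

    # A stage's share of detraction, as a fraction of all respondents, is the
    # NPS uplift from fixing that stage completely (under the assumption that
    # a fixed detractor becomes at least a passive).
    impact = {stage: 100.0 * count / n
              for stage, count in Counter(detractor_stages).items()}
    return nps, impact

# Toy example: 3 promoters, 1 passive, 2 detractors (one blames support).
nps, impact = nps_impact_by_stage(
    [(10, None), (9, None), (9, None), (8, None),
     (4, "support"), (2, "delivery")])
print(round(nps, 1), impact)  # NPS ≈ 16.7; fixing support fully ≈ +16.7 points
```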
But NPS Impact is not directly actionable. It doesn’t tell you exactly what you should work on, or what are the levers you can pull. For that, we need to go one level deeper.
I'll do that using the customer's support journey, which gives us the following stages: need for support, discovery of support, access to support, intent identification, and resolution.
Once we’ve defined these stages, it’s relatively easy to figure out the key things to measure. (The NPS survey should also use them as the options for the sub-journey, or Level 2, question.)
I’ll skip the Need for Support stage, and come back to it when we talk about affordability. (The support ecosystem’s experience funnel really starts after the need for support arises.)
Discovery of Support
This is a tricky one. It’s very difficult to measure through operational metrics, because how do I figure out how many people tried to get support but were unable to find it? There’s no way to instrument that.
So we use the NPS measure: among the people who are detractors due to support, how many claim they didn’t know how to get support? That becomes the primary measure for discovery.
Access to support
This one’s relatively easier. For each of your agent channels, you can measure conversations abandoned before they are answered.
But this should be measured in the negative, not the positive. For instance, “90% of calls answered in 60 seconds” sounds good, but is meaningless. In that scenario, it’s entirely possible that 5% of customers waited more than 5 minutes before abandoning, and 2% waited more than 10 minutes!
What we should do instead is define an acceptable wait time. (You can derive one by looking at abandon rates across different wait times.) Then measure the percentage of customers having to wait longer than that; any number greater than zero is bad. This metric is table stakes and a customer’s right, not an aspiration or a stretch target.
Ideally, you should also track the abandonment metrics of the automation layer that precedes agent access. But those can be a bit difficult to instrument; I’ll cover the automation layer in a separate article.
For asynchronous agent channels, like email or messaging, response times can be considered part of the access layer. (One could argue that in the absence of a timely response, customers will feel they don’t have access.) So we can use a response-time metric similar to the one for synchronous channels.
And to keep things simple at this level, we can combine them into one metric across channels: % of customers that had to wait more than the acceptable time.
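A minimal sketch of both steps, assuming each contact is recorded as (wait in seconds, whether it was abandoned). The 5% tolerance and the candidate thresholds are illustrative assumptions; the idea is to derive the acceptable wait from observed abandon behaviour, then count everyone who waited past it.

```python
def acceptable_wait(events, tolerance=0.05, candidates=(30, 60, 90, 120, 180)):
    """events: list of (wait_seconds, abandoned). Returns the longest candidate
    threshold at which the abandon rate, among customers still waiting at that
    point, stays within tolerance."""
    best = candidates[0]
    for t in candidates:
        cohort = [abandoned for wait, abandoned in events if wait >= t]
        if cohort and sum(cohort) / len(cohort) <= tolerance:
            best = t
    return best

def pct_waited_too_long(events_by_channel, threshold):
    """One number across channels: % of customers who waited past the
    threshold, whether they were eventually answered or abandoned."""
    waits = [wait for events in events_by_channel.values()
             for wait, _ in events]
    return 100.0 * sum(wait > threshold for wait in waits) / len(waits)

# Toy data: 80 quick answers, 19 slow answers, 1 late abandonment.
voice = [(20, False)] * 80 + [(70, False)] * 19 + [(140, True)]
t = acceptable_wait(voice)
print(t, pct_waited_too_long({"voice": voice}, t))  # 60 20.0
```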
Intent Identification
The first step, once the customer has reached a support channel, is to tell us what help they need. This is trivial when they are talking to an agent, but is challenging in automation channels.
Because self-serve channels handle things differently in different organisations, intent identification might not even exist as a separate stage. Measuring it properly is therefore part of the overall self-serve layer measurements; I’ll address that in the separate article on self-serve.
Resolution - CSAT
What’s a good resolution metric? The key question at this stage is the effectiveness of the resolution, so this should be a user-feedback metric.
Which also means that we should seek feedback only when we believe we’ve provided them a resolution. Until we’ve done that, there’s no point in collecting feedback.
What should we ask?
You need something like: “Did we resolve your concern?”
However, there are cases where there is every reason to believe the problem will get solved, but it hasn’t actually been solved yet. For instance, a query about refunds. Refunds are typically standardised, often automated processes, so an agent can look at the status and tell the customer when they will get the money. But the customer hasn’t actually received it yet, so, really, the issue isn’t resolved. (Although we know it will be resolved in 2-3 days.)
Hence, a slightly different question probably makes more sense: “Are you satisfied with our response?”
Resolution is mostly binary. The customer issue is either resolved, or not resolved. (There will be scenarios where there could be degrees and subjectivity, but this is mostly binary.)
Therefore, the answer options should be only yes or no. That’s it. Let the customer decide what they choose for the few ‘partial’ cases.
There are a few other important decisions to make for CSAT.
Resolution: Survey channels
What channels should you use? It should always be digitally captured. Never in the conversation itself. In-conversation measures have a natural bias.
Which channels to use depends on what you have access to, and on what’s going to get you the best response rate without introducing bias across customer segments.
Today, WhatsApp is probably the best channel for feedback, and it is independent of your support channels. If you have a mobile app as the primary interaction mode, app notifications might also work.
For asynchronous, off-platform channels such as email, it makes sense to include the survey in the resolution email itself. (But if you’re using WhatsApp, it might make sense to use it for ALL channels, to ensure you can compare scores across channels.)
Keep in mind that the same principle should be followed in self-serve channels too. You should ask the question in an async manner, only after you believe you’ve resolved a customer issue. This can be tricky or easy, depending on how your self-serve solution is built.
There are a number of other issues to keep in mind with CSAT. I’ll cover those separately as part of an article on system and metrics health.
Leakage (Escalation) Rate
CSAT measures average-case effectiveness. I believe it is also important to measure the worst case – what’s the leakage, or escalation, rate?
Let me first define escalation: any time a customer reaches out to company leadership, an external forum, or social media, and claims they are doing so because they couldn’t get a solution from the regular support system. Note that this definition is based purely on the customer’s voice.
Escalation rate is simply total escalations divided by the number of support tickets received by the support ecosystem. Defining the ticket count for automation channels may or may not be easy depending on your automation approach, so perhaps just use agent tickets to start with. In my experience, almost all escalations arise from agent tickets anyway. (If that’s not true for you, you’ll have to find the most appropriate denominator for self-serve channels as well.)
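The arithmetic is trivial, but writing it down pins the denominator. A sketch, using agent tickets as the denominator per the suggestion above; scaling to escalations per 100 tickets is my choice for readability:

```python
def escalation_rate(escalations: int, agent_tickets: int) -> float:
    """Escalations per 100 agent tickets."""
    return 100.0 * escalations / agent_tickets

print(escalation_rate(escalations=42, agent_tickets=28_000))  # 0.15
```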
Ageing and Pendency
Ageing is a Level 1 system-health metric. If your open-issue count, or the ageing of those issues, is increasing, you have a big problem – because it’s extremely hard to catch up on a backlog later, and it’s a lead indicator of your ability to continue providing good support.
You really need to measure two things here – pendency and ageing. The best approach to measuring pendency is to look at it as a percentage of inflow. Take a snapshot of open issues at a fixed time each day, and divide that by the average daily inflow. That also gives you a broad sense of how long you take to resolve issues.
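A minimal sketch of that pendency calculation (the numbers are illustrative):

```python
def pendency(open_snapshot: int, daily_inflow: list) -> float:
    """Open issues at the daily snapshot as a ratio of average daily inflow.
    Multiply by 100 for the percentage form; the raw ratio also reads as
    'days of backlog'."""
    return open_snapshot / (sum(daily_inflow) / len(daily_inflow))

# 900 open issues against ~600 tickets/day = 1.5 days of backlog (150%).
print(pendency(900, [580, 610, 620, 590]))  # 1.5
```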
The best way to measure ageing is to define resolution SLs (service levels) for each customer intent, and then measure the % of cases going over their SL.
Without defined resolution SLs, any ageing measure will be ineffective: different issue types warrant different SLs, so the age profile can shift purely because of issue mix. The simple “number of issues older than x days” is therefore a noisy metric.
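Here’s a sketch of the SL-based ageing measure; the intents, SL values, and dates are invented for illustration:

```python
from datetime import date

# Hypothetical per-intent resolution SLs, in days.
SL_DAYS = {"refund": 3, "delivery_issue": 2, "account_access": 1}

def pct_over_sl(open_cases, today):
    """open_cases: list of (intent, opened_date). Returns the % of open
    cases that have exceeded their intent's resolution SL."""
    over = sum(1 for intent, opened in open_cases
               if (today - opened).days > SL_DAYS[intent])
    return 100.0 * over / len(open_cases)

cases = [("refund", date(2024, 5, 1)),          # 5 days old, SL 3 -> over
         ("delivery_issue", date(2024, 5, 5)),  # 1 day old,  SL 2 -> within
         ("account_access", date(2024, 5, 2))]  # 4 days old, SL 1 -> over
print(pct_over_sl(cases, today=date(2024, 5, 6)))  # ≈ 66.7
```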
To recap, the key Level 2 metrics are: the discovery measure (from NPS), the % of customers waiting beyond the acceptable time, resolution CSAT, the escalation rate, and backlog (pendency and ageing). Even among these, backlog is really more of a health metric. So you’re really looking at just four Level 2 metrics, spread across discovery, access, and resolution.
And when presented this way, along with the NPS Impact of each stage, one gets a holistic and complete picture of support.
Every other metric currently measured by support systems is either irrelevant to experience, or is at best an input metric.
Over the next few articles on CS measurements, I will cover affordability, system health, and self-serve channels.