Measuring CS: Automation Layer
Ankur Agrawal
I’ve been writing about how to measure customer support, and have covered the two key aspects earlier – effectiveness and affordability.
In this article, I’ll talk about the self-serve (or automation) layer.
One of the two major levers to improve CS efficiency is Share of Self-Serve (SoSS). If you improve the share of self-serve in your overall CS interactions, costs reduce, as self-service costs are dramatically lower than agent-driven support costs.
SoSS = Self-Serve Adoption * Coverage * Effectiveness
where
Adoption = Percentage of support journeys that begin with self-serve.
Coverage = % of support contacts for which a self-serve solution is implemented.
Effectiveness = % of self-serve journeys that end at self-serve. (If a customer goes through self-serve, but then still ends up talking to an agent, then self-serve was not effective).
Each of these can have a large impact on the overall SoSS number.
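To make the arithmetic concrete, here's a minimal worked example with made-up numbers (not from any real programme):

```python
# Illustrative numbers only – plug in your own measurements.
adoption = 0.60       # 60% of support journeys begin with self-serve
coverage = 0.70       # 70% of contact reasons have a self-serve flow implemented
effectiveness = 0.50  # 50% of self-serve journeys end without reaching an agent

soss = adoption * coverage * effectiveness
print(f"Share of Self-Serve: {soss:.0%}")  # -> 21%
```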
Let’s talk about how to measure each of them.
Adoption
The first decision to make is what you count as self-serve visits/contacts. If you have a support section on your app, you could count all visits to that section. Or, you could decide to count only those visits where the customer has selected at least one issue they need help with.
I believe it doesn’t matter which approach you go with – just choose one.
It’s easy to measure the number of contacts served by your contact center agents.
The slightly difficult part is measuring the overlap – customers who started at self-serve, didn’t get what they were looking for, and directly went to an agent channel. (Or the other way round).
If you log your self-serve visits, then this should not be too difficult for an analyst to do. For instance, you could decide that if a customer had visited the self-serve section within 24 hours before calling your support number, then both visits are for the same issue. And you count those journeys as self-serve first journeys.
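As an illustration, here's a minimal sketch of that attribution logic in Python. It assumes you have timestamped logs of self-serve visits and agent contacts; the field names, sample data, and the 24-hour window are all placeholders to adapt to your own data model:

```python
from datetime import datetime, timedelta

# Illustrative event logs: (customer_id, timestamp).
self_serve_visits = [
    ("c1", datetime(2024, 1, 10, 9, 0)),
    ("c2", datetime(2024, 1, 10, 11, 0)),
]
agent_contacts = [
    ("c1", datetime(2024, 1, 10, 15, 0)),   # reached an agent 6h after self-serve
    ("c3", datetime(2024, 1, 11, 8, 0)),    # went straight to an agent
]

WINDOW = timedelta(hours=24)  # attribution window – tune for your business

def started_with_self_serve(customer_id, contact_time):
    """True if this customer visited self-serve within 24h before the agent contact."""
    return any(
        cid == customer_id and timedelta(0) <= contact_time - visit_time <= WINDOW
        for cid, visit_time in self_serve_visits
    )

# Agent contacts preceded by a self-serve visit belong to the same journey,
# so they are not counted as separate agent-first journeys.
self_serve_first_agent = sum(started_with_self_serve(cid, t) for cid, t in agent_contacts)
agent_first = len(agent_contacts) - self_serve_first_agent
total_journeys = len(self_serve_visits) + agent_first
adoption = len(self_serve_visits) / total_journeys
print(f"Adoption: {adoption:.0%}")  # -> 67% in this toy example
```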
Coverage
The simplest way to come up with a coverage number is to take a look at each customer intent (or issue type). For each intent, your self-serve coverage could be anywhere from 0 to 100%. That’s because each intent might have multiple scenarios, with self-serve implemented for some scenarios and not for others. For simplicity, use your judgement to assign a subjective rating on a 5-point scale (Complete/High/Medium/Low/None), and map it to a simple percentage scale (100/75/50/25/0%) for each intent. To start with, that should be enough. Once you have better instrumentation and analytics, you might be able to get more granular data on coverage.
Along with the current coverage, also rate the potential coverage – the maximum coverage possible. (There will be scenarios that simply cannot be automated at all.)
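One way to roll the subjective ratings up into a single number is to weight each intent by its contact volume. The sketch below uses illustrative intents, volumes, and ratings:

```python
# Map the subjective 5-point rating to a percentage, as suggested above.
RATING_TO_PCT = {"Complete": 1.00, "High": 0.75, "Medium": 0.50, "Low": 0.25, "None": 0.00}

# Per intent: monthly contact volume, current coverage rating, and the
# maximum ("potential") coverage you believe is achievable.
intents = {
    "Where is my order": {"volume": 40_000, "coverage": "High",   "potential": "Complete"},
    "Refund status":     {"volume": 25_000, "coverage": "Medium", "potential": "Complete"},
    "Damaged item":      {"volume": 10_000, "coverage": "None",   "potential": "Low"},
}

total_volume = sum(i["volume"] for i in intents.values())
coverage = sum(i["volume"] * RATING_TO_PCT[i["coverage"]] for i in intents.values()) / total_volume
potential = sum(i["volume"] * RATING_TO_PCT[i["potential"]] for i in intents.values()) / total_volume
print(f"Current coverage: {coverage:.0%}, potential coverage: {potential:.0%}")  # -> 57%, 90%
```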
Effectiveness
You get this number from two sources. The first is easier: in any self-serve journey, there will be the option to connect to an agent at various points, and the number of customers who chose that option should be easy to get. The second is the same overlap used in the adoption calculation – folks who went to self-serve, but then dropped off and directly reached an agent channel. Add those two to get your pass-throughs; the share of self-serve journeys that avoided both routes is your effectiveness number.
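In code terms, with placeholder counts, the calculation looks roughly like this:

```python
# Illustrative counts for one month.
self_serve_journeys = 100_000          # journeys that started in self-serve
escalated_in_flow = 12_000             # chose "talk to an agent" inside the flow
dropped_then_contacted_agent = 8_000   # left the flow and reached an agent within 24h

pass_throughs = escalated_in_flow + dropped_then_contacted_agent
effectiveness = 1 - pass_throughs / self_serve_journeys
print(f"Self-serve effectiveness: {effectiveness:.0%}")  # -> 80%
```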
Note that lack of effectiveness can also be due to coverage issues. As you go deeper into solving for effectiveness, you could break your pass-throughs (cases where the customer ends up going to an agent) into different types – for instance, coverage-related pass-throughs (no self-serve flow existed for that scenario) versus true effectiveness failures (a flow existed, but the customer still needed an agent).
To avoid double-counting, either remove the coverage-related pass-through cases from the effectiveness calculation, or count everything in effectiveness and don’t use coverage in the overall SoSS calculation.
This looks great on paper, but it also looks like quite a bit of work. Is there much point in all this effort?
Using the SoSS formula
Different folks within the company will have different views on CS channels and automation – how much of support should be automated, which channels to push customers towards, and whether self-serve is actually working. For a support ecosystem, these are very important questions.
But in the absence of clear metrics, it is very difficult to have a meaningful discussion on these aspects, as it boils down to subjective opinion. Once you have the metrics, these questions become rather simple to answer.
It also allows you to decide where to focus…
If your effectiveness is low, that’s the first one to fix. There’s no point fixing anything else. You don’t want to create more frustration for customers by sending them to a bad self-serve solution.
If effectiveness is good, but coverage is low, then you first improve coverage to a respectable level, before focusing on adoption.
Once you have these metrics, it’s easy to figure out where your main levers are: just calculate these metrics at a customer-intent level, and do a contribution vs. intensity map (see Anti-pattern 1 here). The high-contribution, high-intensity issue types are where you focus. And you can keep digging deeper by adding upstream attributes to the intent (see the Process Improvements section in A CSAT Story).
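As a sketch of that intent-level view (intent names and numbers below are illustrative), contribution can be read as an intent's share of overall volume, and intensity as its pass-through rate:

```python
# Per intent: total self-serve journeys and how many passed through to an agent.
intents = {
    "Where is my order": {"journeys": 50_000, "pass_throughs": 5_000},
    "Refund status":     {"journeys": 30_000, "pass_throughs": 15_000},
    "Damaged item":      {"journeys": 5_000,  "pass_throughs": 4_000},
}

total = sum(i["journeys"] for i in intents.values())
for name, i in intents.items():
    contribution = i["journeys"] / total               # share of overall volume
    intensity = i["pass_throughs"] / i["journeys"]     # how often self-serve fails here
    flag = "FOCUS" if contribution > 0.25 and intensity > 0.3 else ""
    print(f"{name:20s} contribution={contribution:.0%} intensity={intensity:.0%} {flag}")
# In this toy data, "Refund status" is the high-contribution, high-intensity intent.
```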
To improve effectiveness, it’s much better to use CSAT for further deep-diving. The reason is that not all customers who’re unhappy with self-serve will take the effort to go to an agent. So while the current definition is good enough for understanding the funnel, it’s not enough to get actionable intelligence.
That leads us to the measurement of CSAT for self-serve solutions. Today, self-serve either doesn’t have CSAT at all, or has a measurement problem similar to agent CSAT, leading to biases in surveying.
Measuring CSAT for self-serve
Let’s start with the fundamental principle that we’ll carry forward from the agent world. We should ask the customer for feedback only when we believe that we have resolved the issue.
But following that principle is challenging in self-serve. Why is that?
Because we’re not always sure if we have resolved the customer issue. We don’t know if and when the conversation is over. In agent-driven tickets, we always know – in calls and chats, there is a two-way agreement on when a conversation is over; in email, the agent takes a call on whether the conversation is over. There isn’t any such clear indication in self-serve.
The easiest way to understand this is by thinking of a self-serve solution as a process flowchart/tree.
In any self-serve solution, one of the following will happen: the customer reaches a step that offers a resolution, the customer drops off somewhere in the middle of the flow, or the customer chooses to connect to an agent. In the first two cases, at the moment it happens we can’t be sure whether the issue is actually resolved or whether the customer will come back.
Therefore, any type of in-session feedback is incorrect. (And it will introduce biases as well.)
The way to figure out if a conversation has ended is by waiting for a certain time to see if the customer comes back. That time can vary by business, but let’s say that if a customer doesn’t come back for 24 hours, we can assume the conversation to be over.
Any feedback should be solicited only at that stage, using the same channels as for agent tickets. Using that approach also allows us to compare CSAT across channels.
But that still doesn’t solve for whether the issue was resolved or not. If I don’t have a point of view about whether I solved the issue, should I really ask the CSAT question?
In my view, you ask two different questions, depending on the situation: if the journey ended at a step where we believe a solution was provided, ask the regular CSAT question; if the customer dropped off mid-flow and we don’t know, first ask whether the issue was resolved, and only then ask for a satisfaction rating.
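Here's a minimal sketch of that survey-trigger logic, assuming the self-serve log records when the journey was last active and whether it reached a resolution step; the field and function names are hypothetical:

```python
from datetime import datetime, timedelta

QUIET_PERIOD = timedelta(hours=24)  # assume the conversation is over after 24h of silence

def pick_survey(journey, now):
    """Return which survey (if any) to send for a self-serve journey.

    `journey` is assumed to carry `last_activity_at` (datetime),
    `reached_resolution_step` (bool), and `escalated_to_agent` (bool).
    """
    if journey["escalated_to_agent"]:
        return None  # the agent ticket's own CSAT flow applies instead
    if now - journey["last_activity_at"] < QUIET_PERIOD:
        return None  # too early – we don't yet know the conversation is over
    if journey["reached_resolution_step"]:
        return "CSAT"          # we believe we resolved it: ask how satisfied they are
    return "RESOLUTION_CHECK"  # we don't know: first ask whether the issue was resolved

# Example usage with an illustrative journey record.
journey = {"last_activity_at": datetime(2024, 1, 10, 9, 0),
           "reached_resolution_step": False,
           "escalated_to_agent": False}
print(pick_survey(journey, now=datetime(2024, 1, 11, 10, 0)))  # -> RESOLUTION_CHECK
```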
Sounds good in theory, but how do I implement it?
Implementing CSAT in self-serve
Let’s talk about different types of self-serve implementations. I can think of broadly three types: workflow-based solutions (bots or decision trees configured by a central CS team), custom-built solutions (flows built into the product by different teams), and AI-driven conversational solutions.
For workflow-based solutions, this is easiest, because you typically have a central CS team implementing these bots, so you can ensure the CSAT measurement is built in.
For custom-built solutions, as multiple teams are involved in building them, some process governance is needed to ensure the right implementation. If you have a CS product team, they can build a framework.
With an AI-driven solution, we can get the AI itself to take a view. It’ll be relatively easy for the AI to decide if the customer dropped off in the middle of a conversation, or whether a solution was provided.
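As a rough sketch of that idea, the model can be asked to label each conversation once it has gone quiet; the `call_llm` helper below is a hypothetical stand-in for whatever model API you use:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around your model API; returns the model's text output."""
    raise NotImplementedError  # replace with your actual model call

def classify_outcome(transcript: str) -> str:
    """Ask the model whether the self-serve conversation ended with a solution,
    a mid-conversation drop-off, or an escalation to an agent."""
    prompt = (
        "Classify this self-serve support conversation as exactly one of: "
        "SOLUTION_PROVIDED, DROPPED_OFF, ESCALATED_TO_AGENT.\n\n"
        f"Conversation:\n{transcript}\n\nLabel:"
    )
    return call_llm(prompt).strip()
```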
In these three articles, I have covered the core measurements for CS, starting with the North Star metrics of affordability and effectiveness.
In the next article, I will talk about how to use these metrics to actually transform and improve customer support.