Measuring CS: Automation Layer


I’ve been talking about how to measure customer support. In earlier articles, I covered the two key aspects – effectiveness and affordability.

In this article, I’ll talk about the self-serve (or automation) layer.

One of the two major levers to improve CS efficiency is Share of Self-Serve (SoSS). If you improve the share of self-serve in your overall CS interactions, costs come down, as self-service costs are dramatically lower than agent-driven support costs.

SoSS = Self-Serve Adoption * Coverage * Effectiveness

where

Adoption = Percentage of support journeys that begin with self-serve.

Coverage = % of support contacts for which a self-serve solution is implemented.

Effectiveness = % of self-serve journeys that end at self-serve. (If a customer goes through self-serve, but then still ends up talking to an agent, then self-serve was not effective).

Each of these can have a large impact on the overall SoSS number.
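
To make the multiplication concrete, here’s a minimal worked example in Python, with made-up numbers:

```python
# Worked example of the SoSS formula; all numbers are illustrative.
adoption = 0.60        # 60% of support journeys begin with self-serve
coverage = 0.70        # a self-serve solution exists for 70% of contact scenarios
effectiveness = 0.80   # 80% of self-serve journeys end at self-serve

soss = adoption * coverage * effectiveness
print(f"Share of Self-Serve: {soss:.1%}")  # -> 33.6%
```

Note how the weakest factor caps the overall share: even with strong adoption and effectiveness, 70% coverage means SoSS can never exceed 70%.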

Let’s talk about how to measure each of them.

Adoption

The first decision to make is what you count as self-serve visits/contacts. If you have a support section on your app, you could count all visits to that section. Or, you could decide to count only those visits where the customer has selected at least one issue they need help with.

I believe it doesn’t matter which approach you go with – just choose one and stick with it.

It’s easy to measure the number of contacts served by your contact center agents.

The slightly difficult part is measuring the overlap – customers who started at self-serve, didn’t get what they were looking for, and went directly to an agent channel (or the other way around).

If you log your self-serve visits, then this should not be too difficult for an analyst to do. For instance, you could decide that if a customer had visited the self-serve section within 24 hours before calling your support number, then both visits are for the same issue. And you count those journeys as self-serve first journeys.
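
As a sketch of that analysis, here’s how it might look in pandas; the table and column names are my assumptions, not a prescribed schema:

```python
import pandas as pd

# Sketch of the 24-hour overlap join, assuming event logs of self-serve
# visits and agent contacts keyed by customer ID.
visits = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "visit_ts": pd.to_datetime(["2024-05-01 09:00", "2024-05-01 10:00",
                                "2024-05-02 08:00"]),
})
contacts = pd.DataFrame({
    "customer_id": [1, 4, 2],
    "contact_ts": pd.to_datetime(["2024-05-01 11:30", "2024-05-01 12:00",
                                  "2024-05-03 10:00"]),
})

# merge_asof needs both frames sorted on the time keys.
visits = visits.sort_values("visit_ts")
contacts = contacts.sort_values("contact_ts")

# For each agent contact, find the most recent self-serve visit by the
# same customer within the preceding 24 hours.
matched = pd.merge_asof(
    contacts, visits,
    left_on="contact_ts", right_on="visit_ts",
    by="customer_id", direction="backward",
    tolerance=pd.Timedelta("24h"),
)

# Contacts with a matched visit count as self-serve-first journeys.
matched["self_serve_first"] = matched["visit_ts"].notna()
print(matched[["customer_id", "contact_ts", "self_serve_first"]])
```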

Coverage

The simplest way to come up with a coverage number is to take a look at each customer intent (or issue type). For each intent, your self-serve coverage could be anywhere from 0 to 100%. That’s because each intent might have multiple scenarios, with self-serve implemented for some scenarios and not for others. For simplicity, use your judgement to assign a subjective rating (maybe a 5-point scale of Complete/High/Medium/Low/None), and map each rating to a simple percentage (100/75/50/25/0%). To start with, that should be enough. Once you have better instrumentation and analytics, you might be able to get more granular data on coverage.

Along with current coverage, also rate potential coverage – the maximum coverage possible. (There will be scenarios that simply cannot be automated at all.)
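
A minimal sketch of that scoring, assuming you maintain a per-intent table of subjective ratings (the intents, volumes, and ratings below are invented for illustration):

```python
# Sketch of the subjective coverage scoring described above. The rating
# labels map to the 100/75/50/25/0% scale.
RATING_TO_PCT = {"Complete": 1.00, "High": 0.75, "Medium": 0.50,
                 "Low": 0.25, "None": 0.00}

intents = [
    # (intent, monthly contacts, current rating, potential rating)
    ("Where is my order?", 5000, "High",   "Complete"),
    ("Refund status",      3000, "Medium", "Complete"),
    ("Damaged item",       1000, "None",   "Low"),
]

total_volume = sum(volume for _, volume, _, _ in intents)
coverage  = sum(v * RATING_TO_PCT[cur] for _, v, cur, _ in intents) / total_volume
potential = sum(v * RATING_TO_PCT[pot] for _, v, _, pot in intents) / total_volume
print(f"Volume-weighted coverage: {coverage:.0%} (potential: {potential:.0%})")
# -> Volume-weighted coverage: 58% (potential: 92%)
```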

Effectiveness

You get this number from two sources. The first is easier: every self-serve journey offers the option to connect to an agent at various points, and the number of customers who chose that option should be easy to get. The second is the same overlap used in the adoption calculation: customers who went to self-serve, but then dropped off and directly reached an agent channel. Add those two to get your total pass-throughs; effectiveness is the percentage of self-serve journeys that did not end up at an agent.

Note that lack of effectiveness can also be due to coverage issues. As you go deeper into solving for effectiveness, you could break your pass-throughs (cases where the customer ends up going to an agent) into different types:

  1. Coverage issue (we don’t have a self-serve solution for this scenario at all). This itself can be of three types: (A) it’s not feasible to do self-serve; (B) it’s feasible but requires other organisations to provide tech; (C) it’s feasible and we can build it within our company. Classifying pass-throughs accordingly helps in computing possibilities and goals. When your self-serve solution itself sends the customer to an agent, it’s a coverage issue.
  2. Quality issue: we have coverage but the quality of solution is not good. Typically, these will be cases where the customer chooses to connect with an agent.

To avoid double-counting, either remove the coverage-related pass-throughs from the effectiveness calculation, or count everything in effectiveness and drop coverage from the overall SoSS calculation.
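
Here’s a sketch of both accounting options, with made-up counts and the pass-through categories from the list above:

```python
# Sketch of the effectiveness calculation and the two ways to handle
# coverage-related pass-throughs. All counts are illustrative.
self_serve_journeys = 10_000

pass_throughs = {
    "coverage_not_feasible":  400,  # 1A: cannot be self-served at all
    "coverage_needs_partner": 300,  # 1B: feasible, but needs external tech
    "coverage_buildable":     500,  # 1C: feasible, and we can build it
    "quality":                800,  # 2: covered, but the solution was poor
}

# Option 1: exclude coverage-related pass-throughs here, since they are
# already penalised by the coverage term of SoSS.
effectiveness = 1 - pass_throughs["quality"] / self_serve_journeys
print(f"Effectiveness (quality pass-throughs only): {effectiveness:.0%}")  # 92%

# Option 2: count every pass-through here, and drop the coverage term
# from the overall SoSS calculation instead.
effectiveness_all = 1 - sum(pass_throughs.values()) / self_serve_journeys
print(f"Effectiveness (all pass-throughs): {effectiveness_all:.0%}")  # 80%
```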


This looks great on paper, but it also looks like quite a bit of work. Is there much point in all this effort?

Using the SoSS formula

Different folks within the company will have different views on CS channels, automation etc. For example:

  • What agent channels should we offer? Should we allow direct access to an agent?
  • Automation: have we done enough? Is there low-hanging fruit? Is the automation good or bad?
  • Visibility of self-serve support: should it get a separate icon on the app home page? Or is that space better used for generating more revenue?

For a support ecosystem, these are very important questions.

But in the absence of clear metrics, it is very difficult to have a meaningful discussion on these aspects, as it boils down to subjective opinion. Once you have the metrics, these questions become rather simple to answer.

It also allows you to decide where to focus…

If your effectiveness is low, that’s the first one to fix. There’s no point fixing anything else. You don’t want to create more frustration for customers by sending them to a bad self-serve solution.

If effectiveness is good, but coverage is low, then you first improve coverage to a respectable level, before focusing on adoption.

Once you have these metrics, it’s easy to figure out where your main levers are: just calculate these metrics at a customer-intent level, and do a contribution vs. intensity map (see Anti-pattern 1 here). The high-contribution, high-intensity issue types are where you focus. And you can keep digging deeper by adding upstream attributes to the intent (see the Process Improvements section in A CSAT Story).
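
A sketch of that intent-level view, with invented numbers; the thresholds that define “high” are judgment calls, not fixed rules:

```python
# Contribution vs. intensity at intent level. Contribution is an intent's
# share of all pass-throughs; intensity is the pass-through rate within
# that intent. Numbers and thresholds are illustrative.
intents = {
    # intent: (self-serve journeys, pass-throughs to an agent)
    "Where is my order?": (6000, 600),
    "Refund status":      (2000, 800),
    "Change address":     (500,  50),
}

total_pass = sum(p for _, p in intents.values())
for name, (journeys, passes) in intents.items():
    contribution = passes / total_pass
    intensity = passes / journeys
    flag = "<- focus here" if contribution > 0.25 and intensity > 0.15 else ""
    print(f"{name:20s} contribution={contribution:4.0%} intensity={intensity:4.0%} {flag}")
```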

To improve effectiveness, it’s much better to use CSAT for further deep-diving. The reason is that not all customers who are unhappy with self-serve will take the effort to go to an agent. So while the current definition is good enough for understanding the funnel, it’s not enough to get actionable intelligence.

That leads us to the measurement of CSAT for self-serve solutions. Today, self-serve either doesn’t have CSAT at all, or has a measurement problem similar to agent CSAT, leading to biases in surveying.


Measuring CSAT for self-serve

Let’s start with the fundamental principle that we’ll carry forward from the agent world. We should ask the customer for feedback only when we believe that we have resolved the issue.

But following that principle is challenging in self-serve. Why is that?

Because we’re not always sure if we have resolved the customer’s issue. We don’t know if and when the conversation is over. In agent-driven tickets, we always know: in calls and chats, there is a two-way agreement on when a conversation is over, and in email, the agent takes a call on whether the conversation is over. There is no such clear indication in self-serve.

The easiest way to understand this is by thinking of a self-serve solution as a process flowchart/tree.

In any self-serve solution, one of the following will happen:

  • The customer starts the self-serve journey, but drops off somewhere in the middle of the tree. It is possible they have found a solution by then, or maybe not.
  • The customer gets to a ‘leaf’ node and drops off. We believe we have provided a final solution, but we don’t know if the conversation is over, because they may come back a bit later and traverse other parts of the tree.

Therefore, any type of in-session feedback is incorrect. (And it will introduce biases as well.)

The way to figure out if a conversation has ended is by waiting for a certain time to see if the customer comes back. That time can vary by business, but let’s say that if a customer doesn’t come back for 24 hours, we can assume the conversation to be over.

Any feedback should be solicited only at that stage, using the same channels as for agent tickets. Using that approach also allows us to compare CSAT across channels.
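
A minimal sketch of that quiet-window rule, assuming you can query each journey’s last activity timestamp:

```python
from datetime import datetime, timedelta

# Treat a self-serve conversation as over once the customer hasn't
# returned for 24 hours, and only then queue the feedback survey.
# The event shape here is an assumption for illustration.
QUIET_WINDOW = timedelta(hours=24)

def conversation_over(last_activity: datetime, now: datetime) -> bool:
    """A conversation is considered over after 24h of inactivity."""
    return now - last_activity >= QUIET_WINDOW

last_seen = datetime(2024, 5, 1, 9, 30)   # customer's last self-serve touch
if conversation_over(last_seen, datetime(2024, 5, 2, 10, 0)):
    print("Queue the feedback survey for this journey")
```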

But that still doesn’t tell us whether the issue was resolved. If I don’t have a point of view on whether I solved the issue, should I really ask the CSAT question?

In my view, you ask two different questions, depending on the situation:

  • If the customer reached a leaf node and then did not come back, it’s reasonable to assume that we have solved their problem. So ask the standard CSAT question.
  • If the customer did not reach a leaf node, then you ask a different question, something like: “We noticed you’d come to us for this problem x. Did you find a solution to it?”. If they say no, you offer them the option of continuing the journey, or connecting to an agent. And you track this answer as well, just separately from the CSAT score. (A sketch of this branching follows below.)
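
In code, the branching might look like this sketch; `reached_leaf` would come from your journey instrumentation, and the survey copy simply mirrors the questions above:

```python
# Sketch of the two-question branching described above.
def pick_survey(reached_leaf: bool, issue: str) -> tuple[str, str]:
    if reached_leaf:
        # We believe we solved the problem: ask the standard CSAT question.
        return ("csat", "How satisfied are you with the resolution?")
    # Dropped off mid-tree: ask about resolution instead, and track this
    # answer separately from the CSAT score.
    return ("resolution",
            f"We noticed you'd come to us for {issue}. "
            "Did you find a solution to it?")

print(pick_survey(True, "a refund issue"))
print(pick_survey(False, "a refund issue"))
```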

Sounds good in theory, but how do I implement it?

Implementing CSAT in self-serve

Let’s talk about different types of self-serve implementations. I can think of broadly three types:

  • Custom-built (the solution for each intent is built by your developers).
  • Workflow/flowchart-based: using a ‘bot’ solution, such as freshbot. You basically design a flowchart in the platform, which is converted into a run-time journey for the customer, typically in a chat interface.
  • An NLP/LLM-driven bot.

For the workflow-based solution, this is easiest, because you typically have a central CS team implementing these bots, so you can ensure the measurement is built in.

For custom-built solutions, multiple teams are involved in building them, so some process governance is needed to ensure the right implementation. If you have a CS product team, they can build a framework.

With an AI-driven solution, we can get the AI itself to take a view. It’ll be relatively easy for the AI to decide whether the customer dropped off in the middle of a conversation or whether a solution was provided.
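
As one possible sketch, using the OpenAI Python SDK purely as an example; the model name, labels, and prompt are my assumptions, not a prescribed implementation, and any chat-capable LLM would do:

```python
# Hedged sketch: let an LLM judge the outcome of a self-serve conversation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_outcome(transcript: str) -> str:
    """Return 'solution_provided' or 'dropped_off' for a bot transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[
            {"role": "system", "content":
                "You review support-bot transcripts. Reply with exactly one "
                "label: solution_provided or dropped_off."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content.strip()
```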


In these three articles, I have covered the core measurements for CS, starting with the North Star metrics of affordability and effectiveness.

In the next article, I will talk about how to use these metrics to actually transform and improve customer support.
