Transforming Support: CSAT Story Part 2
Ankur Agrawal
Product | Business | Experience | VP@Ola | SVP@MakeMyTrip | Snapdeal
Beyond freebird: Agent capability?
This is part 2 of my article on how we transformed customer support at MMT and Ola. Part 1 is here.
With the freebird approach, we were able to take our CSAT score from 30 to 50. What about the rest?
Could this be due to process gaps? Were we done with agent interventions?
This was easy to test with data. If what remained was just process gaps, then issues with the simplest processes should have had CSAT close to 100. Was that the case, though?
Well, no.
For very simple issue types – customers reaching out for an invoice, or for a transaction statement – CSAT peaked at about 75. (It was a similar story at the agent level too: the best agents peaked at about 75.) That told us we had work to do on two fronts: the agent side and the process side.
On the agent side, we now picked up the second hypothesis to test – what if this was due to the quality of our agents? Perhaps they had reached the limits of their capability? Might ‘better’ agents be able to do a lot better?
Time for the second experiment.
Now, it’s not easy to evaluate agents at hiring time to decide who’s better – language skills are relatively easy to evaluate, but soft skills such as empathy, helpfulness etc. are not. Even communication skills – listening, understanding, communicating complex ideas – are difficult to evaluate.
So we used proxies for quality – we assumed better-paid agents would, in general, be better. We also factored in where they’d studied, their academic performance, which companies they had worked at, for which clients, and so on.
For this experiment, we hired a team of agents at a per-agent cost that was almost double our existing cost (60k per agent per month v/s 35k for regular agents). (Remember – we didn’t want to leave this to chance! At the end of the experiment, we didn’t want to wonder: what if we had gotten even better agents?)
We carried forward everything else from the earlier experiment – we just put brand-new, better agents on the team. Using ‘new’ agents also ruled out the possibility that time spent in the legacy system had de-sensitized the agents, or made them cynical. New employees, when they see us focussed on CSAT, are more likely to believe in it.
Interestingly, these agents performed better, but not dramatically better – they scored about 5 points higher than the previous experiment.
So we decided that broadly, our current agent population was good enough.
This led us to wonder: could we go lower on agent costs? If lower-cost agents could deliver similar results, why not? We really had no basis for deciding that the current levels were the right ones! We didn’t go down that path, though, because agent cost was not a major lever in support costs at that point.
(In a different org, later on, the freebird experiment failed to show any improvement at all. Agents there were at a significantly lower cost – billing rates were 60% of the earlier ones. So perhaps, in the earlier orgs, we had reached a level of competency beyond which agent quality was no longer the fundamental issue.)
We were now back to the same question: where’s the 25 point gap coming from?
An a-ha moment: Agent perspectives v/s customer perspectives!
In our mentoring sessions with the freebird team, we spent a lot of time discussing the cases where they had received a negative CSAT. One puzzling thing was that in quite a few cases, agents genuinely believed they had done a great job and should receive a positive CSAT score, whereas it was clear to us (those not from the contact centers) that the customer was not happy.
Freebird solved for the low hanging fruit – just trying to solve for the customer made a big difference. But figuring out what is the right thing to do was another major issue.
The ‘I’m following the process, so I’m doing the right thing’ perspective was so deeply embedded that it took us a lot of effort to change agent perspectives in the freebird team. That model would absolutely not scale, though – we couldn’t do intensive mentoring like this for the entire agent population of more than 1000 agents!
As part of the freebird experiment, based on these mentoring sessions, we had added a small tweak to the CRM: when an agent closed a ticket, we asked them to predict the CSAT response they would get. That made them pause, rethink, and then decide if it was OK to close the ticket. It was meant as a reinforcement.
But this also gave us a way to quantify how large the agent-perspective problem was. If it was a significant cause, then lower-performing agents would have higher prediction errors. We plotted the two together.
(Deciding how to calculate prediction error was an important decision as well – we could easily have picked a wrong measure. I’ll leave that for folks to figure out.)
Lo and behold! There was a near-perfect correlation: higher-CSAT agents had very low prediction error rates, while lower-CSAT agents had very high error rates.
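A minimal pandas sketch of this analysis, assuming hypothetical ticket-level data (the column names, the 1–5 scale, and mean absolute error as the error measure are all illustrative assumptions, not the actual CRM schema or the measure the team chose):

```python
import pandas as pd

# Hypothetical data: each row is a closed ticket with the agent's
# predicted CSAT response and the customer's actual response (1-5 scale).
tickets = pd.DataFrame({
    "agent_id":  ["a1", "a1", "a2", "a2", "a3", "a3"],
    "predicted": [5, 4, 5, 5, 4, 5],
    "actual":    [5, 4, 2, 1, 4, 4],
})

# One possible error measure: absolute gap between prediction and reality.
tickets["abs_error"] = (tickets["predicted"] - tickets["actual"]).abs()

# Per-agent CSAT v/s per-agent prediction error.
per_agent = tickets.groupby("agent_id").agg(
    mean_csat=("actual", "mean"),
    prediction_error=("abs_error", "mean"),
)

# If the agent-perspective hypothesis holds, low-CSAT agents show
# high prediction error, i.e. a strong negative correlation.
print(per_agent.sort_values("mean_csat"))
print(per_agent["mean_csat"].corr(per_agent["prediction_error"]))
```

In this toy data, agent `a2` closes tickets confidently predicting positive scores while actually receiving negative ones – exactly the perspective gap described above.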
There was a new factor to add to agent intent and agent capability – agent perspective! (Does the agent even know what’s the right thing to do?)
This discovery also gave us a lever to use at scale. We would focus only on the lower CSAT agents, and mentor them closely. For other agents, we could rely on more scalable training methods. This way, we would get dramatic improvements from the lower performing agents, and incremental improvements from the rest.
(There were challenges scaling this too, though. As an example – if agents believed they were being measured on this factor, they would become conservative in their predictions: they’d just predict a negative response in most cases, and the utility of the tool would disappear. We wanted agents to focus on this, while still believing it was not a performance parameter.)
This helped increase the scores another 5-10 points overall. It created a ‘pinch the curve from the left’ effect. (Remember – freebird had a ‘shift the curve to the right’ effect.)
Process Improvements
But it wasn’t all agents, right? What about process?
Absolutely. There must be process issues too! Fixing them would shift the curve to the right further.
But how to identify them? One approach was to go through the various processes with a critical eye and update them. Possible, but that approach had issues of its own.
We needed a better way to zoom in on the bigger process issues. We went back to CSAT. Everything else being equal, CSAT scores across issue types should be the same, right? And if they’re not, the only remaining difference is the process.
So we looked at CSAT scores by issue type.
We created our contribution v/s intensity table, (See the first anti-pattern here) and picked the “HH” issue types first for a deeper process review. This helped us reduce the processes that we would need to look at in detail.
In addition, we looked at the spread of agent CSAT scores by issue type. For simpler processes, variation would be lower because, well, there was little scope for an agent to go wrong. But if agent variation was high, it could mean the process had gaps, was incorrect, or was complex.
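Both lenses – the contribution v/s intensity table and the per-issue agent spread – can be sketched in pandas. Everything below is a toy illustration under assumed data: the issue-type names, median-based H/L bucketing, and standard deviation as the spread measure are my assumptions, not the actual analysis:

```python
import pandas as pd

# Hypothetical ticket data: issue type, handling agent, CSAT score (1-5).
df = pd.DataFrame({
    "issue_type": ["invoice", "invoice", "refund", "refund", "refund", "reschedule"],
    "agent_id":   ["a1", "a2", "a1", "a2", "a3", "a1"],
    "csat":       [5, 5, 1, 4, 2, 3],
})

by_issue = df.groupby("issue_type").agg(
    volume=("csat", "size"),     # contribution: share of tickets
    mean_csat=("csat", "mean"),  # intensity: how low the scores run
)
by_issue["contribution"] = by_issue["volume"] / by_issue["volume"].sum()

# Bucket each axis into High/Low around the median; "HH" issue types
# (high volume, high pain) get the deep process review first.
by_issue["contrib_bucket"] = (
    by_issue["contribution"] >= by_issue["contribution"].median()
).map({True: "H", False: "L"})
by_issue["intensity_bucket"] = (
    by_issue["mean_csat"] <= by_issue["mean_csat"].median()
).map({True: "H", False: "L"})

# Second lens: spread of per-agent CSAT within each issue type.
# High spread suggests the process leaves room for agents to go wrong.
spread = (
    df.groupby(["issue_type", "agent_id"])["csat"].mean()
      .groupby("issue_type").std()
)
print(by_issue.assign(agent_spread=spread))
```

In this toy data, "refund" lands in the HH cell and also shows the widest agent spread, so it would be first in line for a process review.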
Based on these two lists, we picked the processes that had the most impact.
To make our job even easier, we added a number of parameters from the upstream transaction. (For Ola, these were ride attributes; for MMT, hotel or flight booking attributes. At Paytm, for UPI transactions, it would be whether it was a merchant or P2P payment, whether the merchant was a Paytm merchant or not, whether they were offline or online, etc.)
By slicing CSAT by these additional attributes, we could pinpoint the areas with the biggest issues.
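The slicing itself is a plain group-by over the enriched tickets. A minimal sketch using the Paytm UPI example from the text – the column names and data are invented for illustration:

```python
import pandas as pd

# Hypothetical UPI-support tickets enriched with upstream transaction
# attributes (P2P payments have no merchant attribute, hence None).
tickets = pd.DataFrame({
    "txn_kind":    ["merchant", "merchant", "p2p", "p2p", "merchant", "p2p"],
    "merchant_on": ["offline", "online", None, None, "offline", None],
    "csat":        [2, 4, 5, 4, 1, 5],
})

# Slice CSAT by the upstream attributes; the lowest-mean slices are
# where the biggest issues hide. dropna=False keeps the P2P group.
sliced = (
    tickets.groupby(["txn_kind", "merchant_on"], dropna=False)["csat"]
           .agg(["mean", "size"])
)
print(sliced.sort_values("mean"))
```

Here the offline-merchant slice surfaces as the clear trouble spot, which is the kind of pinpointing the extra attributes enabled.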
For these issue types, we also started calling customers with negative scores. We wanted to hear directly from them why they gave a negative score. (Getting these calls right took a tremendous amount of effort as well – but that’s too detailed for this article.)
With the sharp focus this gave us, we were able to iteratively find and fix process issues rather quickly. In some cases, we simplified the process, so that agent errors could be minimized. This moved CSAT up another 15 points.
Time for a break :-)
In the concluding part of the CSAT story, I talk about the Path to CSAT 100, additional initiatives, and some specific examples. Part 3 is here.