KPI Performance and Outlier Management in a Call Center Environment Using Dynamic Performance Targets and Six Sigma Lean Approach
Introduction
In this article, we will explore how to manage KPI performance and outliers in a call center environment by assigning dynamic performance targets and using a Six Sigma Lean approach. The Key Performance Indicator (KPI) we will focus on is Average Handling Time (AHT).
This is a SQL project designed to handle the entire process automatically by defining parameters and creating views that will perform the tasks for us.
Step 1: Assigning a Performance Target
The first step involves assigning a performance target for a project, business line, or skill level using a 50% inlier approach, in other words, the median of the defined previous period's performance. Employees with less than 90 days (3 months) of tenure are excluded to keep the target assignment stable.
The process is walked through step by step in the rest of this article.
Step 2: Creating a Dynamic Goal
The second step is creating a dynamic goal for each employee, skill, line of business (LOB), or service breakdown. This step depends on your business model; you might have one target for all breakdowns or separate targets for each.
In our example, we will use different targets for each support group. An employee will have separate targets for all the queues/skills they support. The AHT results are stored per breakdown per day, resulting in a dataset with one row per employee, queue/skill they support, and date interval.
Dynamic AHT Goal
Since AHT may differ from skill to skill or support group to support group, analyzing AHT alone can be misleading, especially for outlier management. For instance, while one service might require 500 seconds of optimal handling time, another could require 1000 seconds due to the nature of the service. Therefore, we need another KPI that measures AHT performance against the AHT target. We will call this KPI "AHT Attainment," or simply "Attainment."
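To make the idea concrete, here is a tiny illustration with hypothetical numbers built on the 500-second and 1000-second goals above:
-- Hypothetical numbers: attainment puts services with very different AHT goals on one scale.
SELECT CAST(500 AS float) / 550 AS Attainment_ServiceA -- 550 s actual against a 500 s goal, roughly 91%
, CAST(1000 AS float) / 1050 AS Attainment_ServiceB; -- 1050 s actual against a 1000 s goal, roughly 95%
Even though the second agent's raw AHT is twice as high, their attainment is actually better, which is exactly the comparison raw AHT hides.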
When targets are set logically and each breakdown is considered carefully, analyzing the AHT Attainment KPI yields far more accurate results. This methodology can be applied to almost all call center KPIs; we chose AHT because it is easy to understand and access.
Step by Step Guidance:
Create a Dynamic AHT Goal for Performance Measurement: This approach ensures a stable and balanced target-setting process, enabling more effective KPI performance and outlier management in call centers.
Let's start by defining our fields:
Employee: the agent identifier.
SkillID: the queue/skill the calls were handled on.
Date, MonthName, WeekName: the result date and its month and week labels.
DateFrom, DateTo: the start and end of the reporting window a row belongs to.
Calls: the number of handled calls.
TotalHandlingTime: the total handling time, in seconds.
AHT: Average Handling Time, calculated as TotalHandlingTime / Calls.
TenureDays: the employee's tenure in days, used to exclude agents with less than 90 days on the job.
AHTGOAL and TotalHandlingTime_Goal: the dynamic goal columns we will build in the next steps.
Now that we are familiar with the columns and their purpose, let's start our exercise.
As I mentioned before, I will use the 50% inlier approach for this exercise, but if your project is more demanding and your goal is to hit a higher level of success, please adjust the code as desired. For example, for 75% inlier, you should adjust the code to PERCENTILE_CONT(0.75):
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY sum(TotalHandlingTime)/sum(Calls)) OVER (PARTITION BY SkillID)
By running the below script, you will get a dynamic target that takes the last two quarters' results as the base and generates a target for each group included in your analysis. In my case, I am working with skill level results, so the target assignments are set to the skill level.
SQL Query:
SELECT SkillID
, Employee
, SUM(TotalHandlingTime) * 1.0 / SUM(Calls) AS [AHT]   -- * 1.0 avoids integer division when both columns are integers
, PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY SUM(TotalHandlingTime) * 1.0 / SUM(Calls)) OVER (PARTITION BY SkillID) AS [Median_AHT Goal]   -- skill-level median = the 50% inlier target
FROM sample_table
WHERE [Date] >= DATEADD(QUARTER, DATEDIFF(QUARTER, 0, GETDATE()) - 2, 0)   -- base period: the two completed quarters before the current one
AND [Date] < DATEADD(QUARTER, DATEDIFF(QUARTER, 0, GETDATE()), 0)
AND TenureDays > 90   -- exclude employees with less than 90 days of tenure
GROUP BY Employee, SkillID   -- one aggregated row per employee and skill
ORDER BY SkillID, AHT, Employee;
Now we can wrap our code into a Common Table Expression (CTE) to get the target for each SkillID. As you can see in the image below, the query generated 187 as the Median_AHT Goal for Skill 128, with 14 agents distributed above and below that target.
Here is how your end table should look:
SQL Query:
WITH aht_target AS (
SELECT SkillID
, Employee
, SUM(TotalHandlingTime) * 1.0 / SUM(Calls) AS [AHT]
, PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY SUM(TotalHandlingTime) * 1.0 / SUM(Calls)) OVER (PARTITION BY SkillID) AS [Median_AHT Goal]
FROM sample_table
WHERE [Date] >= DATEADD(QUARTER, DATEDIFF(QUARTER, 0, GETDATE()) - 2, 0)
AND [Date] < DATEADD(QUARTER, DATEDIFF(QUARTER, 0, GETDATE()), 0)
AND TenureDays > 90
GROUP BY Employee, SkillID
)
SELECT DISTINCT SkillID
, [Median_AHT Goal]
FROM aht_target;
Now we know how to assign a performance target for each breakdown. Let's continue the exercise by checking our employees' AHT performance against the performance target (AHT Goal). Keep in mind that this step is required only if you do not have a predefined target for your business line; otherwise, you can simply use that target for your performance measure.
Creating Attainment KPI for AHT Performance Over Dynamic AHT Goal
Now we have a dynamic AHT Goal that updates itself every quarter based on the past two quarters' AHT results. It's time to measure our employees' performance against the AHT Goal.
To do that, first we will bring the AHT Goal and TotalHandlingTime_Goal columns into our analysis table. AHT Goal holds the goal for the related skill, while TotalHandlingTime_Goal is calculated as AHT Goal * Calls, the handling time the employee would have logged if every call had hit the goal exactly.
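Since the later queries assume these columns already exist in sample_table, here is only a minimal sketch of how the join could be built, reusing the aht_target CTE from above (the skill_goal name is my own placeholder):
WITH aht_target AS (
SELECT SkillID
, Employee
, SUM(TotalHandlingTime) * 1.0 / SUM(Calls) AS [AHT]
, PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY SUM(TotalHandlingTime) * 1.0 / SUM(Calls)) OVER (PARTITION BY SkillID) AS [Median_AHT Goal]
FROM sample_table
WHERE [Date] >= DATEADD(QUARTER, DATEDIFF(QUARTER, 0, GETDATE()) - 2, 0)
AND [Date] < DATEADD(QUARTER, DATEDIFF(QUARTER, 0, GETDATE()), 0)
AND TenureDays > 90
GROUP BY Employee, SkillID
)
, skill_goal AS (
SELECT DISTINCT SkillID, [Median_AHT Goal] AS AHTGoal
FROM aht_target
)
SELECT s.*
, g.AHTGoal AS [AHTGOAL] -- the goal for the related skill
, g.AHTGoal * s.Calls AS [TotalHandlingTime_Goal] -- AHT Goal * Calls
FROM sample_table AS s
LEFT JOIN skill_goal AS g ON g.SkillID = s.SkillID;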
It's time to check our employee performance. To do that, we will use AHTGoal / AHT. Since AHT is a "lower is better" KPI, we use AHT as the divisor. In scenarios where lower is not better, you should apply the opposite by using the goal as the divisor.
Now we have visibility of each employee and each skill they handle on a daily level. Although it's good to have visibility on different time intervals, in most cases, stakeholders are more focused on either monthly or weekly performance. Thus, we will stop using the daily interval and shift our focus to monthly results. To do that, we need to add Month and Week columns to our table. Since they are already available in my table, I will not go through how to create those columns for this exercise.
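If your table does not have them yet, a minimal sketch for deriving the two labels from [Date] could look like this (FORMAT and DATEPART are standard SQL Server functions; the label formats shown are just one option):
SELECT [Date]
, FORMAT([Date], 'yyyy-MM') AS [MonthName] -- e.g. 2024-05
, CONCAT(YEAR([Date]), '-W', FORMAT(DATEPART(ISO_WEEK, [Date]), '00')) AS [WeekName] -- e.g. 2024-W22
FROM sample_table;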
Let's create our new measurements and columns in our table:
Explaining by example, as you can see from the image below, Employee_1109 has separate rows and attainment results for each of the skills they support: Skill 114, Skill 34, and Skill 36.
Now that we have all the visibility about our employee's performance, we can work on implementing outlier management.
Here is the SQL query:
Select [MonthName]
,[WeekName]
,[Date]
,[DateFrom]
,[DateTo]
,[Employee]
,[SkillID]
,[Calls]
,[TotalHandlingTime]
,[AHT]
,[TotalHandlingTime_Goal]
,[AHTGOAL]
,[AHTGOAL]/[AHT] as [AHT Attainment]
,Sum([TotalHandlingTime_Goal]) over(Partition by [WeekName],[SkillID],[Employee]) / Sum([TotalHandlingTime]) over(Partition by [WeekName],[SkillID],[Employee]) as [AHT Attainment_WeeklyPerSkill] --- weekly attainment per skill
,Sum([TotalHandlingTime_Goal]) over(Partition by [WeekName],[Employee]) / Sum([TotalHandlingTime]) over(Partition by [WeekName],[Employee]) as [AHTATT] --- weekly attainment across all of the employee's skills
,Sum([TotalHandlingTime_Goal]) over(Partition by [MonthName],[Employee]) / Sum([TotalHandlingTime]) over(Partition by [MonthName],[Employee]) as [AHT Attainment_Month] --- monthly attainment across all of the employee's skills
FROM sample_table
order by [WeekName] desc,Employee ,SkillID desc
Time for Outlier Management
A common mistake, or a step that is often missed, is handling extreme data points before running an analysis. Working with a dataset that has extreme points at either end is not only misleading but can also give a very wrong impression of your success. For instance, an employee logging a 10-second AHT against a 300-second AHT goal is not a realistic scenario but an extreme data point; handling a contact in 10 seconds is simply not plausible. Thus, we will start our outlier management by excluding extreme data points from the analysis so we can draw more accurate insights from our measures.
Although there are several very successful and innovative methods available these days, I find the IQR method better suited for this situation since it is easier to understand and interpret. To implement the IQR method for filtering our extreme data points, let's first define the placeholders: Q1 is the 25th percentile of the attainment values in the month, Q3 is the 75th percentile, IQR = Q3 - Q1, and the bounds are LowerBound = Q1 - 1.5 * IQR and UpperBound = Q3 + 1.5 * IQR. Anything outside these bounds is treated as an extreme data point and filtered out.
After we filter extreme data points in our dataset, we will create our main statistical measures to apply the Six Sigma Lean approach.
Defining your USS and LSS (the upper and lower sigma limits) based on the VSF value gives you an upper hand in managing your process. If you already have a successful process but struggle to manage your outliers, most of your employees perform at roughly the same level, so you will end up with a very small VSF value, since it reflects the variance in your dataset; in that case you should extend your upper and lower sigma levels by multiplying the standard deviation by 2, because your issue is not variance but performance.
In a scenario where your VSF is greater than 1, the variance in your employees' performance is not stable and changes occur often, so you need to handle the variance first before you can focus on performance.
After we calculate our main statistical measures, we apply them to our sample table to identify which employees are Six Sigma outliers based on the defined parameters.
We can then extend the existing flow to raise an alert for extreme outliers that require close attention. At this stage, based on your preference, you can schedule close coaching sessions or decide whether those employees should be replaced.
High variance combined with a low mean indicates that your process is not sufficient, and you should look for remedies such as refresher training or process optimization.
As you can see from the image below, the DateFrom and DateTo values are shifted one month forward relative to the MonthName value, so the previous month's results set the current month's targets. This way, any changes in performance are picked up automatically.
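As a quick sanity check of that shift, DATEADD simply moves both boundary dates one month forward (hypothetical dates):
SELECT DATEADD(month, 1, CAST('2024-05-01' AS date)) AS DateFrom -- returns 2024-06-01
, DATEADD(month, 1, CAST('2024-05-31' AS date)) AS DateTo; -- returns 2024-06-30, since June has no 31st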
Here is the script so far:
With BaseData as (
Select [MonthName]
,[WeekName]
,[Date]
,[DateFrom]
,[DateTo]
,[Employee]
,[SkillID]
,[Calls]
,[TotalHandlingTime]
,[AHT]
,[TotalHandlingTime_Goal]
,[AHTGOAL]
,[AHTGOAL]/[AHT] as [AHT Attainment]
,Sum([TotalHandlingTime_Goal]) over(Partition by [WeekName],[SkillID],[Employee]) / Sum([TotalHandlingTime]) over(Partition by [WeekName],[SkillID],[Employee]) as [AHT Attainment_WeeklyPerSkill] --- weekly attainment per skill
,Sum([TotalHandlingTime_Goal]) over(Partition by [WeekName],[Employee]) / Sum([TotalHandlingTime]) over(Partition by [WeekName],[Employee]) as [AHTATT] --- weekly attainment across all of the employee's skills
,Sum([TotalHandlingTime_Goal]) over(Partition by [MonthName],[Employee]) / Sum([TotalHandlingTime]) over(Partition by [MonthName],[Employee]) as [AHT Attainment_Month] --- monthly attainment across all of the employee's skills
FROM sample_table
)
,IQRCalc AS (
SELECT *,
PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY [AHTATT]) OVER (PARTITION BY [MonthName]) AS Q1,
PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY [AHTATT]) OVER (PARTITION BY [MonthName]) AS Q3
FROM BaseData
)
,FilteredData AS (
SELECT *,
(Q3 - Q1) AS IQR,
(Q3 + 1.5 * (Q3 - Q1)) AS UpperBound,
(Q1 - 1.5 * (Q3 - Q1)) AS LowerBound
FROM IQRCalc
)
,OutlierFilteredData AS (
SELECT *
FROM FilteredData
WHERE AHTATT BETWEEN LowerBound AND UpperBound
)
SELECT DISTINCT [MonthName],
DATEADD(month, 1,DateFrom) as DateFrom,   -- shift one month forward: last month's results set this month's target window
DATEADD(month, 1,DateTo) as DateTo,
AVG(AHTATT) OVER (PARTITION BY [MonthName]) AS MeanATT,
STDEV(AHTATT) OVER (PARTITION BY [MonthName]) AS StdDevATT,
STDEV(AHTATT) OVER (PARTITION BY [MonthName])*6 as VSF,   -- VSF = 6 * standard deviation of attainment
Case when STDEV(AHTATT) OVER (PARTITION BY [MonthName])*6 >= 1 then
AVG(AHTATT) OVER (PARTITION BY [MonthName]) - STDEV(AHTATT) OVER (PARTITION BY [MonthName])*2
else AVG(AHTATT) OVER (PARTITION BY [MonthName]) - STDEV(AHTATT) OVER (PARTITION BY [MonthName])
end as LSS,
Case when STDEV(AHTATT) OVER (PARTITION BY [MonthName])*6 >= 1 then
AVG(AHTATT) OVER (PARTITION BY [MonthName]) + STDEV(AHTATT) OVER (PARTITION BY [MonthName])*2
else AVG(AHTATT) OVER (PARTITION BY [MonthName]) + STDEV(AHTATT) OVER (PARTITION BY [MonthName])
end as USS
FROM OutlierFilteredData
Outlier Management and New Target Assignment for Performance Improvement
Now it's time to identify Six Sigma process outliers based on the results we just generated. To do so, we simply need to join the statistical measures we just generated from OutlierFilteredData back to our BaseData table to see which employees fall outside the sigma limits and are therefore considered process outliers.
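As a sketch of that join, assuming the BaseData and StatCalc CTEs defined in the full production script at the end of this article, the outlier flag could look like this:
-- Assumes the BaseData and StatCalc CTEs from the full script below are in scope.
SELECT DISTINCT b.[MonthName]
, b.Employee
, b.[AHT Attainment_Month]
, s.LSS
, s.USS
, CASE WHEN b.[AHT Attainment_Month] < s.LSS THEN 'Below LSS - extreme outlier'
       WHEN b.[AHT Attainment_Month] > s.USS THEN 'Above USS'
       ELSE 'Within sigma limits' END AS OutlierFlag
FROM BaseData AS b
LEFT JOIN StatCalc AS s ON s.[MonthName] = b.[MonthName];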
Quick Definitions Before Moving On:
MeanATT (MeanAHTATT): the average weekly attainment (AHTATT) across all employees in the month, after extreme points are filtered out.
StdDevATT: the standard deviation of those attainment values.
VSF: 6 * StdDevATT; used to decide how wide the sigma limits should be.
LSS / USS: the lower and upper sigma limits, calculated as MeanATT minus/plus one or two standard deviations depending on the VSF.
If an employee has an AHTATT less than the LSS, it means that this employee is driving your performance negatively and needs to be addressed immediately.
Exception: in a scenario where the LSS is greater than the performance goal agreed with the client, you should ignore this rule or follow a different approach, since such an employee may still be on target by the client's criteria. It is not common, but it is worth keeping in mind.
In most cases, employees whose performance falls under the LSS are considered extreme outliers, and immediate action is needed, such as close coaching sessions, refresher training, or replacement. Since the LSS is derived from employee performance, it is not a constant but a dynamic limit that reflects the current performance situation of your population.
The Real Opportunity: employees with an AHTATT greater than the LSS but lower than the mean. This is the sweet spot: employees in this range are not considered process outliers, so focused effort on them yields quick rewards. In situations where you need fast improvement, shifting the focus to this group can provide exactly what you need.
Exception: since my model defines the performance goal from previous quarters' performance, it is uncommon to see MeanAHTATT above the target. If this scenario does not apply to your data model, ignore this part. If your MeanAHTATT is greater than the goal, focus on the employees whose performance falls between the LSS and the performance goal.
In this part, we will discuss employees whose AHTATT is between MeanAHTATT and the performance goal (since we are working with Attainment, the goal will always be 100%).
In a scenario where an employee's attainment is 95% but the goal is 100%, there is not much room for improvement: the employee is already performing better than average and is very close to hitting the target. Putting too much effort or pressure on this group could backfire, since the remaining gains are marginal while the added pressure can hurt quality and morale.
Above, you can see all the measures we were discussing.
The area to the left of the LSS (the purple dashed line), annotated in red, shows the employees performing worse than the LSS. Since the VSF is greater than 1 (VSF: 1.62), the LSS is set to -1 Sigma. There are 25 employees in this group. To improve your performance and move your mean closer to the target (100%), you need to address these 25 employees immediately; this group also receives the highest level of attention based on their previous month's AHT results.
The sweet spot is the area between the red dashed line and the purple dashed line (between the LSS and the mean). With close attention, you can get rapid returns from this group. In most scenarios, these are employees who have just completed the new-hire (NH) process and are still getting familiar with the environment, or tenured employees going through new process changes.
The area between the red and blue dashed lines represents the group of employees with the potential to hit the target; they can reach it more easily than the rest. Once you have handled the lower groups and moved your mean closer to the target (higher is better here), you can shift your focus to this group, since they will define your final performance.
Now, it's time to discuss how to handle each group under different conditions.
Dynamic Target Assignments for Outlier Management
Since in my model it's uncommon to have a MeanAHTATT greater than the AHTGoal, I implemented a simple approach for improvement. However, if your MeanAHTATT is greater than the AHTGoal, you need to adjust the conditions based on your dataset. Since these steps require tailor-made adjustments with market expertise, unfortunately, there is no one-size-fits-all solution.
Let me go through the important measures: the distance between MeanAHTATT and the AHT Attainment goal (100%), and the distance between the LSS and that same 100% goal.
These measures decide how you should handle the process. When your mean is close to the attainment target (100%), you are very close to hitting the target, and the distance between the LSS and 100% is also small. In that case you need to apply a specific formula: take the distance between MeanAHTATT and 100% and combine it with the standard deviation to adjust the targets of your lower group.
Example:
In this scenario, the worst-performing employee sits at 65% AHTATT. The reduction could be something like MeanAHTATT - [AHT Attainment_Month]: with the worst performer at 65%, their previous AHT result is reduced by up to 30%, and the reduction shrinks for every employee whose attainment is closer to MeanAHTATT.
First Group: where AHT Attainment_Month is less than or equal to the LSS.
In my dataset, I follow a similar approach with an upper bound of a 30% reduction: if the distance between MeanAHTATT and AHT Attainment_Month is greater than 30%, I cap it at 30%; otherwise, I use the distance itself. This rule applies to the employees falling under the LSS.
Second Group: where AHT Attainment_Month is between the LSS and MeanAHTATT.
Here the reduction is half the distance from MeanAHTATT to 100% plus the employee's own distance to MeanAHTATT, capped at 15%.
Final Group: where AHT Attainment_Month is between MeanAHTATT and 95%.
Here the reduction is simply the employee's remaining distance to 100%, capped at 5%. Employees already above 95% attainment keep their current target and are not assigned a new one. The exact logic is shown in the excerpt below.
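For reference, here is the [Target Reduction] expression from the full script at the end of the article, with comments mapping each branch to the three groups above:
Case when [AHT Attainment_Month] > 0.95 then null -- above 95%: no new target
when [AHT Attainment_Month] <= LSS -- first group: cap the reduction at 30%
     then (case when MeanATT - [AHT Attainment_Month] >= 0.3 then 0.3 else MeanATT - [AHT Attainment_Month] end)
when [AHT Attainment_Month] between LSS and MeanATT -- second group: cap at 15%
     then (case when ((1 - MeanATT) / 2) + (MeanATT - [AHT Attainment_Month]) >= 0.15 then 0.15 else ((1 - MeanATT) / 2) + (MeanATT - [AHT Attainment_Month]) end)
when [AHT Attainment_Month] between MeanATT and 0.95 -- final group: cap at 5%
     then (case when (1 - [AHT Attainment_Month]) >= 0.05 then 0.05 else (1 - [AHT Attainment_Month]) end)
else null end as [Target Reduction]
The matching [NewTarget] column then multiplies (1 - [Target Reduction]) by the employee's monthly AVGHandlingTime.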
By doing so, you aim for dynamic monthly AHT target assignments that will lift your mean AHTATT closer to the target.
Final Steps: Setting New Targets and Preparing for Production
As we come to the end of our exercise, we finally set the new targets and join them back onto sample_table. Our table is now ready for production.
Here is the final output of our table where we have all the variables.
And here is the full script ready for production:
With BaseData as (
Select [MonthName]
,[WeekName]
,[Date]
,[DateFrom]
,[DateTo]
,[Employee]
,[SkillID]
,[Calls]
,[TotalHandlingTime]
,[AHT]
,[TotalHandlingTime_Goal]
,[AHTGOAL]
,[AHTGOAL]/[AHT] as [AHT Attainment]
,Sum([TotalHandlingTime_Goal]) over(Partition by [WeekName],[SkillID],[Employee]) / Sum([TotalHandlingTime]) over(Partition by [WeekName],[SkillID],[Employee]) as [AHT Attainment_WeeklyPerSkill] --- weekly attainment per skill
,Sum([TotalHandlingTime_Goal]) over(Partition by [WeekName],[Employee]) / Sum([TotalHandlingTime]) over(Partition by [WeekName],[Employee]) as [AHTATT] --- weekly attainment across all of the employee's skills
,Sum([TotalHandlingTime_Goal]) over(Partition by [MonthName],[Employee]) / Sum([TotalHandlingTime]) over(Partition by [MonthName],[Employee]) as [AHT Attainment_Month] --- monthly attainment across all of the employee's skills
,Sum([TotalHandlingTime]) over(Partition by [MonthName],[Employee]) / Sum([Calls]) over(Partition by [MonthName],[Employee]) as [AVGHandlingTime] --- monthly AHT per employee, used as the base for the new target
FROM sample_table
)
,IQRCalc AS (
SELECT *,
PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY [AHTATT]) OVER (PARTITION BY [MonthName]) AS Q1,
PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY [AHTATT]) OVER (PARTITION BY [MonthName]) AS Q3
FROM BaseData
)
,FilteredData AS (
SELECT *,
(Q3 - Q1) AS IQR,
(Q3 + 1.5 * (Q3 - Q1)) AS UpperBound,
(Q1 - 1.5 * (Q3 - Q1)) AS LowerBound
FROM IQRCalc
)
,OutlierFilteredData AS (
SELECT *
FROM FilteredData
WHERE AHTATT BETWEEN LowerBound AND UpperBound
)
,StatCalc AS (
SELECT DISTINCT [MonthName],
DATEADD(month, 1,DateFrom) as DateFrom,   -- shift one month forward: last month's results set this month's target window
DATEADD(month, 1,DateTo) as DateTo,
AVG(AHTATT) OVER (PARTITION BY [MonthName]) AS MeanATT,
STDEV(AHTATT) OVER (PARTITION BY [MonthName]) AS StdDevATT,
STDEV(AHTATT) OVER (PARTITION BY [MonthName])*6 as VSF,   -- VSF = 6 * standard deviation of attainment
Case when STDEV(AHTATT) OVER (PARTITION BY [MonthName])*6 >= 1 then
AVG(AHTATT) OVER (PARTITION BY [MonthName]) - STDEV(AHTATT) OVER (PARTITION BY [MonthName])*2
else AVG(AHTATT) OVER (PARTITION BY [MonthName]) - STDEV(AHTATT) OVER (PARTITION BY [MonthName])
end as LSS,
Case when STDEV(AHTATT) OVER (PARTITION BY [MonthName])*6 >= 1 then
AVG(AHTATT) OVER (PARTITION BY [MonthName]) + STDEV(AHTATT) OVER (PARTITION BY [MonthName])*2
else AVG(AHTATT) OVER (PARTITION BY [MonthName]) + STDEV(AHTATT) OVER (PARTITION BY [MonthName])
end as USS
FROM OutlierFilteredData
)
, new_target as (
Select DISTINCT b.[MonthName] as [ResultMonth]
,s.DateFrom
,s.DateTo
,b.Employee
,Case when [AHT Attainment_Month] >0.95 then null   -- above 95% attainment: no new target
When [AHT Attainment_Month]<=LSS then (1-(case when MeanATT-[AHT Attainment_Month]>=0.3 then 0.3 else MeanATT-[AHT Attainment_Month] end ))*AVGHandlingTime   -- first group: reduction capped at 30%
When [AHT Attainment_Month] between LSS and MeanATT then (1-(case when ((1-MeanATT)/2)+(MeanATT-[AHT Attainment_Month]) >=0.15 then 0.15 else ((1-MeanATT)/2)+(MeanATT-[AHT Attainment_Month]) end))*AVGHandlingTime   -- second group: reduction capped at 15%
When [AHT Attainment_Month] between MeanATT and 0.95 then (1-(case when (1-[AHT Attainment_Month])>=0.05 then 0.05 else (1-[AHT Attainment_Month]) end))*AVGHandlingTime   -- final group: reduction capped at 5%
else null end as [NewTarget]
,Case when [AHT Attainment_Month] >0.95 then null
When [AHT Attainment_Month]<=LSS then (case when MeanATT-[AHT Attainment_Month]>=0.3 then 0.3 else MeanATT-[AHT Attainment_Month] end )
When [AHT Attainment_Month] between LSS and MeanATT then (case when ((1-MeanATT)/2)+(MeanATT-[AHT Attainment_Month]) >=0.15 then 0.15 else ((1-MeanATT)/2)+(MeanATT-[AHT Attainment_Month]) end)
When [AHT Attainment_Month] between MeanATT and 0.95 then (case when (1-[AHT Attainment_Month])>=0.05 then 0.05 else (1-[AHT Attainment_Month]) end)
else null end as [Target Reduction]
,DENSE_RANK() over(Partition by b.[MonthName] order by [AHT Attainment_Month]) as [Reverse_PerformanceRank]   -- 1 = lowest monthly attainment in the month
From BaseData as b
left join StatCalc as s on s.[MonthName] = b.[MonthName]
)
Select [Date]
,c.[Employee]
,SkillID
,Calls
,TotalHandlingTime
,AHT
,[TotalHandlingTime_Goal]
,AHTGoal
,Reverse_PerformanceRank
,[NewTarget]
,[Target Reduction]
From sample_table as c
left join new_target as n on n.Employee = c.Employee and c.[Date] between n.DateFrom and n.DateTo
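Finally, to keep the introduction's promise of wrapping the flow into views, a hypothetical production wrapper could look like the sketch below; the view name is my own placeholder, and the body should be the full script above, from "With BaseData as (" through the final SELECT:
CREATE OR ALTER VIEW dbo.vw_AHT_DynamicTargets
AS
WITH BaseData AS (SELECT * FROM sample_table) -- placeholder body: paste the real CTE chain from the script above
SELECT * FROM BaseData; -- placeholder: paste the real final SELECT from the script above
GO
Reports and scheduled jobs can then read the view directly, and the quarter- and month-based date logic keeps the targets refreshing on their own.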
#DataAnalytics #CallCenterManagement #KPI #PerformanceMetrics #SixSigma #LeanManagement #OutlierDetection #SQL #DataScience #BusinessIntelligence #AHT #CustomerService #DynamicTargets #ProcessImprovement #PerformanceAnalysis