HR-Tech and experimenting through A/B testing
Indira Wadhawan
HR100under40 || Strategic HR Partner, Talent and Culture, Organization Development, DE&I, Growth
The HR fraternity is going gung-ho on tech-enabled processes and vouching for AI to transform Employee Experience through agility and responsiveness. I was recently reading an interesting piece, “Avoid the Pitfalls of A/B Testing”, and was intrigued by how we can use its inferences when we as HR introduce new features or policies through technology (i.e. internal HR portals).
By way of introduction, A/B tests are digital experiments that check whether A (the control, or current feature) is inferior to B (the experiment, or proposed improvement) by objectively measuring the impact of a change through user reactions. These tests are released to a select group of users rather than to everyone, to contain any unintended adverse effects. Decisions about what, when, how, and how much resource to allocate then revolve around the test results.
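The article does not prescribe a particular statistic, but for intuition, here is a minimal sketch of one common way such a comparison is scored: a two-proportion z-test on a completion rate. The employee counts and task-completion numbers below are invented for illustration, and only the Python standard library is used:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: does variant B's rate differ from A's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: 1,000 employees per group, counts who completed a task
p_a, p_b, z, p = two_proportion_z(success_a=620, n_a=1000, success_b=668, n_b=1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

If the p-value is small, the observed lift in B is unlikely to be chance; the pitfalls below are about all the ways this tidy summary can still mislead.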
While the authors discuss these pitfalls in a product context, let’s look at each of them through potential cases where HR tech is used:
1) The One-Size-Fits-All Pitfall: A/B tests are usually evaluated on the impact on the average (mean) user, while in reality what clicks for one may turn off another. When core metrics are dominated by a small number of superusers (think of the Pareto principle: 20% of superusers driving 80% of the metric), the average can be misleading.
An example to begin with could be the interventions designed after an employee satisfaction survey. These interventions may look good from an average-score perspective, but they need to reflect value for different business units, departments, or employee categories.
Let’s take another scenario: in-field, onsite, or remote employees versus in-house desk employees, where the former access the HR self-service (ESS) ticket system more frequently than the latter to raise grievances or queries. The ease or difficulty of any new feature, additional data field, or extra approval step will vary between the two groups. The responsiveness of your HRIS may also differ over in-house broadband versus an individual data connection. So while doing user acceptance testing, the change team must account for group-specific behaviour and derive metrics accordingly (see the sketch below).
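To make the averaging problem concrete, here is a minimal sketch with invented satisfaction scores and ticket volumes (the segments, column names, and numbers are all hypothetical), assuming pandas is available:

```python
import pandas as pd

# Hypothetical post-change satisfaction scores (1-5) by employee segment.
df = pd.DataFrame({
    "segment": ["field"] * 4 + ["desk"] * 16,    # field staff: the 20% superusers...
    "tickets": [60, 55, 70, 65] + [3] * 16,      # ...who raise most of the tickets
    "score":   [2.1, 2.4, 2.0, 2.3] + [4.2] * 16,
})

# The unweighted company-wide mean looks acceptable...
print("overall mean:", round(df["score"].mean(), 2))             # 3.8

# ...but a ticket-weighted mean, or a per-segment view, tells the real story.
weighted = (df["score"] * df["tickets"]).sum() / df["tickets"].sum()
print("ticket-weighted mean:", round(weighted, 2))               # ~2.5
print(df.groupby("segment")["score"].mean().round(2))            # field 2.2, desk 4.2
```

The heavy users who actually live in the system are unhappy, yet the headline average hides them entirely.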
2) The Siloed-User Pitfall: A/B tests are sometimes conducted assuming there is no interaction between users of the two groups, but the possibility of contamination between the groups should not be ignored. Consider designing the user interface of your Performance Management System, where an employee is an appraisee for themselves, possibly an evaluating or matrix manager for some employees, and a reviewer for others. A network-aware A/B test will capture impacts across workflow, notifications, the email algorithm, the master data linked to the algorithm, and more; one common remedy is sketched below.
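One standard way to limit such contamination (my addition, not the article’s) is cluster randomisation: assign whole reporting lines, not individuals, to a variant, so a manager and their reports see the same interface. A sketch with hypothetical team identifiers:

```python
import hashlib

def assign_variant(team_id: str, salt: str = "pms-ui-pilot") -> str:
    """Cluster-randomise by team so connected users share one variant."""
    digest = hashlib.sha256(f"{salt}:{team_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

# Everyone in a reporting line lands in the same bucket, deterministically.
for team in ["sales-west", "finance-ops", "engineering-platform"]:
    print(team, "->", assign_variant(team))
```

Hashing on the team rather than the person keeps assignment stable across sessions and avoids a manager evaluating in one interface while their reports respond in another.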
3) The Short-Term-Focus Pitfall: Novelty bias lays the ground for this pitfall in two different ways. First, a new feature might show heightened user engagement that decreases over time; second, a long-term change may draw only a slow, gradual reaction or acceptance from users and may not show immediate results. So the question remains: how long is too long in the testing phase? Until user behaviour stabilises! Consider rolling out a flexi work-hour policy with defined core hours in an organisation that is a captive subsidiary of a US/UK time-zone parent. In the testing phase, you might delink deviations from any penalising effect on employees. Initial time-sheet compliance may be checked as frequently as weekly or every third day, but to actually understand the policy’s implications for absenteeism, productivity versus billable hours, and time versus resource, one has to look at a longer interval (a simple stabilisation check is sketched below).
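As a rough illustration (mine, with invented weekly engagement numbers), one can compare an early window against a later one, or keep extending the test until week-over-week change flattens out:

```python
# Hypothetical weekly logins per user after a feature launch.
weekly_engagement = [9.1, 8.4, 7.2, 6.1, 5.6, 5.4, 5.3, 5.3]

early = sum(weekly_engagement[:2]) / 2      # novelty window
late = sum(weekly_engagement[-2:]) / 2      # settled behaviour

print(f"early weeks: {early:.1f}, later weeks: {late:.1f}")
if early > 1.2 * late:
    print("Novelty effect likely; judge the feature on the later weeks.")

# One crude stabilisation rule: stop extending once weekly change < 5%.
for prev, curr in zip(weekly_engagement, weekly_engagement[1:]):
    if abs(curr - prev) / prev < 0.05:
        print(f"behaviour roughly stable around {curr} logins/week")
        break
```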
Rarely would we have tried a time-series (switchback) experiment: intermittently switching from the new algorithm back to the old and back again, at random, over a duration of at least two to three weeks, to assess impacts and scenarios we might not have considered. One may look at experimenting with this on an R&R portal, a Peer Appreciation Platform, or an LMS (see the sketch below).
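A minimal sketch of how such a switchback schedule might be drawn up; the two-week horizon, the daily switching unit, and the start date are assumptions for illustration:

```python
import random
from datetime import date, timedelta

random.seed(42)  # reproducible schedule for the change team

# Randomly assign each day in a two-week window to the old or new algorithm.
start = date(2024, 1, 8)   # hypothetical start date
schedule = {
    start + timedelta(days=d): random.choice(["old", "new"])
    for d in range(14)
}

for day, variant in schedule.items():
    print(day.isoformat(), "->", variant)
# Then compare the metric (e.g. appreciation posts/day) across 'old' vs 'new' days.
```

Because every user experiences both variants, switchbacks also sidestep some of the contamination issues from the siloed-user pitfall above, at the cost of possible carry-over effects between days.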
While A/B testing lets us re-examine the merits of the control group, HR tech in practice tends toward a short-term focus, i.e. rushing the rollout under pressure from the stakeholders who govern resource allocation.
While HR tech does its work, humane HR gears up for consistent, clear communication and advisory & support as the core of Employee Experience.
Tavistock Certified Consulting Practitioner, Organization Transformation, Capability Building Expert, Speaker, Research Scholar
This is basically a Positivist Syndrome the #HR fraternity is suffering from. HR craves metrics because #business understands metrics. When HR folks offer metrics, business listens to them; metrics give them eligibility to sit in meetings. Positivism is an approach and system that recognises only that which can be scientifically verified or is capable of logical or mathematical proof, and therefore rejects what cannot be reached through objective study of material reality (metaphysics). Human dynamics and processes can never be objectively proved. #AI or no AI, unless HR folks first make themselves humanly aware and learn to diagnose human dynamics at the individual, group, and #systems level, they will continue to ‘work for others’ as compliance, with no original contribution of their own.