Workforce Analytics For Decision Makers


We have all read at least one “____ For Dummies” book. When faced with a daunting technological challenge (who ever thought getting a new phone would be threatening?) we frantically seek basic survival information. We are not dumb… just temporarily a little short on the knowledge and skill required to confront whatever the challenge is.

The literature currently suggests that anyone not doing analytics is hopelessly incapable of making business decisions. Increasingly they must also have befriended AI tools to compensate for their inadequacy when making decisions. Since I have a PhD and several professional certifications, I am cautious about exposing my total ignorance of a topic outside my field. When I wanted to get information on "Random Forests," a statistical technique suggested by a data scientist at a conference, I found a definition. It read “Random Forests are an ensemble learning method for classification, regression and other tasks that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes or mean/average prediction of the individual trees.” Despite understanding statistics and being capable of using regression analysis, I found myself no better off than before reading that definition. So I assumed I would never need to do a Random Forest analysis and checked my emails, looking for a topic I knew something about. Part of wisdom is knowing what to obsess about. Retreating from seemingly insurmountable challenges is my way of coping, although I do often attempt to create an obscuring fog by doing things that seem to deal with the issues.

What Is Necessary?

One of the challenges associated with using analytics is deciding what is needed. If I am evaluating the effectiveness of a recruiting and selection process, I certainly would like to know how it has worked. A simple correlation test that measures the relationship between the criteria used in selection and both the retention and the performance of hires would provide evidence useful in evaluating the selection model. If I found that certain criteria used for selection correlated highly with the desired outcomes, that would be evidence that using those criteria was helping the organization to make good decisions. I could do a single-factor test with each of the criteria used to see if each correlated with positive outcomes. Or I could enter all the criteria into a multiple-factor regression model to see which of them contributed the most. These are simple statistical tests that can be done with readily available software.

Being able to predict the likely performance of a candidate for employment would be valuable. There is a lot of research on that topic. Intelligence (G) has been shown to correlate with performance… but the correlation is not that strong. Conscientiousness has been shown to correlate with performance… but that correlation is not strong either, and is a little lower than for G. Yet by using both factors in a multiple-factor test the correlation becomes much stronger. This is understandable, since some very smart people don’t achieve much because they are not conscientious, and some moderately intelligent people are successful because of their persistence. A more nuanced prediction can be made by considering the nature of the role the candidate would play. If a high threshold of native intelligence is required (e.g., Research Scientist), this may make intelligence more impactful than conscientiousness in selection, although both are of course desirable. If the job requires a high level of persistence but a less exceptional level of intelligence, then conscientiousness becomes relatively more important as a predictor.
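The two-predictor model described above can be fit with ordinary least squares. The sketch below solves the normal equations directly in plain Python so nothing beyond the standard library is needed; the G, conscientiousness, and performance values are invented (performance is deliberately built as an exact combination of the two predictors, so the recovered weights are easy to verify).

```python
# Sketch: performance ~ b0 + b1*G + b2*C fit by ordinary least squares.
# Data are invented for illustration only.

def fit_two_predictor(xs1, xs2, ys):
    """OLS for y ~ b0 + b1*x1 + b2*x2, via the 3x3 normal equations
    solved with Gaussian elimination (partial pivoting)."""
    n = len(ys)
    # Design-matrix columns: intercept, x1, x2.
    cols = [[1.0] * n, list(xs1), list(xs2)]
    # Build X^T X and X^T y.
    a = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    b = [sum(c * y for c, y in zip(cols[i], ys)) for i in range(3)]
    # Forward elimination.
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(a[r][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, 3):
            f = a[r][k] / a[k][k]
            for c in range(k, 3):
                a[r][c] -= f * a[k][c]
            b[r] -= f * b[k]
    # Back substitution.
    coef = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        coef[k] = (b[k] - sum(a[k][j] * coef[j] for j in range(k + 1, 3))) / a[k][k]
    return coef  # [intercept, weight on x1, weight on x2]

g = [100, 115, 98, 120, 105, 110, 95, 125]          # hypothetical G scores
consc = [3.0, 4.2, 3.8, 3.5, 4.5, 4.0, 2.8, 4.8]    # hypothetical conscientiousness
# Noiseless for clarity: performance depends on both predictors.
perf = [0.02 * gi + 0.5 * ci for gi, ci in zip(g, consc)]

b0, b1, b2 = fit_two_predictor(g, consc, perf)
print(f"intercept={b0:.3f}, weight on G={b1:.3f}, weight on C={b2:.3f}")
```

In practice a statistical package would also report significance and R², but the point stands: the combined model assigns each predictor its relative weight.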

In addition to discovering the best prediction, one would want to develop a selection process that was appropriate. For example, if a candidate will have to work closely with a variety of current employees, it may be prudent to involve those potential peers in the screening process, even if it takes longer and increases the investment of people’s time. Southwest Airlines made that a signature process at its inception.

If there is an unacceptable loss of seemingly qualified candidates during the process, it is possible to evaluate the success/failure rate at each stage. By focusing on where candidates were lost, corrective action can be taken. A simple model like the one below would require measurements at each of the stages, and the results could provide insights into what might be done to lessen the losses.

[Chart: staged selection process model showing candidate counts at each stage]
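A stage-by-stage funnel analysis like the one described above takes only a few lines. The stage names and counts below are hypothetical, invented for illustration.

```python
# Sketch of a recruiting funnel analysis; stage names and counts
# are hypothetical, not taken from any real data.
stages = [
    ("Applications received", 400),
    ("Passed screening", 180),
    ("Completed interviews", 90),
    ("Offers extended", 30),
    ("Offers accepted", 18),
]

# Pass-through rate from each stage to the next shows where
# candidates are being lost.
for (name, count), (_, nxt) in zip(stages, stages[1:]):
    rate = nxt / count
    print(f"{name:>24} -> next stage: {rate:.0%} ({count - nxt} lost)")
```

A stage with an unusually low pass-through rate is where corrective attention should be focused first.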

Business knowledge is helpful in defining the issues one wishes to address using analytics. It allows one to decide how sophisticated the tools must be to get at the information needed. If the organization has just implemented a new compensation plan and wants to know the level of acceptance by employees, it can conduct an employee attitude survey. Perhaps just asking “what is your reaction to the plan?” may be enough. A multiple-point scale that ranges from “hate it” to “love it” could be used as the response scale. When compiling the results, it is common to report a simple average (mean) to reflect employee views. But if a more nuanced indication is needed, such as knowing how many hated it, how many loved it, and how many were less extreme in their views, the data could be arrayed in a frequency distribution (a histogram), such as the one below. Then further analysis could be done to see who fell into each category.

[Chart: frequency distribution (histogram) of employee responses to the new compensation plan]
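The mean-versus-histogram contrast is easy to demonstrate. In the sketch below the responses are invented, on an assumed 1 ("hate it") to 5 ("love it") scale, and chosen so the mean comes out exactly neutral while the distribution is sharply split.

```python
# Sketch: the same survey data summarized two ways.
# Responses are invented, on a 1 ("hate it") to 5 ("love it") scale.
from collections import Counter
from statistics import mean

responses = [1, 1, 2, 1, 5, 5, 4, 5, 1, 5, 2, 4, 1, 5, 5, 1]

avg = mean(responses)      # single number: looks "neutral"
freq = Counter(responses)  # frequency distribution: reveals the split

print(f"mean = {avg:.2f}")
for score in sorted(freq):
    print(f"{score}: {'#' * freq[score]}")
```

The mean here is exactly 3.0, suggesting indifference, while the histogram shows two opposed camps: precisely the bimodal pattern the article describes.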

It is apparent that much intelligence is lost by only reporting the single average in this case. There appears to be a “bi-modal” distribution of responses, which requires further examination. If the negative responses (the ones to the left of the average) are more often from older/longer-service employees, the cause of that pattern should be evaluated. Perhaps a service award program was replaced with an incentive plan that only rewards those whose current performance is outstanding. Longer-service employees would lose the guarantee of an award, trading it for an uncertain outcome, so service-related opinions might not be a surprise.

At this point technology is of less use and the knowledge of an HR practitioner needs to take over. The reasons for the pattern need to be identified, to determine if they might indicate a problem with the new program. Yet, for further examination to be done, there must be a way to determine who was positive and who was negative. Otherwise, the data only identifies that there are polar views. If anonymity increases response rates, it is still possible to build measures into the responses that preclude identifying individuals (e.g., using age ranges and seniority ranges). Anticipation of how the data will be used and what information is required needs to guide the survey design. This will be facilitated by having a practitioner involved with the design, since a data scientist might not consider the need to do more in-depth analysis or how knowing the characteristics of respondents might add value.
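Attaching a coarse band rather than a name to each response, as suggested above, still supports the follow-up analysis. The sketch below uses invented (band, rating) pairs; the band labels and ratings are hypothetical.

```python
# Sketch: anonymous responses tagged with a seniority band (not a name)
# can still be grouped for follow-up analysis. Data are hypothetical.
from collections import defaultdict
from statistics import mean

# (seniority band, rating on a 1-5 scale)
survey = [
    ("0-5 yrs", 4), ("0-5 yrs", 5), ("0-5 yrs", 4), ("0-5 yrs", 5),
    ("6-15 yrs", 3), ("6-15 yrs", 2),
    ("16+ yrs", 1), ("16+ yrs", 2), ("16+ yrs", 1), ("16+ yrs", 1),
]

by_band = defaultdict(list)
for band, rating in survey:
    by_band[band].append(rating)

for band, ratings in by_band.items():
    print(f"{band:>8}: mean rating {mean(ratings):.2f} (n={len(ratings)})")
```

A gap between bands like the one in this toy data (long-service employees far less positive) is exactly the kind of pattern that would send the practitioner looking for a program-design cause.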

The planning phase of analytics is where practitioner knowledge is needed to guide the selection and use of analytical tools. One of the greatest dangers with looking for correlations in databases is that many will likely be found. Yet some may be meaningless, and others may provide bad guidance. When designing a research study, good practice demands that a hypothesis (or several) be created prior to the study and that the study be structured to test it. Otherwise, a fishing expedition seeking any correlations may find them, even though they are meaningless. Identifying business issues and framing hypotheses is the responsibility of the practitioner, since those specializing in analytics may not possess the necessary knowledge.

It is important to make the distinction between correlation and causation. To establish causation:

1. There needs to be a high correlation,

2. Changes in A (cause) must precede changes in B (effect), and

3. There must not be other feasible causes.

This is the reason studies done at a single point in time are limited in what can be concluded from their results. Without longitudinal measurements the “must precede” requirement cannot be tested. Another issue is the direction of causation. There have been claims, based on correlation analysis, that high pay for executives causes improved organizational performance. But it could also be argued that financially successful organizations are able to afford higher pay, which reverses the direction of causation. For decades researchers were unable to find strong correlations between employee satisfaction and productivity. When researchers reversed the premise, they found that improved performance was correlated with increased satisfaction. Too many claims in the practitioner literature fall into the trap of ignoring causal direction.

One of the common uses of analytics is determining whether an intervention produces positive results. A group incentive plan is usually installed with the intention of motivating collaborative behavior and focusing people on unit performance. Analysis should be aimed at measuring whether the desired results occurred. The chart below tracks performance for the periods after the installation of the plan.

[Chart: performance by period after plan installation, rising steadily]

Although performance increases steadily after the plan installation, the chart tells us nothing about whether the plan made a difference. Realizing this, data was gathered on performance before the plan installation; it is shown in the chart below.

[Chart: performance for periods both before and after plan installation, showing the upward trend already in place before the plan]

This additional intelligence indicates that the plan may have had no effect, since performance was already on the upswing. On the other hand, if the pattern is like the one in the display below, a totally different conclusion would be reached.

[Chart: performance flat before plan installation and rising after it]

The plan designer could now be confident that the plan at least contributed to the improvement, even if other things might also have had a positive impact. It would be helpful to establish how much of the difference was attributable to installing the plan. If the business climate changed significantly, that might have been at least part of the reason for improved performance. By using multiple regression analysis the relative contribution of each factor can be determined, which provides greater confidence when measuring effectiveness. But the other possible causes must be identified so their impact can be tested.

How Refined Does Measurement Need To Be?

One of the scales used to measure pain that I have always thought simplistic is “on a scale from 1 to 10, how intense is your pain?” If my response were a 10, I would probably be physically unable to answer the question, and that should be apparent to the diagnostician. And I might be too embarrassed to admit it was a 1 if I was still seeking relief. This scale is an extension of a satisfaction index developed decades ago by a researcher, which showed that relatively simple scales can suffice.

[Image: the simple satisfaction scale referenced above]

A more “sophisticated” scale is the Likert scale shown below.

[Image: an example Likert response scale]

The type of scale used should be determined by the need for precision and the ability to truly differentiate between levels. Overly precise scales frustrate respondents because they cannot differentiate between adjacent values. I once consulted with a state government that had a job family for General Clerk with nine levels (I-IX). When I used the job descriptions to fill in a matrix that defined the nature of the work (variety, difficulty, complexity), degree of autonomy, impact, and required qualifications, it became evident that it was impossible to differentiate across some of the levels. We revised the definitions and ended up with three levels. In a national research laboratory I encountered a Director who ranked over 1,700 subordinates based on individual performance. Another Director used a seven-level performance appraisal scale for his people. Pretending that precise individual judgments can be made does not make it so. These scales were akin to measuring a cloud with a micrometer.

But there is also a danger in using an overly simple measure. Everyone tracks turnover. But is 29% turnover in the IT function a problem? Good news? Painful but tolerable? One of the requirements for making that determination is deciding whether turnover is impacting performance. McDonald’s may survive 200% annual turnover rates among hourly employees, but if restaurant managers turn over at that rate there will probably be alarm bells going off. Is zero turnover good news or bad news? And what type of turnover? The chart below breaks the turnover statistics down by type. Turnover that is determined to be dysfunctional will be a concern, while little sleep is lost when the organization terminates someone for sustained poor performance. This analysis shows that of the 29% total turnover, 10% is dysfunctional. That is the number that warrants management attention and a decision about what, if anything, should be done.

[Chart: turnover broken down by type, showing 10% of the 29% total to be dysfunctional]
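The decomposition in the chart can be reproduced with simple arithmetic. The headcount and the split of leavers below are hypothetical, chosen so the totals echo the 29% and 10% figures in the text; the category labels are assumed, not the author's.

```python
# Sketch: breaking a headline turnover number into types.
# Headcount and leaver counts are hypothetical; totals are chosen
# to echo the 29% total / 10% dysfunctional example in the text.
headcount = 200
leavers = {
    "dysfunctional (regretted, avoidable)": 20,  # 10% of headcount
    "functional (poor performers exited)": 14,
    "unavoidable (retirement, relocation)": 24,
}

total_rate = sum(leavers.values()) / headcount
print(f"total turnover: {total_rate:.0%}")
for kind, n in leavers.items():
    print(f"  {kind}: {n / headcount:.0%}")
```

Only the dysfunctional slice drives action; the headline 29% by itself cannot tell management whether anything needs to be done.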

Once the strength of correlation between things is determined, it must be decided whether it is strong enough to fit the requirements. When measuring the correlation between relative internal values (measured using job evaluation) and external values (measured using market data), I often find the correlation to be .70 to .85. That could be viewed as high, but cynics might point out that the fit is poor for 15-20% of the jobs. Another measurement is the organization’s current pay posture relative to market. If an organization is paying “around market” levels (within 5% either way), management can view that as sufficiently close. Random error makes precise fixes unrealistic. Telling executive management that the organization is 2.6% below market suggests the measure is more precise than the approximations that go into survey reporting. An experienced practitioner knows that matching the organization’s jobs to the benchmark jobs in a survey requires individual judgment, and perfect equivalence is rare. So “close enough” can result in concluding the organization is “within the range of competitive market levels.”
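Two quick calculations back up the points above: squaring a correlation shows how much variance it actually explains, and comparing a reported market gap to an assumed noise band shows whether the gap is meaningful. The ±5% band below is the assumption stated in the text; everything else is arithmetic.

```python
# Sketch: what a .70-.85 correlation leaves unexplained, and why a
# 2.6%-below-market reading overstates precision. Figures echo the text.
for r in (0.70, 0.85):
    print(f"r = {r:.2f} -> r^2 = {r * r:.2f} "
          f"({1 - r * r:.0%} of variance unexplained)")

# If survey matching alone introduces roughly +/-5% noise (the band
# assumed in the text), a 2.6% gap sits inside the noise.
gap, noise_band = 0.026, 0.05
within_noise = abs(gap) <= noise_band
print(f"2.6% below market, noise band +/-5%: within noise = {within_noise}")
```

Even at r = .85, roughly a quarter of the variation is unexplained, which is why "within the range of competitive market levels" is a more defensible conclusion than a two-decimal gap.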

Conclusion

I have made peace with the reality that I do not know how to use the Random Forest technique and may never invest the effort required to make myself competent. Practitioners should forgive themselves for not being able to talk shop with Decision Scientists on all topics. On the other hand, they must be able to define business issues and formulate hypotheses for the quants to test, leaving them to practice their craft. They must provide parameters that make things like the level of precision needed clear. It is often useful to conduct a dialogue with those doing the analysis that enables them to understand how it is relevant to real-world issues and why those issues are important to address. Running numbers as a form of recreation might work for some, but it does not hold up under scrutiny when it must be justified based on value to the organization.

Practitioners need to increase their understanding of things like what research tells us and what types of analytics should be built into key business processes. Working together with Decision Scientists increases the probability that analytics will produce what is needed. If that means practitioners need to take some statistics and quantitative methods course work the investment should be made. Additionally, Decision Scientists must not be turned loose to use all their best tools if there is not a laser-like focus on the outcomes needed. Investing in helping the quants understand the business will contribute to outcomes that are useful.


About the Author: Robert Greene, PhD, is CEO at Reward $ystems, Inc., a Consulting Principal at Pontifex, and a faculty member for DePaul University in their MSHR and MBA programs. Greene speaks and teaches globally on human resource management. His consulting practice is focused on helping organizations succeed through people. Greene has written four books and hundreds of articles about human resource management throughout his career.
