Some interesting observations from a cross-employer covid survey written about in Forbes
This Forbes article, How Your Company Can Drive Positive Culture Change During A Global Pandemic, came across my Facebook feed this morning. I noticed a long-time friend, Steven Huang, was quoted in it.
I suspect we are all getting kind of Covid-advised out. I certainly am. In any case, Steven Huang has from time to time been a great advisor to me on business ventures. Steven is a fantastic human being, and he always does good work. So I read the article and I'm glad I did. I think it was very good.
What I especially like are the insights his team was able to produce by looking at the correlations and cross-tabs between items and concepts in the survey. These helped explain why scores differed between teams, and what we or others could work on if we want to move a score. I think some of the insights extend beyond the immediate moment.
The manner in which they produced the insights also illustrates a broader principle. The face-value reading of any single item (percent favorable responses) is interesting, but it is hard to make sense of on its own and has limited long-term value. What carries more value is the relationship between items, units, and demographics. The range of cross-tabs and correlations the survey items make possible is worth mapping out carefully before the survey effort begins, because that is where 90% of the real learning will be. The profound insights are found in the relationships between survey items and other data.

The problem is that these correlations cannot be produced after the fact if the right mix of questions and other data wasn't collected from the start. It is a bit like running: if you work really hard you can improve your sprint time a little, but if you want to break world records in the 40-yard dash, you can't put in what God (or nature) left out. All living things have different makeups, intended purposes, strengths, and weaknesses. So do different datasets. You can't change the design of a physical object created in the past over which you have no control, but you can control the complete makeup of datasets you are creating now or in the future. Knowing what you are gaining and what you are giving up in your design is where the magic of creating really is.
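To make the face-value-versus-relationship distinction concrete, here is a minimal sketch in Python. The item names, scores, and teams are all made up for illustration; the point is that the same handful of responses supports a flat "percent favorable" number, an item-to-item correlation, and a team cross-tab — but the team cut only exists because "team" was captured at collection time.

```python
# Hypothetical survey data: each record is (team, manager_support, wellbeing)
# on a 1-5 Likert scale. All values here are invented for illustration.
from collections import defaultdict

responses = [
    ("sales", 5, 4), ("sales", 4, 4), ("sales", 2, 2), ("sales", 5, 5),
    ("eng",   4, 5), ("eng",   3, 3), ("eng",   5, 4), ("eng",   2, 3),
]

def pct_favorable(scores, threshold=4):
    """Face-value view: share of responses at or above the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def pearson(xs, ys):
    """Relationship view: Pearson correlation between two items."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

support = [r[1] for r in responses]
wellbeing = [r[2] for r in responses]

# The single-item score is flat and hard to act on...
print(f"favorable (support):   {pct_favorable(support):.0%}")
print(f"favorable (wellbeing): {pct_favorable(wellbeing):.0%}")
# ...while the relationship between items suggests where to look next.
print(f"support/wellbeing correlation: {pearson(support, wellbeing):.2f}")

# Cross-tab by team: this cut is only possible because "team" was
# recorded when the data was collected; it can't be added after the fact.
by_team = defaultdict(list)
for team, s, _ in responses:
    by_team[team].append(s)
for team, scores in sorted(by_team.items()):
    print(f"{team}: favorable = {pct_favorable(scores):.0%}")
```

Nothing here is sophisticated statistically; the leverage all came from deciding, before fielding the survey, which items and demographics to capture.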
The point is that when most HR people and executives think about how analysts should produce insight, they generally get the timing of the greatest contribution backwards. Determining what data to collect and how to store it is where the magic is. The magic is not in the choice of statistical procedure, visualization, or distribution. Those things are still important, and moving them from grossly wrong to correct can make a huge difference, but that is not the same as the contribution of designing a dataset that remarkably expands the range, relevance, and "action-ability" of the insights that can be produced. That is where the impact is found. A decent analyst will follow principles of basic data hygiene, doing the best they can with what they have; a great analyst will shape the entire effort from start to finish. Put the results side-by-side and the difference is nothing less than stunning.
Maybe you don't understand anything I just said. Or maybe you do. Methodology rumination aside, the insights shared in the narrative of this article are easy to understand and useful to any HR professional or executive. Go in peace.
Overall, I don’t think many of the HR folks I know and have worked with are far off from each other, if at all, in the actions they are taking or the advice they are providing to executives. That said, the CultureAmp research indicates there may be some organizational or team cultural differences that cap the ceiling for some teams. Global pandemics just bring these to the surface and interact with them. Over the longer term this could be worth scrutinizing carefully.
[To be clear, by sharing this I’m absolutely not advocating for a company or product. I have used CultureAmp, Glint, and half a dozen other survey partners. They each have pros and cons. While I like the people at CultureAmp, and they have some difficult-to-replicate advantages, there are a lot of great survey partners. I’m sharing the article as an example of how a sharp team squeezed a little more insight out of what was otherwise a pretty basic survey. I think much more can be done in that manner — much more than they did.]