The Biggest Gap With Training Data Is Not (Just) The Lack of It ... We're Focusing On The Wrong Type
Shane Baetz
Vice President | LX (Training, Leadership Development and Design & Innovation)
The name Edwin Link is not well known. Born in 1904, Edwin was an inventor and entrepreneur fascinated by aviation. At the age of 16 he took his first flying lessons, and by 1927 he had obtained the first Cessna airplane ever delivered, eking out a living by barnstorming, charter flying and giving lessons. (1)
His greatest invention was the flight simulator. Prior to its creation, aspiring pilots were brought into a classroom, taught the theories of flight, walked around airplanes, and eventually sent up to attempt to actually fly a plane. A small number pulled it off, but a greater number struggled to get the plane off the ground, and a lot of them died.
The concept of a flight simulator is pretty straightforward. Put someone in a "safe" environment, make it as "real" as you can, and let them practice what you want them to actually do in the real world. Simulators have taken on a greater level of importance as they've advanced over time, which I'll get to, but for now, we've set the stage.
I've been in learning & development roles for a number of years, starting out as a trainer, then building content, and eventually moving into leadership. One of my early observations about the training world was that there seemed to be a dearth of data compared to the operations side of the business I was supporting. In the operations world it almost seemed like there was too *much* data, while in training we didn't have much, and what we did have was under-utilized.
For those in L&D, the Kirkpatrick model is the primary tool used for measuring the effectiveness of training programs. (2)
To keep it simple, its four levels measure:
a) Reaction (Did I "like" the training or find it useful?)
b) Learning (Can I demonstrate retention?)
c) Behaviour (Am I performing/acting differently?)
d) Results (Do I see results?)
In my past work with a variety of companies and clients, the largest gap I saw was that where training *was* being measured (which wasn't always the case), most of the focus was on Levels 1 and 2 and not enough on Level 3 and, in particular, Level 4. After all, we need to measure results to see if the training worked, right?
But over the years I've come to realize there is a significant gap inherent here: all of the above are lag measures. Did they like the training at the end? Did they pass the final test? Are they acting differently in the "real world"? Did the results (if we are even looking at them) show a direct and quantifiable impact?
In the words of a former manager (and mentor), "no one likes doing post-mortems when the body is dead and cold on the ground". So what's missing?
We're not looking at or using any sort of lead measures. No predictive data.
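To make that lag/lead split concrete, here is a minimal sketch (my illustration, not anything from an actual L&D system) of auditing a training-metrics catalogue for its lead-versus-lag mix; the metric names, labels and catalogue are hypothetical examples.

```python
# A hypothetical audit of a training-metrics catalogue: tag each metric as a
# lag measure (after the fact) or a lead measure (predictive), then report the mix.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    kirkpatrick_level: int  # 1-4, per the model above
    kind: str               # "lead" (predictive) or "lag" (after the fact)

# Example catalogue -- these entries are illustrative, not real data.
catalogue = [
    Metric("Post-course satisfaction survey", 1, "lag"),
    Metric("Final knowledge check score", 2, "lag"),
    Metric("90-day manager behaviour observation", 3, "lag"),
    Metric("Quarterly business results review", 4, "lag"),
    Metric("Simulation error rate during practice", 2, "lead"),
    Metric("Time-to-resolution trend across practice scenarios", 3, "lead"),
]

lead_share = sum(m.kind == "lead" for m in catalogue) / len(catalogue)
print(f"Lead measures: {lead_share:.0%} of the catalogue")  # 33% in this toy example
```

Even a rough tally like this makes the imbalance visible, which is the point of question 2 further down.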
Going back to flight simulators: with their advances, today they not only serve their primary purpose (preparing people to fly an actual airplane) but also provide reams of predictive data via simulations. What would a pilot do if an engine fails? How quickly can they fly Route A versus a peer under similar circumstances? How quickly can they identify an issue and address it? And if we repeat those scenarios, do they get better? Faster? Catch more intricacies?
All in a safe environment, and all as lead measures. And with the ability to intervene prior to the "real world": more training. More challenging scenarios. Stack-ranking. And yes, if needed, a decision that someone should *not* be a pilot, as much as they may want to be, because the lead/predictive data and trending indicate they will not rise to the occasion when needed.
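As an illustration of what those simulator-derived lead measures could look like, here's a sketch (with made-up numbers, not real simulator output) of turning repeated runs of the same failure scenario into a simple per-trainee trend:

```python
# Hypothetical example: does a trainee's time to diagnose and respond to an
# engine-failure scenario trend downward across repeated simulator runs?
from statistics import linear_regression  # requires Python 3.10+

# Seconds to diagnose and respond on five repeats of the same scenario (made-up data).
runs_by_trainee = {
    "trainee_a": [92, 75, 61, 58, 49],   # clearly improving
    "trainee_b": [88, 90, 87, 91, 89],   # flat: a flag to intervene early
}

for trainee, times in runs_by_trainee.items():
    attempts = list(range(1, len(times) + 1))
    slope, _ = linear_regression(attempts, times)
    # Threshold is arbitrary here: "improving" means shaving >1 second per attempt.
    verdict = "improving" if slope < -1 else "needs intervention"
    print(f"{trainee}: {slope:+.1f} s per attempt -> {verdict}")
```

The value isn't the arithmetic; it's that this signal exists *before* anyone is in a real cockpit, so there is still time to act on it.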
So for those in L&D, a few thoughts headed into 2023:
1) How are you measuring training effectiveness vs the desired state?
2) What % of your training-related data is a lag vs lead measure?
3) What do *your* flight simulators look like?
By not generating and leveraging sufficient lead data, we go into the future blind, like a pilot in a storm with malfunctioning instruments. The ability to take predictive data and course-correct while individuals are still in a learning environment is critical to assessing their trajectory and their ability to navigate a real-world environment, and hopefully, over time, to limiting and eliminating any post-mortems (for pilots or otherwise).
Always eager to get other takes and perspectives, so please share them in the Comments!
(1) https://en.wikipedia.org/wiki/Edwin_Albert_Link
(2) https://educationaltechnology.net/kirkpatrick-model-four-levels-learning-evaluation/