Evaluating the impact of L&D investments
[Image: an entirely irrelevant picture of a King Gizzard and the Lizard Wizard concert I was at the other day]


This article, with a bit of an example, is also available on my blog here.

Meaningful evaluation of L&D is difficult.

It’s quite easy to get some happy sheet feedback that gives us some data on how people felt about a session, but trying to evaluate the impact we have on performance is a nightmare!

There are many reasons for this, one being that organisations don’t always value L&D as a mechanism that can genuinely shift the dial on performance. They see it as a peripheral “nice to have” that pleases the masses. A bit of a perk, a day out of the office, perhaps even a cheeky reason for a motivational-but-non-essential business trip.

Little wonder when most L&D isn’t linked to performance, and doesn’t have the data to show it makes much of a difference!

This links to the other big reason we struggle to measure the impact L&D has on performance: most organisations don’t measure performance anyway, not in any objective data-rich way. They may have a sense or a feeling or an instinct for how things are going at the human performance level, but they don’t have reliable numbers and graphs that mean much.

They don’t measure how good their managers are, for example, because it’s almost impossible to do so in any meaningful way.

Rarely do organisations try to turn effective leadership into a codified metric, because doing so would be such a formulaic fudge, so wide of the point, that it would almost certainly do more harm than good.

So often we’re left with happy sheets and smiley anecdotal feedback that keeps L&D firmly on the periphery as a “nice-to-have” that doesn’t make much difference.

So, if we can’t just piggyback existing performance measures (because there aren’t any), how can we get some proper data to show we can make a difference to performance?

Most evaluation models offer a structure of layers to measure, and have L&D professionals tied up in knots trying to get to “level 4” (impact on performance). Rarely does anyone get that far. There are many reasons for this, but one of the biggest is that the model is the wrong place to start (I wrote about this previously here).

Kirkpatrick, the ubiquitous evaluation model, is great, but if we only look at the surface of the model, all we see is a description of the evaluation problem; it doesn’t really help us find the solution.

So here’s a process approach that builds on Kirkpatrick’s foundation, adding some project management and Theory of Change thinking, to create a method (rather than a model) for evaluating impact (there’s nothing original in this, but it’s useful to articulate in one place):

1. What’s your problem?

There’s a reason the PRINCE2 project management methodology demands that all projects start with a problem statement. The act of framing it as a problem (even if it isn’t really a problem) forces a thinking process that helps us understand the point of the investment.

This is another way of saying “find your why,” as Simon Sinek would put it, or “begin with the end in mind” as Stephen Covey described it.

It is worth spending a fair chunk of time in this phase to really copper-bottom a solid problem statement that accurately captures something important for the organisation.

This might start as something like “we need to get better at selling product A” or “managers need to understand how to use the performance appraisal system”, but with a bit of work it can be knocked into better shape: “Sales colleagues lack knowledge of product A and so don’t push hard to sell it” or “Managers are unclear about the new processes and not confident in having more difficult performance conversations”. This is a great opportunity for us as L&D professionals to add value and shape strategic L&D investment, moving away from being order-takers for training courses.

It may not be a problem as such. It may be “we want a leadership development programme because we haven’t got one”, but the same principle applies: what’s the problem with not having one? What happens now? Maybe we get to something like “leaders don’t feel valued, their careers aren’t supported, and we score mediocre marks on leadership in our staff survey” – all good material to be pulled into a problem statement (or several).

The problem statement must be something the organisation recognises as a genuine problem and agrees that solving it would add value – this moves L&D from the fluffy nice-to-have periphery to being bang in the middle of how the organisation succeeds.

2. Where’s your evidence?

This step is the basis for rock-solid learning evaluation.

If we have a decent problem statement worked out, it must be based on proper evidence: how do they know that managers aren’t having those difficult performance conversations? How do they know it’s a lack of confidence, or skill, or knowledge, or whatever the problem is? How do they know leaders don’t feel supported in their careers?

Sometimes problem statements are based on vague feelings rather than solid facts. If so, this might be a load of biased rubbish, or it might be that someone is picking up on subtle clues – helping the business zero in on some evidence will help thrash this out.

There might be countable items like complaints, absences, or employee satisfaction surveys, or it might be the more nebulous feeling that motivation is lower, or that it all just feels a bit meh round here … in which case we have to help de-meh this nebulous evidence by running some sort of survey, 360 feedback campaign, or focus groups to turn that meh-ness feeling into something more substantial.

These are the measures we will use to evaluate impact: if we can shift the dial on these numbers, we can show that the L&D investment had an impact.
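
To make that concrete, here’s a minimal sketch of what capturing those Step 2 measures might look like as a small repeatable script. Everything in it is hypothetical: the file name, the survey items, and the 1–5 scale are my invented examples, not a standard instrument.

```python
# A minimal sketch of capturing a Step 2 baseline from survey data.
# Assumes a CSV with one row per respondent and 1-5 Likert-scale items;
# the file name and column names below are hypothetical examples.
import csv
import statistics

ITEMS = [
    "confidence_difficult_conversations",
    "career_support",
    "leadership_score",
]

def baseline(path: str) -> dict[str, float]:
    """Mean score per survey item - the numbers we aim to shift later."""
    scores: dict[str, list[float]] = {item: [] for item in ITEMS}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for item in ITEMS:
                if row.get(item):  # skip blank responses
                    scores[item].append(float(row[item]))
    return {item: round(statistics.mean(vals), 2)
            for item, vals in scores.items() if vals}

if __name__ == "__main__":
    print(baseline("staff_survey_2024.csv"))
```

The point of writing it down once is that the exact same measure can be re-run, unchanged, when we come back to evaluate.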

So now we have a solid problem statement(s) and robust evidence that fully explains the moving parts that contribute to that problem statement, and we haven’t designed a single bit of training yet!

3. Map the Change

Now that we have a clear problem statement to structure the L&D solution around, evidence to back it up, and agreement on the end state (i.e. where we aspire to shift the evidence measures to), we need to map the changes to show how it will all fit together.

The change map shows how we will move the dial on each measure.

This starts by looking at what is contributing to these low scores, high absences, or whatever the evidence is. If we can identify every factor, we can design interventions to act on those factors – and many of them may not have an L&D component. L&D can act on gaps in knowledge, skill, behaviour, confidence, motivation, habit and the like, but many causes sit outside those definitions: overwork, unreliable processes, poor management, poor equipment, lack of psychological safety, an unsuitable environment, feelings of exclusion, boredom and so on.

A holistic solution will need to identify as many of these forces as possible and consider how to address them.
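
One way to keep that holistic picture honest is to treat the change map as plain data, pairing each factor with an owner, an intervention, and the Step 2 measure it should move. This is a rough sketch of the idea only; every factor and intervention below is an invented example, not a template from any methodology.

```python
# Sketch of a change map held as plain data: each contributing factor
# linked to an owner, an intervention, and the Step 2 measure it should
# move. Every entry below is an invented example.
from dataclasses import dataclass

@dataclass
class ChangeMapEntry:
    factor: str          # what is contributing to the problem
    ld_component: bool   # can L&D act on it (knowledge, skill, confidence...)?
    intervention: str    # what will be done about it
    measure: str         # which Step 2 measure this should shift

change_map = [
    ChangeMapEntry("unclear appraisal process", True,
                   "manager workshops plus job aids", "survey: process clarity"),
    ChangeMapEntry("low confidence in difficult conversations", True,
                   "practice conversations with feedback", "survey: confidence score"),
    ChangeMapEntry("overloaded manager diaries", False,
                   "workload review with HR", "absence and overtime data"),
]

for entry in change_map:
    owner = "L&D" if entry.ld_component else "outside L&D"
    print(f"[{owner}] {entry.factor} -> {entry.intervention} "
          f"(tracks: {entry.measure})")
```

Laying it out like this makes the non-L&D factors impossible to ignore, and shows exactly which measure each intervention is accountable for.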

4. Design and Deliver

Just skating over this bit because I’m talking about evaluation in this post … so, then the solutions are magically designed and delivered, partnering with relevant HR people, managers etc. … and on to …

5. Evaluate

Only now can we meaningfully evaluate the impact of our L&D investment.

Have we moved the dial on the evidence we found in Step 2? (Note this is only moving the evidence, not necessarily the performance itself, so a further round of evaluation might be required to join it back to the problem statement – is the problem statement still a problem?).

To do so, we repeat the measures from Step 2 and look at the “value add” of the investment. Ideally we would also measure again several months later, to look at “sustained value add”, as most new skill development takes time.
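
As an illustration of the “value add” arithmetic (with invented numbers, and assuming the same respondents answered both times), the before/after comparison can be as simple as a paired difference with an effect size:

```python
# Sketch of the Step 5 "value add" check: did the Step 2 measures shift?
# Assumes paired before/after scores for the same respondents; all the
# numbers below are invented for illustration.
from math import sqrt
from statistics import mean, stdev

before = [2.8, 3.1, 2.5, 3.0, 2.7, 3.2, 2.9, 2.6]  # Step 2 baseline scores
after = [3.4, 3.6, 3.1, 3.3, 3.0, 3.8, 3.5, 3.2]   # same measure, post-intervention

diffs = [a - b for a, b in zip(after, before)]
value_add = mean(diffs)                  # average shift on the measure
effect_size = value_add / stdev(diffs)   # Cohen's d for paired samples
t_stat = effect_size * sqrt(len(diffs))  # paired t statistic

print(f"value add: {value_add:+.2f}, effect size: {effect_size:.2f}, t: {t_stat:.2f}")
```

Running the same comparison again several months later gives the “sustained value add” picture.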


Matthew Vickers

Director of Connections Reform, NESO

1y

John Tomlinson I came for the article and stayed for the King Gizzard and the Lizard Wizard. Downloaded some and I have been obsessed for the last week. It’s as if Mother Julian of Norwich and Teresa of Avila started a band and got taught to play guitar by My Bloody Valentine and put Neil Peart on drums. That’s good obviously. I think what they would say is there are colours which we can’t see and feelings which we can’t express and capture easily that resist reduction to evaluation…

A really sensible piece. All I would add is that L&D is usually just an ingredient not a recipe. A recipe for change or improvement may have several other ingredients, though sometimes these are left out and L&D is left holding the can. Or one high profile ingredient gets all the credit. Getting the recipe right is vital and you should be able to evaluate its success or otherwise, especially using the approach set out here. Isolating and evaluating the impact of a single ingredient, be that L&D or Leadership, is much tougher, not impossible and certainly worth a go but it is rarely easy to arrive at a convincing assessment.

Tomos James

We use actors and technology to make immersive simulations. We design and deliver blended learning, and assessments.

1y

This is a joy to read. Thanks for sharing your thoughts in such a clear way, and inspiring me to google King Gizzard and the Lizard Wizard. From our side, as a supplier, we wrestle with this all the time. I almost spat my coffee out with delight when I read your line: "This links to the other big reason we struggle to measure the impact L&D has on performance: most organisations don’t measure performance anyway". Finding and defining the problem is when we know we are on to something. And doing the research on the audience, asking the people the questions – it takes time, but it's time so well spent.
