Learning Incident Rate - measuring lessons learned performance on projects. Insights from safety and quality.
All good projects these days capture lessons learned. However, these so-called 'lessons learned' are often just 'lessons identified', and there is no evidence that they are actually converted into real learning that is applied back into the projects. In fact, there is plenty of evidence to suggest the opposite: projects are not learning from their mistakes, and this is a real productivity issue. The challenge is one of transparency. To measure it is to manage it, but how can we measure the performance of lessons learned on a project?
What can we learn from safety and quality in their use of leading and lagging indicators to measure performance? Accident Frequency Rate (AFR), Health & Safety Performance Indicator (HSPI), Quality Performance Indicator, Quality Incident Rate - why not a Learning Incident Rate and a Knowledge Management Performance Indicator (KMPI)?
But how do we define indicators of knowledge performance? Safety has a very clear measure (accidents), whilst knowledge is more closely aligned to quality in that there is no single measure that can be used to compare one organisation or project with another, and quality is subjective (ie, one person's version of high quality is not the same as another's).
One thing that safety and quality do have in common these days is a focus on preventative (leading) indicators (used effectively as an early warning system) as well as the traditional resultant (lagging) indicators such as measuring the number of incidents. So how can knowledge be measured using leading and lagging indicators?
Lagging Indicators
Resultant (lagging) indicators focus on measuring accidents and incidents. The Construction Quality Council define Quality Incidents as those incidents that are not acceptable on a project (demonstrating poor quality process, in the same way that an injury is not acceptable and indicates a failure of the project safety process). They stressed the importance of defining Quality Incidents at the start, suggesting the following categories as a starting point on a construction project:
- Major Rework – any rework above a set cost (for example, $2,500) to any party - owner, designer, contractor or trade contractors; or any rework that impacts the project's critical path by one day or more.
- Failed Tests – any test that the project team expected to pass but which subsequently failed.
- Missed Key Project Activity – any activity the project team committed to in their quality plan that they subsequently did not accomplish.
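As a rough illustration, the three categories above could be encoded as a simple classification rule. This is a minimal sketch only; the field names, data structure and thresholds are assumptions for illustration, not a published Construction Quality Council method.

```python
# Minimal sketch: flag an incident as a Quality Incident if it falls into any
# of the three categories above. Field names and thresholds are illustrative
# assumptions.
from dataclasses import dataclass

REWORK_COST_THRESHOLD = 2_500     # dollars, per the example threshold above
CRITICAL_PATH_THRESHOLD_DAYS = 1  # days of critical-path impact

@dataclass
class Incident:
    rework_cost: float             # cost of rework to any party (owner, designer, contractor)
    critical_path_delay_days: int  # days of impact on the project's critical path
    failed_expected_test: bool     # a test the team expected to pass has failed
    missed_key_activity: bool      # a committed quality-plan activity was not accomplished

def is_quality_incident(incident: Incident) -> bool:
    """True if the incident matches any of the three starting-point categories."""
    major_rework = (incident.rework_cost >= REWORK_COST_THRESHOLD
                    or incident.critical_path_delay_days >= CRITICAL_PATH_THRESHOLD_DAYS)
    return major_rework or incident.failed_expected_test or incident.missed_key_activity
```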
So what would an incident look like for KM? You could argue that what we're looking for is an instance in which a project has not learned a lesson and this has impacted project delivery. This information could be extrapolated from the quality incident data to create a Learning Incident. For example:
- Major rework due to repeated error - any rework in which the root cause has occurred previously elsewhere on the project or programme (indicating a lack of learning from experience)
This could be taken further to create a Learning Incident Rate (LIR), ie,
LIR = No. of learning incidents / No. of project man-hours
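To make the definition concrete, here is a minimal sketch of how learning incidents might be counted from incident records and turned into an LIR. The field names, the exact-match root-cause comparison and the per-100,000-hour scaling (borrowed from how accident frequency rates are often expressed) are all assumptions for illustration, not part of the proposal above.

```python
# Minimal sketch: count incidents whose root cause has already occurred
# (the second and subsequent occurrences) and normalise by man-hours.
from collections import Counter

def count_learning_incidents(root_causes: list[str]) -> int:
    """Incidents that repeat a root cause already seen on the project/programme."""
    counts = Counter(root_causes)
    return sum(n - 1 for n in counts.values() if n > 1)

def learning_incident_rate(root_causes: list[str],
                           project_man_hours: float,
                           per_hours: float = 100_000) -> float:
    """LIR = no. of learning incidents / no. of project man-hours,
    scaled to a per-100,000-hour figure for readability (assumed scaling)."""
    return count_learning_incidents(root_causes) / project_man_hours * per_hours

# Example: 3 of these 5 incidents repeat an earlier root cause,
# so LIR = 3 / 450,000 * 100,000 ≈ 0.67
lir = learning_incident_rate(
    ["scaffold design", "cable routing", "scaffold design",
     "scaffold design", "cable routing"],
    project_man_hours=450_000)
```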
Leading Indicators
Building on the Construction Quality Council list of leading indicators for quality, comparable knowledge indicators could be devised as shown in the table below:
Leading indicators should be developed in collaboration with the supply chain to ensure buy-in. Furthermore, to collate and assure these measures, a team of competent knowledge managers is required within the client team, as well as assigned knowledge champions within the supply chain, supported by training.
Knowledge Management Performance Indicator (KMPI)
Creating a KMPI made up of leading and lagging indicators would enable a baseline of project and/or supply chain performance to be established, and then provide the ability to monitor and drive improvement.
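One possible shape for such a KMPI, purely as a sketch: normalise each leading and lagging measure to a common 0-100 scale and take a weighted average. The indicator names and weights below are invented for illustration and are not the article's or Crossrail's method.

```python
# Illustrative only: roll normalised leading and lagging measures into one score.
def kmpi(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of indicator scores, each pre-normalised to 0-100
    (higher = better); lagging measures such as LIR would be inverted/scaled first."""
    total_weight = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total_weight

score = kmpi(
    scores={"lessons_reviewed_at_kick_off": 80,    # leading
            "knowledge_champion_assigned": 100,    # leading
            "learning_incident_rate_score": 60},   # lagging, pre-scaled
    weights={"lessons_reviewed_at_kick_off": 0.4,
             "knowledge_champion_assigned": 0.2,
             "learning_incident_rate_score": 0.4},
)  # -> 76.0
```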
Incorporating the KMPI into a Performance Assurance Framework like that used very successfully by Crossrail, and publishing a scorecard showing project/contractor performance in relation to each other, is a powerful motivator that can be used to drive performance improvement.
Figure 1: Performance Assurance Scoring Example from Crossrail Learning Legacy
As was the case with safety and quality, a culture change is required to get all projects understanding and reporting knowledge management performance effectively, and this can take anywhere between two and five years within an organisation.
For now, can we agree some knowledge management performance reporting principles so that we can start comparing project performance in this area and driving improvements?
Do you have a good example of how knowledge management performance is measured on your projects? Please share.
Comments welcome.
Leading the transformation of data-driven project delivery | Recognised in DataIQ100 for 2 years running.
7y
Thanks Karen for a very thought-provoking article. There is certainly merit in what you suggest and it's worthy of further development. One of the biggest problems we face with lessons identified is that they are grouped into a big bucket; they would benefit from segmentation when they are harvested, which makes the challenge easier to grapple with. As a profession we also need to get better at understanding which lessons were avoidable and which were a product of complexity and may not be repeated. The approach that you suggest certainly has merit, but we first need to distil the list of lessons into a rich seam of insights against which we can apply analytics. Incident rate is one measure that is certainly worth developing. Not sure how much it helps you to develop your thinking, but very happy to discuss further.
Thanks Karen for confirming what I fear is happening everywhere: people ticking the box and confirming they have "captured" the lessons learned, but not having discussed, socialised or improved performance based on what is now known. The biggest success we had embedding learning from doing was when we coached project board members to ask questions during project board meetings to ensure colleagues had explored and learned from previous experience. We measured this by capturing the resource hours saved from using previous insights. For example, one hour spent reviewing project lessons learned saved 20 hours of development time. It helped to create a more diligent approach to learning from projects.
Creating sustainable solutions and partnerships for improved safety performance
7y
I think Angel's point is quite important: you can't really identify repeat incidents without confirming the ability to capture the relevant root-cause information from each incident and integrate the learning into the "way things are done". Only then can a repeat incident be considered a significant metric; otherwise it reflects the same incident causes at a different time. This would then skew your indicator from learning incidents per time unit to simply the number of incidents per time unit.
Vice President Europe @ Vysus Group
7y
Number of repeat incidents / project hours is a non-stochastic metric, so it will be hard to use for comparison or prediction. Repeat incidents (for which lessons learned existed) depend on the actual effort put into capturing lessons (systematic) and the probability that an incident occurs for which a lesson learned already existed (also systematic). I believe the best approach for managing lessons and knowledge is to update procedures based on incidents and make sure everyone complies.
Global EHS Director
7y
Good perspective. You are absolutely right: what we call lessons learned are not always that; most of the time they are just lessons identified, and it is difficult to measure whether they are ultimately applied or at least taken into account. There is a lot of work to do here.