Assessing Project Performance Beyond CPI/SPI
Glen Alleman MSSM
Vietnam Veteran, Applying Systems Engineering Principles, Processes & Practices to Increase the Probability of Program Success for Complex Systems in Aerospace & Defense, Enterprise IT, and Process and Safety Industries
Informing Programmatic Performance (Earned Value) with Technical Performance to Produce a Credible Estimate at Completion - Glen B. Alleman (Niwot Ridge Consulting), Thomas J. Coonce (Institute for Defense Analysis), Rick A. Price (Lockheed Martin Space Systems)
EIA-748-C asks us to "objectively assess work performance-level accomplishments." Also, §3.8 of 748-C tells us, "Earned Value is a direct measurement of the quantity of work accomplished. Other processes control the quality and technical content of work performed."
By connecting technical and quality measures to Earned Value, CPI and SPI can do more to ensure the delivered products perform as needed. We need more than CPI and SPI to provide integrated cost, schedule, and technical performance visibility. We need measures of the increasing technical maturity of the project's deliverables in units of measure meaningful to the decision-makers. Those units include Effectiveness, Performance, all the …ilities, and risk reduction. [1]
Top-Level Process for Program Success
The elements in Figure 1 are the basis of a credible Performance Measurement Baseline (PMB). Table 1 describes the management processes needed to increase the probability of program success using these elements.
The Situation
We're working on ACAT1/ACAT2 programs for the Department of Defense using JCIDS (Joint Capabilities Integration and Development System), the governance paradigm defined in DODI 5000.02.
The JCIDS Capabilities-Based Planning paradigm tells us to define what done looks like, so we must start by measuring progress toward delivered Capabilities. Requirements elicitation is part of the process, but programs shouldn't start with requirements; they should start by assessing the Capability Gaps. While this approach may seem overly formal, defining what capabilities are needed for success is the basis of determining what done looks like. We'll use this definition so we recognize done when it arrives.
With the definition of done, we can define the processes for incrementally assessing our deliverables along the path to done. Earned Value Management is one tool for measuring progress against the plan. But as EIA-748-C says, Earned Value, left to itself, is a measure of quantity. We need measures of quality and the many other …ilities that describe the Effectiveness and Performance of the product to inform our Earned Value measures of the Estimate to Complete (ETC) and Estimate at Completion (EAC).
Forecasting unanticipated EAC growth is where Earned Value Management most contributes to increasing the probability of program success.
We need to integrate other measures with the Earned Value assessment process. We can start by recognizing that the Budgeted Cost of Work Performed (BCWP) measures the efficacy of our dollar.
Earned Value is the measure of "earned budget." Did we "earn" our planned budget? Did we get our money's worth? We only know the answer if we measure "Value" in units other than money. This can be the Effectiveness, the solution's performance, the products' technical performance, or the fulfillment of the mission. The maintainability, reliability, serviceability, and other …ilities are assessments of Value earned, not described by CPI and SPI from the simple calculation of BCWP.
The Imbalance
The program performance assessment cannot be credible without the connection between the technical and programmatic plans – cost and schedule. Of course, BCWP represents the Earned Value, but the Gold Card and other guidance don't explicitly state how to calculate this number. BCWP is a variable without directions for its creation. BCWS and ACWP have clear directions for assigning values to them, but not BCWP.
The common advice is to determine the percent done and multiply it by BCWS to produce BCWP. But percent done of what outcome? What units of measure are used to calculate the percent complete?
This paper aims to bridge the gap between assigning a value to BCWP and informing BCWP with tangible evidence of progress toward the plan in units of measure meaningful to the decision-makers.
Restoring The Balance
If we define the expected Effectiveness, Technical Performance, …ilities, or level of risk at a specific time in the program, then measure the actual values and compare them to the planned values, we can determine the physical percent complete of any deliverable in the WBS.
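As a minimal illustration of that idea, the sketch below computes a physical percent complete from a planned versus actual technical performance value, uses it to claim BCWP against BCWS, and derives CPI and SPI. The TPM values, budgets, and the "cap at planned progress" rule are hypothetical assumptions chosen only to show the mechanics, not numbers or rules from any real program.

```python
# Minimal sketch: informing BCWP with physical percent complete derived from
# planned vs. actual technical performance. All numbers are hypothetical.

def physical_percent_complete(planned, actual, target):
    """Fraction of the planned-to-date technical progress actually achieved.

    planned: TPM value the plan calls for at this status date
    actual:  TPM value measured or assessed at this status date
    target:  TPM value required at final delivery
    """
    planned_progress = planned / target
    actual_progress = actual / target
    # Cap at the planned progress: we can't "earn" more than was planned to date.
    return min(actual_progress, planned_progress) / planned_progress

# Hypothetical Work Package: the TPM was planned to reach 80% of the end-item
# requirement by this status date, but only 60% was demonstrated.
bcws = 100_000.0   # budgeted cost of work scheduled to date ($)
acwp = 105_000.0   # actual cost of work performed to date ($)
pct = physical_percent_complete(planned=0.80, actual=0.60, target=1.00)

bcwp = pct * bcws  # earned value informed by technical progress
cpi = bcwp / acwp  # cost performance index
spi = bcwp / bcws  # schedule performance index

print(f"Physical % complete: {pct:.0%}")                       # 75%
print(f"BCWP = {bcwp:,.0f}, CPI = {cpi:.2f}, SPI = {spi:.2f}")
```

The point of the sketch is not the arithmetic but that the percent complete is anchored to a measured technical parameter rather than an opinion.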
Using the steps in Table 1, the elements of Figure 1 can be developed. The connection between each component assures that the Technical Plan and the Programmatic Plan are integrated with units of technical performance used to inform BCWP.
The remainder of this newsletter describes the steps needed to establish a credible PMB containing event-based risk reduction activities and the schedule margin for irreducible risk, needed to ensure a high probability of cost, schedule, and technical performance success.
The Solution
Starting with the processes in Table 1 and using the elements in Figure 1 assures the Technical Plan and the Programmatic Plan are integrated from day one. The culture of integrating Systems Engineering (the source of the MOEs, MOPs, TPMs, the …ilities, and risk) with Earned Value Management must be deliberately guided; it is not a natural affinity. One discipline speaks in the technical terms of machinery, software systems, bent metal, and electronic devices. The other speaks in terms of dollars.
The natural solution to making this connection is through policy guidance and working examples. The remainder of this paper shows the step-by-step process of establishing a credible PMB.
Measures of Technical Progress Start in the IMP
The Program Events, Significant Accomplishments, and Accomplishment Criteria form the elements of the IMP. The program's Measures of Effectiveness (MOE) are derived from the JCIDS process and are reflected in the IMP's Significant Accomplishments (SA). The Measures of Performance (MOP) are derived from the MOEs and are reflected in the IMP's Accomplishment Criteria (AC).
These measures assess the physical percent complete of the deliverable used to inform Earned Value (BCWP) for reporting programmatic performance. With this Physical Percent Complete measure, the EVM indices can reflect the program's cost, schedule, and technical performance.
Technical Performance Measurement (TPM), defined in the industry standard EIA-632, involves estimating the future value of a critical technical performance parameter of the higher-level end product under development, based on current assessments of products lower in the system structure.

Continuous verification of actual versus anticipated achievement of selected technical parameters confirms progress and identifies variances that might jeopardize meeting a higher-level end-product requirement. Assessed values falling outside established tolerances indicate the need for management attention and corrective action.

A well-thought-out TPM program provides early warning of technical problems, supports assessments of the extent to which operational requirements will be met, and assesses the impacts of proposed changes to lower-level elements in the system hierarchy on system performance. [1] With this estimate, the programmatic performance can be informed in a way not available with CPI and SPI alone.
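One way to picture this tolerance check is the hedged sketch below: it compares assessed TPM values against a planned achievement profile with a tolerance band and flags the status dates that call for corrective action. The profile, the band width, and the assessed values are illustrative assumptions, not data from any program.

```python
# Sketch: flag TPM assessments that fall outside the planned tolerance band.
# Planned profile, tolerance, and assessed values are illustrative only.

planned_profile = {  # month -> planned value of the technical parameter
    3: 40.0, 6: 32.0, 9: 25.0, 12: 20.0,
}
tolerance = 3.0      # allowable deviation from plan before corrective action

assessed = {         # month -> assessed (measured or analyzed) value
    3: 41.0, 6: 35.5, 9: 24.0, 12: 26.0,
}

for month, planned in sorted(planned_profile.items()):
    actual = assessed[month]
    variance = actual - planned
    out_of_tolerance = abs(variance) > tolerance
    status = "CORRECTIVE ACTION" if out_of_tolerance else "on plan"
    print(f"Month {month:>2}: planned {planned:5.1f}, assessed {actual:5.1f}, "
          f"variance {variance:+5.1f} -> {status}")
```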
Risk Management Starts At The IMP
Throughout all product definition processes, technical and programmatic risk assessments are performed. These risks are placed in the Risk Register along with their uncertainties. Uncertainty comes in two forms: reducible (epistemic) and irreducible (aleatory). [2]
Structure of the IMP
Figure 2 shows the structure of the Integrated Master Plan (IMP), including the Program Events, Significant Accomplishments, and Accomplishment Criteria. This structure builds the assessment of the maturity of the deliverables, assuring that the technical performance of the deliverables meets the planned technical, performance, and Effectiveness needs to fulfill the capabilities needed for the mission or business goals.
With this structure, Earned Value Management measures can be connected to the Work Packages defined in the Integrated Master Schedule (IMS) to assess technical performance in ways not available with CPI and SPI. The Program Manager now has leading indicators of the program's success through the MOEs defined by the Significant Accomplishments and MOPs defined by the Accomplishment Criteria, each assessed for compliance with the plan at the Program Event.
The IMP describes this vertical connectivity, supported by the horizontal traceability of Work Packages in the Integrated Master Schedule.
During actual program execution, the IMP and IMS provide visibility to the program's performance for the government and the contractor. When integrated with the Earned Value Management System (EVMS), the IMP and IMS enable the program's management to:
IMP and IMS Relationship
The IMP is an event-based plan demonstrating the maturation of the development of the product as it progresses through a disciplined systems engineering process. The IMP events are not tied to calendar dates. Each event is completed when its supporting Accomplishments are completed and when this is evidenced by the satisfaction of the Criteria supporting each of those accomplishments. [1]
The IMP is usually placed on contract and becomes the program or project's baseline execution plan. Although detailed, the IMP is a top-level document compared to the IMS.
The IMS flows directly from the IMP and supplements it with additional levels of detail. The IMS incorporates all of the IMP Events, Accomplishments, and Criteria through Work Packages and detailed Tasks to support the IMP Criteria. This network of integrated tasks creates a calendar-based schedule that is the IMS defined to a level of detail for the day-to-day execution of the program.
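One way to make this relationship concrete is as a nested structure: an Event is complete only when all of its Accomplishments are complete, an Accomplishment only when all of its Criteria are satisfied, and each Criterion is supported by Work Packages in the IMS. The sketch below is a simplified, hypothetical rendering of that hierarchy, not a prescribed data model; the example Event, Accomplishment, and Criteria names are invented.

```python
# Sketch of the IMP -> IMS hierarchy: Event -> Accomplishment -> Criterion -> Work Packages.
# Names and structure are illustrative, not a prescribed data model.
from dataclasses import dataclass, field

@dataclass
class WorkPackage:
    name: str
    percent_complete: float = 0.0     # physical percent complete (0..1)

@dataclass
class Criterion:                      # IMP Accomplishment Criterion
    description: str
    work_packages: list = field(default_factory=list)

    def satisfied(self) -> bool:
        return all(wp.percent_complete >= 1.0 for wp in self.work_packages)

@dataclass
class Accomplishment:                 # IMP Significant Accomplishment
    description: str
    criteria: list = field(default_factory=list)

    def complete(self) -> bool:
        return all(c.satisfied() for c in self.criteria)

@dataclass
class Event:                          # IMP Program Event (not tied to a calendar date)
    name: str
    accomplishments: list = field(default_factory=list)

    def complete(self) -> bool:
        return all(a.complete() for a in self.accomplishments)

pdr = Event("Preliminary Design Review", [
    Accomplishment("Preliminary design satisfies the MOEs", [
        Criterion("Pointing-accuracy analysis meets its MOP",
                  [WorkPackage("Pointing analysis", 1.0)]),
        Criterion("Mass margin within planned TPM band",
                  [WorkPackage("Mass properties report", 0.6)]),
    ]),
])
print(f"{pdr.name} complete? {pdr.complete()}")  # False: one criterion not yet satisfied
```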
Assembling the parts needed to connect the dots
To inform Earned Value (BCWP), we need information about the program's performance beyond cost and schedule performance. How is the program progressing toward delivering the required capabilities?
Technical Measures Are Used to Inform Earned Value
Risk Reduction Measures Are Used to Inform Earned Value
All programs have risks. Assessing how these risks are being reduced measures the program's probability of success. If the risks are not being reduced as planned, to the level they are planned for, on the date they are planned to be reduced, then the program is accumulating a risk debt. This debt lowers the probability of success.
The naturally occurring cost, schedule, and technical performance uncertainties can be modeled in a Monte Carlo Simulation tool. The event-based uncertainties must be captured in the Risk Register, modeled for their impacts, given defined handling strategies, modeled for the effectiveness of those handling efforts, and assessed for the impacts of both the original risk and the residual risk on the program.
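A hedged sketch of what such a Risk Register entry might carry is shown below: a probability, an impact, a handling strategy, and the residual risk remaining after handling. The fields, the risk ID, and the dollar values are assumptions for illustration only.

```python
# Sketch of a Risk Register entry carrying original and residual risk.
# Fields and values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Risk:
    risk_id: str
    description: str
    probability: float           # probability of occurrence (0..1)
    impact_cost: float           # cost impact if the risk becomes an issue ($)
    handling: str                # e.g., work on baseline or management reserve
    residual_probability: float  # probability remaining after handling
    residual_impact_cost: float  # impact remaining after handling

    def exposure(self) -> float:
        return self.probability * self.impact_cost

    def residual_exposure(self) -> float:
        return self.residual_probability * self.residual_impact_cost

r = Risk("R-042", "Sensor vendor slips qualification test",
         probability=0.40, impact_cost=750_000,
         handling="On-baseline risk-reduction Work Package WP-1130",
         residual_probability=0.10, residual_impact_cost=250_000)

print(f"{r.risk_id}: exposure ${r.exposure():,.0f} -> residual ${r.residual_exposure():,.0f}")
```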
Managing the naturally occurring uncertainties in cost, schedule, and technical performance, as well as the event-based Uncertainty and the resulting risk, is a critical success factor in informing Earned Value.
Definitions of Uncertainty and Risk Needed Before Proceeding
In program performance management, risk drives the probability of program success. Risk is not the actual source of this probability of success. All risk comes from Uncertainty. [1] This paper uses specific definitions of risk and Uncertainty to assess the impact on program performance.
This distinction is essential for modeling a program's future performance of cost, schedule, and technical outcomes using the Risk Register and Monte Carlo Simulation tools. It is also essential to distinguish between the two types of Uncertainty that create risk to the program.
Separating these classes helps design assessment calculations and present results for the integrated program risk assessment.
Sources of Uncertainty
There are several sources of Uncertainty on all programs. [2]
All Risk Comes From Uncertainty
Uncertainty creates risk. Uncertainty comes in two types: reducible and irreducible. We need to handle both on the program. Both reducible and irreducible Uncertainty can be modeled with a statistical process that is either naturally occurring (irreducible) or event-based (reducible).
Uncertainty is present when probabilities cannot be quantified in a rigorous manner but can only be described as intervals on a probability distribution function. Risk is present when the uncertain outcome can be quantified in probabilities or a range of possible values. This distinction between statistics and probability is essential for modeling the performance of the cost, schedule, and technical outcomes of the program. Most importantly, it matters when defining the measures used to assess the program's actual performance against the planned performance as Uncertainty and the resulting risk impact that performance.
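To make the distinction concrete, here is a minimal sketch that models the irreducible, naturally occurring duration variation statistically and the reducible, event-based risk as a discrete occurrence with an impact. The distribution choice, the 25% probability, and the durations are invented assumptions for illustration.

```python
# Sketch: aleatory (irreducible) vs. epistemic (reducible) uncertainty in one task.
# Distribution choices and parameters are illustrative assumptions.
import random

random.seed(1)

def sample_task_duration() -> float:
    # Aleatory: the duration naturally varies; modeled here with a triangular
    # distribution (optimistic, pessimistic, most likely) in working days.
    duration = random.triangular(18.0, 30.0, 22.0)  # low, high, mode

    # Epistemic: a discrete risk event (e.g., a failed qualification test) that
    # occurs with some probability and, if it occurs, adds rework time.
    if random.random() < 0.25:   # probability of occurrence
        duration += 10.0         # impact in working days

    return duration

samples = sorted(sample_task_duration() for _ in range(10_000))
p50 = samples[len(samples) // 2]
p80 = samples[int(len(samples) * 0.80)]
print(f"P50 duration: {p50:.1f} days, P80 duration: {p80:.1f} days")
```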
Risks that result from irreducible and reducible uncertainties are recorded in the Risk Register. The Register starts with a short description of the risk, its probability of occurrence, and its impact. Each risk then needs a handling strategy, which can be represented in the IMS or in Management Reserve.
For reducible risks, the Performance Measurement Baseline can contain work to reduce the probability of the risk's occurrence or reduce its impact on the program should it become an issue. This work is on the baseline, has a budget, and is contained in a Work Package.
Risk can also be handled indirectly. A Management Reserve can be assigned for each risk outside the PMB. Should the risk occur, this Reserve can be used to perform work to address its impact.
Explicit risk retirement activities and Management Reserve are necessary to successfully complete any program. Management Reserve is a budget set aside for use by management to address the Known Unknowns in the program. Explicit risk retirement activities have budget on baseline for reduction of risk.
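The sketch below illustrates one way this split might be tallied: summing the budget for explicit, on-baseline risk-retirement Work Packages separately from the Management Reserve held for risks handled outside the PMB. The handling categories, risk IDs, and dollar amounts are assumptions.

```python
# Sketch: separating on-baseline risk-retirement budget from Management Reserve.
# Risk handling categories and dollar values are illustrative assumptions.

risks = [
    # (risk id, handling, budget or reserve set aside, $)
    ("R-042", "on_baseline",        120_000),  # risk-retirement Work Package in the PMB
    ("R-057", "on_baseline",         60_000),
    ("R-063", "management_reserve", 200_000),  # handled outside the PMB if it occurs
    ("R-071", "management_reserve",  90_000),
]

on_baseline = sum(amount for _, handling, amount in risks if handling == "on_baseline")
reserve = sum(amount for _, handling, amount in risks if handling == "management_reserve")

print(f"Risk-retirement budget on baseline: ${on_baseline:,}")
print(f"Management Reserve held for known unknowns: ${reserve:,}")
```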
Using Risk Reduction as a Measure of Program Performance
With reducible risk, specific work is performed on baseline to reduce the probability of occurrence of the risk. [1] With irreducible risk, margin is needed to protect the delivery date of key deliverables. A measure of the performance of the program is the Margin Burn-down Plan shown in Figure 5. If the actual risk reduction does not follow the risk burn-down plan, this is a leading indicator of future difficulties in the program's performance and of impacts on cost and schedule.
This risk reduction plan is traced to the Risk Register through the Risk ID and the Work Breakdown Structure number.
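A minimal sketch of reading such a burn-down plan as a leading indicator is shown below: compare planned versus actual remaining risk exposure at each status date and flag the accumulating "risk debt." The planned and actual values are hypothetical.

```python
# Sketch: planned vs. actual risk burn-down as a leading indicator.
# Planned and actual remaining-exposure values are hypothetical.

burn_down = [
    # (status month, planned remaining exposure $, actual remaining exposure $)
    (3,  900_000, 900_000),
    (6,  700_000, 780_000),
    (9,  450_000, 620_000),
    (12, 250_000, 540_000),
]

for month, planned, actual in burn_down:
    debt = actual - planned  # "risk debt": risk not retired when planned
    flag = "  <-- behind the burn-down plan" if debt > 0 else ""
    print(f"Month {month:>2}: planned ${planned:>9,}, actual ${actual:>9,}, "
          f"risk debt ${debt:>9,}{flag}")
```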
Development of Schedule Margin
Schedule Margin Burn-Down as a Measure of Program Performance
A similar reduction plan can be used for schedule margin.
Six Steps to Building a Credible Integrated Master Plan
The Integrated Master Plan (IMP) is the strategy for the successful delivery of outcomes of the program. Strategies are hypotheses that need to be tested. The IMP's Events and their Criteria and Accomplishments are the testing points for the hypothesis that the program is proceeding as planned. Both technically as planned and programmatically as planned.
1. Identify Program Events
2. Identify Significant Accomplishments
3. Identify Accomplishment Criteria
4. Identify Work Packages to Complete the Accomplishment Criteria
5. Sequence the Work Packages in a Logical Network
6. Adjust the Sequence of WPs, PPs, and SLPPs to Mitigate Reducible Risk
Eight Steps to Building the Integrated Master Schedule
A risk-informed PMB, represented by the resource-loaded Integrated Master Schedule (IMS), means that both irreducible (aleatory uncertainty) and reducible (epistemic uncertainty) risk mitigations are embedded in the IMS. Management Reserve (MR) must be in place outside the PMB to cover risks that are not being mitigated in the IMS.
While DCMA would object, this Management Reserve needs to be assigned to specific risks or classes of risk to ensure sufficient MR is available and its use is pre-defined.
2. Identify the Reducible Risks
3. Put These Risks in the Risk Register
4. Develop Risk Retirement Plans for the Reducible Risks
5. Assess the Irreducible Risks
6. Use Monte Carlo Simulation to Determine Schedule Margin
Schedule margin is developed using reference classes and Monte Carlo simulation of the deterministic schedule to produce the needed confidence level for the probabilistic schedule. The difference between the deterministic schedule, with its event-based risk reduction activities, and the probabilistic schedule at an acceptable level of confidence (80%, for example) is the schedule margin.
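A hedged sketch of that calculation is shown below: simulate the critical-path durations many times, take the finish at the chosen confidence level (80% here), and treat the difference from the deterministic finish as schedule margin. The three-point duration estimates are invented and stand in for reference-class data.

```python
# Sketch: schedule margin = P80 probabilistic finish minus deterministic finish.
# Three-point duration estimates are invented; real programs would draw them
# from reference classes and the risk-loaded IMS.
import random

random.seed(7)

# Critical-path tasks as (optimistic, most likely, pessimistic) durations in days.
tasks = [(20, 25, 40), (30, 35, 55), (15, 18, 30), (40, 45, 70)]

deterministic_finish = sum(most_likely for _, most_likely, _ in tasks)

def simulate_finish() -> float:
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

finishes = sorted(simulate_finish() for _ in range(20_000))
p80_finish = finishes[int(len(finishes) * 0.80)]

schedule_margin = p80_finish - deterministic_finish
print(f"Deterministic finish: {deterministic_finish} days")
print(f"P80 probabilistic finish: {p80_finish:.1f} days")
print(f"Schedule margin to protect the delivery: {schedule_margin:.1f} days")
```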
7. Assign Schedule Margin to Protect Key Deliverables
Using schedule margin to protect against schedule risk—created by the natural uncertainties in the work durations—is the appropriate method to enable on-time contractual end item deliveries. Where you place the margin and how you manage its use are up for debate.
Fundamentally, there are two approaches to consider:
Some believe holding it all at the end, forcing the earliest possible baseline, is the best means of ensuring on-time completion. This is effective in short-duration or production efforts where the primary schedule risk is not driven by technical complexity or development risk.

However, the same objective can be achieved when a disciplined process is followed to control and consume distributed margins. Paramount to this approach is accelerating downstream efforts when the margin is NOT consumed. Most schedule risk in a development program is encountered when program elements are integrated and tested. So even when using distributed margin, the margin is often kept at the end of the deliverable schedule to help protect against risk when all paths come together during final integration and test. This approach enables on-time end-item delivery with realistic cost and schedule baselines that provide accurate forecasts and decisions based on current status, remaining efforts, and related schedule risks.
Valid reasons for distributing schedule margin earlier in the schedule include:
However, extreme care and discipline must be used in deciding where distributed margin is placed, in providing rules on how it is consumed, and in deciding what to do when it is NOT used where it was initially allocated.
When a distributed schedule margin "task" is reached in the schedule and the risk it was planned for is realized, the margin is converted to the appropriate task(s) and included in the baseline. This realized risk represents a new scope to the affected CAMs and can be funded from the management reserve as appropriate.
If the risk is not realized and the schedule margin is not consumed, you must also be prepared to accelerate efforts where possible. The margin task is zeroed out, and the remaining margin is moved downstream in the form of increased forecast durations for subsequent margin tasks or as an increased total float. The determination of which method to use should be a risk-based decision. Succeeding tasks are accelerated and accurately depicted ahead of schedule (positive schedule variance) as tasks are completed ahead of the baseline.
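The sketch below illustrates that decision logic for one distributed margin task: convert it to baseline work if the risk it protects is realized, otherwise zero it out and push the unused margin downstream. The structure is a simplified assumption; the real handling lives in the scheduling tool and the program's margin-management process.

```python
# Sketch: consuming or releasing a distributed schedule-margin task.
# Simplified illustration of the decision logic described above.

def resolve_margin_task(margin_days: float, risk_realized: bool):
    """Return (baselined rework days, margin days moved downstream)."""
    if risk_realized:
        # Convert the margin into real task(s) in the baseline for the new scope.
        return margin_days, 0.0
    # Risk not realized: zero out the margin task and move the margin downstream
    # (as increased downstream margin-task duration or as increased total float).
    return 0.0, margin_days

for realized in (True, False):
    rework, downstream = resolve_margin_task(margin_days=12.0, risk_realized=realized)
    print(f"Risk realized = {realized}: baselined rework {rework} days, "
          f"margin moved downstream {downstream} days")
```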
The point is to plan schedule margin strategically, where it ultimately enables on-time end-item delivery. It's an important way planning brings value to our programs.
Summary of Key Points:
Footnotes
[1] The term …ilities refers to maintainability, reliability, serviceability, operability, and testability.
[2] https://dap.dau.mil/acquipedia/Pages/ArticleDetails.aspx?aid=7c1d9528-4a9e-4c3a-8f9e-6e0ff93b6ccb
[3] "On Numerical Representation of Aleatory and Epistemic Uncertainty," Hans Schj?r-Jacobsen, Proceedings of 9th International Conference of Structural Dynamics, EURODYN 2014.
[4] Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide, Version 0.9, August 15, 2009, OUSD (AT&L) Defense Systems, Systems Engineering, Enterprise Development (OUSD(AT&L) DS/SE/ED).
[5] "Epistemic Uncertainty in the Calculation of Margins," Laura Swiler, 50th AIAA/ASME/ASCE/AHS Structural Dynamics and Materials Conference, 4-7 May, 2009, Palm Springs, California.
[6] Towards a Contingency Theory of Enterprise Risk Management, Harvard Business School, Anette Mikes and Robert Kaplan, Working Paper 13-063 October 17, 2013.
[8] NASA Risk Informed Decision Making Handbook, NASA/SP-2010-576
As always, very insightful. Thank you. I've, on occasion, used both an RPI and a QPI. That is, I've done earned value on both risk and quality. For the risk, I've used FMEA on the top 100 project risks and set a plan, over time, to reduce the total FMEA score. For quality, since I've only done this for software projects, I measure the weighted planned anomalies against the weighted actual anomalies. I also use a form of Bayesian analysis to predict latent defects and measure planned latent defects against actual latent defects. Anyway, I do like earned value, and your views on risks.
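For readers curious what such an RPI might look like, here is a rough sketch along the lines described in the comment above: planned versus actual reduction in a total FMEA score, expressed as an index. The scores are invented, and the formulation is only one possible reading of the approach, not the commenter's actual calculation.

```python
# Rough sketch of a Risk Performance Index (RPI): earned risk reduction vs.
# planned risk reduction, using a total FMEA score. All numbers are invented.

baseline_fmea_total = 4_200  # sum of RPNs for the tracked risks at start
planned_fmea_total = 3_000   # total the plan called for at this status date
actual_fmea_total = 3_400    # total actually assessed at this status date

planned_reduction = baseline_fmea_total - planned_fmea_total  # 1,200 points
actual_reduction = baseline_fmea_total - actual_fmea_total    #   800 points

rpi = actual_reduction / planned_reduction  # < 1.0 means behind the burn-down plan
print(f"RPI = {rpi:.2f}")                   # 0.67: risk retired slower than planned
```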