Assessing Project Performance Beyond CPI/SPI

Informing Programmatic Performance (Earned Value) with Technical Performance to Produce a Credible Estimate at Completion - Glen B. Alleman (Niwot Ridge Consulting), Thomas J. Coonce (Institute for Defense Analyses), Rick A. Price (Lockheed Martin Space Systems)

EIA-748-C asks us to "objectively assess work performance-level accomplishments." Section 3.8 of EIA-748-C also tells us, "Earned Value is a direct measurement of the quantity of work accomplished. Other processes control the quality and technical content of work performed."

Connecting technical and quality measures to Earned Value lets CPI and SPI do more to ensure the delivered products perform as needed. CPI and SPI alone cannot provide integrated cost, schedule, and technical performance visibility. We also need measures of the increasing technical maturity of the project's deliverables, in units of measure meaningful to the decision-makers. Those units include Effectiveness, Performance, the …ilities, and risk reduction.[1]

Top-Level Process for Program Success

The elements in Figure 1 are the basis of a credible Performance Measurement Baseline (PMB). Table 1 describes the management processes needed to increase the probability of program success using these elements.

Table 1

The Situation

We're working on ACAT I/ACAT II programs for the Department of Defense using JCIDS (the Joint Capabilities Integration and Development System), the governance paradigm defined in DoDI 5000.02.

The JCIDS Capabilities-Based Planning paradigm tells us to define what done looks like, so we must start by measuring progress toward delivered Capabilities. Requirements elicitation is part of the process, but programs should start not with requirements but with an assessment of the Capability Gaps. While this approach may seem overly formal, defining what capabilities are needed for success is the basis of determining what done looks like. With this definition, we'll recognize done when it arrives.

With the definition of done, we can define the processes for incrementally assessing our deliverables along the path to done. Earned Value Management is one tool for measuring progress against the plan. But as EIA-748-C states, Earned Value, left to itself, is a measure of quantity. We need measures of quality and the other …ilities that describe the Effectiveness and performance of the product to inform our Earned Value measures of the Estimate to Complete (ETC) and Estimate at Completion (EAC).

Forecasting unanticipated EAC growth is where Earned Value Management most contributes to increasing the probability of program success.

We need to integrate other measures with the Earned Value assessment process. We can start by recognizing that the Budgeted Cost of Work Performed (BCWP) measures the efficacy of each dollar spent.

Earned Value is the measure of "earned budget." Did we "earn" our planned budget? Did we get our money's worth? We only know the answer if we measure "Value" in units other than money: the Effectiveness of the solution, the technical performance of the products, the fulfillment of the mission. The maintainability, reliability, serviceability, and other …ilities are assessments of Value earned, not described by CPI and SPI from the simple calculation of BCWP.

The Imbalance

The program performance assessment cannot be credible without the connection between the technical plan and the programmatic plans – cost and schedule. BCWP represents the Earned Value, but the Gold Card and other guidance don't explicitly state how to calculate this number. BCWP is a variable without directions for its creation; BCWS and ACWP have clear directions for assigning values to them, but BCWP does not.

The common advice is to determine the percent done and multiply it by BCWS to produce BCWP. But percent done of what outcome? What units of measure are used to calculate the percent complete?

This paper aims to bridge the gap between assigning a value to BCWP and informing BCWP with tangible evidence of progress toward the plan in units of measure meaningful to the decision-makers.
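The index arithmetic itself is trivial; the open question the paper raises is where the percent comes from. A minimal sketch, with hypothetical work-package numbers, of computing BCWP once a defensible physical percent complete exists:

```python
# Sketch: BCWP for a single work package, given a physically measured
# percent complete. All numbers here are hypothetical illustrations.

def bcwp(physical_pct_complete: float, budget_at_completion: float) -> float:
    """Earned value (BCWP) = physical percent complete x work-package budget."""
    if not 0.0 <= physical_pct_complete <= 1.0:
        raise ValueError("percent complete must be between 0 and 1")
    return physical_pct_complete * budget_at_completion

# Hypothetical work package: $120,000 budget, assessed 45% physically complete
print(bcwp(0.45, 120_000))  # 54000.0
```

The substance of what follows is in how that percent complete is obtained – from MOEs, MOPs, and TPMs rather than opinion.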

Restoring The Balance

If we define the expected Effectiveness, Technical Performance, …ilities, or level of risk at a specific time in the program, then measure the actual Effectiveness, Technical Performance, …ilities, or Risk and compare them to the planned values, we can determine the physical percent complete of any deliverable in the WBS.
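As an illustration (the TPM, its values, and the function name are hypothetical), physical percent complete can be computed as the fraction of the planned maturity gain actually achieved at this assessment point:

```python
# Sketch: physical percent complete from planned vs. actual technical measures.
# The TPM and its values are hypothetical; the ratio works for "higher is
# better" and "lower is better" measures alike, since both gains share a sign.

def physical_pct_complete(planned: float, actual: float, baseline: float) -> float:
    """Fraction of the planned improvement (baseline -> planned value at this
    assessment point) that has actually been achieved, clamped to [0, 1]."""
    planned_gain = planned - baseline
    if planned_gain == 0:
        return 1.0
    return max(0.0, min(1.0, (actual - baseline) / planned_gain))

# Hypothetical pointing-accuracy TPM: baseline 100 arcsec, planned 40 arcsec
# at this event, measured 55 arcsec -> 75% of the planned gain achieved.
print(physical_pct_complete(planned=40, actual=55, baseline=100))  # 0.75
```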

Using the steps in Table 1, the elements of Figure 1 can be developed. The connection between each component assures that the Technical Plan and the Programmatic Plan are integrated, with units of technical performance used to inform BCWP.

Figure 1

The remainder of this paper describes the steps needed to establish a credible PMB containing event-based risk reduction activities and the schedule margin needed for irreducible risk, ensuring a high probability of cost, schedule, and technical performance success.

The Solution

Starting with the processes in Table 1 and using the elements in Figure 1 assures the Technical Plan and the Programmatic Plan are integrated from day one. The culture of integrating Systems Engineering – the source of MOE, MOP, TPM, the …ilities, and risk – with Earned Value Management must be deliberately guided; it is not a natural affinity. One discipline speaks in technical terms of machinery, software systems, bent metal, and electronic devices. The other speaks in terms of dollars.

The natural solution to making this connection is through policy guidance and working examples. The remainder of this paper shows the step-by-step process of establishing a credible PMB.

Measures of Technical Progress Start in the IMP

The Program Events, Significant Accomplishments, and Achievement Criteria form the elements of the IMP. The program's Measures of Effectiveness (MOE) are derived from the JCIDS process and are reflected in the IMP's Significant Accomplishments (SA). The Measures of Performance (MOP) are derived from the MOEs and are reflected in the IMP's Accomplishment Criteria (AC).

These measures assess the physical percent complete of the deliverable used to inform Earned Value (BCWP) for reporting programmatic performance. With this Physical Percent Complete measure, the EVM indices can reflect the program's cost, schedule, and technical performance.
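The resulting indices use the standard formulas; what changes is the provenance of BCWP. A sketch with hypothetical control-account figures:

```python
# Sketch: CPI and SPI with BCWP informed by physical percent complete.
# All figures are hypothetical illustrations.

bac = 125_000                # budget at completion for the control account
physical_pct = 0.72          # assessed from TPMs/MOPs, not from opinion
bcws = 100_000               # planned value (BCWS) through this period
acwp = 105_000               # actual cost (ACWP) through this period

bcwp = physical_pct * bac    # earned value, grounded in technical progress
cpi = bcwp / acwp            # cost performance index
spi = bcwp / bcws            # schedule performance index

print(f"BCWP={bcwp:,.0f}  CPI={cpi:.2f}  SPI={spi:.2f}")
# BCWP=90,000  CPI=0.86  SPI=0.90
```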

  • Measures of Effectiveness (MOE) – are operational measures of success closely related to the mission's achievements or operational objectives evaluated in the operational environment under a specific set of conditions.
  • Measures of Performance (MOP) – characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
  • Key Performance Parameters (KPP) – represent capabilities and characteristics so significant that failure to meet them can cause reevaluation, reassessment, or termination of the program.
  • Technical Performance Measures (TPM) – are attributes that determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal.

Continuous verification of actual versus anticipated achievement of a selected technical parameter confirms progress and identifies variances that might jeopardize meeting a higher-level end-product requirement. Assessed values falling outside established tolerances indicate the need for management attention and corrective action.

A well-thought-out TPM program provides early warning of technical problems, supports assessments of the extent to which operational requirements will be met, and assesses the impacts of proposed changes to lower-level elements in the system hierarchy on system performance. [1] With this estimate, the programmatic performance can be informed in a way not available with CPI and SPI alone.

Technical Performance Measurement (TPM), defined in the industry standard EIA-632, involves estimating the future value of a critical technical performance parameter of the higher-level end product under development, based on current assessments of products lower in the system structure.
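A minimal sketch of the tolerance-band check described above (the parameter, planned value, and tolerance are all hypothetical):

```python
# Sketch: compare an assessed TPM value against its planned value and
# tolerance band; out-of-tolerance values call for management attention.
# The TPM, planned value, and tolerance are hypothetical.

def tpm_status(actual: float, planned: float, tolerance: float) -> str:
    if abs(actual - planned) <= tolerance:
        return "on plan"
    return "out of tolerance: management attention and corrective action"

# Hypothetical vehicle-mass TPM: planned 250 kg with a +/- 10 kg band
print(tpm_status(actual=265, planned=250, tolerance=10))
print(tpm_status(actual=254, planned=250, tolerance=10))
```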

Risk Management Starts At The IMP

Throughout all product definition processes, technical and programmatic risk assessment is performed. These risks are placed in the Risk Register with uncertainties. Uncertainty comes in two forms: [2]

  • Aleatory—These are naturally occurring variances in the program's underlying processes: variances in work duration, cost, and technical performance. We can state the probability range of these variances as stochastic variability arising from the process's natural randomness. They are characterized by a probability density function (PDF) for their range and frequency and are therefore irreducible.
  • Epistemic—These are Event-based uncertainties, where there is a probability that something will happen in the future. We can state this probability of an event and do something about reducing this probability of occurrence. These are (subjective or probabilistic) uncertainties, which are event-based probabilities, knowledge-based, and reducible by further gathering of knowledge.
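The distinction is easy to see in a simulation. In this sketch (all distribution parameters are hypothetical), the triangular duration variance is aleatory and never goes away, while the discrete rework event is epistemic and could be reduced by buying knowledge, for example with an early test:

```python
# Sketch: aleatory vs. epistemic uncertainty in a task-duration model.
# Aleatory: duration varies every sample (triangular PDF) - irreducible.
# Epistemic: a rework event that may occur with some probability - reducible.
# All parameters are hypothetical.

import random

def sample_task_duration(rng: random.Random) -> float:
    duration = rng.triangular(10, 18, 12)  # aleatory: low=10, high=18, mode=12 days
    if rng.random() < 0.20:                # epistemic: 20% chance a test fails...
        duration += 15.0                   # ...adding 15 days of rework
    return duration

rng = random.Random(42)
samples = [sample_task_duration(rng) for _ in range(10_000)]
mean_days = sum(samples) / len(samples)
print(round(mean_days, 1))
```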

Structure of the IMP

Figure 2 shows the structure of the Integrated Master Plan (IMP), including the Program Events, Significant Accomplishments, and Achievement Criteria. This structure builds the assessment of the maturity of the deliverables to assure that the technical performance of the deliverables meets the planned technical, performance, and Effectiveness needs to fulfill the capabilities needed for the mission or business goals.

With this structure, Earned Value Management measures can be connected to the Work Packages defined in the Integrated Master Schedule (IMS) to assess technical performance in ways not available with CPI and SPI. The Program Manager now has leading indicators of the program's success through the MOEs defined by the Significant Accomplishments and MOPs defined by the Accomplishment Criteria, each assessed for compliance with the plan at the Program Event.

The IMP describes this vertical connectivity, supported by the horizontal traceability of Work Packages in the Integrated Master Schedule.

During actual program execution, the IMP and IMS provide visibility to the program's performance for the government and the contractor. When integrated with the Earned Value Management System (EVMS), the IMP and IMS enable the program's management to:

  • Identify and assess actual progress versus the planned progress.
  • Monitor the program's critical path and help develop workarounds to problem areas
  • Assess program maturity.
  • Assess the status of risk management activities based on including the program risk mitigation activities in the IMP and IMS.
  • Assess the progress on selected Key Performance Parameters (KPPs) and Technical Performance Measures (TPMs).
  • Provide an objective, quantitative basis for the contractor's performance assessment rating and award fee.
  • Help develop and support "what-if" exercises and identify and assess candidate problem workarounds.
  • Provide better insight into potential follow-on efforts that were not part of the original contract award.

IMP and IMS Relationship

The IMP is an event-based plan demonstrating the maturation of the development of the product as it progresses through a disciplined systems engineering process. The IMP events are not tied to calendar dates. Each event is completed when its supporting Accomplishments are completed and when this is evidenced by the satisfaction of the Criteria supporting each of those accomplishments. [1]

The IMP is usually placed on contract and becomes the program or project's baseline execution plan. Although detailed, the IMP is a top-level document compared to the IMS.

The IMS flows directly from the IMP and supplements it with additional levels of detail. The IMS incorporates all of the IMP Events, Accomplishments, and Criteria through Work Packages and detailed Tasks to support the IMP Criteria. This network of integrated tasks creates a calendar-based schedule that is the IMS defined to a level of detail for the day-to-day execution of the program.

Figure 2

Assembling the parts needed to connect the dots

To inform Earned Value (BCWP), we need information about the program's performance beyond cost and schedule: how is the program progressing toward delivering the required capabilities?

Technical Measures Used to Inform Earned Value

  • Measures of Effectiveness
  • Measures of Performance
  • Key Performance Parameters
  • Technical Performance Measures

Risk Reduction Measures Used to Inform Earned Value

All programs have risks. Assessing how these risks are being reduced measures the program's probability of success. If the risks are not being reduced as planned, to the level they are planned for, on the date they are planned to be reduced, then the program is accumulating a risk debt. This debt lowers the probability of success.

The naturally occurring cost, schedule, and technical performance uncertainties can be modeled in a Monte Carlo Simulation tool. The event-based uncertainties must be captured in the Risk Register, modeled for their impacts, given defined handling strategies, modeled for the effectiveness of those handling efforts, and assessed for the impacts of both the original and residual risks on the program.

Managing the naturally occurring uncertainties in cost, schedule, and technical performance, along with the event-based uncertainties and the resulting risks, is a critical success factor in informing Earned Value.

Definitions of Uncertainty and Risk Needed Before Proceeding

In program performance management, risk drives the probability of program success, but risk is not the ultimate source of this probability. All risk comes from Uncertainty. [1] This paper uses specific definitions of risk and Uncertainty to assess their impact on program performance.

  • Uncertainty is present when probabilities cannot be quantified rigorously or validly but can only be described as intervals within a probability distribution function (PDF).
  • Risk is present when the Uncertainty of the outcome can be quantified in terms of probabilities or a range of possible values.

This distinction is essential for modeling a program's future cost, schedule, and technical performance using the Risk Register and Monte Carlo Simulation tools. It is also essential to distinguish between the two types of Uncertainty that create risk to the program.

  • Aleatory (stochastic) variability is the natural randomness of the process. It is characterized by a probability density function (PDF) for its range and frequency and is, therefore, irreducible.
  • Epistemic (subjective or probabilistic) uncertainties are event-based probabilities, are knowledge-based, and are reducible by further knowledge gathering.

Separating these classes helps design assessment calculations and present results for the integrated program risk assessment.

Sources of Uncertainty

There are several sources of Uncertainty on all programs: [2]

  • Lack of precision about the underlying process – we cannot tell precisely what is going on with the underlying process.
  • Too few samples of past performance may lead to a lack of accuracy about the possible values in the uncertainty probability distributions.
  • Undiscovered biases used in defining the range of possible outcomes of project processes – these arise naturally when people are asked to assess risk.
  • Natural variability from uncontrolled processes – these are the underlying aleatory uncertainties; margin is the only protection from this Uncertainty, since it cannot be reduced or eliminated.
  • Undefined probability distributions for project processes and technology – without some form of probability distribution function, the risk and its impact cannot be modeled without making assumptions about the underlying behaviors.
  • Unknowability of the range of the probability distributions – if the probability distributions are truly unknowable, alternative modeling processes are needed. Bayesian risk modeling is one approach. [3]
  • Absence of information about the probability distributions – like the unknowability of the probability distributions, alternative approaches are needed.

All Risk Comes From Uncertainty

Uncertainty creates risk. Uncertainty comes in two types – reducible and irreducible – and we need to handle both on the program. Both arise from a statistical process that is either naturally occurring (irreducible) or event-based (reducible).

Uncertainty is present when probabilities cannot be quantified in a rigorous manner but can only be described as intervals on a probability distribution function. Risk is present when the uncertain outcome can be quantified in probabilities or a range of possible values. The distinction between statistics and probability is essential for modeling the cost, schedule, and technical performance of the program. Most importantly, it is essential in defining the measures used to assess the program's actual performance against the planned performance, since Uncertainty and the resulting risks impact both.

Risks that result from irreducible and reducible uncertainties are recorded in the Risk Register. The Register starts with a short description of the risk, its probability of occurrence, and its impact. Each risk then needs a handling strategy, which can be represented in the IMS or in Management Reserve.

For reducible risks, the Performance Measurement Baseline can contain work to reduce the probability of the risk's occurrence or reduce its impact on the program should it become an issue. This work is on the baseline, has a budget, and is contained in a Work Package.

Risk can also be handled indirectly. A Management Reserve can be assigned for each risk outside the PMB. Should the risk occur, this Reserve can be used to perform work to address its impact.

Explicit risk retirement activities and Management Reserve are both necessary to successfully complete any program. Management Reserve is a budget set aside for management to address the Known Unknowns in the program. Explicit risk retirement activities have budget on the baseline for reducing risk.

Figure 3

Using Risk Reduction as a Measure of Program Performance

With reducible risk, specific work is performed on baseline to reduce the probability of the risk's occurrence. [1] With irreducible risk, margin is needed to protect the delivery date of key deliverables. One measure of program performance is the Margin Burn-down Plan shown in Figure 5. If the actual risk reduction does not follow the burn-down plan, that is a leading indicator of future difficulties in the program's performance and an indicator of impact on cost and schedule.
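A sketch of that check (the event names and residual-risk scores are hypothetical): compare the assessed residual risk at each event against the burn-down plan, and flag where the program is accumulating risk debt:

```python
# Sketch: risk burn-down as a leading indicator of program performance.
# Planned and actual residual-risk scores (0..1) are hypothetical.

planned_burn_down = {"PDR": 0.80, "CDR": 0.50, "TRR": 0.20}
actual_assessed   = {"PDR": 0.80, "CDR": 0.65}  # assessed to date

for event, planned in planned_burn_down.items():
    actual = actual_assessed.get(event)
    if actual is None:
        continue  # event not yet assessed
    risk_debt = actual - planned
    status = "BEHIND PLAN (risk debt)" if risk_debt > 0 else "on plan"
    print(f"{event}: planned {planned:.2f}, actual {actual:.2f} -> {status}")
```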

Figure 4
Figure 5

This risk reduction plan is traced to the Risk Register through the Risk ID and the Work Breakdown Structure number.

Development of Schedule Margin

  • Assure all work in the WBS is performed in the proper sequence in a well-formed network.
  • Identify all reducible risks from the Risk Register
  • Add work for reducible risks with a budget to the PMB to form the deterministic IMS.
  • Identify variances in work durations with a probability distribution function shape and upper and lower limits.

Schedule Margin Burn-Down as a Measure of Program Performance

A similar reduction plan can be used for schedule margin.

Figure 6

Six Steps to building a credible integrated master plan

The Integrated Master Plan (IMP) is the strategy for the successful delivery of the program's outcomes. Strategies are hypotheses that need to be tested. The IMP's Events and their Accomplishments and Criteria are the testing points for the hypothesis that the program is proceeding as planned, both technically and programmatically.

1. Identify Program Events

  • Program Events are maturity assessment points in the program
  • They define what levels of maturity for the products and services are needed before proceeding to the next maturity assessment point
  • The entry criteria for each Event define the units of measure for the successful completion of the Event
  • Confirm the end-to-end description of the increasing maturity of the program's deliverables
  • Establish RFP or Contract target dates for each Event.
  • Socialize the language of speaking in "Events" rather than time and efforts

2. Identify Significant Accomplishments

  • The Significant Accomplishments are the "road map" to the increasing maturity of the program
  • The "Value Stream Map" resulting from the flow of SAs describes how the products or services move through the maturation process while reducing risk
  • The SA map is the path to "done."

3. Identify Accomplishment Criteria

  • The definition of "done" emerges in deliverables rather than measures of cost and passage of time.
  • These deliverables come from Work Packages, whose outcomes can be assessed against the Technical Performance Measures (TPM) to assure compliance with the MOP.
  • The increasing maturity of the deliverables is defined through the Measures of Effectiveness (MoE) and Measures of Performance (MoP) at each program event.

4. Identify Work Packages to Complete the Accomplishment Criteria

  • Identify the work that produces a measurable outcome.
  • This work is defined in each Work Package.
  • The Accomplishment Criteria (AC) state explicitly what "done" looks like for this effort.
  • With "done" stated, Measures of Performance and Measures of Effectiveness can be assessed with the products or services produced by the Work Package.

5. Sequence The Work Packages In A Logical Network.

  • Work Packages partition work efforts into a "bounded" scope
  • Interdependencies constrained to Work Package boundaries prevent "spaghetti code" style schedule flow
  • Visibility of the increasing maturity begins to emerge from the flow of Accomplishment Criteria (AC)

6. Adjust the Sequence Of WPs, PPs, and SLPPs To Mitigate Reducible Risk

  • Both the maturity assessment criteria and the work needed to reach that level of maturity are described in a single location
  • Risks are integrated with the IMP and IMS at their appropriate levels
  • Risks to Effectiveness – risk to JROC KPPs
  • Risks to Performance – risk to program KPPs and TPMs
  • Leading and Lagging indicator data are provided through each measure to forecast future performance.

Eight Steps to Building the Integrated Master Schedule

A Risk Informed PMB represented by the resource loaded Integrated Master Schedule (IMS) means that both Irreducible (Aleatory Uncertainty) and reducible (Epistemic Uncertainty) risk mitigations are embedded in the IMS. For non-mitigated risks, Management Reserve (MR) must be in place outside the PMB to cover risks that are not being mitigated in the IMS.

While DCMA would object, this Management Reserve needs to be assigned to specific risks or classes of risk to ensure sufficient MR is available and its use is pre-defined.

  1. Assemble a Credible Description of What is Being Delivered

  • Use MIL-STD-881C as the framework for defining the structure of the deliverables.
  • Assure only deliverables and processes that produce deliverables are in the WBS.
  • Using the Integrated Master Plan, develop the Integrated Master Schedule showing the work to be performed that increases the maturity of each deliverable assessed in the IMP with Measures of Effectiveness (MOE) and Measures of Performance (MOP)
  • Assign Technical Performance Measures to each key deliverable produced by a Work Package in the IMS.

Figure 7

2. Identify the Reducible Risks

  • Variances in duration and cost are applied to the Most Likely values for the work activities.
  • Apply these variances in the IMS
  • Model the outcomes using a Monte Carlo Simulation tool
  • The result is a model of the confidence of completing on or before a date and at or below a cost
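A sketch of the kind of confidence model these bullets describe (the activity count, duration parameters, and target date are hypothetical):

```python
# Sketch: Monte Carlo model of completion confidence. Each activity gets a
# triangular duration PDF; the output is the probability of finishing on or
# before a target. All parameters are hypothetical.

import random

ACTIVITIES = [(8, 10, 15), (8, 10, 15), (8, 10, 15)]  # (low, most-likely, high) days

def simulate_total_days(rng: random.Random) -> float:
    return sum(rng.triangular(lo, hi, ml) for lo, ml, hi in ACTIVITIES)

rng = random.Random(1)
runs = [simulate_total_days(rng) for _ in range(20_000)]

target_days = 34.0
confidence = sum(r <= target_days for r in runs) / len(runs)
print(f"P(finish <= {target_days} days) = {confidence:.0%}")
```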

3. Put These Risks in the Risk Register

4. Develop Risk Retirement Plans for the Reducible Risks

5. Assess the Irreducible Risks

6. Use Monte Carlo Simulation to Determine Schedule Margin

The schedule margin is developed using reference classes and Monte Carlo simulation of the deterministic schedule to produce the needed confidence level for the probabilistic schedule. The difference between the deterministic schedule – with its event-based risk reduction activities – and the probabilistic schedule at some acceptable level of confidence (80%, for example) is the schedule margin.
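A sketch of that calculation (the task chain and its duration parameters are hypothetical): simulate the probabilistic schedule, take the 80th-percentile finish, and subtract the deterministic duration; the difference is the schedule margin:

```python
# Sketch: schedule margin = P80 (probabilistic) - deterministic duration.
# A simple serial chain of tasks; all duration parameters are hypothetical.

import random

TASKS = [(20, 25, 35), (15, 20, 30), (10, 12, 18)]  # (low, most-likely, high) days

def simulate_chain(rng: random.Random) -> float:
    return sum(rng.triangular(lo, hi, ml) for lo, ml, hi in TASKS)

rng = random.Random(7)
runs = sorted(simulate_chain(rng) for _ in range(20_000))

p80 = runs[int(0.80 * len(runs))]              # 80% confidence finish
deterministic = sum(ml for _, ml, _ in TASKS)  # most-likely chain = 57 days
margin = p80 - deterministic

print(f"deterministic={deterministic}  P80={p80:.1f}  margin={margin:.1f} days")
```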

7. Assign Schedule Margin to Protect Key Deliverables

Using schedule margin to protect against schedule risk – created by the natural uncertainties in the work durations – is the appropriate method to enable on-time contractual end-item deliveries. Where you place the margin and how you manage its use are up for debate.

Fundamentally, there are two approaches to consider:

  • Holding all margin at the end (of a program or end-item deliverable), or
  • Distributing margin at strategic junctures along critical paths with known schedule risk.

Some believe holding it all at the end, forcing the earliest possible baseline, is the best means of ensuring on-time completion. This is effective in short-duration or production efforts where the primary schedule risk is not driven by technical complexity or development risk. However, the same objective can be achieved when a disciplined process is followed to control and consume distributed margin. Paramount to this approach is accelerating downstream efforts when the margin is NOT consumed. Most schedule risk in a development program is encountered when program elements are integrated and tested. So even when using distributed margin, margin is often kept at the end of the deliverable schedule to help protect against risk when all paths come together during final integration and test. This approach enables on-time end-item delivery with realistic cost and schedule baselines that provide accurate forecasts and decisions based on current status, remaining efforts, and related schedule risks.

Valid reasons for distributing schedule margin earlier in the schedule include:

  • Protecting the use of critical shared resources, so that being a few weeks late doesn't turn into a several-month schedule impact. An example in space programs is a thermal vacuum chamber shared across multiple programs at critical times in their schedules. If a program cannot enter the chamber at its scheduled time, the ultimate delay may be many times its original delay.
  • Protecting obvious milestones that are difficult and highly undesirable to change, like a Critical Design Review (CDR)
  • Establishing realistic performance baselines accounting for schedule risk at key points provides more valid data for program decisions.
  • Realistic baselines (vs. one planned as early as possible) are more cost-effective for our subcontractors; valid integration dates are less likely to lead to forced early procurements that incur expediting costs and rework due to immature/changing requirements.
  • Placing margin where you believe it will be needed and consumed provides the most realistic schedule baseline possible for succeeding efforts and enables more accurate resource planning (for us, our customer, and our suppliers)

However, extreme care and discipline must be used in deciding where distributed margin is placed, in providing rules on how it is consumed, and for what to do when it is NOT used where it is initially allocated.

When a distributed schedule margin "task" is reached in the schedule and the risk it was planned for is realized, the margin is converted to the appropriate task(s) and included in the baseline. This realized risk represents a new scope to the affected CAMs and can be funded from the management reserve as appropriate.

If the risk is not realized and the schedule margin is not consumed, you must also be prepared to accelerate efforts where possible. The margin task is zeroed out, and the remaining margin is moved downstream in the form of increased forecast durations for subsequent margin tasks or as an increased total float. The determination of which method to use should be a risk-based decision. Succeeding tasks are accelerated and accurately depicted ahead of schedule (positive schedule variance) as tasks are completed ahead of the baseline.

The point is to plan schedule margin strategically, where it ultimately enables on-time end-item delivery. It's an important way Planning brings value to our programs.

Summary of Key Points:

  • Schedule margin (reserve) is different from slack (float) as acknowledged in the "GAO Schedule Assessment Guide". Margin is preplanned and consumed for known schedule risk, and float is the calculated difference between early and late dates. Margin is much like management reserve, and float is similar to underruns/overruns.
  • Schedule margin is placed where there is known schedule risk. It is never consumed because of poor schedule performance (it is managed like MR).
  • You don't budget schedule margin. If the risk the margin was designed to protect against comes to fruition, identify and budget the new tasks (or extend the duration of existing tasks). If the risk is not realized, zero out the margin and accelerate succeeding tasks
  • Allocating margin for known risks at critical points makes it "off limits" as a crutch for poor schedule performance. This forces immediate recovery action to stay on schedule instead of eroding the margin as if it were float.
  • Inclusion of margin (either interim or at the end, ahead of contractual deliverables) does not affect the contractual period of performance.
  • Including interim schedule margin for known schedule risk provides the most realistic baseline and the most accurate resource planning (including for our customer). If the margin is not consumed, the effort is accurately "ahead of schedule." It's risk-based planning.
  • Provides the ability to meet interim key milestones via probabilistic scheduling
  • Integrates EVM & Risk Management
  • Makes best use of SRA data
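The "probabilistic scheduling" and "SRA data" points above can be sketched with a minimal Monte Carlo schedule risk analysis. The three-point (min, likely, max) duration estimates and the P80 confidence target are illustrative assumptions; a real SRA would run against the full IMS network, not a serial chain.

```python
import random

random.seed(7)

# Illustrative three-point (min, likely, max) duration estimates, in days,
# for a simple serial chain of three tasks.
tasks = [(18, 20, 30), (25, 30, 45), (12, 15, 24)]

def sampled_finish():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

trials = sorted(sampled_finish() for _ in range(10_000))
deterministic = sum(mode for _, mode, _ in tasks)   # most-likely finish: 65 days
p80 = trials[int(0.80 * len(trials))]               # 80th-percentile finish

# Margin sized to protect an 80% confidence of on-time delivery.
margin = p80 - deterministic
print(f"P80 finish: {p80:.1f} days, margin needed: {margin:.1f} days")
```

Because the duration distributions are right-skewed, the P80 finish exceeds the deterministic finish, and that difference is the risk-informed margin to place ahead of the key milestone.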

References

  1. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners, NASA/SP-2011-3421, 2nd Edition, December 2011.
  2. NASA Risk Informed Decision Making Handbook, NASA/SP-2010-576, April 2010.
  3. EAI-748-E
  4. The Forgotten "-ilities," James D. Willis and Dr. Steven Dam, SPEC Innovations. https://www.dtic.mil/ndia/2011system/13166_WillisWednesday.pdf
  5. GAO Cost Estimating and Assessment Guide Best Practices for Developing and Managing Capital Program Costs, GAO-09-3SP

Footnotes

[1] The term …ilities refers to maintainability, reliability, serviceability, operability, and testability.

[2] https://dap.dau.mil/acquipedia/Pages/ArticleDetails.aspx?aid=7c1d9528-4a9e-4c3a-8f9e-6e0ff93b6ccb

[3] "On Numerical Representation of Aleatory and Epistemic Uncertainty," Hans Schjær-Jacobsen, Proceedings of the 9th International Conference on Structural Dynamics, EURODYN 2014.

[4] Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide, Version 0.9, August 15, 2009, OUSD (AT&L) Defense Systems, Systems Engineering, Enterprise Development (OUSD(AT&L) DS/SE/ED).

[5] "Epistemic Uncertainty in the Calculation of Margins," Laura Swiler, 50th AIAA/ASME/ASCE/AHS Structural Dynamics and Materials Conference, 4-7 May, 2009, Palm Springs, California.

[6] Towards a Contingency Theory of Enterprise Risk Management, Harvard Business School, Anette Mikes and Robert Kaplan, Working Paper 13-063 October 17, 2013.

[7] Towards a Contingency Theory of Enterprise Risk Management, Harvard Business School, Anette Mikes and Robert Kaplan, Working Paper 13-063, October 17, 2013.

[8] NASA Risk Informed Decision Making Handbook, NASA/SP-2010-576



As always, very insightful. Thank you. I've, on occasion, used both an RPI and a QPI. That is, I've done earned value on both risk and quality. For risk, I've used FMEA on the top 100 project risks and set a plan, over time, to reduce the total FMEA score. For quality, since I've only done this for software projects, I measure the weighted planned anomalies against the weighted actual anomalies. I also use a form of Bayesian analysis to predict latent defects and measure planned latent defects against actual latent defects. Anyway, I do like earned value, and your views on risk.
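One way the commenter's Risk Performance Index (RPI) might be computed is as earned risk reduction against planned risk reduction at a status date. The formula and the FMEA risk priority numbers (RPNs) below are assumptions for illustration; the comment does not define the index.

```python
# FMEA risk burndown expressed as a Risk Performance Index (RPI):
# RPI > 1.0 means risk is being retired faster than planned.

baseline_score = 1200     # total RPN of the top risks at program start
planned_score  = 800      # planned remaining RPN at the status date
actual_score   = 900      # actual remaining RPN at the status date

earned_reduction  = baseline_score - actual_score    # 300
planned_reduction = baseline_score - planned_score   # 400

rpi = earned_reduction / planned_reduction
print(f"RPI = {rpi:.2f}")   # 0.75: risk retirement is behind plan
```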
