Using Bayesian Techniques in Program Management: A Comprehensive Guide

Introduction

If you are a program manager, uncertainty is a constant companion. From project timelines and resource allocation to risk management and decision-making, managing uncertainty effectively can make the difference between success and failure. Today, we will explore Bayesian techniques—a powerful statistical tool that can help program managers navigate the unpredictable seas of project management with greater confidence and precision.

Bayesian statistics, named after the Reverend Thomas Bayes (an 18th-century English statistician, philosopher, and Presbyterian minister), is a method of statistical inference that combines prior knowledge with new evidence to update the probability of a hypothesis being true. This approach is particularly useful in program management, where new information continuously becomes available and decisions must be adjusted accordingly.

In this article, we'll not only explore how Bayesian techniques are used in program management and dig into their application details, but we'll also provide two practical examples to illustrate their effectiveness. This hands-on approach will help you see how these techniques can be applied in your day-to-day work.

NOTE: The LinkedIn editor does not provide proper mathematical formulae entry. Therefore, formulas are written in linear form, which makes them less readable. If you would like to read this article in proper formatting, please download a PDF from my Google account. https://drive.google.com/file/d/1jOfKApM07OylFJ29Beyv_WdbQKJkO4VP/view?usp=sharing

The Basics of Bayesian Techniques

Bayesian techniques revolve around Bayes' Theorem, which can be summarized as follows:

P(A│B)=(P(B│A) * P(A)) / P(B)

Where:

  • P(A│B) is the posterior probability or the probability of event A occurring, given that B is true.
  • P(B│A) is the likelihood or the probability of event B occurring, given that A is true.
  • P(A) is the prior or initial probability of event A.
  • P(B) is the marginal likelihood or the total probability of event B.


In program management, these components can be interpreted as follows:

  • Prior Probability P(A): Initial estimates based on historical data or expert judgment.
  • Likelihood P(B|A): The probability of the new evidence given the initial estimates.
  • Posterior Probability P(A|B): Updated probability after considering the new evidence.
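For readers who prefer code, the theorem is a one-liner. Here is a minimal Python sketch (the function name and the sample inputs are illustrative, not from any statistics library):

```python
def bayes_update(prior: float, likelihood: float, marginal: float) -> float:
    """Posterior P(A|B) = P(B|A) * P(A) / P(B)."""
    if not 0 < marginal <= 1:
        raise ValueError("marginal likelihood P(B) must be in (0, 1]")
    return likelihood * prior / marginal

# Illustrative inputs: prior belief 0.2, likelihood of the evidence 0.9,
# overall probability of the evidence 0.45.
posterior = bayes_update(prior=0.2, likelihood=0.9, marginal=0.45)
print(round(posterior, 3))  # 0.4
```

The discrete examples later in this article all reduce to a call like this with different inputs.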

Applications of Bayesian Techniques in Program Management

The resurgence of Bayesian statistics in the mid-20th century, driven by computational advancements and influential proponents, paved the way for its widespread use in modern industry and daily life. Its flexibility, its ability to handle uncertainty, and its interdisciplinary applicability have made Bayesian methods a valuable tool in various sectors.

In this section, we will focus on its use in program management, with examples in:

  1. Risk Management
  2. Project Performance Tracking
  3. Decision Making
  4. Agile – Cone of Uncertainty

Risk Management

Risk management is a critical aspect of program management, and Bayesian techniques can significantly enhance this process. By continuously updating risk probabilities as new data becomes available, project managers can better anticipate potential issues and implement mitigation strategies more effectively.

Example 1: Bayesian Risk Assessment

Scenario:

Let's consider a project to develop a new software application. The initial risk assessment identifies a 30% chance of a critical bug delaying the project. As development progresses, several minor bugs are discovered and resolved, but no critical bugs have surfaced yet. The project manager decides to update the risk assessment using Bayesian techniques.

Step 1: Define Prior Probability

The prior probability of a critical bug (A) is 30% or 0.3.

P(A)=0.3

Step 2: Gather New Evidence

Suppose the probability of discovering no critical bugs after resolving several minor ones (B) is 70%, given that a critical bug is still presumed to be hiding somewhere.

P(B│A)=0.7

Step 3: Calculate Marginal Likelihood

The probability of discovering no critical bugs, regardless of whether a critical bug exists or not, needs to be estimated. Let's assume this is 50%.

P(B)=0.5

Step 4: Apply Bayes' Theorem

Using Bayes' Theorem, we can update the probability of a critical bug given that no critical bugs have been found yet.

P(A│B) = (P(B│A) * P(A)) / P(B) = (0.7 * 0.3) / 0.5 = 0.42

The updated probability of a critical bug delaying the project is now 42%, indicating a higher risk than initially estimated. This updated assessment allows the project manager to allocate additional resources to testing and debugging to mitigate this risk.
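The four steps above can be checked with a short script (a sketch; the variable names are mine):

```python
# Step 1: prior probability of a critical bug.
p_bug = 0.30
# Step 2: likelihood of the evidence (no critical bugs found so far),
# given that a critical bug does exist.
p_evidence_given_bug = 0.70
# Step 3: marginal probability of the evidence.
p_evidence = 0.50
# Step 4: Bayes' theorem.
p_bug_given_evidence = p_evidence_given_bug * p_bug / p_evidence
print(f"Updated risk: {p_bug_given_evidence:.0%}")  # Updated risk: 42%
```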


Example 2: Bayesian Risk Assessment in Sensor Calibration

Scenario:

An initial risk assessment for the development of a self-driving car indicates a 15% chance of sensor calibration issues causing delays during the testing phase. As testing progresses, several calibration adjustments are made without major issues being detected. The project manager uses Bayesian techniques to update the risk assessment.

Step 1: Define Prior Probability

The prior probability of sensor calibration issues (A) is 15% or 0.15.

P(A)=0.15

Step 2: Gather New Evidence

Assume the probability of observing no calibration issues after several adjustments (B) is 85%, given the potential for calibration problems.

P(B│A)=0.85

Step 3: Calculate Marginal Likelihood

The probability of observing no calibration issues, regardless of whether calibration problems exist, is estimated to be 70%.

P(B)=0.70

Step 4: Apply Bayes' Theorem

Using Bayes' Theorem, we update the probability of calibration issues given that no issues have been observed.

P(A│B) = (P(B│A) * P(A)) / P(B) = (0.85 * 0.15) / 0.7 = 0.1275 / 0.7 ≈ 0.182

The updated probability of sensor calibration issues is now approximately 18.2%, indicating a slightly higher risk that requires continued monitoring.
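Here P(B) = 0.70 was simply assumed. In practice it can be derived with the law of total probability, P(B) = P(B│A) * P(A) + P(B│¬A) * P(¬A). A Python sketch, where the 0.67 value for P(B│¬A) is a hypothetical input chosen to land near the assumed 0.70:

```python
p_issue = 0.15              # prior P(A): a latent calibration problem exists
p_clean_given_issue = 0.85  # P(B|A): no issues observed despite a latent problem
p_clean_given_ok = 0.67     # P(B|not A): hypothetical value, for illustration only

# Law of total probability gives the marginal likelihood P(B).
p_clean = p_clean_given_issue * p_issue + p_clean_given_ok * (1 - p_issue)

# Bayes' theorem then updates the risk.
updated_risk = p_clean_given_issue * p_issue / p_clean
print(f"P(B) = {p_clean:.3f}, updated risk = {updated_risk:.1%}")
```

With these inputs, P(B) comes out at about 0.697 and the updated risk at about 18.3%, within rounding of the hand calculation.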

Project Performance Tracking

Bayesian methods can be used to track project performance by comparing actual progress against initial estimates. This ongoing comparison allows for real-time adjustments and more accurate forecasting.

Example 1: Bayesian Project Performance Tracking

Scenario:

Consider a construction project with an initial estimate of completing a major milestone in 100 days. After 50 days, the project is 40% complete. The project manager wants to update the estimated time to completion using Bayesian techniques.

Step 1: Define Prior Probability

The prior estimate for completing the milestone is 100 days.

μ_prior = 100 days

Step 2: Gather New Evidence

The evidence (B) is that 40% of the project is completed in 50 days. Let's assume the likelihood of completing 40% in 50 days is represented by a normal distribution with a mean of 100 days and a standard deviation of 20 days (standard deviations are derived from historical performance data).

P(B│A)= Normal (μ=100, σ =20)

Step 3: Calculate Marginal Likelihood

Regardless of the initial estimate, the marginal likelihood of completing 40% in 50 days can be estimated from historical data or expert judgment. For simplicity, let's assume this is also normally distributed with a mean of 100 days and a standard deviation of 20 days.

P(B)= Normal (μ=100, σ =20)

Step 4: Apply Bayes' Theorem

Using Bayesian updating, we combine the prior estimate with the new evidence (subscripted _new in the formulas below) to update the project completion time.

To simplify the calculation, we'll assume a conjugate prior for normal distributions, which means the posterior distribution is also normal. The updated mean (μ_update) and variance (σ_update^2) are calculated as follows:

μ_update = ((σ_new ^2 * μ_prior) + (σ_prior ^2 * μ_new)) / (σ_prior ^2 + σ_new ^2)

σ_update ^2 = (σ_prior ^2 * σ_new ^2 ) / ( σ_prior ^2 + σ_new ^2)

Where:

  • μ_prior = 100
  • σ_prior^2 = 20^2 = 400
  • μ_new =50/0.4 = 125
  • σ_new^2 = 20^2 = 400

Plugging in the values:

μ_update = ((400 * 100) + (400 * 125)) / (400 + 400) = (40,000 + 50,000) / 800 = 112.5

σ_update ^2 = (400* 400) / (400+ 400) = 16,000 / 800 = 200

The updated estimate for completing the milestone is now 112.5 days, with a reduced standard deviation of σ_update =√200 ≈14.14.
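The conjugate-normal update is easy to script. A sketch (the helper name is mine):

```python
import math

def normal_update(mu_prior, sigma_prior, mu_new, sigma_new):
    """Conjugate update of two normal estimates -> (posterior mean, posterior std dev)."""
    var_p, var_n = sigma_prior ** 2, sigma_new ** 2
    mu = (var_n * mu_prior + var_p * mu_new) / (var_p + var_n)
    var = var_p * var_n / (var_p + var_n)
    return mu, math.sqrt(var)

# Construction example: prior of 100 days, and progress data implying
# 50 / 0.4 = 125 days, both with a standard deviation of 20 days.
mu, sigma = normal_update(100, 20, 125, 20)
print(mu, round(sigma, 2))  # 112.5 14.14
```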


Example 2: Bayesian Project Performance in Software Integration

Scenario:

A milestone in the self-driving car project involves integrating new navigation software within 90 days. After 45 days, only 25% of the integration work is complete. The project manager updates the estimated completion time using Bayesian techniques.

Step 1: Define Prior Probability

The prior estimate for completing the integration is 90 days.

μ_prior = 90 days

Step 2: Gather New Evidence

The evidence (B) is that 25% of the project is completed in 45 days. Let's assume the likelihood of completing 25% in 45 days follows a normal distribution with a mean of 90 days and a standard deviation of 15 days.

P(B│A)= Normal (μ=90, σ =15)

Step 3: Calculate Marginal Likelihood

The marginal likelihood of completing 25% in 45 days, regardless of the initial estimate, is assumed to be normally distributed with a mean of 90 days and a standard deviation of 15 days.

P(B)= Normal (μ=90, σ =15)

Step 4: Apply Bayesian Updating

Assuming a conjugate prior for normal distributions, the updated mean (μ_update) and variance (σ_update^2) are calculated:

μ_update = ((σ_new ^2 * μ_prior) + (σ_prior ^2 * μ_new)) / (σ_prior ^2 + σ_new ^2)

σ_update ^2 = (σ_prior ^2 * σ_new ^2 ) / ( σ_prior ^2 + σ_new ^2)

Where:

  • μ_prior = 90
  • σ_prior^2 = 15^2 = 225
  • μ_new =45/0.25 = 180
  • σ_new^2 = 15^2 = 225

Plugging in the values:

μ_update = ((225 * 90) + (225 * 180)) / (225 + 225) = (20,250 + 40,500) / 450 = 135

σ_update ^2 = (225 * 225) / (225 + 225) = 50,625 / 450 = 112.5

The updated estimate for completing the integration is now 135 days, with a reduced variance σ_update ^2 = 112.5, i.e. a reduced standard deviation of σ_update = √112.5 ≈ 10.61.
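The same update can also be written in terms of precision (1/variance), which extends naturally to fusing any number of estimates as new progress data arrives. A sketch (the helper name is mine):

```python
def fuse_estimates(estimates):
    """Fuse (mean, sigma) estimates; precisions (1/variance) simply add."""
    total_precision = sum(1 / s ** 2 for _, s in estimates)
    mean = sum(m / s ** 2 for m, s in estimates) / total_precision
    return mean, (1 / total_precision) ** 0.5

# Integration milestone: prior (90 days, sigma 15) fused with the
# evidence-implied estimate (45 / 0.25 = 180 days, sigma 15).
mean, sigma = fuse_estimates([(90, 15), (180, 15)])
print(round(mean, 1), round(sigma, 2))  # 135.0 10.61
```

Each additional observation would simply be appended to the list, tightening the posterior further.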

Decision Making

Bayesian inference supports decision-making by incorporating both prior knowledge and new data. This approach helps program managers make more informed decisions, especially in complex projects with multiple variables and uncertainties.

Example 1: Scope Change Management

Scenario:

A project manager needs to decide whether a proposed scope change can be managed within the project’s timeline and budget.

In this example, "New Information" refers to any additional data or evidence that provides insight into the likelihood of successfully managing the proposed scope change. This information could come from various sources, such as:

  1. Past Project Performance: Historical data on how similar scope changes were managed in previous projects.
  2. Expert Opinions: Assessments from team members or industry experts on the feasibility of the scope change.
  3. Current Project Status: Updated progress reports and metrics indicating the project's current health and potential to accommodate changes.
  4. Risk Assessments: Evaluations identifying potential risks and their impacts on the project if the scope change is implemented.

Initial Assumptions:

  • The project manager starts with a prior probability of 70% (0.7) that the scope change can be managed successfully:

P(Success)=0.7

  • The prior probability of failure is 30% (0.3):

P(Failure)=0.3

  • The probability of observing the new information if the scope change is successfully managed is 80% (0.8).

P(New Info│Success)=0.8

  • The likelihood of observing the new information if the scope change fails is 20% (0.2):

P(New Info│Failure)=0.2

Bayesian Update:

To calculate the posterior probability, we use Bayes’ Theorem:

P(Success│New Info)=(P(New Info│Success)* P(Success)) / P(New Info)

We see that we are missing the value of P(New Info).

Step 1: Calculate the probability of receiving the new info, P(New Info):

P(New Info) = P(New Info│Success) * P(Success) + P(New Info│Failure) * P(Failure)

P(New Info) = 0.8 * 0.7 + 0.2 * 0.3 = 0.56 + 0.06 = 0.62

Step 2: With the value of P(New Info) obtained, calculate the posterior probability:

Plug in values into the formula:

P(Success│New Info) = (0.8 * 0.7) / 0.62 = 0.56 / 0.62 ≈ 0.903

After considering the new information, the updated (posterior) probability of successfully managing the scope change is approximately 90.3%. This higher probability gives the project manager greater confidence in the decision to proceed with the scope change.
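The two steps can be verified in a few lines of Python (a sketch; the variable names are mine):

```python
p_success = 0.70
p_failure = 0.30
p_info_given_success = 0.80
p_info_given_failure = 0.20

# Step 1: law of total probability.
p_info = p_info_given_success * p_success + p_info_given_failure * p_failure

# Step 2: Bayes' theorem.
p_success_given_info = p_info_given_success * p_success / p_info
print(f"P(New Info) = {p_info:.2f}")              # P(New Info) = 0.62
print(f"Posterior = {p_success_given_info:.1%}")  # Posterior = 90.3%
```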


Example 2: Vendor Selection

Scenario:

A project manager needs to select a vendor based on past performance data.

Initial Assumptions:

  • Prior probability of selecting Vendor A:

P(Vendor?A)=0.5

  • Prior probability of selecting Vendor B:

P(Vendor B)=0.5

  • Likelihood of good performance by Vendor A:

P(Good?Performance│Vendor?A)=0.9

  • Likelihood of good performance by Vendor B:

P(Good?Performance│Vendor?B)=0.7

Bayesian Update:

We calculate the posterior probability for each vendor, given the evidence of good performance.

Step 1: Find P(Good Performance):

P(Good Performance) = P(Good Performance│Vendor A) * P(Vendor A) + P(Good Performance│Vendor B) * P(Vendor B)

P(Good Performance) = 0.9 * 0.5 + 0.7 * 0.5 = 0.45 + 0.35 = 0.8

Step 2: Calculate the posterior probability of selecting Vendor A:

P(Vendor A│Good Performance) = (P(Good Performance│Vendor A) * P(Vendor A)) / P(Good Performance)

P(Vendor A│Good Performance) = (0.9 * 0.5) / 0.8 = 0.45 / 0.8 = 0.5625

The posterior probability of selecting Vendor A given good performance is 0.5625, or 56.25%.

Step 3: Calculate the posterior probability of selecting Vendor B:

P(Vendor B│Good Performance) = (P(Good Performance│Vendor B) * P(Vendor B)) / P(Good Performance)

P(Vendor B│Good Performance) = (0.7 * 0.5) / 0.8 = 0.35 / 0.8 = 0.4375

The posterior probability of selecting Vendor B given good performance is 0.4375, or 43.75%.

In conclusion, based on the new evidence of good performance, Vendor A is more likely to be chosen, though not overwhelmingly so; Vendor B remains a solid backup option.
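Both posteriors can be computed together, and by construction they sum to 1. A sketch (the data structure is mine):

```python
vendors = {"A": (0.5, 0.9), "B": (0.5, 0.7)}  # name: (prior, P(Good Performance | vendor))

# Marginal likelihood of good performance (law of total probability).
p_good = sum(prior * lik for prior, lik in vendors.values())

# Posterior for each vendor given good performance.
posteriors = {name: prior * lik / p_good for name, (prior, lik) in vendors.items()}
for name, p in posteriors.items():
    print(f"Vendor {name}: {p:.4f}")
```

Vendor A comes out at 0.5625 and Vendor B at 0.4375; the posteriors summing to 1 is a useful sanity check on the arithmetic.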

Agile - Cone of Uncertainty

The Cone of Uncertainty is a concept used in Agile and project management to describe the evolution of uncertainty over the course of a project. At the beginning of a project, the level of uncertainty is at its highest due to the limited information available about the project's scope, requirements, and potential risks. As the project progresses and more information becomes available through iterations or sprints, this uncertainty gradually decreases. This is visually represented as a cone that narrows over time.

Key points about the Cone of Uncertainty:

  • Initial High Uncertainty: At the project's start, there is a broad range of possible outcomes due to an incomplete understanding of the project details.
  • Reduction Through Iterations: As work progresses and feedback is gathered, the range of uncertainty narrows, leading to more accurate predictions and estimates.
  • Guidance for Planning: Understanding the cone helps teams to plan better by acknowledging that early estimates are rough and should be refined over time.

This concept emphasizes the importance of iterative development and continuous learning in Agile methodologies to manage uncertainty effectively and make more informed decisions as the project advances.

Example 1: Software Development Project

Scenario:

A software development team estimates the time it will take to complete a new feature. Due to high uncertainty, the initial estimates are broad.

Step-by-Step Calculation:

1. Prior Distribution: Assume the team believes the time to complete the feature follows a normal distribution with a mean of 30 days and a standard deviation of 10 days.

Prior: N(μ=30,σ=10)

2. New Data: After completing the first iteration, the team has gathered data suggesting the task is more complex than initially thought and might take longer. The new mean estimate from this data is 35 days with a standard deviation of 8 days.

Likelihood: N(μ=35,σ=8)

3. Bayesian Update: Combine the prior and the likelihood to get the posterior distribution.

Posterior mean μ_posterior calculation:

μ_posterior = ((σ_likelihood ^2 * μ_prior) + (σ_prior ^2 * μ_likelihood)) / (σ_prior ^2 + σ_likelihood ^2)

σ_posterior ^2 = (σ_prior ^2 * σ_likelihood ^2) / (σ_prior ^2 + σ_likelihood ^2)

Plugging in the values:

μ_posterior = ((8^2 * 30) + (10^2 * 35)) / (10^2 + 8^2) = (1,920 + 3,500) / 164 ≈ 33.05 days

σ_posterior ^2 = (10^2 * 8^2) / (10^2 + 8^2) = 6,400 / 164 ≈ 39.02 days^2

From which:

σ_posterior = √39.02 ≈ 6.25

Therefore, the posterior distribution is N(μ ≈ 33.05, σ ≈ 6.25)

4. Result: The updated estimate for the completion time is approximately 33.05 days with reduced uncertainty (standard deviation of ≈6.25 days).
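Running the update in code (a sketch; the helper name is mine):

```python
import math

def fuse(mu_prior, sigma_prior, mu_likelihood, sigma_likelihood):
    """Conjugate-normal update -> (posterior mean, posterior std dev)."""
    v_p, v_l = sigma_prior ** 2, sigma_likelihood ** 2
    mu = (v_l * mu_prior + v_p * mu_likelihood) / (v_p + v_l)
    return mu, math.sqrt(v_p * v_l / (v_p + v_l))

# Prior N(30, 10) combined with first-iteration evidence N(35, 8).
mu, sigma = fuse(30, 10, 35, 8)
print(round(mu, 2), round(sigma, 2))  # 33.05 6.25
```

Note that the posterior spread is tighter than either input on its own, which is exactly the narrowing described by the Cone of Uncertainty.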


Example 2: Marketing Campaign Duration

Scenario:

A marketing team is planning a campaign to launch a new product. Initial estimates are uncertain due to various unknowns, such as market response, advertising effectiveness, and logistical challenges.

Step-by-Step Calculation:

1. Prior Distribution: The team initially estimates the campaign will take 60 days with a standard deviation of 15 days.

Prior: N(μ=60, σ=15)

2. New Data: After the first week of pre-launch activities, the team collects data indicating that certain tasks are being completed faster than expected. Based on this data, the new mean estimate is 50 days, with a standard deviation of 12 days.

Likelihood: N(μ=50, σ=12)

3. Bayesian Update: Combine the prior and the likelihood to get the posterior distribution.

Posterior mean μ_posterior and variance σ_posterior ^2:

μ_posterior = ((σ_likelihood ^2 * μ_prior) + (σ_prior ^2 * μ_likelihood)) / (σ_prior ^2 + σ_likelihood ^2)

σ_posterior ^2 = (σ_prior ^2 * σ_likelihood ^2) / (σ_prior ^2 + σ_likelihood ^2)

Plugging in the values:

μ_posterior = ((12^2 * 60) + (15^2 * 50)) / (15^2 + 12^2)

= (8,640 + 11,250) / 369 ≈ 53.90 days

σ_posterior ^2 = (15^2 * 12^2) / (15^2 + 12^2) = 32,400 / 369 ≈ 87.80 days^2

From which:

σ_posterior = √87.80 ≈ 9.37

Therefore, the posterior distribution is N(μ ≈ 53.90, σ ≈ 9.37)

4. Result: The updated estimate for the campaign duration is approximately 53.9 days with reduced uncertainty (standard deviation of ≈9.37 days).
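Running the update in code (a sketch; the helper name is mine):

```python
import math

def fuse(mu_prior, sigma_prior, mu_likelihood, sigma_likelihood):
    """Conjugate-normal update -> (posterior mean, posterior std dev)."""
    v_p, v_l = sigma_prior ** 2, sigma_likelihood ** 2
    mu = (v_l * mu_prior + v_p * mu_likelihood) / (v_p + v_l)
    return mu, math.sqrt(v_p * v_l / (v_p + v_l))

# Prior N(60, 15) combined with pre-launch evidence N(50, 12).
mu, sigma = fuse(60, 15, 50, 12)
print(round(mu, 2), round(sigma, 2))  # 53.9 9.37
```

Again the posterior standard deviation (≈9.37 days) is tighter than either the prior (15) or the new data (12).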

By applying Bayesian updating, the team refines their estimate and reduces uncertainty as more data becomes available, aligning it with the Cone of Uncertainty principles in project management.

Conclusion

Bayesian techniques offer a robust framework for managing uncertainty in program management. By continuously updating probabilities with new data, project managers can make more informed decisions, better assess risks, and accurately track project performance. The examples provided demonstrate the practical application of these techniques and their benefits.

Whether dealing with software development, construction, or any other type of project, Bayesian techniques can enhance your ability to navigate uncertainties and achieve more predictable outcomes.

Sources:

  1. rememo.io - Project Risk Management: Bayesian Statistics
  2. maxwideman.com - Bayesian Project Management
  3. pmi.org - Bayesian Approach Earned Value Management
  4. linkedin.com - Making product decisions with bayesian analysis
  5. pm-research.com - Bayesian Methods in Asset Management
  6. acf.hhs.gov - Exploring Bayesian Methods for Social Policy Research and ...


APPENDIX:

History of Bayesian Statistical Probability in Modern Industry

Early Development

Bayesian statistical probability has its roots in the 18th century, named after the Reverend Thomas Bayes, who formulated Bayes' Theorem. Bayes' work was posthumously published in 1763, and it provided a mathematical foundation for updating the probability of a hypothesis based on new evidence.

20th Century Revival

Bayesian methods saw limited use until the mid-20th century. The revival began in the 1950s and 1960s due to the advent of computers, which made the complex calculations required by Bayesian methods more feasible. Influential figures like Harold Jeffreys, Edwin T. Jaynes, and Leonard J. Savage further developed and promoted Bayesian statistics during this period.

Adoption in Modern Industry

Bayesian methods began to gain significant traction in various industries from the late 20th century onwards:

  1. Medicine and Healthcare: Bayesian statistics are used for diagnostic testing, clinical trials, and personalized medicine. They allow for the incorporation of prior knowledge and continuous updating of probabilities with new patient data.
  2. Finance and Economics: Bayesian inference is employed in risk assessment, portfolio management, and economic forecasting. Its ability to update predictions with real-time data makes it highly valuable in these fields.
  3. Engineering and Technology: In the 1980s and 1990s, Bayesian methods became integral to fields like signal processing, control systems, and artificial intelligence. Bayesian networks and Bayesian inference are used in machine learning algorithms and decision-making processes.
  4. Marketing and Business Analytics: Companies use Bayesian methods for customer behavior analysis, market segmentation, and sales forecasting. The approach helps in making data-driven decisions by integrating prior knowledge with new data.

Sources:

  1. Wikipedia - Thomas Bayes
  2. Wikipedia - Bayesian Inference
  3. projectmanagement.com - Change Management Process for Project
  4. projectriskcoach.com - A Brief History of Bayesian Statistics
  5. healthcare-dive.com - Bayesian Statistics in Healthcare
