DECISION TREES AND MONTE CARLO SIMULATION FOR PROJECT MANAGEMENT
Fernando Hernandez
I build, train and consult on quantitative risk and financial/project models for decision making.
On MPUG.com, Satya Narayan Dash, a management professional, speaker, coach, and author of multiple books, writes about using decision trees for project management in the article “PMP Prep: Decision Tree Analysis in Risk Management” (https://www.mpug.com/articles/pmp-prep-decision-tree-analysis-in-risk-management/).
We build upon his very interesting article and ideas to construct additional decision-making functionality that combines decision trees with Monte Carlo simulation. Decision trees are quantitative diagrams whose nodes and branches represent the possible decision paths and chance events. Please read his article first, as a starting point for the additional decision-making tools presented here.
He builds a decision tree model around the choice of whether to build a prototype for a project or to go ahead without one, saving the prototype's cost but forgoing the additional learning and experience that a prototype phase would bring. In one of its classic uses, a decision tree model is built to decide whether it is worthwhile to purchase information upfront (the “value of perfect information”) before committing to a project. That is the case here: is it worth paying for the information a prototype provides? Probabilistically speaking, does investing upfront in a prototype reduce the chances of failure for the entire project?
I have rebuilt Satya’s model using PrecisionTree, a Palisade Corporation tool that lets you visually map out, organize, and analyze decisions with decision trees, right in Microsoft Excel.
Among other tasks, PrecisionTree calculates an EVM of $235,000 and places a TRUE on the Do Prototype branch; i.e., it recommends building a prototype, since the expected value of that decision is $235,000 (a 70% chance of a success valued at $500,000 and a 30% chance of failing with a loss of $50,000, net of the prototype’s cost). Not building a prototype, on the other hand, has an expected outcome of a $100,000 loss (a 20% chance of succeeding and obtaining $500,000 and an 80% chance of losing $250,000). PrecisionTree is prescriptive in the sense that it not only calculates expected values but also prescribes the best route. Since this is a value-maximizing tree, it recommends doing the prototype by comparing a $235,000 expected gain against a $100,000 expected loss.
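As a quick arithmetic check outside Excel (a plain Python sketch, not PrecisionTree output), the chance node on the Do Prototype branch is worth 0.7 × $500,000 + 0.3 × (−$50,000) = $335,000; the $235,000 reported above is consistent with a $100,000 prototype cost charged on that branch, which is inferred from the figures rather than quoted from the model:

# Expected monetary values of the two branches (plain Python, no Palisade tools).
# The $100,000 prototype cost is inferred from the gap between the chance-node
# value ($335,000) and the $235,000 reported for the Do Prototype branch.
p_success_proto, p_success_none = 0.70, 0.20
gain, loss_proto, loss_none = 500_000, 50_000, 250_000
prototype_cost = 100_000  # assumption implied by the article's figures

evm_proto = p_success_proto * gain - (1 - p_success_proto) * loss_proto - prototype_cost
evm_none = p_success_none * gain - (1 - p_success_none) * loss_none

print(f"Do prototype:  {evm_proto:>10,.0f}")   # 235,000
print(f"No prototype:  {evm_none:>10,.0f}")    # -100,000
print("Prescription:", "Do Prototype" if evm_proto > evm_none else "No Prototype")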
The recommendation seems obvious, but it hinges on at least two important assumptions. First, doing the prototype raises the probability of success from 20% to 70% (3.5 times higher). Second, if the prototype is done and the project still fails, the losses amount to only $50,000, whereas without a prototype the losses rise to $250,000, a five-fold increase. In other words, by doing a prototype the decision maker learns from the experience, reduces the likelihood of errors and potential mishaps, and learns about the key elements needed to eventually succeed. He or she does not eliminate uncertainty, but certainly reduces, in at least two quantitative ways, the probability and the cost of failure.
This is what a glance at this simple tree captures best.
ENHANCING A TREE BY CONVERTING IT TO A PROBABILISTIC ONE
The standard decision tree model's simplicity is both a strength and a limitation. At a glance, it shows a straightforward path to a decision. Sometimes, however, such a simplified view of the world loses touch with the complexity of reality. Defining success as a single $500,000 payoff, for example, is often an oversimplification. Without perfect foresight it is impossible to know whether you might end up with, say, a “low” success of $400,000 or a “high” success of $800,000; any value in between, with $500,000 as the most likely scenario, could be the actual outcome.
To account for these variations in outcome, we can use Monte Carlo simulation to add uncertainty to specific cells, such as the possible outcomes of success and failure.
Monte Carlo simulation is a computerized mathematical technique for quantifying risk in quantitative analysis and decision making. It furnishes the decision maker with the range of possible outcomes and the probability of each occurring for any choice of action, including the extreme possibilities.
Monte Carlo simulation (MCS) performs risk analysis by building models of possible results, substituting a range of values (a probability distribution) for any factor that has inherent uncertainty, in our case the potential success and failure outcomes of the project. It then recalculates results over and over, each time using a different set of random values drawn from the probability distributions. Depending on the number of uncertainties and the ranges specified for them, a Monte Carlo simulation may involve thousands or tens of thousands of recalculations before it is complete. The result is a distribution of possible outcome values.
To incorporate MCS into the framework of our decision tree model, we add @RISK, Palisade’s tool for running this methodology within a Microsoft Excel model such as this one. The two tools, PrecisionTree for decision tree analysis and @RISK for Monte Carlo simulation, can thus interact on the same model previously built in Excel.
In the cell that contains the success outcome, we replace the deterministic value of $500,000 with an @RISK PERT distribution that allows for variation. Let us assume a PERT distribution with a minimum of $400,000 and a maximum of $800,000, keeping the most likely value of $500,000. The cell containing this distribution looks as follows[1]:
When the MCS is eventually run on this decision tree model, @RISK will generate thousands of random values between $400,000 and $800,000 according to this distribution, allowing the tree to reach a different decision each time. Given other assumptions, it may well be that when the success outcome drops toward $400,000 in a particular scenario (an iteration, in MCS jargon), the decision tree produces a different answer: do not pursue the prototype.
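The exact @RISK cell formula is not reproduced here; as a sketch of the same idea outside Excel, a PERT distribution is simply a Beta distribution rescaled to the minimum-to-maximum range, with its shape set by the most likely value (the standard lambda = 4 form). A minimal Python sampler, with illustrative names, might look like this:

import numpy as np

def sample_pert(minimum, most_likely, maximum, size, rng):
    """Draw samples from a standard (lambda = 4) PERT distribution,
    i.e. a Beta distribution rescaled to [minimum, maximum]."""
    span = maximum - minimum
    alpha = 1 + 4 * (most_likely - minimum) / span
    beta = 1 + 4 * (maximum - most_likely) / span
    return minimum + rng.beta(alpha, beta, size) * span

rng = np.random.default_rng(1)
success_payoff = sample_pert(400_000, 500_000, 800_000, size=10_000, rng=rng)
# The PERT mean is (min + 4*most_likely + max) / 6, about $533,333 here.
print(round(success_payoff.mean(), -3))

For the $400,000 / $500,000 / $800,000 success payoff, this form gives shape parameters of 2 and 4, which reproduce the tail probabilities cited in the footnote below (about 12% below $450,000 and about 19% above $600,000).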
We have also added uncertainty to the potential loss outcomes, again using PERT distributions. If a prototype is built, potential losses range from zero to $100,000, with a most likely value of $50,000, as originally defined by Satya. In the worse case where no prototype is built, potential losses increase five-fold, so we have created another PERT distribution in which the minimum loss is also zero, but a maximum loss of $500,000 could occur, with a most likely loss of $250,000. In other words, the most likely loss is five times worse when deciding not to do a prototype. Also, for both PERTs (with or without the prototype), the maximum possible loss is double its respective most likely value. This elicitation of uncertainty can be expressed graphically with these two overlaid distribution functions, with the Do Prototype alternative shown as the solid blue curve:
In other words, we have added uncertainty to the potential gains and losses, depending on whether we decide to do the prototype. If the project succeeds, regardless of whether a prototype was done, the potential gains range from $400,000 to $800,000 with a most likely value of $500,000, as originally stated. If the project fails, we stand to lose money. Much more is lost if no prototype is done, with losses ranging from zero to $500,000; there is some protection if a prototype is done, since losses are then limited to a maximum of $100,000.
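Before looking at the results, here is a minimal Monte Carlo sketch of the whole tree in Python, again outside the Palisade tools. It assumes the $100,000 prototype cost implied earlier, treats the success payoff as a single shared draw per iteration, and recomputes the optimal branch for each of 10,000 scenarios; exact numbers will differ from the @RISK/PrecisionTree run, but the mechanics illustrate the same idea:

import numpy as np

def sample_pert(minimum, most_likely, maximum, size, rng):
    """Standard (lambda = 4) PERT: a Beta distribution rescaled to [minimum, maximum]."""
    span = maximum - minimum
    return minimum + rng.beta(1 + 4 * (most_likely - minimum) / span,
                              1 + 4 * (maximum - most_likely) / span, size) * span

rng = np.random.default_rng(7)
n = 10_000
p_success_proto, p_success_none = 0.70, 0.20
prototype_cost = 100_000  # assumption implied by the $235,000 figure above

gain       = sample_pert(400_000, 500_000, 800_000, n, rng)  # success payoff, either branch
loss_proto = sample_pert(0,  50_000, 100_000, n, rng)        # loss if the prototype was built
loss_none  = sample_pert(0, 250_000, 500_000, n, rng)        # loss if no prototype was built

# Expected value of each branch, recomputed per iteration with the sampled payoffs.
evm_proto = p_success_proto * gain - (1 - p_success_proto) * loss_proto - prototype_cost
evm_none  = p_success_none  * gain - (1 - p_success_none)  * loss_none

best = np.maximum(evm_proto, evm_none)                # value of the optimal decision
print("Share of iterations favoring Do Prototype:", (evm_proto > evm_none).mean())
print("Mean value of the optimal decision:       ", round(best.mean(), -3))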
MONTE CARLO SIMULATION RESULTS
After running the MCS with 10,000 iterations (scenarios), a solid 100% of the decision paths evaluated produced a “Do Prototype” prescription. The mean value of taking the correct decision was over $258k, as depicted in the following chart:
Among other things, this outcome conveys the following probabilistic information: all possible outcomes are positive, i.e., it is impossible to lose money “on average” if you decide to do the prototype; possible outcomes range from $157k to $431k; and two thirds of the possible outcomes fall between $200k and $300k.
Now, these very merry results hinge on an important assumption: the probability of success rises from 20% to 70% if you decide to do the prototype. In other words, whoever assigned these two probabilities has subjectively judged that success is 3.5 times more likely if you do the prototype than if you do not. From many realistic points of view this could be highly optimistic. Keeping the base success probability of 20% without a prototype, let us sensitize the probability of success with a prototype. In other words, let us run an MCS and observe what would happen at different levels of the probability of success given the decision to do the prototype. The following table gives the results of this exercise:
The Mean EVM column shows the average EVM payoff when the correct decision (prototype or not) is followed. The last column, Mean Decision to Prototype, “counts” the share of iterations in which deciding to Do Prototype was the right option.
In the Mean EVM column you can see a sign change somewhere between a 25% and a 30% probability of success given a Do Prototype decision. In other words, you do not need to raise the probability of success all the way to 70% (3.5 times higher) for the investment in the prototype to make sense. As long as you raise the probability from 20% to 30% (1.5 times), you are better off deciding to Do the Prototype. Also remember that you can make this strong a statement only under the unchanged assumption that losses may be five-fold if you decide not to do the prototype.
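Continuing the Python sketch above (same draws and the same implied $100,000 prototype cost), this one-way sensitivity run can be approximated by re-scoring the tree at several probabilities of success for the Do Prototype branch. The figures will not match the @RISK/PrecisionTree table exactly, since both the cost and the correlation between branches are assumptions here, but the two printed quantities roughly correspond to the Mean EVM and Mean Decision to Prototype columns:

# One-way sensitivity on the probability of success given a prototype,
# reusing gain, loss_proto, evm_none and prototype_cost from the sketch above.
for p in (0.20, 0.25, 0.30, 0.35, 0.40):
    evm_proto = p * gain - (1 - p) * loss_proto - prototype_cost
    mean_evm = np.maximum(evm_proto, evm_none).mean()      # "Mean EVM" column
    share_proto = (evm_proto > evm_none).mean()            # "Mean Decision to Prototype" column
    print(f"p = {p:.0%}: mean EVM = {mean_evm:>9,.0f}, prototype chosen in {share_proto:.0%} of iterations")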
The scenarios with negative Mean EVM (scenario 1, at a 20% probability, and scenario 2, at 25%) are also interesting to analyze further. The first scenario assumes you would do the prototype purely to reduce the potential losses if failure occurs. In other words, doing the prototype does not increase your probability of success at all; it only provides cover against the additional losses if the project fails (remember, that assumption has not been changed). A mean EVM of roughly $40k in losses means that, “on average,” you stand to lose money on this project, since the probability of failure is 80% either way. In this case, the obvious solution may be not to pursue the project at all, since you are better off doing nothing (a net gain/loss of zero).
For this same scenario, however, the MCS decision tree recommends doing the prototype in 82% of the cases. This is the share of iterations during the simulation in which potential losses would be better contained by deciding to Do the Prototype rather than its alternative. The percentage can be read as a measure of the strength of the prescribed decision.
This measure reaches 100% in the fourth scenario, where the probability of success given a Do Prototype decision is set at 35%. In other words, merely by raising the chances of success from 20% to 35% (1.75 times) when doing the prototype, you can be certain in every iteration that the EVM-maximizing decision is to do it.
The following chart shows the relationship between the probability of success when doing the prototype (X axis) and the mean EVM (Y axis). Given the linearity of the relationship, it is easy to calculate the intersection, or indifference point, at a probability of 25.4%.
In other words, the probability of success only needs to jump from 20% to 25.4% when doing a prototype to affirm the decision. A 70% chance of success, and even levels as low as 25.4%, prescribe doing the prototype in order to maximize EVM.
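As a rough cross-check of that figure, one can solve for the probability at which the Do Prototype branch's mean value is zero, using the PERT means and the same implied $100,000 cost (both of which are assumptions of this sketch rather than quantities stated in the model); the result lands in the same neighborhood as the simulated 25.4%:

# Break-even probability where the Do Prototype branch's mean value is zero:
# p * E[gain] - (1 - p) * E[loss_proto] - cost = 0
e_gain       = (400_000 + 4 * 500_000 + 800_000) / 6   # PERT mean, about 533,333
e_loss_proto = (0 + 4 * 50_000 + 100_000) / 6          # PERT mean, 50,000
cost         = 100_000                                  # assumed prototype cost

p_break_even = (e_loss_proto + cost) / (e_gain + e_loss_proto)
print(f"{p_break_even:.1%}")   # about 25.7%, close to the simulated 25.4%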
PrecisionTree also constructs the following Sensitivity table:
The selected and prescribed strategy will then always be to Do the Prototype, as shown in the following graph, where at every probability level doing the prototype is the prescribed solution:
In other words, if by doing the prototype the decision maker believes the probability of success jumps from 20% to anything above 25.4% (a relative increase of only 27%), then he or she is better off doing the prototype.
SENSITIZING TWO VARIABLES AT THE SAME TIME
Once again, remember that all of these elaborations rest on the assumption that losses become five times worse if the project fails and no prototype was done. What would happen if we also sensitized this variable? Someone, after all, assigned that five-fold increase in potential losses when comparing the two alternatives of doing or not doing the prototype. What would happen if the difference in potential losses were not that significant?
So we now test with PrecisionTree, i.e., using a purely deterministic analysis, variations in two variables: the probability of success given the Do Prototype option, and the ratio of the mean losses without a prototype to the mean losses with one.
The following sensitivity graph depicts the EVM outcome for all combinations of the two sensitized variables:
Here is how to interpret the illustration above. If doing a prototype does not increase the probability of success beyond its original 20%, and it reduces potential losses by less than a factor of 3.5, then the prototype is not worth doing (the point at 20% on the X axis and 3.5 on the Y axis).
On the other hand, if by doing a prototype you can expect the probability of success to increase from 20% to at least 40%, it hardly matters whether potential losses at failure change with the experience gained from the prototype (the point at 40% on the X axis and 1.0 on the Y axis).
An intermediate point is one where the probability of success with a prototype rises to somewhere between 30% and 35%. At that level, the potential losses without a prototype only need to be about double (not five times, as in the original case) for the investment in the prototype to be worthwhile (the point at 30% on the X axis and 2.0 on the Y axis).
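A minimal deterministic sketch of this two-way sensitivity, using the point values from the original tree and the same assumed $100,000 prototype cost, scores both branches over a small grid of the two variables; the cutoffs it shows will only approximate those read off PrecisionTree's graph:

# Deterministic two-way sensitivity (point values only, no simulation).
# k is the ratio of the no-prototype loss to the $50,000 with-prototype loss.
gain, loss_proto, prototype_cost, p_none = 500_000, 50_000, 100_000, 0.20
ratios = (1.0, 2.0, 2.5, 3.5, 5.0)

print("p_proto |", *(f"k={k:<4}" for k in ratios))
for p in (0.20, 0.25, 0.30, 0.35, 0.40):
    evm_proto = p * gain - (1 - p) * loss_proto - prototype_cost
    row = []
    for k in ratios:
        evm_none = p_none * gain - (1 - p_none) * (k * loss_proto)
        row.append(f"{'proto' if evm_proto >= evm_none else 'no':<6}")
    print(f"{p:>7.0%} |", *row)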
At this point it is easy to see why you do not need to be an expert at assigning probabilities to potential outcomes on decision trees in order to make quantitatively justifiable decisions. You only need to define your threshold tolerances, in this case over two variables, to be able to decide. If you are comfortable with how much a prototype would raise the likelihood of success and/or reduce potential losses through the experience gained in building it, then this powerful tool, run together with sensitivity analysis, will give you a better understanding of your decision.
Note that these sensitivity answers may differ somewhat from those in the previous MCS section, where distribution curves were added to the potential outcomes. This last two-way sensitivity analysis is performed deterministically on the model's expected values rather than on the full simulated distributions. It is, of course, possible to combine MCS with sensitivity analysis, but we will leave that task for a later article.
[1] Additional information can be read from the chart, assuming this PERT distribution correctly depicts the uncertainty in this outcome. For example, the two vertical delimiters show that there is a 12.1% chance the success outcome will be less than $450,000 and an 18.8% chance it will be larger than $600,000.