Black Swan, White Swan, they are all Swans, you just have to Look for Them
Glen Alleman MSSM
Veteran, Applying Systems Engineering Principles, Processes & Practices to Increase the Probability of Program Success for Complex Systems in Aerospace & Defense, Enterprise IT, and Process and Safety Industries
Technical and programmatic risks are amenable to mathematical models. These models are well developed, ranging from Monte Carlo Simulation to Bayesian Networks. The risk narratives around "people" systems don't seem to work well when customers are asking for the probability distribution functions for risk, the confidence of completing "on or before" a date, the confidence of coming in "on or under" a budget, or most importantly the probability of the loss of mission or crew on a flight to the space station.
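To make that concrete, here is a minimal sketch of the kind of Monte Carlo calculation a customer is asking for when they want "on or before" confidence. The three task durations and the 60-day target are hypothetical illustrations, not data from any actual program.

```python
import random

# Hypothetical three-task schedule: (optimistic, most likely, pessimistic) days.
tasks = [(10, 15, 30), (20, 25, 45), (5, 8, 20)]
TARGET_DAYS = 60     # the "on or before" date the customer asked about
TRIALS = 100_000

def one_pass():
    # Sample each task duration from a triangular distribution and sum.
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

durations = sorted(one_pass() for _ in range(TRIALS))
confidence = sum(d <= TARGET_DAYS for d in durations) / TRIALS
p80 = durations[int(0.80 * TRIALS)]

print(f"P(finish on or before {TARGET_DAYS} days) ~ {confidence:.0%}")
print(f"80% confidence completion: {p80:.1f} days")
```

The specific numbers don't matter; the point is that "on or before" confidence is a computable quantity, not a narrative.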
The notion of a Black Swan is pretty much defined in one of three ways: the disproportionate role of high-impact, hard-to-predict, rare events; the non-computability of the probability of those consequential rare events; and the psychological biases that make people individually and collectively blind to them.
Let's look at each of these some more.
Disproportionate Role of High-Impact Events
When a low-probability event - one that is, as the definition says, "hard to predict" - comes about with high impact, it is many times labeled a Black Swan. Something happened that we didn't think about, and it caused a whole bunch of trouble we never really planned for.
This is common in recent history: from the New Orleans levees collapsing, to the Gulf oil spill, to rivers overflowing, to the first ice storm at our local airport (DIA), where the snow plows were at the airport and the drivers were NOT.
There is a high impact for a low-probability event. But the probability was not ZERO. It was low, but not that low. The calculations for causal outcomes from past activities - the basis of Bayesian statistics - could have been done. This (the Bayesian approach), by the way, is what Taleb is now telling everyone to do after a bunch of flak around the simple-minded approach in his Fooled By Randomness book. This should have been done, and was likely done by many, but no one listened.
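Here is a minimal sketch of what "calculations from past activities" looks like in Bayesian terms - a Beta-Binomial update of a failure probability as operating history accumulates. The prior parameters and the observed counts are hypothetical, chosen only to show the mechanics.

```python
# Beta-Binomial update: the conjugate-prior workhorse of Bayesian
# reliability estimates. Prior Beta(a, b) encodes belief about the
# per-trial failure probability before looking at the record.
a, b = 1.0, 99.0             # hypothetical prior: ~1% expected failure rate

failures, trials = 2, 500    # hypothetical operating history

# Posterior after the observed history is Beta(a + failures, b + successes).
a_post = a + failures
b_post = b + (trials - failures)

posterior_mean = a_post / (a_post + b_post)
print(f"Posterior mean failure probability: {posterior_mean:.4f}")
# Small, but demonstrably non-ZERO - a Swan you can put a number on.
```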
Non-Computability of the Probability
This definition is a bit misleading, since non-computability means that the consequential outcome was not computable. But in fact, if there is a non-zero probability and a determinate impact, then we can compute something - at minimum the expected consequence, probability times impact.
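A minimal sketch of that "something," with hypothetical numbers: even a rare event with a determinate impact yields a computable risk exposure.

```python
# Hypothetical rare event: 3.4-in-a-million chance, $500M consequence.
probability = 3.4e-6
impact_dollars = 500_000_000

# Classic risk exposure: expected consequence = probability x impact.
exposure = probability * impact_dollars
print(f"Expected consequence: ${exposure:,.0f}")   # -> $1,700
```

Small, but it is a number you can rank against other risks and decide whether to buy down.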
Now the scientific-method part is the heart of the issue. In the post-normal science domain where environmental science lives, we have difficulty determining the impact because of the complexity of the system - in this case, the bio-system. This is where the non-computability lives.
But even here the probability of occurrence is not ZERO; it has some value. Small, but non-ZERO.
Psychological Biases That Make People Individually and Collectively Blind
Here's where the magic takes place in allowing the observer to pretend the risk - the Swan - is Black rather than White. This is where Taleb started his conversation a few years ago, after 9/11: the Fooled By Randomness approach. When in fact, applying the first two definitions in domains like nuclear power, weapons, and manned and deep spaceflight shows these events can be found and managed.
Fools Are Easily Fooled By Randomness
This is the basis of the "biases that make people individually and collectively blind." Now "make" is an interesting term; I'd say "allow." People pretty much CHOOSE to be blind. For magic to work in the presence of a magician, the audience must CHOOSE to see the magic. I can't explain how David Blaine does his "magic," but he doesn't violate the laws of physics.
This, at least from an external point of view, is where many outside the space and defense domain seem to say software projects get into trouble.
What Does Low Probability Mean?
Using the first two definitions of low probability and high impact, what does it mean in terms of actual numbers on actual projects, in an actual domain? No sleight of hand is needed here to have a discussion. Let's look at the standard deviation measures. The Z is the "sigma" in the Lean Six Sigma vernacular.
Z (the standard normal variate) is calculated from the standard deviation (sigma) and the mean (mu): Z = (x - mu) / sigma. This normalization removes the details of individual probability distributions and sets the stage for a discussion of probabilities in the absence of any specific data details. Once the normalization is done - in the Z distribution - the "sigma" discussion can take place.
In the picture below, sigma, the standard deviation, measures how many parts per million fall beyond a given distance from the mean. We should all know from our high school statistics class that one standard deviation on either side of the mean contains about 68% of all the possibilities (assuming a symmetric distribution). The Six Sigma measure allows only 3.4 defects per million opportunities. This might be the start of a "low probability" event: there are 3.4 chances in a million of it happening. Pretty low? In the project domain I work in, 3.4 chances in a million would be considered low. I suspect in the commercial software development domain it would be considered a VERY UNLIKELY event.
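For the curious, here is a minimal sketch of where those parts-per-million figures come from, using only the standard normal tail. Note that the conventional Six Sigma figure of 3.4 per million includes a 1.5-sigma drift allowance, so it is really the 4.5-sigma tail; the raw 6-sigma tail is far smaller still.

```python
import math

def upper_tail(z: float) -> float:
    """P(Z > z) for a standard normal variate."""
    return 0.5 * math.erfc(z / math.sqrt(2))

for k in range(1, 7):
    print(f"{k} sigma: {upper_tail(k) * 1e6:,.4f} per million (one-sided)")

# The Six Sigma convention assumes the process mean drifts 1.5 sigma,
# so "six sigma" quality is the 4.5-sigma tail:
print(f"6 sigma with 1.5-sigma shift: {upper_tail(4.5) * 1e6:.1f} per million")
```

Either way, the probability is finite and calculable - which is the point that follows.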
But here is the critical point of the conversation. Even with 3.4 chances in 1,000,000, the event still has a finite probability of occurring. It is inside the curve, out in the sixth standard deviation. If this event were a Swan, it would not be Black. You could see it. It's there. To call it Black would seem to say that you choose not to look.
In the end, using Black Swans is an Excuse for Not Looking for Risk
When the term "Black Swan" is used as an analogy to being fooled by randomness and surprisingly impacted by an event, here's a possible conversation.
"gee, we never thought if we removed the secondary blow-out preventer from the well cap, the thing would blow up when we over-pressured the cementing job," "Damn, that made a big mess, didn't it," must have been one of those Black Swans they talk about up there on wall street. Those software guys use when they can't determine what went wrong."
I grew up - literally - in the oil business in the Texas Panhandle, with the Pampa, Texas motto of "Pampa, the Friendly City, Where Wheat Grows and the Oil Flows," so I have some sense of how oil men (my father included) fooled themselves (in the 1950s boom days) into thinking things would always turn out better than they did.
So Here's My Take On This Topic
Black Swans being equated with Unknown Unknowns is a lame excuse for not doing your job if you're tasked with programmatic or technical risk management. If you're writing software for money and that software doesn't kill people by accident or on purpose, then maybe you can use Black Swans as your excuse for not pursuing to the ends of the earth what might go wrong with your clever product.
But if you're in the business of killing people on purpose (weapons) or by accident (nuke power, manned space flight, fly-by-wire control systems), then the only Black Swans on the planet are the UNKNOWABLE ones. They - the Black ones - are the ones that can never be discovered. They are beyond the ability of humans to discover.
All those Black Swans that the people applying the 3rd definition use are White in the domain I work in. They are far right-tailed White, out in the 9th standard deviation. But they are WHITE. They are Knowable. It may cost more than we can afford to find them. It may take more time than we have to find them. And when we find them, we may not care, because our system can tolerate the existence of the White Swan that is posing as a Black Swan.
But they are there. They are not Magic. And those who proffer that they are Black while working in the domain are guided by:
"The psychological biases allow people individually and collectively to be? blind to uncertainty and unaware of the massive role of the rare event in historical affairs."
That approach is an excuse for not doing the project management job if those consequences cannot be tolerated. That approach is for people who believe in magic.
Read a previous post to see how the "accepting Black Swans" culture was expunged from the project in Making The Impossible Possible.
Of course, your project, domain, and context may vary.