Black Swans and Intentional Ignorance
Glen Alleman MSSM
Veteran, Applying Systems Engineering Principles, Processes & Practices to Increase the Probability of Program Success for Complex Systems in Aerospace & Defense, Enterprise IT, and Process and Safety Industries
The notion of a Black Swan, a low-probability event with a high negative impact, is a typical example of an outcome drawn from the "long-tailed" probability distributions found in many domains, from finance to manned space flight.
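A minimal sketch in Python of what "long-tailed" means here, using an illustrative Pareto shape parameter and impact threshold that are assumptions, not from this post: impacts that a thin-tailed (normal) model treats as practically impossible still show up at a measurable rate under a long-tailed model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Thin-tailed baseline: normally distributed "impacts"
normal_impacts = rng.normal(loc=0.0, scale=1.0, size=n)

# Long-tailed alternative: Pareto-distributed "impacts"
# (shape alpha = 1.5 is an illustrative choice, not from the article)
pareto_impacts = rng.pareto(a=1.5, size=n) + 1.0

threshold = 10.0  # an arbitrary "high negative impact" level, for illustration only

print("P(impact > 10), normal model:", np.mean(np.abs(normal_impacts) > threshold))
print("P(impact > 10), Pareto model:", np.mean(pareto_impacts > threshold))
```

Under the normal model the exceedance probability is effectively zero; under the long-tailed model it is small but very much nonzero, which is exactly the regime where "low probability, high impact" events live.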
In some domains, the Black Swan is used as an excuse for not doing your homework, when seeking out all the possible sources of disruption is expected. The notion of unknown unknowns has been presented here before: unknown unknowns are a start, and Managing Projects in the Presence of Unknowns is the most recent. Of course, that discussion went off track when there was significant pushback on the "knowability" of these unknowns.
There is no magic here, just plain old Bayesian statistics applied to the population of possible outcomes. Those outcomes are not necessarily observable at the needed time by any one individual, or, worse, by an individual inexperienced in or not trained to look for those pesky black swans. Without the math, experience, or processes, it does look like magic. But sadly, for many of our 401(k)s, the facts are becoming clear (see The Big Short and All the Devils Are Here): there were no Black Swans, in the sense of Unknown Unknowns, in domains where we could have discovered the processes if we had wanted to. Instead, it is a case of simple ignorance, likely intentional, used to transfer risk in service of a political agenda or plain greed. So let's move off the opinion puff pieces based on populist literature and back to the actual mathematical basis of the Black Swan.
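To make the "plain old Bayesian statistics" point concrete, here is a minimal sketch with hypothetical numbers (the prior, the likelihoods, and the fault-survey framing are illustrative assumptions, not from this post): a rare event stops being an "unknown unknown" the moment you go looking for evidence and update on what you find.

```python
# A minimal Bayes-rule sketch (hypothetical numbers, not from the post):
# updating the probability of a damaging offshore-fault event
# once a geological survey reports evidence of recent slip.

prior = 0.01                       # P(damaging event in the planning horizon), assumed
p_evidence_given_event = 0.90      # P(survey finds slip | event is coming), assumed
p_evidence_given_no_event = 0.20   # survey false-positive rate, assumed

# Bayes' rule: P(event | evidence)
numerator = p_evidence_given_event * prior
evidence = numerator + p_evidence_given_no_event * (1 - prior)
posterior = numerator / evidence

print(f"Prior probability of the event:      {prior:.3f}")
print(f"Posterior after the survey evidence: {posterior:.3f}")  # roughly 0.043
```

The specific numbers don't matter; what matters is that the update is only available to someone who paid to go look for the evidence in the first place.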
Here's a news article from our local paper demonstrating the "knowability" of risks and the intentional ignorance of that knowledge. More importantly, it shows that looking only in the obvious places for potential risks is the source of these Swans.
Indeed, the Black Swan definition of a low-probability, high-impact event applies to Fukushima. But everyone knows there is an island-facing slip fault not far offshore in Japan, and that fault is active. Simple geology could have predicted the outcome that happened. To conjecture, as TEPCO did, that this is not a problem is the same as Donald Rumsfeld's conjecture that the tribes of Iraq would willingly accept democracy after the removal of the dictator, or Taleb's "we just didn't understand the complexity of the financial markets," or software project managers saying "we just didn't have enough information about the complexities of the process to see that the cost and schedule would overrun by 200%."
Are you out of your frick'in mind?
When we talk about risk, the only risk that cannot be considered is one that is unknowable. We may be unable to afford the cost of seeking out all the risks. That's OK if we acknowledge we can't afford to look. But the lame excuse that it was an "unknown unknown" is just that, lame. Maybe it was an unaffordable unknown. But if the risk is knowable, then there is no Black Swan to serve as an excuse for the project going in the ditch, or worse, for the foreign policy going in the ditch.
Black Swan Resources
Here's an ever-growing collection of papers on the topic of Black Swans.
Unknowable unknowns seem extremely rare in most domains. Lazy risk management isn't. In software program assessments, I usually ask, "How much money and time is being spent on quality and risk management?" There are plenty of good answers to that question, but I rarely hear one. In my experience, spending less than 5% of time and money on risk management in software usually leads to poor results. Those poor results aren't because of unknowable unknowns, but because of laziness.