Black Swans or Dirty Ducks?
Andrew Hiles
Principal, Kingswell International Ltd., registered in the UK. Founder, BCI. Resigned as HonFBCI. Prof. Emeritus BCM, Telfort Business Institute, Shanghai University. Past Expert, IoSCM. Consultant, author.
A Retrospective
Challenge the Unchallenged
It is a duty of risk managers to challenge that which rests unchallenged. Nassim Nicholas Taleb did a great service by popularizing risk concepts in his book, The Black Swan[1]. However, it is easy to be seduced by his enthusiasm, carried away by his pace and to miss a few flaws. In the six years since its publication, The Black Swan has gone virtually unchallenged as a standard reference work. On closer examination, however, several of his Black Swans turn out to be more like dirty ducks. So it really is time his book was critically re-examined to discover what it means for risk and business continuity practitioners.
The essence of the Black Swan is that it is an unexpected event that happens or an expected event that does not. Expectations are driven by our knowledge and experience and hence can mislead us. ‘First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.’[2]
In the introduction to Part One, Umberto Eco’s Antilibrary, Taleb says: ‘Read books are far less valuable than unread ones… indeed, the more you know, the larger the rows of unread books.’ Considering his skepticism about information, confirmation and corroboration, it is interesting to note his 28 pages of bibliography – some 730 presumably read books.
Theories of Risk Perception and Learning
The theory of risk perception has long been established as including a number of factors that contribute to our perception and subsequent judgement, identified in Figure 1 below:
Figure 1: Risk Perception – Background Factors
For years, risk theorists have urged us to seek to widen our perception and advised that perception does not equal reality. We know that our perception of risk has been learned from:
- Passive learning (know something, do nothing). This is simply neglect.
- Active learning (know something, do something) is fundamental to risk management.
- Organization-specific learning is fine – but too often it lives in silos within an organization and is not shared between organizations.
Our kids do not learn from our mistakes; they have to make the same mistakes themselves. Similarly, although we should learn from other people’s mistakes, we often fail to do so. Learning from the experiences of others is called Isomorphic Learning:
No two disasters are the same, but we can draw similarities from different cases. However, all too frequently we fail to do so. The UK Bradford City Stadium Fire of May 11, 1985, in which 56 people died, should have alerted all with similar potential problems. It didn’t. London Transport ignored ‘...two fires on wooden escalators that broke out at Green Park station’ prior to the fatal fire at King’s Cross station in 1987, in which 31 people died [1988 press article, quoted in Toft, 1997].
In each case, it can be seen in retrospect that the incident was caused by a ‘precipitating event’ that ‘brings to light the events of the incubation period’. Taleb would presumably claim that this was only rationalized with hindsight or retrospective narrative. But the pattern should have been identified: any competent H&S professional should have flagged the warning signs and taken preventative action, just as the pattern of Signals Passed At Danger (SPADs) on the UK rail network should have alerted the organization to the probability of a serious incident actually occurring, as it did at Southall. Figure 2 below shows the constituents of Isomorphic Learning.
Figure 2: Isomorphic Learning
Numerous rail crashes have been caused by signals passed at danger, just a few of which follow:
- Harrow & Wealdstone, UK, 1952, killing 112 people.
- Clapham, UK, 1988 – 35 people died and 500 were injured.
- Southall, UK, 1997 – seven were killed, 139 injured.
- Glenbrook (NSW, Australia), 1999, causing seven deaths.
- Waterfall, Australia, 2003, again killing seven people[3].
- Wenzhou, China, 2011 – 40 people were killed and at least 192 were injured.
- Ontario, Canada, 2013, killing three and injuring 49.
- Western Switzerland, 2013 – one person was killed and 35 others were injured.
- Spain, 2013, killing 79.
Lessons from one event were not learned or passed on to prevent others.
The lessons of Piper Alpha oil rig in the North Sea in 1988 were not taken on board in the BP Macondo oil spill in the Gulf of Mexico in 2010. The lessons of the Amoco Cadiz oil tanker grounding and spill off the coast of France in 1978 were not written into the response to the Exxon Valdez tanker spill in 1989.
The carefully documented contingency plans, drawn up after the UK Foot and Mouth outbreak of 1967-1968, were initially ignored in the 2001-2002 outbreak until it was too late.
Taleb claims that ‘we don’t learn that we don’t learn’[4], implying that we should study more than just past events. The reality is that we rarely learn even from past events. On Saturday, July 28, 1945, William Franklin Smith, Jr., piloting a B-25 bomber on a routine personnel transport mission from Bedford Army Airfield to Newark Airport in fog, became disoriented and crashed into the north side of the Empire State Building between the 78th and 80th floors. In December 1994 Air France Flight 8969 was hijacked at Algiers by the Armed Islamic Group, who intended to blow up the plane over the Eiffel Tower in Paris. A French anti-terrorist squad foiled the attack, killing all four hijackers when the plane landed in Marseille. In the mid-1990s, the author Frederick Forsyth rejected a plot for a 9/11-type incident as being ‘unrealistic’. The incident was not inconceivable – just highly unlikely. Despite Taleb’s claim that, as a result of 9/11, ‘They [people] learned precise rules for avoiding Islamic prototerrorists and tall buildings’, the reality is that earlier lessons were not transferred to similar or parallel situations. They were not applied on February 18, 2010, when Andrew Joseph Stack III, flying his Piper Dakota, crashed into Building I of the Echelon office complex in Austin, Texas, USA, killing himself and Internal Revenue Service (IRS) manager Vernon Hunter[5]. The 1993 bomb in the WTC car park had already clearly demonstrated that the buildings were targets. Although almost 3,000 victims died in 9/11, the evacuation plans in place, despite some shortcomings, helped to save between 11,000 and 16,000 people. 9/11 was not so much a Black Swan as an osprey – a relatively rare but known bird species.
As long ago as 1957, the educational theorist Bruner[6] identified the iconic learning process, in which information is stored visually in the form of images from seeing, reading or hearing the experiences of others. This can be by word of mouth, telephone, books, newspapers, letters, emails, blogs or Twitter. While Taleb acknowledges that ‘Metaphors and stories are far more potent (alas) than ideas’, he does not credit them with providing valid learning.
Iconic learning moves from real, personal experiences to the documented experiences of other people. However, this learning may easily be ignored. The presentation of facts is inevitably selective and the standard reaction is ‘it won’t happen to us’. The role of culture in ignoring or accepting known risk should not be underplayed: ‘culture is positioned at the heart of the … problem, because of its role in shaping blindness to certain forms of hazard (that is, those which are at variance with the taken for granted)’[7]. People tend to remember images rather than reasoned and detailed reports of what went wrong and how such failures could be avoided in the future. So although iconic learning should guide us, we largely ignore it. Hence Taleb’s argument that ‘we don’t learn that we don’t learn’ is largely justified. But that doesn’t mean that we shouldn’t learn what we should learn.
So, Taleb substantively discounts these traditional learning and risk assessment techniques because they are not effectively practised. But, if they were, there would be far fewer surprise disasters. The so-called Black Swans arising from such failures to assess risk are simply dirty ducks. They could and should have been predicted. What Taleb labels as post-event rationalization is frequently an almost wilful (or perhaps fatalistic) failure to heed warning signs. People still live on the sides of volcanoes, despite Pompeii. We still build new properties on flood plains that are certain to flood again. People still live in hurricane and flood areas like the Caribbean and New Orleans. They still live, in increasing population density, in earthquake zones, despite there being 16,667 recorded earthquakes worldwide in 2012 (to November 27) and 3,836 in the United States, according to the USGS National Earthquake Center[8] (plus ‘several million’ that go unrecorded). Nevertheless, earthquake detection and early warning systems have developed to the extent that Tweet Earthquake Dispatch (TED) provides seismologists with initial alerts of earthquakes felt around the globe via Twitter in less than two minutes[9]. Japan’s Earthquake Early Warning System was deployed in 2007. Other countries and regions have limited deployment of earthquake warning systems, including Taiwan, Mexico (installed primarily to issue alerts to Mexico City), limited regions of Romania and parts of the United States.
What Impacts Most: The 99% Norm or 1% Extreme?
To say that ‘our world is dominated by the extreme, the unknown, and the very improbable’ is therefore something of an overstatement. Certainly extreme and unexpected events can have a major (if usually temporary) impact: but the accumulation of minor events probably shapes the world just as much.
Taleb asserts that, in claiming to understand history, we do not really understand events but exercise:
- The illusion of understanding
- Retrospective distortion to create false logic
- Undue credence to learned people (back to his 28 pages of bibliography!).
The phrase ‘the fog of war’ springs to mind (which Taleb addresses later in the book, though it is pertinent here). It is self-evident that contemporary wartime leaders understood the uncertainties of war. Most do not appear to have been guilty of the illusion of understanding – certainly not during the war. ‘We all make mistakes. We know we make mistakes. I don't know any military commander, who is honest, who would say he has not made a mistake. There's a wonderful phrase: 'the fog of war.' What "the fog of war" means is: war is so complex it's beyond the ability of the human mind to comprehend all the variables. Our judgment, our understanding, are not adequate. And we kill people unnecessarily[10]’. Later, in Chapter Nine, Taleb implies that rationalization applies to past events rather than being used to tackle current events yet he acknowledges that military leaders ‘…thought out of the box’ and that ‘…only military people deal with randomness with genuine, introspective intellectual honesty….’
‘A plan is perfect until the battle starts’, said Marshal Pétain[11] – a message echoed by military leaders in virtually every conflict since. And yes, logic created in hindsight is a wonderful thing. Even so, there are lessons learnt in each conflict that have some bearing on performance in the next. But equally there are lessons unlearned. The British lost in Afghanistan. The Russians lost in Afghanistan. The Americans have lost in Afghanistan.
“History does not crawl, it jumps[12]” claims Taleb. Up to a point, and there have been defining moments followed by seismic shifts: the French Revolution; the American War of Independence; Hiroshima. However, history both crawls and jumps. Much change is evolutionary rather than revolutionary. There was no flash of lightning marking the move from a hunter-gatherer society to farming; no clap of thunder that sounded the change from a rural society to a city-based society. The spread of Christianity or Islam took centuries. Even the Industrial Revolution took from 1760 to around 1840, spreading from Britain through Europe to the USA and arguably it is still ongoing in China, India and other developing countries. The so-called Hundred Years' War between England and France lasted from 1337 to 1453. You could argue that the invention of the crossbow, the longbow, gunpowder, the musket, the machine gun, the submarine or aeroplane changed the nature of warfare, but they didn’t change the nature or folly of war. Neither did the Great War of 1914-18 (although it certainly changed the nature of society). Neither did 9/11: terrorist acts had long before been used in this way – in Africa, Malaysia and Ireland, to name just a few (although 9/11 certainly redefined ‘enemy’ and anti-terrorist tactics). And military inventions have not changed the fundamental fact that defence strategy and technology will always be overtaken by attack strategy and technology: the Maginot Line of the 1930s is just one example of defensive positions being outmanoeuvred or becoming irrelevant: witness the numerous castles, forts and chateaux around the world – most of those that are not ruined are now tourist attractions or hotels.
Farming began around 4,000 BC. Many field layouts have not changed for 1,000 years. Technology has changed farming, but it remains a process of growing and harvesting food (vegetable or animal). Banking is documented from around 2000 BC when merchants in Assyria and Babylonia made grain loans to farmers: modern banks did not just happen at the wave of a wand – they evolved. Equally, obsolescence rarely happens overnight. We currently use chalk, charcoal, pencils, pens as well as PCs, laptops and tablets with word processing, voice transcription and handwriting recognition. The Russian security services are said to be re-introducing the typewriter to avoid breaches of ICT security. So maybe history simply staggers and lurches along. But for long periods of time it staggers or lurches (more or less) predictably.
Taleb cites his experience as a trader and refers to financial institutions using the Bell Curve as a risk model as if it were their risk model’s sole component, and later identifies flaws in the Value at Risk (VaR) method. We have worked with a number of the top 10 global financial institutions and dozens of smaller banks, trading institutions and insurance companies. Even in 2007, none of these used such simplistic single methods to calculate risk[13]. Maybe it was so in Taleb’s time in financial markets – but not in 2007. And certainly not now.
In Chapter Two Taleb introduces the fictional Yevgenia Nikolayevna Krasnova, who rejects the distinction between fiction and nonfiction in her book A Story of Recursion. The subsequent success of a previously rejected book is claimed to be a Black Swan event. She may be fictional, but Giotto, Van Gogh and Manet were not. They, too, rejected the difference between traditional pictorial accuracy (fiction) and a novel figurative or impressionistic portrayal. Bach, Shostakovich, Glenn Miller, Oscar Peterson, Louis Armstrong and the Beatles did the same for music. As an example of a Black Swan event, Yevgenia is weak. It is entirely predictable that traditionalists reject novelty – until it becomes fashionable to accept it.
A Question of Scale
So we turn to scalability. There is more than one concept of scalability[14]. Taleb refers to scalability in terms of the replication of a personal service (a singer selling CDs of music rather than having their income limited by the number of performances they can physically give), suggesting that financial rewards from any service performed by an individual are otherwise limited by the number of hours available to perform it. But this misses the important point of the rate for the job.
Sometimes the price can be too cheap. Ezekiel Gilbert, 30, a Texas john, faced life imprisonment for killing Lenora Frago, 23, a Craigslist escort, after she took $150 of his cash but refused to have sex on Christmas Eve 2009. He was acquitted of her murder[15]. However, society prostitutes were commanding $40,000 a night at the Cannes Film Festival, according to reports[16]. Even that may be cheap: 20-year-old Brazilian Catarina Migliorini sold her virginity by auction in 2012 for $780,000 (most of the proceeds were to go to build homes for impoverished families in her home state of Santa Catarina[17]). In 2005, 18-year-old Graciela Yataco, a model from Peru, was responsible for her mother's medical bills and also had to support her younger brother, so she auctioned her virginity for $1,300,000[18]. Admittedly these two altruistic ladies can only do this once, but then scarcity is another escalator to scalability of price.
Whereas qualified Ontario teachers earned an ‘excessive’ $78 per hour in 2012[19], failed Prime Ministers or Presidents and Finance Ministers who bankrupted their countries regularly earn up to $250,000 an hour for speaking engagements, plus substantial amounts from consulting and the like. This enabled Tony Blair, the ex-UK Prime Minister, to amass an estimated fortune of over £45 million[20] ($67 million) and earn over £80 million[21] ($120 million) in the five years after he left an office paying £142,000 (then ~$280,000) a year in 2007. Scalable indeed! Football players, racing drivers and movie stars all have scalable earnings depending on their popularity and success – even excluding royalties.
And here Taleb rather assumes that scalability is really about money: other attributes like fame, reputation, happiness, charitable deeds etc. are also scalable. So there are more dimensions to scalability than Taleb acknowledges at this point, although later he states ‘our highest currency is respect’ and again in Chapter Seven, he writes of ‘a currency other than material success: hope.’
Lies, Damned Lies and Statistics
In Chapter Four, we are introduced to Mediocristan, a statistical world in which ‘When your sample is large, no single instance will significantly change the aggregate or the total.’ In Mediocristan, Taleb acknowledges, history crawls. Its neighbour is Extremistan. ‘In Extremistan, inequalities are such that one single observation can disproportionately impact the aggregate, or the total.’ In exploring these two countries, Taleb illustrates why ‘There are lies, damned lies and statistics[22].’ My wife was a Fellow of the Institute of Statisticians and says that, when asked to undertake a statistical project, her first question was always: ‘What result do you want?’ Any risk professional should already be aware of statistical bias, issues with the statistical base and quirks of analysis and presentation, and treat risk statistics with appropriate caution and cynicism.
We are also urged to learn from the turkey, which, having enjoyed ‘1,000’ days of feeding and being looked after, could be justified in extrapolating that this was the normal state of affairs: however, it was killed the next day for Thanksgiving. It might make more sense to learn from the turkey farmer or the butcher. That is, we need some external perspective to provide context to a single datum[23]. Had we also looked at the actuarial lifespan of a turkey we would not have been deceived. In North America the usual natural lifespan for a turkey is 10 years (~3,652 days, allowing for a couple of leap years)[24]. On Taleb’s turkey farm, even a turkey must have noticed some of its room-mates dying prematurely, which suggests that it, too, may one day die. There is actually a 7% to 10% pre-slaughter mortality rate on turkey farms[25]. Taleb’s 1,000-day lifespan for farmed turkeys is wide of the mark. After incubation, North American turkey farms (which produce 60% of the world’s turkeys) typically keep some 46 million turkeys a year for 10 to 18 weeks – 70 to 126 days[26] – before slaughter. The maximum recorded lifespan for a turkey in captivity is twelve years and four months[27] – but this may be a green turkey (hardly a black swan?). Regardless of this, had the turkey extrapolated from the first 10 days of feeding that it would have a further 10 days of feeding, and repeated this between six and around ten times, it would probably have been right. This point may seem a bit labored, but it goes to the root of Taleb’s dependency on assertion rather than precision. Even if we accept it at face value, Taleb’s simplistic example just shows that the longer-term the extrapolation (especially when based on a single datum), the less likely it is to predict an actual outcome correctly.
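The extrapolation point can be made concrete with a minimal sketch in Python. The turkey’s own feeding history supports short extrapolations reasonably well, but an external datum – the 70 to 126 day slaughter window cited above – changes the long-range forecast entirely. Everything other than that window is an assumption made purely for illustration:

```python
# Illustrative sketch only. The slaughter window reflects the 70-126 day range
# cited in the text; all other figures are assumptions made for the example.

SLAUGHTER_WINDOW = (70, 126)  # typical days on a commercial farm, per the text

def naive_forecast(days_observed: int, horizon: int) -> str:
    # The turkey's own extrapolation: every observed day was a feeding day,
    # so every future day is predicted to be a feeding day too.
    return f"Day {days_observed}: predicts {horizon} more good days."

def informed_forecast(days_observed: int) -> str:
    # A forecast that also uses the external (actuarial) datum.
    lo, hi = SLAUGHTER_WINDOW
    if days_observed >= lo:
        return f"Day {days_observed}: already inside the slaughter window; at most ~{hi - days_observed} days left."
    return f"Day {days_observed}: window opens in {lo - days_observed} days; at most ~{hi - days_observed} days left."

for day in (10, 60, 100):
    print(naive_forecast(day, horizon=day))  # short extrapolations look fine...
    print(informed_forecast(day))            # ...until the external datum is considered
```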
So, yes, unjustified extrapolation is a ‘sucker’s problem’. Almost all investment literature warns you that past performance is no guarantee of future results, so this phenomenon is hardly unknown. That doesn’t stop people gambling, but the one key fact about gambling is that the bank always wins – eventually. Even gamblers are aware of this, and what they are gambling on is not the certainty of the bank eventually winning but the timing of it. The fragility of Fannie Mae and Freddie Mac was public knowledge at least three years before the 2007 financial crisis[28]. Questions about Bernard Madoff’s $50 billion Ponzi fraud had been raised as early as 1999[29], yet his firm was one of the top market makers on Wall Street in 2008. None of this stopped speculators trying to make money before the bubble burst. On June 29, 2009, Madoff was sentenced to 150 years in prison.
Whatever the reason – greed, stupidity, arrogance – investors in these funds wilfully ignored the warning signs. It is rather like gamblers backing a 1.8-million-to-one[30] chance of winning the lottery while ignoring a hundred-to-one chance of a bad risk happening. When it comes to luck, people are bipolar: ‘Somebody has to win – it could be me’ for good luck, and ‘It won’t happen to me’ for bad luck. Self-delusion. So perhaps this is the real lesson: risk-takers may be aware of the risks but are convinced they will be the exception – the ones who get away with it, make a killing (or, if they are using sweatshop outsourcers, make huge savings) and hope to get out before the financial bubble bursts or the factory in Dhaka, Bangladesh, collapses and kills 1,100 people.
This dimension of timing is entirely missing from Taleb’s book. We know (or should know) that volcanoes will blow; earthquakes will happen, floods will occur, meteorites will strike – but, we hope, not on our watch. Similarly, the dimension of position is underplayed. We know (or should know) that volcanoes will blow; earthquakes will happen, floods will occur, meteorites will strike – but, we hope, not on our patch. Whether from religious, cultural or personal fatalism; whether by considered calculation; whether by ignorance (wilful or otherwise); or whether by default, we simply accept the risk. Not in our time. Not in our place.
Evidence: False or True?
Chapter Five looks at confirmation and rightly dismisses many false techniques of confirmation which are simply bias in our selection and interpretation of information. Taleb states that ‘There is no such animal as corroborative evidence.’ This seems a bit extreme. If you see a car disappear under a bridge and then an identical car emerges from the other side, it suggests the car passed under the bridge. If your watch shows 0100 and a friend says ‘it’s getting late’ maybe it is getting late. If the sun appears directly overhead it suggests it is around mid-day. Of course, all these could be illusions, but the probability is that they are not. Taleb himself goes for corroboration in a big way, citing a multiplicity of thinkers as ‘proof’ of his hypothesis.
The chapter concludes that Black Swan events are more likely, more frequent and have greatly increasing impact because of the complexity and inter-connectivity of our societies. However, this partly ignores the place in which Black Swan events occur and the preparations made to mitigate damage. The meteorite that struck the ice of a lake near Chelyabinsk, Russia, in February 2013 caused significant damage – but not as much as if it had hit New York, because of the relative sparseness of the population around Chelyabinsk[31]. Hurricane Sandy caused damage of over $71 billion around New York – but loss of life was limited to about 100 because of effective contingency and evacuation plans. According to Earth Sky News[32], ‘earthquakes 8.0 magnitude and above have struck at a record rate since 2004. But the increased rate was not statistically different from what you’d expect from random chance.’ In any case, it is arguable that such events are not Black Swans: they have been happening for millions of years and have been documented for hundreds. The fact that governments and companies may not be ready to deal with them is another issue. I arrived in Pakistan the day after the 7.6 magnitude 2005 earthquake. The media sought to get me to endorse their criticism of the government, which had too few helicopters available to deal with the crisis (it was claimed an additional 200 helicopters were needed). They were disappointed when I pointed out that governments had a choice: they could invest in the equivalent of 200 helicopters, crewed, maintained, updated and renewed for a century, against the possibility of a once-in-a-hundred-year event – or spend that money on hospitals, education and infrastructure. It is a default or calculated risk, accepted on the basis of timing and cost.
Rationalization, Perception and Linearity
Chapter Six, The Narrative Fallacy, expounds the theory that narrative is a simplification of reality that causes us to remember things that are pertinent to our opinion and ignore those that are not. In doing so, we underestimate the randomness of events and selectively seek to find simplistic causes for events. Intuitive thinking may lead us to false conclusions and we may delude ourselves into believing that such conclusions are actually the result of logical or cogitative thinking. This is well known: we are more likely to overestimate the probability of events happening if we have personally experienced them. That is why our perception of risk needs calibrating by actuarial statistics and triangulation.
In the next chapter, Taleb sees us as driven by hope and accepting linearity, despite evidence that events and progression are frequently non-linear. We prefer to see symmetry rather than asymmetry which Taleb claims is more normal. This argument is somewhat undermined later in Chapter 16 when he refers to the replication of patterns in fractals: maybe repetition and linearity are the norm.
Chapter Eight explores the problem of silent evidence – that is, seeing and believing obvious evidence and dismissing or ignoring the possibilities of unseen or undiscovered evidence. Taleb urges us to ‘Consider the number of actors who have never passed an audition but would have done very well had they had that lucky break in life.’ Yes, but many of those actors would never have made the grade and have probably found their true limit of capability in pumping gas. Equally, some of the actors who passed an audition exceeded their capability and should probably have joined the un-auditioned on the gas station forecourt. Life isn’t fair – yes, life is random and some people luck out while other equally capable (or incapable) people do not. Life is a bitch, then you die – so no surprises here. On the other hand, many successes come from hard work, skill and training: as Samuel Goldwyn, the film producer, said: ‘The harder I work the luckier I get.’ Success can be linear or non-linear.
In Chapter Nine Taleb’s bias towards the non-linear becomes a little excessive. ‘In real life you do not know the odds; you need to discover them, and the sources of uncertainty are not defined.’ But in many cases we do know the actuarial odds – or, if you prefer, they have already been discovered. Yes, they may not apply to us as individuals, but we have a broad sense of probability. While Taleb virtually rejects the controlled odds of casinos as reflecting real risk, surely controlling odds is key to effective management of many businesses. It is a fundamental of risk management. If it were otherwise, every insurance company, casino and bookmaker would have gone bankrupt by now. Indeed Taleb acknowledges that casinos ‘have nothing to do with uncertainty.’ And yet casinos, as he illustrates, can suffer potential major off-expectation hits (Black Swans). It is surely only the predictability of normal trading odds that allowed the casino he cites to generate the assets to withstand the hit from Black Swans. Taleb too readily downplays prediction and forecasting, especially short-range prediction and forecasting. It is not too difficult to predict tomorrow’s weather with a reasonable degree of accuracy – but predicting the weather in (to take an arbitrary number) 1,000 days’ time is likely to end in failure. This is no magical insight: the more variables there are to consider and the longer the timeframe, the less likely a prediction is to be correct.
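The degradation of forecasts with horizon can be shown with a minimal simulation. The ±2% daily error below is an invented figure, chosen only to illustrate how small errors compound over the projection period; it does not model any real forecast:

```python
# Illustrative sketch: why forecast error grows with the projection period.
# Each day adds a small, independent random error; the +/-2% daily figure is
# an assumption chosen only to demonstrate compounding, not a real model.
import random

def average_drift(horizon_days: int, daily_error: float = 0.02, trials: int = 10_000) -> float:
    total = 0.0
    for _ in range(trials):
        drift = sum(random.uniform(-daily_error, daily_error) for _ in range(horizon_days))
        total += abs(drift)
    return total / trials

for days in (1, 10, 100, 1000):
    print(f"{days:>4} days ahead: average drift ~{average_drift(days):.3f}")
# Even in this benign, purely random case the drift grows roughly with the
# square root of the horizon; add interacting variables and it grows faster.
```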
Can We Predict?
In Part Two, Taleb continues the theme of unpredictability under the title ‘We Just Can’t Predict’. In Chapter Ten, ‘The Scandal of Prediction’, Taleb claims: ‘Our intuitions are sub-Mediocristani. But we do not live in Mediocristan. The numbers we are likely to estimate belong largely to Extremistan.’ This is blatantly not correct: we spend a large part of our lives in Mediocristan, where the laws of averages and percentages apply most of the time. However, we spend some of our time in Extremistan, where random events and surprises are the new normal. We may also spend some time in between the two, where random events surprise us but their impact is not unacceptably severe, or where plans we made for predictable events can mitigate such random events. Even Taleb’s world cannot be a flat earth comprising just two countries. As professional risk managers and business continuity professionals we should divide our presence between all countries, like good tax exiles, planning to avoid the full impact of any risks.
Taleb states that ‘…events, it turns out, are almost always outlandish.’ Well, it depends on how one defines ‘events’, but the majority of events, surely from our own experience, are trivial, hardly worth noticing and therefore hardly noticed and rarely recorded. It is only the outlandish that receive attention and hit headlines. Taleb seems to contradict this later, in Chapter 11, when he states ‘Small differences in where this tiny body [a comet] is located will eventually dictate the future of behemoth planets.’ This may or may not be true, and the odds of an impact with a behemoth planet are, well, astronomically small. It is the positiveness of some of these assertions that jars somewhat with the overall thrust of uncertainty, unpredictability and randomness. There is ambivalence between accepting, in some places, the impact of small things creating larger consequences and the value of linear extrapolation (albeit with reservations), and the overall message that it is the big, random and unexpected events that make the difference and change the world. No contradiction is seen later in saying ‘you should focus on these very small details’ although ‘examining all of them lies outside our reach.’ Maybe spending more time focusing on the small details would enable us to discover black cygnets before they become swans.
Later, Taleb suggests ‘We could plan while bearing in mind such limitations [of known models].’ Surely that is why risk managers and business continuity professionals exist? Taleb cautions about ‘failing to take into account degradation as the projected period lengthens’ – an argument covered earlier in this paper concerning turkeys and weather forecasting, and hardly justification for the title ‘We Just Can’t Predict’. Again, it may come down to semantics, but if we know the time is 12.00 GMT it is fairly safe to predict that in 60 minutes it will be 13.00 GMT.
Chapter Eleven claims that ‘Prediction requires knowing about technologies that will be discovered in the future.’ Predicting a meteorite hit or a volcanic eruption does not necessarily require knowledge of future technologies - maybe present technology is enough (although perhaps future technology could improve the accuracy of predictions). Maybe long term prediction requires us to imagine non-existent technology, but sound short-term predictions may be linear – from Mediocristan. The Romans didn’t predict the automobile, but drivers in Europe still drive on Roman roads. Even if the underlying assumptions are incorrect (say, that we have enough fuel to drive from New York to Washington DC) they may still be valid for the first 30 miles (~48 kilometers) and, if we are prudent, check our fuel gauge, and fill up with gas before the tank runs dry, we may still get to Washington DC.
There is no apparent irony intended in the statements that ‘One greatly underestimated thinker is G.L.S. Shackle, now almost completely obscure, who introduced the notion of “unknowledge,” that is, the unread books in Umberto Eco’s library. It is unusual to see Shackle’s work mentioned at all, and I had to buy his books from secondhand dealers in London.’ So corroboration is OK now?
Chapter 12 expands on the vulnerability of human knowledge. We tend to project the (unknown) future from the (known) past. Taleb defines ‘randomness’ as ‘incomplete information’ – a useful interpretation, but one suggesting that, given more information, the random might become predictable.
Taleb’s advice in Chapter 13 can be summarized as:
- Understand that Black Swans can be positive as well as negative and take advantage of the positive impacts. Some businesses thrive on positive Black Swans. Though true, this is hardly new. It is a cliché that the Chinese character for ‘crisis[33]’ is made up from the characters for ‘danger’ and ‘opportunity’ – a concept first referenced in 1938[34].
Figure 3: Crisis
- Do not be too blinkered in outlook and don’t try to predict Black Swans: ‘infinite vigilance is just not possible.’ But just because we can’t predict all black swans, why should we not try to predict some? We cannot prevent burglars breaking into our house using a Caterpillar excavator – but we still close the windows and lock the doors when we go out.
- ‘Seize any opportunity, or anything that looks like an opportunity.’ Yes, but do a risk analysis first!
- ‘Beware of precise plans by governments.’ Their predictions are likely to be hopelessly inaccurate, Taleb says[35]. Yes, probably the last correct government prediction was by the Pharaoh of Egypt in 1715 BC, who dreamt of seven years of plenty followed by seven lean years[36] – a surprisingly recurrent phenomenon, although few governments plan for the lean years as the Pharaoh did (maybe they expect to be out of office by then).
- ‘Do not waste your time trying to fight forecasters…’ – they can’t be told. No, but they can (eventually) be discredited by the accumulation of facts and the divergence of their forecasts from actuality – and you can simply ignore their forecasts. Global warming has turned into global cooling – or ‘climate change’, to hedge the bet.
Mediocristan and Extremistan: A World of Two Countries?
Part 3 contains the more technical sections, with the suggestion that Chapters 15, 17 and the second half of Chapter 16 ‘can be skipped without any loss to the thoughtful reader’. Taleb expands on the themes that:
- ‘The world is moving deeper into Extremistan, that it is less and less governed by Mediocristan…’
- Statistically, the Gaussian bell curve of probability is ‘a contagious and severe delusion’, since it is based on situations where a few extreme cases do not significantly affect the outcome.
- Mandelbrotian randomness is a better guide to statistical probability since it better reflects cases where there are more extreme patterns of distribution and a few extreme cases can substantially change the outcome.
Is the world getting more extreme? It’s now some 68 years since the last global war ended. The financial disasters of 2007-2009 were probably no worse than the Tulip Bulb Mania of 1637;[37] the South Sea Bubble of 1720;[38] or the Wall Street crashes of 1878 and 1929[39] and may have been handled better. Economic losses from natural disasters for the six months to June 30, 2013 were $85bn, about 15% lower than the average from the previous 10 years[40]. Where is Taleb’s evidence for his assertion?
Chapter 14 describes the journey from Mediocristan to Extremistan and ascribes much success to luck. He cites, as an example, Microsoft succeeding over superior Apple products because of luck. Luck may have played a part, but it partly depends on how you define success, and probably a larger part was played by market differentiation. Apple appealed to the innovators, the connoisseurs, the designers, the elite. Apple products are higher priced partly to preserve cachet. There are simply fewer elitists than members of Microsoft’s common herd. The herd offers a bigger marketplace (but not necessarily a higher market value or profit). And aren’t both Microsoft and Apple successful companies? Didn’t Microsoft buy 150,000 shares of non-voting Apple stock in 1997 for $150 million? Pity they sold the holding (converted to 18.2 million common shares) in 2003 – it would have been worth $7.979 billion as at June 7, 2013. Not bad for an ‘unlucky’ loser.
The nearest Taleb comes to predicting a Black Swan is when he discusses the dependence of even a highly distributed infrastructure or network on a few, highly utilized, critical nodes.
Lies and Statistics – Again
The title of Chapter 15 summarizes its content: ‘The Bell Curve, That Great Intellectual Fraud.’ Its argument is expressed in one example provided: ‘If I told you that two authors sold a total of a million copies of their books, the most likely combination is 993,000 copies for one and 7,000 for the other[41].’ The bell curve would suggest 500,000 each. Although accepting that the bell curve is justified in some situations, like genetics and heredity, the dogma is that extremism rules and ‘Reality is not Mediocristan, so we should learn to live with it.’ Yet Taleb has already accepted the validity of the bell curve in areas like genetics and heredity, which are in Mediocristan. Really, all this says is that we should be aware of applying inappropriate statistical methods to situations to which they do not apply.
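The contrast in this example can be sketched in a few lines of Python. The distributions and parameters below are assumptions chosen only to show the qualitative difference between a thin-tailed and a heavy-tailed world; they are not Taleb’s own model:

```python
# Split 1,000,000 copies between two authors: once with each author's 'pull'
# drawn from a normal (bell curve) distribution, once from a heavy-tailed
# Pareto distribution, then scaled to the fixed total. Parameters are
# illustrative assumptions only.
import random

def split(total: int, draw) -> tuple[int, int]:
    a, b = max(draw(), 1.0), max(draw(), 1.0)
    share = a / (a + b)
    return round(total * share), round(total * (1 - share))

total = 1_000_000
bell_curve = split(total, lambda: random.gauss(500_000, 50_000))
heavy_tail = split(total, lambda: random.paretovariate(1.1))

print("Bell-curve world :", bell_curve)   # typically close to a 500,000 / 500,000 split
print("Heavy-tailed world:", heavy_tail)  # often extremely lopsided, e.g. ~990,000 / ~10,000
```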
Chapter 16 expands on fractals – the repetition of geometric patterns at different scales, revealing smaller and smaller versions of themselves. Taleb suggests that studying fractal randomness may be a way to identify some ‘gray swans’, illustrating it by reference to the 1987 stock market crash. Past incidents may alert us to the potential of a similar future recurrence, if not the precise occurrence. This reverses the post-event rationalization line taken in Chapter 1: ‘History is opaque. You see the script that comes out, not the script that produces events, the generator of history.’ But whereas the argument is used in Chapter 1 to cast doubt on our forecasting capability (i.e. to identify Black Swans), it is used in Chapter 16 to support it (by identifying gray swans and so reducing Black Swans – which somewhat contradicts his advice in Chapter 13 not to try to predict Black Swans).
Chapter 17 is an attack on ‘the application of phony mathematics to social science.’ Again, the point is that extreme events may deliver extreme results but that statistics do not always reveal these impacts, swamping the extremes within an average. Chapter 18 continues the theme, exposing the ludic fallacy of ‘basing studies of chance on the narrow world of games and dice’ – effectively another attack on misapplication of the bell curve. The chapter concludes with advice to be ‘noncommoditized’ in thinking so as to ‘convert knowledge into action and figure out what knowledge is worth.’
Taleb concludes: ‘In Black Swan terms, …you are exposed to the improbable only if you let it control you. You always control what you do; so make this your end.’
Summary & Conclusion
So, how can we best summarize Taleb’s thesis and this modest critique of it from a risk management and business continuity perspective? We need to:
- Acknowledge that virtually all natural disasters and many other disasters are (more or less) predictable except for the factors of timing and position – and even these factors may be predicted to some extent, however small.
- Understand that some Black Swans are simply dirty ducks, ospreys or gray swans – they could and should have been foreseen if risk managers had identified, examined and analysed existing evidence. Time and again, incident and accident reports identify warning signs that were clearly flagged (yes, they do this in retrospect, but the signposts to disaster were there, marking the route, before the disaster took place).
- Beware of arrogance, greed, and a sense of omniscience that will lead people who should know better to ignore the most blatant signs of danger. Indeed, these characteristics amongst leaders are some of the signposts.
- Consider human nature – inconsistency is deeply embedded: we believe that good things will happen, bad things won’t, no matter what the statistics say.
- Understand the weight of timing and position as part of the decision-making or default acceptance of risk: we know it will happen sometime, someplace but not yet, not here. The argument (stated or not) is: ‘Why invest now in prevention or mitigation when we can spend the money on more urgent needs?’ Or, if we are gambling on positive risk: ‘Someone has to win’; ‘it will all work out OK in the end’.
- Address (more or less) predictable risks but be aware of the shortcomings in our prediction methods, techniques and statistics. Use appropriate statistical techniques to avoid the abnormal being hidden by averages. Triangulate statistics if practicable. Cross-check with techniques like Stochastic Processes, Boolean Simulation, Bayes Theorem, Random Finite Set Analysis, Decision Tree[42]/ Fault Tree Analysis and Similarity Judgments.[43] The bell curve is not the only weapon in the armoury.
- In assessing probability, think Murphy’s Law. Risk professionals in the aerospace and nuclear industries have been conducting probabilistic risk analysis at least since the 1960s – with limited accuracy. Cooke (1991) reports that NASA had predicted the probability of shuttle failure at one in every 100,000 flights. Colglazier and Weatherwax (1983) had predicted failure at one in every 35 flights. The Challenger Space Shuttle failed in 1986 after just 25 flights[44] (see the short calculation after this list).
- Constantly be alert to new or changed areas of risk, transferring risk-related information from one context to another, from one industry to another.
- Live in both Mediocristan and Extremistan – and be aware that there are countries in between and around them, visiting each frequently. Even the two hemispheres of the world contain a significantly different range of conditions.
- Define and monitor risk triggers.
- If you have a sturdy enough umbrella, it will protect you at least to some extent from rain, snow, hail and from poop from above – whether dropped by seagulls, ospreys, dirty ducks or black swans.
- Understand that the same umbrella will not necessarily protect you from flood, fire, meteorites, bullets, 787s or A380s falling from the sky.
- Make BC Plans flexible and robust. Focus on recovering from the results of the disaster, not on its cause.
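As a footnote to the shuttle example in the list above, a short, hedged calculation (assuming independent flights, which is itself a simplification) shows how the two cited per-flight estimates compare with what actually happened:

```python
# Chance of at least one failure within the first 25 flights under each
# cited per-flight estimate. Assumes flights are independent.

def p_failure_within(flights: int, per_flight: float) -> float:
    return 1 - (1 - per_flight) ** flights

estimates = {
    "NASA estimate (1 in 100,000)": 1 / 100_000,
    "Colglazier & Weatherwax (1 in 35)": 1 / 35,
}
for label, rate in estimates.items():
    print(f"{label}: {p_failure_within(25, rate):.2%} chance of a failure within 25 flights")
# Roughly 0.02% versus 52%: a loss within 25 flights is wildly improbable under
# the first estimate and close to even odds under the second.
```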
A quotation from George Bernard Shaw[45] summarizes the weakness of the Black Swan theory: ‘If history repeats itself, and the unexpected always happens, how incapable must Man be of learning from experience’. Many incidents described as Black Swans are simply failures to learn from experience: failures to see or heed the warning signs.
Taleb has provided great entertainment, changed opinions and perceptions, provoked responses and performed a service in raising the profile of risk. However, we need to challenge the vehemence, inconsistency and, in some places, the logic and rigor of his arguments and examples – and yes, think outside the bell curve, too.
© Andrew Hiles 2013.
Andrew Hiles (Hon) FBCI, EIoSCM, is Executive Director of Kingswell International, global consultants and trainers in ERM, CM and BC. He is the author of Business Continuity Management: Best Global Practices and Enterprise Risk Assessment & Business Impact Analysis – Best Global Practices, both published by Rothstein Associates Inc., and of Understanding Risk Management, published by the Institute of Chartered Accountants in England and Wales; editor of and main contributor to The Definitive Handbook of Business Continuity Management, published by Wiley; and editor of and contributor to Reputation Management – Building and Protecting Your Company’s Profile in a Digital World, Bloomsbury. He wrote the section The Anatomy of a Crisis in Contingency Planning and Crisis Management: Assessing and Mitigating Potential Threats to your Business, Bloomsbury.
[1] The Black Swan, The Impact of the Highly Improbable, Random House, 2007
[2] Nassim Nicholas Taleb, The Black Swan, Prologue.
[3] Safety, Culture and Risk: The Organisational Causes of Disasters, Andrew Hopkins, ISBN: 1-921022-25-6; 2005; CCH Australia. See also New thinking on disasters; the link between safety culture and risk-taking https://www.em.gov.au/Documents/New_thinking_on_disaster.pdf
[4] The Black Swan – The Impact of the Highly Improbable, Nassim Nicholas Taleb, 2007, Prologue.
[5] https://en.wikipedia.org/wiki/2010_Austin_suicide_attack
[6] Bruner, J. S. (1957). Going beyond the information given. New York: Norton.
[7] Man-Made Disasters, Barry A Turner and Nick F Pidgeon, Butterworth-Heinemann Limited. Originally published in 1978 with the working sub-title 'The Failure of Foresight', this was the first book to suggest the possibility of systematically looking at the causes of a wide range of disasters. It still provides a theoretical basis for studying the origins of man-made disasters.
[8] https://wiki.answers.com/Q/How_many_earthquakes_in_2012#ixzz2Z39xINMa
[9] https://reliefweb.int/report/world/transforming-earthquake-detection-and-science-through-citizen-seismology
[10] Robert McNamara in the movie, The Fog of War.
[11] Maréchal Henri-Philippe Pétain was a hero of the Great War (World War I, 1914-1918) for his defence of Verdun, but capitulated to the Germans in June 1940 and became head of the Vichy government of France.
[12] The Black Swan – The Impact of the Highly Improbable, Nassim Nicholas Taleb, 2007 Chapter 1.
[13] While each institution jealously guards its risk assessment methodologies, an idea of their sophistication can be gained from the following, all of which were published before The Black Swan:
https://www.alphasimplex.com/pdfs/RiskMgmtForHF.pdf 2001
https://www.math.ethz.ch/~embrecht/RM/jaeger.pdf - note the slides on Beyond the Normal Distribution and the identification of the limits of VaR, and on Stress Tests and Extreme Value Theory 2005
https://www.kellogg.northwestern.edu/research/fimrc/papers/risk_measurement.pdf 2004
https://www.strategy-business.com/article/04107?gko=0fe17&tid=27782251&pg=all 2004, which promoted the Power Law over the Bell Curve and said: “Large disruptive events are not only more frequent than intuition might dictate, they are also disproportionate in their effect.”
[14] Ibid, Chapter 3.
[15] https://www.nydailynews.com/news/crime/jilted-john-acquitted-texas-prostitute-death-article-1.1365975#ixzz2Z1DA0Rx1
[16] https://www.dailymail.co.uk/femail/article-2322573/The-Cannes-luxury-prostitutes-earning-40-000-PER-NIGHT-million-dollar-yachts-annual-film-festival.html
[17] https://gawker.com/5954690/brazilian-woman-sells-virginity-for-780000
[18] https://www.oddee.com/item_98483.aspx#m8WbtdVSeJRtm9JE.99
[19] https://business.financialpost.com/2012/10/02/why-excessive-teachers-wages-are-a-boondoggle-we-cant-afford/
[20] https://www.mirror.co.uk/news/uk-news/tony-blairs-fortune-to-treble-to-45million-198253
[21] https://www.thisismoney.co.uk/money/celebritymoney/article-2167655/Former-PM-Tony-Blair-alleged-earned-80million-2007.html
[22] Attributed to Mark Twain.
[23] Triangulation was known by the ancient Egyptians and Babylonians as a sound way of establishing position and was used in the production of the oldest maps. It is equally appropriate to validate statistics.
[24] https://www.poultryhub.org/species/commercial-poultry/turkey/
[25] www.woodstocksanctuary.org/learn/factory-farmed-animals/turkeys/?
[26] https://www.slate.com/articles/news_and_politics/recycled/2009/11/the_turkeyindustrial_complex.html
[27] https://www.dogbreedinfo.com/pets/turkey.htm
[28] https://www.nytimes.com/2005/10/27/business/27fannie.html?_r=0
[29] https://en.wikipedia.org/wiki/Madoff_investment_scandal
[30] Powerball, USA; 1.6 million to one for El Gordo, Spain; 1.14 million to one for UK Lotto. https://news.bbc.co.uk/2/hi/uk_news/882991.stm
[31] Recalculation puts the probability of a 100m asteroid strike at about one in a thousand years, of a 1-2km asteroid strike at up to one in a million years, and of a 15km asteroid strike at one in 65 million years. Asteroid detection and tracking is increasing in sophistication. https://www.risk-ed.org/pages/risk/asteroid_prob.htm. In Europe, asteroid deflection is being studied by NEOShield, and in the USA by NASA.
[32] https://earthsky.org/earth/are-large-earthquakes-increasing-in-frequency
[33] This is not the only interpretation, and may be incorrect. However, it has been widely referenced since John F Kennedy used it in a speech on April 12, 1959. See https://www.pinyin.info/chinese/crisis.html
[34] https://en.wikipedia.org/wiki/Chinese_word_for_%22crisis%22
[35] Reasoned examples of government planning failure are contained in The Best-Laid Plans - How Government Planning Harms Your Quality of Life, by Randall O’Toole, Cato, 2007, ISBN: 978-1-933995-07-6
[36] Genesis 41, 17-32
[37] See https://www.damninteresting.com/the-dutch-tulip-bubble-of-1637/
[38] See https://www.library.hbs.edu/hc/ssb/history.html
[39] For a brief outline of these and other Wall Street meltdowns, see https://www.pbs.org/wgbh/americanexperience/features/timeline/crash/
[40] Aon Benfield 1H2013 Global Natural Disaster Analysis https://thoughtleadership.aonbenfield.com/Documents/20130724_if_global_natural_disaster_analysis.pdf
[41] https://www.fastcompany.com/magazine/100/berrett-koehler.html; Publishers Weekly; https://www.authorsguild.org
[42] See Hiles A.N., Enterprise Risk Assessment & Business Impact Analysis – Best Practices, ISBN 1-931332-12-6, published by Rothstein Associates Inc.
[43] https://gunston.gmu.edu/healthscience/riskanalysis/ProbabilityRareEvent.asp
[44] But who knows what the average would have been had there been another 100; 1,000; or 100,000 flights?
[45] The Irish playwright and philosopher (1856-1950)