Why you should use Catastrophic Risks and Forecasting Models
Dominick Grillas
Delivering Transformative Processes and Technologies to create Lasting Value
Risk management approaches mostly rely on a probabilistic view of risks, born from actuarial analysis, where risk thresholds determine the urgency and priority of the response. In this model, the two dimensions of risk (probability of occurrence * impact of occurrence) are used to calculate a risk score. Representing risk probabilities with a Gaussian function (the “Bell Curve” model) reinforces the idea that the most frequent occurrences (the middle of the chart) are the top priority, while the tails of the curve, typically representing less than a 2% chance of occurring, are set aside for later consideration.
Forecasting models also embed an implicit limitation: the forecasting or strategic horizon of reference. A 10-year forecast spells out actions, numbers and projections with the best intelligence available. Events or results that fall beyond the threshold of the decade-long plan are discarded, as they belong to the next plan.
A consequence of both management models is that risks and events with a low probability, or a cycle time (frequency) of less than once a decade, are often ignored or severely discounted in importance.
The issue is that most of the catastrophic events and risks that have caused companies to go out of business or implode financially carry very low probabilities, as they might happen once in a lifetime or less. The recent financial crisis demonstrated both our lack of preparedness for such events and their potential impact.
The paper below is the second installment on using scenarios for managing risks.
Risk Distribution: Looking at the Big Picture
Common wisdom has it that “risks cannot be aggregated”. This is somewhat true, but it can be misleading. The common reliance on distribution models (e.g., bell curves) to plot the likelihood of occurrence provides an easy, at-a-glance representation of how risks are distributed by likelihood of occurrence.
Distribution diagrams are essential statistical tools for quality and defect management; they help reduce low-frequency exceptions and the span of variability. When it comes to plotting risks, however, the averaging implied by a bell curve (the core focus is on the highest probabilities) leaves infrequent events on the edges.
Since the highest frequency of occurrence attracts the highest attention, infrequent events can be rapidly downplayed or even discarded as too far away from the core focus (the mean). The key focus is on Tier 1, which in a normal distribution represents about 70% of the sample, and on Tiers 2 and 3, which together bring coverage to about 98% of the recorded results.
Good risk management suggests addressing the main occurrences before considering the rarest cases. The upper and lower tails of the distribution, each within the 1% range, come very last in getting attention and are likely to be dismissed as inconsistent results.
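As a quick check on those figures, here is a minimal Python sketch; the 2.33-sigma cut-off is my own illustration of a tiering that leaves roughly 1% in each tail, not a standard taken from the models above:

    import math

    def coverage_within(k_sigma):
        # Fraction of a normal distribution lying within +/- k standard deviations of the mean
        return math.erf(k_sigma / math.sqrt(2))

    print(f"Within 1 sigma:    {coverage_within(1.0):.1%}")   # ~68.3% -- the Tier 1 core
    print(f"Within 2.33 sigma: {coverage_within(2.33):.1%}")  # ~98.0% -- about 1% left in each tail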
Rare risk occurrences are thus erased from the meaningful sample in a logical and natural way, regardless of their potential impact.
With risk scores being Frequency * Impact, even a significant impact results in a low score when calculated with a probability of less than 1%. A $1M impact with a 0.1% probability, for instance, equates to a raw risk score of 1,000,000 * 0.001 = 1,000, which is the same as a $5,000 impact with a 20% probability.
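As a minimal sketch of the calculation above (plain Python, using the same hypothetical figures), the probability-weighted score cannot tell a rare, potentially ruinous event apart from a routine one:

    def risk_score(impact, probability):
        # Classic score: impact of occurrence * probability of occurrence
        return impact * probability

    rare_major = risk_score(impact=1_000_000, probability=0.001)  # $1M impact, 0.1% likelihood
    routine_minor = risk_score(impact=5_000, probability=0.20)    # $5K impact, 20% likelihood
    print(rare_major, routine_minor)  # both print 1000.0 -- identical priority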
This leveling of risks is useful when dealing with ordinary, core operational risks, which is why the model is prevalent in risk management. It also shows, however, that major risk events need another approach, driven not by risk frequency but by impact size.
The work-around is actually simple: while risk distribution causes the center-based leveling of risks, aggregating the net exposure of each risk instead of its probability escapes the issue. Risk exposure here means the net amount of impact that would take place should the risk occur. The net amount of risk carried by a single event is a firm, positive value (no risk equates to a zero net impact). Adding such values to create a “total exposure” does not alter any individual risk exposure, which is simply added to the total. The result is not a distribution diagram of the risks, but the sum of liabilities carried by an organization, a department or an undertaking.
Such a view of aggregate exposure can be precious for comparing the total exposures of projects, organizations or markets, for instance. It can also be useful in creating a baseline, which can be compared against post-mitigation action steps to verify how much of the exposure has actually been reduced.
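A minimal sketch of this aggregation, with entirely hypothetical risk names and amounts, simply sums the net exposures and compares the baseline to the post-mitigation total:

    # Net exposure per risk: the impact that would materialize should the risk occur
    baseline = {"supplier default": 2_000_000, "data breach": 5_000_000, "plant outage": 750_000}
    residual = {"supplier default": 500_000, "data breach": 2_000_000, "plant outage": 750_000}

    total_baseline = sum(baseline.values())   # liabilities carried before mitigation
    total_residual = sum(residual.values())   # liabilities carried after mitigation steps
    print(f"Baseline: {total_baseline:,}  Residual: {total_residual:,}  "
          f"Reduced: {total_baseline - total_residual:,}")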
Catastrophic risks, and their potential to overwhelm the response capacity of the business, are however not captured by either of the above views of risk management, whether looking at the distribution or at the total net exposure.
While traditional risk management focuses on high-frequency risks, most of the catastrophic events of the past decades were both infrequent and carried a critically high impact. The largest impacts come from events that generated a high impact on their own, but whose correlation with another impactful event created catastrophic conditions. Falling off a boat can be an unpleasant and possibly dangerous situation. Navigating white water can be an adrenaline-packed experience. But if a fall occurs in Class V or VI rapids with sharp rocks, the compounded risks create a potentially lethal situation.
The cross-leveraging that hit the financial community during the recent crisis was not unheard of: it drove the downfall of re-insurance companies in Australia in the early 2000s, through a combination of opaque financial instruments, the rapid growth of a key player through acquisitions, and excessive cross-leverage among companies covering each other’s risks. The collapse of HIH in 2001 uncovered irregularities, but also created a massive impact on the Australian financial markets.
Consider financial and insurance companies collateralizing real estate assets to cover their risk exposure, while real estate transactions are leveraged through mortgages issued by the same financial companies. Trading real estate portfolios as financial instruments could reduce the risk, but if the buyers are peers and businesses already involved, the risk is not actually reduced: no new capital has been brought in to dilute it. Exposure has just been transferred and exchanged within the same group: this is cross-leveraging.
In the estimation of risks, traditional approaches relied on probabilistic analysis until, after the “Bubble Crash”, the Basel Committee decided on a new model based on catastrophic impacts. Other industries and companies with strong vertical consolidation might benefit from a correlated analysis of the major risk events that could impact them. Vertical expansion, such as an ore-producing business expanding into smelting, refining and possibly manufacturing finished products from the raw material, can bring many benefits from the consolidation, including margin, self-sufficiency and less sensitivity to market changes.
The flip side of those benefits, however, is that a major disruption in the demand for the finished products, or in the base price of the raw material, could impact the entire chain, augmenting the negative impact with each aggregated layer. While the consolidation protects the business against small and moderate changes, large impacts, especially correlated ones, remain a major threat to its survival.
The analysis of correlated impacts relies on catastrophic scenarios, not probabilistic models. Each scenario must be analyzed on its own, and factors worsening the situation should be added to the script as well. Doomsday scenarios may actually uncover unforeseen threats, which can then be assessed independently.
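As an illustration only (the layers and shock factors below are hypothetical), a vertically consolidated chain can be scripted so that correlated shocks compound through each layer instead of averaging out:

    # Toy scenario: a demand collapse ripples back through every layer of the chain
    layer_shocks = {
        "finished goods": 0.30,   # 30% drop in finished-product demand
        "refining": 0.25,
        "smelting": 0.20,
        "ore extraction": 0.15,   # upstream price support erodes as well
    }
    retained = 1.0
    for layer, drop in layer_shocks.items():
        retained *= (1.0 - drop)  # each layer compounds the hit rather than absorbing it
    print(f"Fraction of normal margin retained across the chain: {retained:.2f}")  # ~0.36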
Some major events cannot be dealt with without drawing on financial reserves. Yet just walking through the scenario can suggest a number of ways to reduce the exposure or to make the business more resistant. In all cases, the process will identify the key signs that a major event is unfolding, and possibly early warnings as well. A second level of response is how the business would react to such an occurrence: even invoking the reserves only maintains the viability of the company. How would the business resume and continue operating under such dire circumstances? Scenario modeling can also help identify steps and measures that increase business resiliency.
The Problem with Forecasting Models
A problem with risk management is the short memory we tend to keep, in part because of the boundaries of forecasting models. Ten years is long-range planning in most cases; a 25-year forward view is a faraway galaxy. While planning or defining strategic pathways, thinkers and visionaries try to stick to hypotheses that are credible and most likely to happen. Even when considering market turbulence and the risks inherent to the strategic actions to be undertaken, the most likely scenarios are chosen to build the roadmap.
Forecasting and planning approaches work with a finite, measurable timescale, so events beyond the visible horizon (practically, 10 years or less) are not factored in. In practice, they might occur sometime after the roadmap being crafted ends, making them irrelevant to the process.
The catch is that an infrequent event, such as one happening once a century, might happen in 99 years or maybe tomorrow. Low frequency does not mean that the event will take place at the end of the period of reference. After all, people win the lottery almost every week, in spite of odds of 1 in 292 million for the first prize and 1 in 11 million for the second prize.
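A quick arithmetic sketch makes the point: assuming independent years, a one-in-a-hundred-year event has roughly a one-in-ten chance of occurring within a single 10-year plan.

    annual_probability = 1 / 100          # a "once a century" event
    horizon_years = 10                    # typical long-range planning window
    p_within_plan = 1 - (1 - annual_probability) ** horizon_years
    print(f"{p_within_plan:.1%}")         # roughly 9.6%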
Catastrophic risk occurrences, which have historically caused the most severe and irrecoverable impacts on companies and economies, happen rarely and over long periods of reference. They are discounted from planning assumptions and need to be managed separately. Regulators who consider catastrophic scenarios to establish how much financial reserve a company should maintain are not working with a forecasting tool, but with a worst-case scenario model, which carries no timescale. Regulatory reserves are gatekeeping measures based on a broad consensus; they also introduce the catastrophic scenario as the main tool to assess the degree of exposure.
The core of the Catastrophic Scenario approach is a radically different risk grid, where high-risk red blocks replace the green, low-occurrence blocks of traditional grids. The change is radical in its deliberate refusal to use the likelihood of occurrence as a meaningful scoring mechanism. Instead, the biggest impacts are risks or events that might never occur, or only once in a century, yet whose occurrence could cause unrecoverable harm to a business and exceed its total capital value.
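A minimal sketch of such a grid (the thresholds, capital base and labels are hypothetical, not taken from any regulatory text) makes impact relative to the firm’s capital the only driver; likelihood is carried along but deliberately ignored:

    CAPITAL_BASE = 50_000_000

    def catastrophic_rating(impact, annual_probability):
        # Probability is accepted but deliberately not used: impact drives the rating
        if impact >= CAPITAL_BASE:
            return "RED: survival threat -- reserves, early warnings and response plan"
        if impact >= 0.2 * CAPITAL_BASE:
            return "AMBER: material -- include in catastrophic scenario analysis"
        return "GREEN: manage through the standard probabilistic model"

    print(catastrophic_rating(impact=80_000_000, annual_probability=0.01))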
A benefit of working with catastrophic and infrequent scenarios is that they escape the typical forecasting boundaries. A major risk event such as a market collapse or a financial chain reaction might not occur for many years; the risk, however, remains the same: a potentially overwhelming impact. Freeing the analysis from the time boundaries of probabilistic views and forecasting cycles ensures that short-term priorities do not dilute the attention paid to such large events.
Common wisdom would have it that only events that can be predicted and are likely to happen should be considered in risk management and mitigation efforts. Yet adopting models that are not based on historical records or individual experience is key to crafting scenarios truly built on maximum correlated variability. All other risks should be managed using the previously described probabilistic or impact-correlated models.