Optimisation, Simulation and Black Swan in Supply Chain Network Design

Introduction

In this article, I explore the differences between optimising supply chain networks and using simulation-based approaches. I also demonstrate how to assess whether a network is prepared for unforeseen scenarios, often referred to as "Black Swans" (credit to Nassim Taleb). Using a simplified supply chain network as an example, I optimise it under different approaches and assumptions.

While the example is taken from e-commerce logistics, the insights and conclusions are broadly applicable to any industry governed by queuing theory, including manufacturing, retail, IT, public transportation, banking, airports, and beyond.

By the end, I will clarify which questions (some of them unexpected) simulations can answer, when simulations are necessary, and when optimisation alone is sufficient.

Selected Tool

For these experiments and simulations, I utilised the free personal learning version of AnyLogic. I chose this tool for its simplicity, ease of understanding for non-technical audiences, and its extensive range of predefined experiment types. However, the same models and experiments could be developed using most other simulation tools or techniques.

Model and Assumptions

The model represents a high-level view of a supply chain network processing incoming orders. It runs over 30 days, simulating an average month.

Main logic of the model

Key assumptions include:

  • Orders arrive according to forecasted demand with some monthly seasonality.
  • Orders can wait a maximum of one day before fulfillment begins.
  • Lost orders do not incur direct negative impacts (except lost profit).
  • Each fulfilled order generates a contribution margin of 2 Money.
  • Daily processing capacity is limited by network capacity.
  • Network capacity incurs a fixed cost of 1 Money per unit per day.
  • Operating profit is calculated as contribution margin minus fixed costs, with variable costs excluded.
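To make these assumptions concrete, here is a minimal Python sketch of one month of this daily logic. It is a standalone illustration rather than AnyLogic code: the seasonal forecast profile and the exact one-day queue discipline are my assumptions, chosen to be consistent with the capacity, margin, and fixed-cost figures above.

import math

DAYS = 30
CAPACITY = 100    # network capacity, orders per day
MARGIN = 2        # contribution margin per fulfilled order, Money
FIXED_COST = 1    # fixed cost per capacity unit per day, Money

# Hypothetical forecast: ~80 orders/day with monthly seasonality.
forecast = [80 + 25 * math.sin(2 * math.pi * d / DAYS) for d in range(DAYS)]

backlog = fulfilled = lost = 0.0
for demand in forecast:
    served_backlog = min(backlog, CAPACITY)   # yesterday's orders go first
    lost += backlog - served_backlog          # waited over one day -> lost
    served_new = min(demand, CAPACITY - served_backlog)
    backlog = demand - served_new             # may wait one more day
    fulfilled += served_backlog + served_new

profit = MARGIN * fulfilled - FIXED_COST * CAPACITY * DAYS
print(f"fulfilled={fulfilled:.0f}  lost={lost:.0f}  profit={profit:.0f}")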

Order seasonality

Goal of Optimisation

The goal is to maximise operating profit by minimising fixed costs.


Scenarios

0. As-Is

  • Network capacity - 100
  • Contribution margin - 4,800
  • Fixed costs - 3,000
  • Operating profit - 1,800
  • Average network utilisation - 80%
  • Lost Orders - 0

Even though demand occasionally exceeds capacity (as seen in utilisation rates), the existing network handles all orders without loss. However, the 80% average utilisation suggests potential cost-saving opportunities.

Network daily utilisation

1. Optimisation

Given the deterministic nature of the current model, optimisation could even be done in Excel. However, I used the Optimisation Experiment in AnyLogic, which is straightforward to implement. The result: an optimised capacity of 82.
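Because each run is deterministic at this point, the optimisation itself can be sketched as a brute-force search over capacities, in the spirit of the earlier snippet. The search range below is an assumption, and the exact optimum depends on the demand profile, so the 82 reported here comes from the article's model, not this toy:

def simulate(capacity, demand_series):
    # One deterministic month; returns operating profit.
    backlog = fulfilled = 0.0
    for demand in demand_series:
        served_backlog = min(backlog, capacity)
        served_new = min(demand, capacity - served_backlog)
        backlog = demand - served_new
        fulfilled += served_backlog + served_new
    return MARGIN * fulfilled - FIXED_COST * capacity * len(demand_series)

# One integer parameter and cheap runs make exhaustive search feasible.
best_capacity = max(range(60, 101), key=lambda c: simulate(c, forecast))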

Optimisation Experiment Interface

Running the model with this capacity brings an operating profit of 2,172 Money (+21% improvement vs. As-Is scenario) with 93% average network utilisation and 84 lost orders.

Daily fulfillment and demand

The optimised network clears its backlog only when demand is at its lowest. The network is not fully utilised because, beyond a certain point, the loss from unfulfilled orders exceeds the profit gained from reducing capacity. Still, from a profitability perspective, it is better to lose some orders than to keep capacity and fixed costs high.

This step often concludes analysis in many companies. The business case is compelling enough to justify reducing capacity for profit gains.

But let's challenge some assumptions, starting with the very beginning – Demand.

2. Simulation Demand Deviations

Real-world order intake rarely matches forecasts exactly, which makes it necessary to introduce additional model assumptions:

  • Forecast bias (difference between total demand and forecast) -> 0%
  • Forecast accuracy (MAPE – Mean Absolute Percentage Error) -> 10%

Initially I apply a uniform distribution. The demand on any given day varies from the forecast, making the model stochastic.

Demand = forecast * uniform(1-parameter,1+parameter)        

Taking a quote directly from AnyLogic:

The Uniform distribution is a continuous distribution bounded on both sides, i.e. the sample lays in the interval [min,max). The Uniform distribution is used to represent a random variable with constant likelihood of being in any small interval between min and max.

Sample of uniform distribution

Using Calibration experiment, I found a parameter value that aligns with my assumptions (forecast bias ≈ 0 and MAPE ≈ 10%).

Calibration Experiment Interface

The parameter value of 0.19 brings daily demand between 0.81 * Forecast and 1.19 * Forecast.
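As a sanity check on the calibration: for a ratio U ~ uniform(1 - p, 1 + p), the expected absolute deviation is p/2, so MAPE ≈ 10% implies p ≈ 0.2 in the limit; on a single 30-day sample the best fit can land slightly off, such as 0.19. A standalone sketch of measuring both metrics (not the AnyLogic Calibration experiment itself):

import random

def month_metrics(p, days=30, seed=None):
    # Daily demand-to-forecast ratios drawn from uniform(1-p, 1+p).
    rng = random.Random(seed)
    ratios = [rng.uniform(1 - p, 1 + p) for _ in range(days)]
    mape = 100 * sum(abs(r - 1) for r in ratios) / days   # in percent
    bias = 100 * (sum(ratios) / days - 1)                 # in percent
    return mape, bias

print(month_metrics(0.19, seed=1))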

Daily forecast and demand

Disclaimer: simulation is not ideal for purely statistical and data-analytical tasks. The goal here is to showcase simulation thinking and AnyLogic's capabilities.

Since the model is stochastic, I cannot fully rely on the Optimisation experiment results. The optimal result might be driven by coincidence rather than a sustainable pattern.

To validate the outcome, a Sensitivity Analysis experiment is necessary to determine how randomness affects the outcome and whether the optimised value consistently meets initial assumptions.

Using replications, the Sensitivity Analysis experiment runs the model multiple times with the same input, generating different outputs due to randomness. Comparing these outputs helps to understand the model's stability and reveals whether the value obtained in the Optimisation experiment is indeed optimal, or whether another value matches the initial assumptions more frequently.

For the Sensitivity Analysis experiment, I use the optimised parameter value and compare it against similar values. I focus on how the newly introduced assumptions—forecast bias and MAPE—vary, identifying the parameter value that minimises the absolute objective function, giving equal weight to both forecast bias and MAPE.

Objective = ABS(MAPE - 10) + ABS(Forecast Bias)
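In the same standalone spirit, this objective can be averaged over replications to find the most stable parameter value (the replication count and candidate grid below are assumptions):

def avg_objective(p, replications=200):
    # Mean of |MAPE - 10| + |bias| across stochastic replications.
    total = 0.0
    for i in range(replications):
        mape, bias = month_metrics(p, seed=i)
        total += abs(mape - 10) + abs(bias)
    return total / replications

best_p = min([0.15, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21], key=avg_objective)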

Even though uniform distribution theoretically has 0% forecast bias over infinite observations, the limited 30-day simulation doesn't provide enough data to fully reflect this. As the parameter value increases, the spread of forecast bias widens, though the average remains stable. Simultaneously, MAPE increases in correlation with the parameter value.

Sensitivity Analysis Experiment Interface

In the Sensitivity Analysis experiment a parameter value of 0.17 achieves the minimal average objective. Although MAPE will average below 10%, this is balanced by greater stability in the Forecast Bias.

Applying this to demand, I re-optimise capacity with the uniform(0.83, 1.17) distribution.

Optimisation Experiment Interface (Uniform Distribution)

Following the same process, I conduct another Sensitivity analysis experiment to verify if the newly optimised capacity is indeed optimal.

Sensitivity Analysis Experiment Interface

Sensitivity analysis reveals that introducing just one stochastic element into the model increases the optimal capacity from 82 to 83, with a minor difference in operating profit between capacities of 83 and 84.

It's worth noting that even with the same capacity of 83, operating profit can fluctuate between 2000 and 2300, driven solely by random variations in the forecast.

Let's see what happens when the model becomes even more stochastic.

3. Simulation Demand Deviation Distribution

In reality, demand rarely follows a uniform distribution. Instead, it often aligns with normal, lognormal, gamma, or exponential distributions (or their discrete analogs).

Real demand seldom deviates significantly downward, but upward deviations can reach 40%. PERT or triangular distributions best represent this behaviour, each defined by min, mode, and max parameters.

Probability density function

Data-analytical methods are better suited to identifying the parameters of a demand distribution; that process is beyond the scope of this article. However, I'll compare metrics between the uniform distribution and the more realistic triangular and PERT distributions.

Comparison of distributions

Both the triangular and PERT distributions have 0% forecast bias (over infinite observations) and close to 10% MAPE. However, PERT's higher kurtosis means heavier tails and more extreme values, visible in its wider range.

The new demand formula, matching real observations more closely:

Demand = forecast * PERT(0.77, 0.96, 1.44)        
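For readers reproducing this outside AnyLogic: PERT(min, mode, max) is a rescaled Beta distribution whose shape parameters follow from the mode, so it can be sampled with NumPy alone (a standard construction, sketched here):

import numpy as np

rng = np.random.default_rng(0)

def pert(a, m, b, size=None):
    # PERT(min, mode, max) as a Beta(alpha, beta) scaled onto [a, b].
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size)

daily_ratios = pert(0.77, 0.96, 1.44, size=30)   # one month of deviations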

Re-optimising capacity with this distribution suggests the lowest optimal capacity yet.

Optimisation Experiment Interface (PERT Distribution)

However, Sensitivity analysis reveals that the updated distribution requires 85 capacity units to maximise operating profit, with profit expectations now ranging from ≈1950 to ≈2350.

Sensitivity Analysis Experiment Interface

At this point, I’m confident in reducing capacity from 100 to at least 85. The final check: assessing the network's resilience to Black Swan events.

4. Simulating the Black Swan

Nassim Taleb’s Black Swan theory describes unexpected events with significant impacts, typically hard to predict due to their low probability or unprecedented nature. A system capable of thriving under such events is termed "antifragile."

Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.

An inevitable feature of an antifragile system is optionality.

Optionality is the property of asymmetric upside (preferably unlimited) with correspondingly limited downside (preferably tiny).

There are two types of Black Swan events: positive and negative.

Negative Black Swan Event

A negative Black Swan significantly disrupts the business and is impossible to predict.

I simulate this by introducing a major production disruption, reducing capacity by 40% for 7 days with a 1% daily probability.

However, the impact varies depending on when the disruption occurs. Compare two scenarios:

Disruption comparison

During peak demand, a disruption led to 315 lost orders, while the same event during the lowest-demand period resulted in only 92 lost orders.

Positive Black Swan Event

Conversely, a positive Black Swan presents an opportunity with significant upside potential. Thanks to optionality, an antifragile system can seize positive Black Swans and extract maximum value from them. A fragile system, by contrast, either won't notice such an event or won't be able to benefit from it.

I simulate this by sharply increasing demand by 40% for 7 days with a 1% daily probability, similar to the negative event.
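Both events can share the same trigger mechanics inside the daily loop. A sketch of one way to implement it (standalone Python; the state handling is my assumption, not the AnyLogic event implementation):

import random

P_EVENT = 0.01   # daily trigger probability for each Black Swan
DURATION = 7     # days an event lasts once triggered
SHOCK = 0.40     # 40% capacity loss (negative) / demand surge (positive)

neg_left = pos_left = 0
for d in range(DAYS):
    # Each swan can trigger on any day it is not already active.
    if neg_left == 0 and random.random() < P_EVENT:
        neg_left = DURATION
    if pos_left == 0 and random.random() < P_EVENT:
        pos_left = DURATION
    capacity_today = CAPACITY * ((1 - SHOCK) if neg_left > 0 else 1.0)
    demand_today = forecast[d] * ((1 + SHOCK) if pos_left > 0 else 1.0)
    neg_left, pos_left = max(neg_left - 1, 0), max(pos_left - 1, 0)
    # demand_today would also carry the PERT deviation from earlier,
    # and both values feed the one-day queue logic shown above.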


Instead of optimising for Black Swan events, I test the As-Is and three optimised scenarios to assess system fragility under extreme conditions.

Monte Carlo Experiment

Stochastic elements in the current model:

  • Demand deviates from the forecast based on a PERT distribution.
  • A negative and/or positive Black Swan may occur on any day. Both can happen in a single run, and there is a high chance that neither happens.

Given these inputs, an analytical solution is impractical. The model's complexity and stochastic nature make simulation the best approach. To find a solution, I use a Monte Carlo experiment.
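In plain terms, the Monte Carlo experiment runs the full stochastic model many times per candidate capacity and studies the resulting profit distribution. A sketch, assuming a hypothetical simulate_stochastic() that combines the queue logic, PERT demand, and Black Swan triggers from the snippets above:

def monte_carlo(capacity, replications=1000):
    # simulate_stochastic() is hypothetical: one full stochastic month.
    return [simulate_stochastic(capacity, seed=i) for i in range(replications)]

profits = monte_carlo(85)
p_floor = sum(p >= 1800 for p in profits) / len(profits)         # Case 1 below
p_upside = sum(p > 2200 for p in profits) / len(profits)         # Case 2 below
p_band = sum(1800 <= p <= 2200 for p in profits) / len(profits)  # Case 3 below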

Monte Carlo Simulation compares outcome probabilities based on varying parameters. For example, with a capacity of 70, operating profit ranges from 1550 to 2050, with the highest density (darkest green) between 1950 and 2000. With a capacity of 100, the range is much wider—from 1400 to 2200, with the highest density between 1750 and 1850.

Monte Carlo Experiment Results

Monte Carlo simulation makes it possible to maximise the probability of specific outcomes.

Case 1. What should the company do to ensure operating profit doesn't fall below 1800?

Answer - decrease capacity to 85.

This maximises the probability that profit doesn't fall below 1800: 89.5%. There is still a certain (10.5%) chance that profit falls below 1800, but that risk is minimal compared to any other scenario.

Probability curve

Case 2. What should the company do to earn more than 2200?

Answer - decrease capacity to 83.

With this capacity, there is a 15.2% chance of exceeding 2200. The company would still need considerable luck to reach that profit. With a capacity of 100, however, the chance drops to 6%, and with a capacity below 75, that profit is unreachable.

A capacity below 75 is the breaking point where the system loses all optionality and becomes indifferent to any positive Black Swan. The company is so busy fulfilling existing demand that it cannot capitalise on any unexpected opportunity.

Probability curve

Case 3. We are conservative and want to ensure a stable outcome of earning between 1800 and 2200. What should the company do?

Answer - decrease capacity to 92 or below.

Across a wide range of capacities (70-92), operating profit will fall between 1800 and 2200 with 75-80% probability. This perspective helps to show the stability of the whole system around the current average result. However, for decision-making, I recommend focusing on the first two cases.

Probability curve

Scenario Comparison

Comparing optimisation with simulation results:

  1. Both simulations increase the likelihood of exceeding 1800 by 1-1.5 percentage points compared to optimisation.
  2. Both simulations increase the likelihood of exceeding 2200 by 1.8-2.2 percentage points compared to optimisation.
  3. The difference between the two simulation scenarios is small: one offers a higher chance of earning >1800, the other a higher chance of earning >2200.

Scenario comparison

Conclusion

Simulation can often provide a better solution than basic optimisation, particularly when it comes to testing extreme scenarios with low probabilities.

Monte Carlo experiments allow not only the selection of optimal parameters but also the maximisation of the likelihood of desired outcomes. This approach enhances optionality and reduces system fragility. Monte Carlo simulations might indicate that, regardless of whether capacity is set to 92, 82, or 72, profit will likely fall between 1800 and 2200 with a 75%+ probability (which, admittedly, is somewhat fatalistic).

Given the relatively minor difference between the results of Simulation and Optimisation, compared to their difference with the As-Is scenario, it's understandable why optimisation techniques are more commonly used. They are simpler to implement, especially when the difference in outcomes is minimal.

However, it’s important to note that the model I used here is highly simplified with just a few stochastic elements. As complexity increases and more stochastic factors are introduced, the gap between the results of optimisation and simulation will widen.


Consider using simulation in the following situations:

  • The cost of error is very high.
  • Changing a decision is difficult or impossible.
  • The system is highly interdependent.
  • There are numerous stochastic elements.
  • There are events with very low probabilities.

If at least one of these criteria applies, a simulation approach is highly recommended. Otherwise, sticking with optimisation or a similar method might save time and resources.
