Reference Checks Reign Supreme in Vendor Risk Management
There has been an explosion in services that seek to help buyers vet suppliers. These services take unstructured data from a variety of online sources and develop a structured dataset specific to an individual vendor. Some of them may use AI (and, soon, generative AI) to make judgments about the suppliers.
This is all well and good. It’s useful to have data about suppliers. Let’s assume that these services pre-process the data and vet it for reliability; in all likelihood, they do.
Buyers need to be careful, though. All this data and automation can actually increase buyer risk by presenting a semblance of certainty and sufficiency that may mask underlying issues. The best way to vet suppliers is to speak with people who know them well. Nothing can supplant the reference check.
In the period between the dot-com bubble’s collapse in 2001 and the Global Financial Crisis in 2008, financial markets and ancillary markets, such as housing, enjoyed a quiescent stretch that rewarded risk-taking. It was a great time to be an investor. It was a wonderful time to be a value investor. It worked until it didn’t, as the old Wall Street aphorism goes.
One of the appealing things about this period was that volatility dropped and stayed low. Investors and central banks did things that suppressed actual, realized volatility, as well as the implied volatility used to price derivative financial products. Investors were interested in the so-called “dash for yield.” They purchased fixed income products with enhanced payout structures, juicing performance by insuring the other side of the trade against the ups and downs of security prices. Central banks made sure to keep inflation low and the economy humming, with a growing aversion to periodic cleansing-by-recession. For some in positions of economic authority, reducing economic volatility by flattening the business cycle was as important as low financial-market volatility was to the yield-chasing investors.
Risk management has always been a priority on Wall Street. New techniques emerged in the 1990s, in part because of the proliferation of data and better access to computing power. To manage risk, one must first measure risk. If an investor or a bank has a good handle on how much risk they are wearing, then they can adjust it up or down to reflect their preferences and their forecasts.
Chief among these risk measurement techniques was something called Value-at-Risk. Here’s a definition from Risk.net:
“Value-at-risk is a statistical measure of the riskiness of financial entities or portfolios of assets.
“It is defined as the maximum dollar amount expected to be lost over a given time horizon, at a pre-defined confidence level. For example, if the 95% one-month VAR is $1 million, there is 95% confidence that over the next month the portfolio will not lose more than $1 million.”
The correct way to read Value-at-Risk is as the loss threshold the portfolio will stay within 95% of the time, provided certain assumptions hold true. In the example above, 95% of the time the portfolio will not lose more than $1 million. The other 5% of the time, the portfolio will lose more than $1 million, and it could be a lot more. We can extend the calculation to 97.5% or 99% confidence, and so on.
When we see the phrase “confidence level,” we understand that this is a statistical calculation. There is a data-generating process at work. If we assume that we know what that process looks like, then we can apply our math to the portfolio to come up with this single number that we call VaR, for short.
The assumptions that underpin the calculation include the belief that returns on the instruments in the portfolio are normally distributed (equivalently, that prices are lognormal) and that the correlations between the assets in the portfolio are stable.
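To make the mechanics concrete, here is a minimal sketch of the classic variance-covariance VaR calculation under exactly those assumptions. The portfolio value, weights, volatilities, and correlation are hypothetical, invented purely for illustration:

```python
import numpy as np

# Hypothetical two-asset portfolio, for illustration only.
portfolio_value = 100_000_000            # $100m portfolio
weights = np.array([0.6, 0.4])           # asset weights
monthly_vols = np.array([0.04, 0.07])    # monthly return volatilities
correlation = np.array([[1.0, 0.3],      # assumed stable correlation
                        [0.3, 1.0]])

# Covariance matrix built from volatilities and correlation.
cov = np.outer(monthly_vols, monthly_vols) * correlation

# Portfolio monthly volatility: sqrt(w' * Cov * w).
port_vol = np.sqrt(weights @ cov @ weights)

# 95% one-month VaR under the normal-returns assumption: the one-sided
# 5th percentile of the return distribution, mapped to a dollar loss.
z_95 = 1.645
var_95 = z_95 * port_vol * portfolio_value

print(f"Portfolio monthly vol: {port_vol:.2%}")
print(f"95% one-month VaR: ${var_95:,.0f}")
```

The whole exercise collapses the portfolio into that single dollar figure, which is precisely what made it so portable and so easy to over-trust.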
The salient point is that managers received a single number that they could take home. It gave them comfort. It gave them false comfort, as we learned in 2008.
Recall the earlier point that the years leading into the Global Financial Crisis were a period of artificial suppression of economic and financial volatility.
The Value-at-Risk calculation used historical volatility to project into the future, e.g., looking back 180 days. The lower the volatility that went into the model, the lower the VaR number that the model spit out. The lower the VaR number that the model spit out, the less capital investors appeared to be using. The appearance of capital efficiency permitted (and likely encouraged) people to add to their risk by putting more money to work with more leverage. If everything is going up with little risk of a drawdown, why wouldn’t you borrow money at low interest rates to earn equity-like returns? It was free money, as they say.
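A brief sketch of that feedback loop, using the 180-day lookback mentioned above (the return series and portfolio value are hypothetical): the same model, fed a calm window versus a stressed one, reports very different risk for the same position.

```python
import numpy as np

# Simulated daily returns for two regimes, for illustration only.
rng = np.random.default_rng(0)
calm_returns = rng.normal(0.0003, 0.005, 180)     # ~0.5% daily vol
stressed_returns = rng.normal(0.0, 0.02, 180)     # ~2% daily vol

def one_month_var(daily_returns, portfolio_value, z=1.645, horizon_days=21):
    """95% parametric VaR from a trailing window of daily returns,
    scaled to a one-month horizon by the square root of time."""
    daily_vol = np.std(daily_returns, ddof=1)
    return z * daily_vol * np.sqrt(horizon_days) * portfolio_value

value = 100_000_000
print(f"VaR from calm window:     ${one_month_var(calm_returns, value):,.0f}")
print(f"VaR from stressed window: ${one_month_var(stressed_returns, value):,.0f}")
# The calm window reports a VaR roughly a quarter the size of the
# stressed one, inviting more leverage against the same risk budget.
```

Feed the model a suppressed-volatility window and it will tell you that you have room to take more risk, which is exactly what people did.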
The problem is that the numbers going into the calculation were manipulated. Distorted inputs produce distorted outputs. This was okay in the early 2000s, but as the years rolled by and more people loaded up on risk, it started to become a dangerous loop, setting up a Minsky moment.
The initial shocks came in 2007 with flare-ups in volatility in public equities, but most people dismissed these as aberrations and resumed their prior behavior after taking drawdowns.
Who was it that managed to avoid the pain? Who were the people who profited from the Global Financial Crisis?
As Warren Buffett said so famously, “Only when the tide goes out do you learn who has been swimming naked.”
It was the savvy, typically older investors who did their own work. They didn’t chase trendlines on a chart; they read the documents and they talked to people. They realized that there is no substitute for good old-fashioned investigation.
Go ahead and use the data-enabled approaches and AI tools to execute vendor onboarding and assessment, but know this: the essential steps in vendor evaluation are, and always will be, digging into the fundamentals and asking for references. You can’t just ask a bunch of generic questions, either. You need to ask other buyers questions in a way that gets them to reveal their true experiences. Get this right and you will outperform your peers hands down.
With EdgeworthBox, you can implement this process easily. Our platform was designed for collaborative procurement. Design your own questionnaire, specific to your circumstances. We’ll help you execute the survey. Don’t just ask your vendor’s existing customers. Ask people who know the vendor in other contexts. You don’t need to hire consultants or ask fancy questions. Sit down and think about it. You’ll be able to figure out what to ask.
If you’re interested in learning more, please shoot us an email. We’d love to chat with you. For us, procurement is an investment process. We believe that gives us an edge.