The Broken Leg Problem of VC

Say you devise an algorithm to predict whether someone will go to the movies tonight. On balance, this algorithm will outperform a human due to its ability to ignore psychological biases. But if you know that the individual broke their leg this morning, you can reliably predict that they won’t attend a movie, regardless of what the algorithm outputs. Paul Meehl, the clinical psychologist who pioneered research into the validity of expert predictions, referred to this scenario as the “broken leg rule”: if you have information about an extraordinary circumstance that an algorithm does not or cannot index on, then you can reliably outperform the algorithm.

When all legs are intact, however, human intuition often falls short.

Intuition refers to understanding something or making a decision without the need for conscious reasoning. Herbert Simon once defined expert intuition as “nothing more and nothing less than recognition.” That is, “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer.”

At its best, expert intuition seems nothing short of magical — chess grandmasters spot checkmates at a glance, art experts identify fakes with a touch, and surgeons make timely interventions on a hunch. The psychologist Gary Klein tells the story of a firefighting commander who was hosing down a kitchen fire when he heard himself yell, “Let’s get out of here!” without realizing why. Shortly after, the floor collapsed, and the commander realized that the abnormally hot and quiet blaze had served as a cue that the heart of the fire had been in the basement underneath.

In contrast to tasks for which humans are hardwired — recognizing emotions on faces, for example — developing intuitive expertise for complex, professional applications like chess or firefighting takes years. Studies of chess masters, for example, have found that it takes at least 10,000 hours (that’s five hours per day for six years) of deliberate practice to attain the highest levels of performance — hence the oft-cited 10,000-hour rule popularized by Malcolm Gladwell. After these 10,000 hours of practice, the brain’s powerful associative machinery can map novel scenarios onto a rich database of experience and rapidly determine the best response, producing what we call expert intuition.

But this intuition can only develop if three conditions are met:

  1. The task/environment is sufficiently regular to be predictable
  2. There is an opportunity to perform prolonged practice to learn these regularities
  3. Feedback on performance is sufficiently fast and unambiguous to enable pattern recognition

The real world seldom conforms to these conditions. When operating in a domain with significant uncertainty and unpredictability (known as a low-validity environment), humans perform poorly: they index on irrelevant factors, fall prey to psychological biases, and often substitute the difficult question at hand (e.g. will this startup become a unicorn?) with easier questions (e.g. do I like the founder’s personality?). As a result, simple algorithms that predict outcomes by just aggregating relevant factors (e.g. frequency of lovemaking minus frequency of quarrels for predicting marital stability) often perform better than the relevant experts (e.g. marriage counselors).
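The marital-stability example above is an instance of what decision researchers call an “improper linear model”: a prediction rule that simply adds and subtracts a few relevant factors with crude weights. A minimal sketch (the factor names and equal weights are illustrative, not taken from any actual study):

```python
# Sketch of an "improper linear model": predict marital stability by
# aggregating two relevant factors with unit weights.
# Factor choice and weights are illustrative assumptions.

def marital_stability_score(lovemaking_per_week: float,
                            quarrels_per_week: float) -> float:
    """Higher scores predict greater marital stability."""
    return lovemaking_per_week - quarrels_per_week

# A couple that makes love three times a week and quarrels once
# scores higher than a couple with the reverse pattern.
happy = marital_stability_score(3, 1)    # 2
unhappy = marital_stability_score(1, 3)  # -2
assert happy > unhappy
```

The point is not the specific factors but the form: a transparent sum of a few relevant variables, applied consistently, avoids the biases and inconsistency that degrade expert judgment.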

Indeed, a robust body of literature has found that seemingly trivial algorithms, on balance, outperform the subjective judgments of trained professionals, whether school counselors predicting the grades of their students, physicians predicting life expectancy, bankers evaluating credit risk, judges discerning recidivism odds, sports pundits betting on football games or wine traders predicting future prices. As the late Daniel Kahneman wrote, “No exception has been convincingly documented.”

Not only are we humans inept at making predictions in high-noise environments — we also struggle to distinguish between moments when our predictions are based on expertise versus those that are not. Regardless of whether we’ve accumulated enough high-quality experience to make expert judgments, our associative machinery produces intuitive answers to the questions we’re faced with — and it feels the same, creating an illusion of expertise.

Unfortunately, venture capital fits the bill for a low-validity environment: the information available when making investment decisions is extremely noisy, and investors often won’t know if they made a good investment until a decade down the line.

Even worse, decision-making in venture capital revolves around intuition. If you ask a venture capitalist how they decide which startups to invest in, they’ll most likely reply with something along the lines of “we look at the strength of the founding team, the quality of the business model/product, and the size of the market” — in fact, 95%, 74%, and 68% of surveyed venture capitalists index on those factors, respectively. But this response merely kicks the can down the road since the question of how venture capitalists judge teams, products and markets remains unanswered.

In contrast to private-equity firms, hedge funds, or asset managers, most venture-capital funds eschew the quantitative analyses traditionally employed in finance — there’s seldom cash flow models, comparative company analysis, ratios analysis, or anything that ends in “analysis” or “model.” Rather, venture capitalists rely on their own judgment: throughout their careers, whether in venture capital or otherwise, they develop heuristics regarding the types of founders, products, and markets that succeed and make decisions based on these heuristics.

In other words, they use intuition.

And intuition takes on an even bigger role in distinguishing between startups that merely pass muster and those worth investing in. A typical venture capital fund sees over 100 startups for every one it chooses to add to its portfolio; as a result, making an investment decision doesn’t just require judging that a startup has a high-power team and high-quality product in a high-opportunity market. Rather, there needs to be some extraordinary level of “conviction” that the company can achieve a unicorn exit, and the level of conviction must surpass that of the dozen other funds that passed on the investment. Oftentimes, determining whether the conviction is sufficient isn’t grounded in tangible observations and might not even be articulable — in other words, it’s based on intuition.

Are VCs doomed then? Fortunately not: outlier startups have a proverbial broken leg. What successful startups share isn’t some common factor identifiable by an algorithm, but rather that each has some uncommon, extraordinary characteristic that is by nature outside the distribution — and therefore difficult to predict systematically. This dynamic explains why the best investments often stem from personal relationships between investors and founders: the investors had proprietary knowledge of the founders’ extraordinary potential that couldn’t be captured by an algorithmic approach.

As a result, despite the pioneering efforts of firms like SignalFire, Correlation Ventures, and Ulu Ventures, quantitative venture investing hasn’t quite become the norm. If we had sufficient data on past startups and their outcomes, we could likely train a model that predicts which startups will enjoy venture-scale success; however, that data isn’t readily available. Due to the power law in venture capital, data on successful startups is inherently scarce, meaning that training models may require exponentially more data relative to asset classes where outliers matter less. Therefore, in the short-to-medium term, humans have an advantage over algorithms — and success isn’t merely based on luck.

In contrast to public equity markets, venture capital returns are both dispersed and persistent. Top performers and low performers in public equity trading generate similar returns (the 95th percentile returns 10% and the 5th percentile returns 5%), and year-by-year performance has almost zero correlation, suggesting that, in the vast majority of cases, public equity trading depends on luck, and much of the industry operates under an illusion of skill.

Venture capital returns, on the other hand, are highly dispersed (the 95th percentile returns 40% and the 5th percentile is in the red) and persistent (year-by-year returns have a 0.7 correlation). Therefore, the top venture capital funds must have secured a compelling, durable advantage that allows them to consistently generate outsized returns, whether in the form of outlier skill or network effects.

Nonetheless, the fallibility of expert intuition in low-validity environments holds important lessons for venture capitalists. Human intuition is easily fooled by a host of psychological biases, but studies have demonstrated that we can combat these biases by maintaining awareness of potential cognitive heuristics and focusing on objectively evaluating the characteristics that matter — that is, by focusing on the broken legs.

Special thanks to Danny Crichton and Shahin Farshchi for their input on this article.
