A Review of The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution
This is a short review and series of notes on the book The Man Who Solved the Market, with a focus on the quantitative investing approach that Renaissance Technologies employs. It is worth saying at the outset that, unsurprisingly, this non-technical book delivers no new ‘secrets’ about the scientific approach, though it does aggregate a lot of known information. The majority of the book focuses on the chronology of historical events, detailing personalities and relationships.
General Themes/Observations from the Book
- Lots of data. As much as possible. Data mine first, ask fitting questions second.
- Lots of asset classes
- Lots of categories of signals, with lots of signals in each category: Trend, Mean reversion, Pairs, Seasonality, Factors (fundamental), Factors (mathematical), Correlations (lead/lag), Autocorrelation
- Sampling targets (e.g. returns, volume) over small windows to reduce exogenous effects.
- Lots of small bets
- Single framework for all asset classes and all signals. Optimize and risk manage this. Easy to add/evaluate the effect of new signals.
- Bet sizing
- Ranking signal strength using p-values
- Use of leverage
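The "ranking signal strength using p-values" theme can be illustrated with a minimal sketch: test whether each signal's mean per-trade return is distinguishable from zero, then rank signals by that p-value. Everything here (the normal approximation, the toy returns) is my own illustration, not the book's method.

```python
import math
import statistics

def signal_p_value(returns):
    """Two-sided p-value (normal approximation) that a signal's mean
    per-trade return is zero. Smaller p-value = stronger signal."""
    n = len(returns)
    mean = statistics.fmean(returns)
    se = statistics.stdev(returns) / math.sqrt(n)
    t = mean / se
    # Phi(|t|) via the error function, then two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def rank_signals(signal_returns):
    """Rank signals from strongest (smallest p-value) to weakest.
    `signal_returns` maps signal name -> list of per-trade returns."""
    return sorted(signal_returns, key=lambda name: signal_p_value(signal_returns[name]))
```

With a large signal library, a scheme like this lets new candidate signals be slotted into a single evaluation pipeline, which is exactly the "easy to add/evaluate the effect of new signals" point above.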
One observation from the book is that the fund was using tick data in 1989, along with as much computational power as it could access, to look for patterns in features. As such, it would seem they were fifteen years ahead of most of the rest of the market. If a fifteen-year lead were extrapolated to today, what would that mean?
What is surprising is that the general approach is (very) common knowledge in this part of the algorithmic trading world. As such, how is it that Medallion has done so well?
Crude notes follow, broken down into sections:
The Pattern Recognition Approach
If we have enough data, I know we can make predictions.
“There are patterns in the market,” Simons told a colleague. “I know we can find them.”
Jim Simons dedicated much of his life to uncovering secrets and tackling challenges. Early in life, he focused on mathematics problems and enemy codes. Later, it was hidden patterns in financial markets.
The framework aimed to identify patterns in current prices that seemed similar to those in the past.
Carmona’s results came from running a program for hours, letting computers dig through patterns and then generate trades.
Simons and his colleagues were proposing a third approach, one that had similarities with technical trading but was much more sophisticated and reliant on tools of math and science. They were suggesting that one could deduce a range of “signals” capable of conveying useful information about expected market moves.
Simons concluded that markets didn’t always react in explainable or rational ways to news or other events, making it difficult to rely on traditional research, savvy, and insight. Yet, financial prices did seem to feature at least some defined patterns, no matter how chaotic markets appeared, much as the apparent randomness of weather patterns can mask identifiable trends. It looks like there’s some structure here, Simons thought. He just had to find it. Simons decided to treat financial markets like any other chaotic system. Just as physicists pore over vast quantities of data and build elegant models to identify laws in nature, Simons would build mathematical models to identify order in financial markets.
Carmona’s idea was to have computers search for relationships in the data Straus had amassed. Perhaps they could find instances in the remote past of similar trading environments, then they could examine how prices reacted. By identifying comparable trading situations and tracking what subsequently happened to prices, they could develop a sophisticated and accurate forecasting model capable of detecting hidden patterns.
Their software was a bit clunky at times, unable to quickly test and implement trade ideas or discover lots of new relationships and patterns.
Everyone in the finance business was trying to invest the Renaissance way: digesting data, building mathematical models to anticipate the direction of various investments, and employing automated trading systems.
All that power allows quants to find and test many more predictive signals than ever before. “Instead of the hit-and-miss strategy of trying to find signals using creativity and thought,” a Renaissance computer specialist says, “now you can just throw a class of formulas at a machine-learning engine and test out millions of different possibilities.” “The approach is scientific,” Simons says. “We use very rigorous statistical approaches to determine what we think is underlying.” Another lesson of the Renaissance experience is that there are more factors and variables influencing financial markets and individual investments than most realize or can deduce. Investors tend to focus on the most basic forces, but there are dozens of factors, perhaps whole dimensions of them, that are missed. Renaissance is aware of more of the forces that matter, along with the overlooked mathematical relationships that affect stock prices and other investments, than most anyone else.
For all the unique data, computer firepower, special talent, and trading and risk-management expertise Renaissance has gathered, the firm only profits on barely more than 50 percent of its trades, a sign of how challenging it is to try to beat the market—and how foolish it is for most investors to try.
Simons and his colleagues generally avoid predicting pure stock moves. It’s not clear any expert or system can reliably predict individual stocks, at least over the long term, or even the direction of financial markets. What Renaissance does is try to anticipate stock moves relative to other stocks, to an index, to a factor model, and to an industry.
Strategies
The currency moves were part of Medallion’s growing mix of tradeable effects, in their developing parlance.
Historic and overlooked pricing patterns: the stuff Simons obsessed over.
Economic Release Strategy
As they scrutinized their data, looking for short-term trading strategies to add to Medallion’s trading model, the team began identifying certain intriguing oddities in the market. Prices for some investments often fell just before key economic reports and rose right after, but prices didn’t always fall before the reports came out and didn’t always rise in the moments after. For whatever reason, the pattern didn’t hold for the US Department of Labor’s employment statistics and some other data releases. But there was enough data to indicate when the phenomena were most likely to take place, so the model recommended purchases just before the economic releases and sales almost immediately after them.
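The release pattern described above (prices dipping just before key economic reports and popping just after) reduces to a simple rule: buy shortly before the release, sell shortly after. The sketch below is my own illustration; the bar offsets are arbitrary assumptions, and in practice one would first verify the pattern held for that particular release series.

```python
def release_trade_pnl(prices, release_idx, entry_lead=3, exit_lag=2):
    """P&L from buying `entry_lead` bars before a scheduled release and
    selling `exit_lag` bars after it. `prices` is a list of bar prices;
    `release_idx` is the bar at which the report lands. Index bounds
    are assumed valid; the lead/lag choices are illustrative."""
    return prices[release_idx + exit_lag] - prices[release_idx - entry_lead]
```

Averaging this quantity over many historical releases is one way to decide whether a given report (unlike, per the book, the Labor Department's employment statistics) exhibited the exploitable pattern.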
Trend Strategy
The firm began incorporating higher-dimensional kernel regression approaches, which seemed to work best for trending models, or those predicting how long certain investments would keep moving in a trend.
Axcom’s model usually focused on two simple and commonplace trading strategies. Sometimes, it chased prices, or bought various commodities that were moving higher or lower on the assumption that the trend would continue. Other times, the model wagered that a price move was petering out and would reverse, a reversion strategy.
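The book names kernel regression but gives no detail. For intuition, here is a one-dimensional Nadaraya-Watson sketch: predict an outcome at a query point as a similarity-weighted average of past outcomes. This is a textbook estimator standing in for whatever higher-dimensional variant the firm actually used; the Gaussian kernel and bandwidth are my assumptions.

```python
import math

def kernel_regression(x_train, y_train, x_query, bandwidth=1.0):
    """Nadaraya-Watson kernel regression with a Gaussian kernel: the
    prediction at x_query is the average of past outcomes y_train,
    weighted by how similar each past x is to x_query."""
    weights = [math.exp(-0.5 * ((x_query - x) / bandwidth) ** 2) for x in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)
```

For a trend model, `x` might encode the recent price path and `y` how long the move subsequently persisted; nearby historical episodes then vote on the forecast.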
Mean Reversion Strategy
Laufer created computer simulations to test whether certain strategies should be added to their trading model. The strategies were often based on the idea that prices tend to revert after an initial move higher or lower. Laufer would buy futures contracts if they opened at unusually low prices compared with their previous closing price, and sell if prices began the day much higher than their previous close.
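Laufer's open-versus-previous-close rule can be sketched directly. The threshold below is an assumed parameter, not one from the book:

```python
def reversion_signal(prev_close, today_open, threshold=0.005):
    """Fade unusually large gaps between today's open and yesterday's
    close: buy when the contract opens well below the prior close, sell
    when it opens well above. `threshold` is a fractional gap size and
    is purely illustrative."""
    gap = (today_open - prev_close) / prev_close
    if gap <= -threshold:
        return "buy"
    if gap >= threshold:
        return "sell"
    return "hold"
```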
Crucially, Brown and Mercer retained the prediction model Frey had developed from his Morgan Stanley experience. It continued to identify enough winning trades to make serious money, usually by wagering on reversions after stocks got out of whack. Over the years, Renaissance would add twists to this bedrock strategy, but, for more than a decade, those would just be second order complements to the firm’s core reversion-to-the-mean predictive signals. An employee boils it down succinctly: “We make money from the reactions people have to price moves.”
Seasonality Strategy
Laufer discovered certain recurring trading sequences based on the day of the week. Monday’s price action often followed Friday’s, for example, while Tuesday saw reversions to earlier trends. Laufer also uncovered how the previous day’s trading often can predict the next day’s activity, something he termed the twenty-four hour effect. The Medallion model began to buy late in the day on a Friday if a clear up-trend existed, for instance, and then sell early Monday, taking advantage of what they called the weekend effect. Simons and his researchers didn’t believe in spending much time proposing and testing their own intuitive trade ideas. They let the data point them to the anomalies signaling opportunity. They also didn’t think it made sense to worry about why these phenomena existed. All that mattered was that they happened frequently enough to include in their updated trading system, and that they could be tested to ensure they weren’t statistical flukes.
They did have theories. Berlekamp and others developed a thesis that locals, or floor traders who buy or sell commodities and bonds to keep the market functioning, liked to go home at the end of a trading week holding few or no futures contracts, just in case bad news arose over the weekend that might saddle them with losses. Similarly, brokers on the floors of commodity exchanges seemed to trim futures positions ahead of the economic reports to avoid the possibility that unexpected news might cripple their holdings. These traders got right back into their positions after the weekend, or subsequent to the news releases, helping prices rebound. Medallion’s system would buy when these brokers sold, and sell the investments back to them as they became more comfortable with the risk.
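Day-of-week effects like the ones Laufer found can be surfaced with nothing more than a per-weekday average return. A minimal sketch, with illustrative weekday coding (0 = Monday through 4 = Friday):

```python
from collections import defaultdict

def day_of_week_profile(dated_returns):
    """Average return per weekday. `dated_returns` is a list of
    (weekday, return) pairs; the output maps weekday -> mean return,
    a crude first pass at spotting effects like the weekend effect."""
    buckets = defaultdict(list)
    for weekday, r in dated_returns:
        buckets[weekday].append(r)
    return {d: sum(rs) / len(rs) for d, rs in buckets.items()}
```

A persistent gap between, say, Friday's and Monday's averages would then be a candidate anomaly to test for statistical significance before trading it.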
Autocorrelation Strategy
When the currency rose one day, it had a surprising likelihood of climbing the next day, as well. And when it fell, it often dropped the next day, too. It didn’t seem to matter if the team looked at the month-to-month, week-to-week, day-to-day, or even hour-to-hour correlations; deutsche marks showed an unusual propensity to trend from one period to the next, trends that lasted longer than one might have expected. When you flip a coin, you have a 25 percent chance of getting heads twice in a row, but there is no correlation from one flip to the next. By contrast, Straus, Laufer, and Berlekamp determined the correlation of price moves in deutsche marks between any two consecutive time periods was as much as 20 percent, meaning that the sequence repeated more than half of the time. By comparison, the team found a correlation between consecutive periods of 10 percent or so for other currencies, 7 percent for gold, 4 percent for hogs and other commodities, and just 1 percent for stocks. “The time scale doesn’t seem to matter,” Berlekamp said to a colleague one day, with surprise. “We get the same statistical anomaly.”
“People persist in their habits longer than they should,” he says.
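The statistic the team was measuring is the lag-1 autocorrelation of returns, which can be computed in a few lines. To replicate the book's observation one would run this on returns sampled at several different frequencies:

```python
def lag1_autocorrelation(returns):
    """Lag-1 autocorrelation of a return series. Positive values mean a
    move tends to be followed by a move in the same direction (a trend);
    negative values indicate reversal; zero is coin-flip behavior."""
    n = len(returns)
    mean = sum(returns) / n
    cov = sum((returns[i] - mean) * (returns[i + 1] - mean) for i in range(n - 1))
    var = sum((r - mean) ** 2 for r in returns)
    return cov / var
```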
HMM Strategy
The quartet published an internal, classified paper for the IDA called “Probabilistic Models for and Prediction of Stock Market Behavior” in which Simons and his colleagues ignored the basic information most investors focus on, such as earnings, dividends, and corporate news, what the code breakers termed the “fundamental economic statistics of the market.” Instead, they proposed searching for a small number of “macroscopic variables” capable of predicting the market’s short-term behavior. They posited that the market had as many as eight underlying “states”—such as “high variance,” when stocks experienced larger-than-average moves, and “good,” when shares generally rose. Here’s what was really unique: The paper didn’t try to identify or predict these states using economic theory or other conventional methods, nor did the researchers seek to address why the market entered certain states. Simons and his colleagues used mathematics to determine the set of states best fitting the observed pricing data; their model then made its bets accordingly. The whys didn’t matter, Simons and his colleagues seemed to suggest, just the strategies to take advantage of the inferred states. … The approach to predicting stock prices relied on a sophisticated mathematical tool called a hidden Markov model. Just as a gambler might guess an opponent’s mood based on his or her decisions, an investor might deduce a market’s state from its price movements.
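To make the hidden Markov idea concrete, here is a toy two-state decoder (Viterbi algorithm): given a return series, it infers the most likely sequence of hidden regimes, a calm "good" state versus a "high variance" state. All parameters are hand-set illustrations; in the paper's scheme the states themselves were fitted from data, not assumed.

```python
import math

def viterbi_two_state(returns, means=(0.001, -0.002), sigmas=(0.005, 0.02),
                      p_stay=0.9):
    """Most likely hidden-state path for a toy two-state Gaussian HMM:
    state 0 = calm/'good', state 1 = 'high variance'. Means, sigmas,
    and the stay-probability are illustrative assumptions."""
    def log_emit(r, s):
        # Gaussian log-density of return r under state s (constants dropped).
        return -0.5 * ((r - means[s]) / sigmas[s]) ** 2 - math.log(sigmas[s])

    log_stay, log_switch = math.log(p_stay), math.log(1 - p_stay)
    score = [log_emit(returns[0], 0), log_emit(returns[0], 1)]  # uniform prior
    back = []
    for r in returns[1:]:
        new, ptr = [], []
        for s in (0, 1):
            cand = [score[prev] + (log_stay if prev == s else log_switch)
                    for prev in (0, 1)]
            best = 0 if cand[0] >= cand[1] else 1
            ptr.append(best)
            new.append(cand[best] + log_emit(r, s))
        score = new
        back.append(ptr)
    # Trace back the best path from the final state.
    state = 0 if score[0] >= score[1] else 1
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]
```

A trading model would then condition its bets on the inferred current state, without ever asking why the market entered it.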
Stat Arb Strategy
Team members didn’t know a thing about the stocks they traded and didn’t need to—their strategy was simply to wager on the re-emergence of historic relationships between shares, an extension of the age-old “buy low, sell high” investment adage,
The Morgan Stanley traders became some of the first to embrace the strategy of statistical arbitrage, or stat arb. This generally means making lots of concurrent trades, most of which aren’t correlated to the overall market but are aimed at taking advantage of statistical anomalies or other market behavior. The team’s software ranked stocks by their gains or losses over the previous weeks, for example. APT would then sell short, or bet against, the top 10 percent of the winners within an industry while buying the bottom 10 percent of the losers on the expectation that these trading patterns would revert. It didn’t always happen, of course, but when implemented enough times, the strategy resulted in annual profits of 20 percent, likely because investors often tend to overreact to both good and bad news before calming down and helping to restore historic relationships between stocks.
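The APT ranking scheme described above is simple enough to sketch directly. The decile fraction is the book's figure; the ticker data is invented for illustration:

```python
def stat_arb_book(past_returns, top_frac=0.10):
    """Rank names within an industry by trailing return; short the top
    decile of winners and buy the bottom decile of losers, betting the
    moves revert. `past_returns` maps ticker -> trailing return."""
    ranked = sorted(past_returns, key=past_returns.get, reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    return {"short": ranked[:k], "long": ranked[-k:]}
```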
Frey proposed deconstructing the movements of various stocks by identifying the independent variables responsible for those moves. A surge in Exxon, for example, could be attributable to multiple factors, such as moves in oil prices, the value of the dollar, the momentum of the overall market, and more. A rise in Procter & Gamble might be most attributable to its healthy balance sheet and a growing demand for safe stocks, as investors soured on companies with lots of debt. If so, selling groups of stocks with robust balance sheets and buying those with heavy debt might be called for, if data showed the performance gap between the groups had moved beyond historic bounds. A handful of investors and academics were mulling factor investing around that same time, but Frey wondered if he could do a better job using computational statistics and other mathematical techniques to isolate the true factors moving shares.
The firm was improving on the statistical-arbitrage strategies Frey and others had employed at Morgan Stanley by identifying a small set of market-wide factors that best explained stock moves. The trajectory of United Airlines shares, for example, is determined by the stock’s sensitivity to the returns of the overall market, changes in the price of oil, the movement of interest rates, and other factors. The direction of another stock, like Walmart, is influenced by the same explanatory factors, though the retail giant likely has a very different sensitivity to each of them. Kepler’s twist was to apply this approach to statistical arbitrage, buying stocks that didn’t rise as much as expected based on the historic returns of these various underlying factors, while simultaneously selling short, or wagering against, shares that underperformed. If shares of Apple Computer and Starbucks each rose 10 percent amid a market rally, but Apple historically did much better than Starbucks during bullish periods, Kepler might buy Apple and short Starbucks. Using time-series analysis and other statistical techniques, Frey and a colleague searched for trading errors, behavior not fully explained by historic data tracking the key factors, on the assumption that these deviations likely would disappear over time. Betting on relationships and relative differences between groups of stocks, rather than an outright rise or fall of shares, meant Frey didn’t need to predict where shares were headed, a difficult task for anyone. He and his colleagues also didn’t really care where the overall market was going. As a result, Kepler’s portfolio was market neutral. Frey’s models usually just focused on whether relationships between clusters of stocks returned to their historic norms—a reversion-to-the-mean strategy.
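The "behavior not fully explained by the factors" is just the residual of a factor regression. A single-factor sketch of the idea (real factor models use many factors and more careful estimation; this is my own minimal illustration):

```python
def factor_residuals(stock_returns, factor_returns):
    """Ordinary least squares of a stock's returns on one factor (say,
    the market): returns the residuals, i.e. the part of each move the
    factor doesn't explain. A reversion strategy bets these residuals
    shrink back toward zero over time."""
    n = len(stock_returns)
    mx = sum(factor_returns) / n
    my = sum(stock_returns) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(factor_returns, stock_returns))
            / sum((x - mx) ** 2 for x in factor_returns))
    alpha = my - beta * mx
    return [y - (alpha + beta * x) for x, y in zip(factor_returns, stock_returns)]
```

Because the bet is on residuals rather than raw returns, the factor exposure itself nets out, which is what makes a portfolio built this way market neutral.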
Bet Sizing
Axcom’s trading models didn’t seem to size trades properly. They should buy and sell larger amounts when their model suggested a better chance of making money, Berlekamp argued, precepts he had learned from Kelly.
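The "precepts from Kelly" refer to the Kelly criterion, which stakes more when the edge is larger. For a bet that gains `win` per unit staked with probability `p_win` and loses `loss` otherwise, the log-growth-optimal fraction is f* = p/loss − (1−p)/win. A minimal sketch:

```python
def kelly_fraction(p_win, win, loss):
    """Kelly-optimal fraction of capital to stake on a bet that gains
    `win` per unit with probability `p_win` and loses `loss` otherwise.
    Berlekamp's point: size up when the model implies a bigger edge."""
    return p_win / loss - (1 - p_win) / win
```

In practice funds typically trade a fraction of full Kelly, since the formula is sensitive to errors in the estimated win probability.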
Trading Frequency
Berlekamp also argued that buying and selling infrequently magnifies the consequences of each move. Mess up a couple times, and your portfolio could be doomed. Make a lot of trades, however, and each individual move is less important, reducing a portfolio’s overall risk.
With a slight statistical edge, the law of large numbers would be on their side, just as it is for casinos. “If you trade a lot, you only need to be right 51 percent of the time,” Berlekamp argued to a colleague. “We need a smaller edge on each trade.”
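Berlekamp's casino analogy is easy to check by simulation: with a 51 percent edge on even-money bets, the aggregate P&L over enough trades is reliably positive even though nearly half the individual trades lose. A minimal sketch (seed and trade counts are arbitrary):

```python
import random

def simulate_edge(p_win=0.51, n_trades=100_000, stake=1.0, seed=7):
    """Total P&L from many small even-money bets with win probability
    `p_win`. With a slight edge and enough trades, the law of large
    numbers makes the total dependably positive."""
    rng = random.Random(seed)
    pnl = 0.0
    for _ in range(n_trades):
        pnl += stake if rng.random() < p_win else -stake
    return pnl
```

With 100,000 trades the expected P&L is about 2,000 stakes against a standard deviation of roughly 316, so a losing run is astronomically unlikely; with only a handful of trades the same edge is easily swamped by noise, which is the argument for trading frequently.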
By late 1989, after about six months of work, Berlekamp and his colleagues were reasonably sure their rebuilt trading system—focused on commodity, currency, and bond markets—could prosper. Some of their anomalies and trends lasted days, others just hours or even minutes.
Medallion made between 150,000 and 300,000 trades a day, but much of that activity entailed buying or selling in small chunks to avoid impacting the market prices, rather than profiting by stepping in front of other investors.
Medallion still held thousands of long and short positions at any time, and its holding period ranged from one or two days to one or two weeks.
The fund did even faster trades, described by some as high-frequency, but many of those were for hedging purposes or to gradually build its positions. Renaissance still placed an emphasis on cleaning and collecting its data, but it had refined its risk management and other trading techniques. “I’m not sure we’re the best at all aspects of trading, but we’re the best at estimating the cost of a trade,” Simons told a colleague a couple years earlier.
Medallion still did bond, commodity, and currency trades, and it made money from trending and reversion-predicting signals, including a particularly effective one aptly named Déjà Vu. More than ever, though, it was powered by complex equity trades featuring a mixture of complex signals, rather than simple pairs trades, such as buying Coke and selling Pepsi. The gains on each trade were never huge, and the fund only got it right a bit more than half the time, but that was more than enough. “We’re right 50.75 percent of the time . . . but we’re 100 percent right 50.75 percent of the time,” Mercer told a friend. “You can make billions that way.”
They did more trading than ever, cutting Medallion’s average holding time to just a day and a half from a week and a half, scoring profits almost every day.
The fund began trading more frequently. Having first sent orders to a team of traders five times a day, it eventually increased to sixteen times a day, reducing the impact on prices by focusing on the periods when there was the most volume.
Rationale behind Strategies
… encouraging the team to focus on uncovering what he called “subtle anomalies” others had overlooked. Beyond the repeating sequences that seemed to make sense, the system Berlekamp, Straus, and Laufer developed spotted barely perceptible patterns in various markets that had no apparent explanation. These trends and oddities sometimes happened so quickly that they were unnoticeable to most investors. They were so faint, the team took to calling them ghosts, yet they kept reappearing with enough frequency to be worthy additions to their mix of trade ideas. Simons had come around to the view that the whys didn’t matter, just that the trades worked.
Some of the trading signals they identified weren’t especially novel or sophisticated. But many traders had ignored them. Either the phenomena took place barely more than 50 percent of the time, or they didn’t seem to yield enough in profit to offset the trading costs. Investors moved on, searching for juicier opportunities, like fishermen ignoring the guppies in their nets, hoping for bigger catch. By trading frequently, the Medallion team figured it would be worthwhile to hold on to all the guppies they were collecting.
Until then, Simons and his colleagues hadn’t spent too much time wondering why their growing collection of algorithms predicted prices so presciently. They were scientists and mathematicians, not analysts or economists. If certain signals produced results that were statistically significant, that was enough to include them in the trading model. “I don’t know why planets orbit the sun,” Simons told a colleague, suggesting one needn’t spend too much time figuring out why the market’s patterns existed. “That doesn’t mean I can’t predict them.”
If Medallion was emerging as a big winner in most of its trades, who was on the other side suffering steady losses? Over time, Simons came to the conclusion that the losers probably weren’t those who trade infrequently, such as buy-and-hold individual investors, or even the “treasurer of a multinational corporation,” who adjusts her portfolio of foreign currencies every once in a while to suit her company’s needs, as Simons told his investors. Instead, it seemed Renaissance was exploiting the foibles and faults of fellow speculators, both big and small.
consensus would emerge that investors act more irrationally than assumed, repeatedly making similar mistakes. Investors overreact to stress and make emotional decisions. Indeed, it’s likely no coincidence that Medallion found itself making its largest profits during times of extreme turbulence in financial markets,
“Our P&L isn’t an input,” Patterson says, using trading lingo for profits and losses. “We’re mediocre traders, but our system never has rows with its girlfriends—that’s the kind of thing that causes patterns in markets.”
Simons hadn’t embraced a statistics-based approach because of the work of any economists or psychologists, nor had he set out to program algorithms to avoid, or take advantage of, investors’ biases. Over time, though, Simons and his team came to believe that these errors and overreactions were at least partially responsible for their profits, and that their developing system seemed uniquely capable of taking advantage of the common mistakes of fellow traders. “What you’re really modeling is human behavior,” explains Penavic, the researcher. “Humans are most predictable in times of high stress— they act instinctively and panic. Our entire premise was that human actors will react the way humans did in the past . . . we learned to take advantage.”
By 1997, Medallion’s staffers had settled on a three-step process to discover statistically significant moneymaking strategies, or what they called their trading signals. Identify anomalous patterns in historic pricing data; make sure the anomalies were statistically significant, consistent over time, and nonrandom; and see if the identified pricing behavior could be explained in a reasonable way. For a while, the patterns they wagered on were primarily those Renaissance researchers could understand. Most resulted from relationships between price, volume, and other market data and were based on the historic behavior of investors or other factors. One strategy with enduring success: betting on retracements. About 60 percent of investments that experienced big, sudden price rises or drops would snap back, at least partially, it turned out. Profits from these retracements helped Medallion do especially well in volatile markets when prices lurched, before retracing some of that ground. By 1997, though, more than half of the trading signals Simons’s team was discovering were nonintuitive, or those they couldn’t fully understand. Most quant firms ignore signals if they can’t develop a reasonable hypothesis to explain them, but Simons and his colleagues never liked spending too much time searching for the causes of market phenomena. If their signals met various measures of statistical strength, they were comfortable wagering on them. They only steered clear of the most preposterous ideas. “Volume divided by price change three days earlier, yes, we’d include that,” says a Renaissance executive. “But not something nonsensical, like the outperformance of stock tickers starting with the letter A.” Recurring patterns without apparent logic to explain them had an added bonus: They were less likely to be discovered and adopted by rivals, most of whom wouldn’t touch these kinds of trades.
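The retracement statistic above (roughly 60 percent of big, sudden moves snapping back at least partially) can be measured with a small scan over a price series. The move threshold is an assumed parameter:

```python
def retracement_rate(prices, move_threshold=0.05):
    """Fraction of large one-period moves that at least partially
    retrace (next move in the opposite direction). `move_threshold`
    is the fractional size that counts as 'big' and is illustrative."""
    big, retraced = 0, 0
    for i in range(1, len(prices) - 1):
        move = (prices[i] - prices[i - 1]) / prices[i - 1]
        if abs(move) >= move_threshold:
            big += 1
            nxt = (prices[i + 1] - prices[i]) / prices[i]
            if nxt * move < 0:  # opposite sign means a (partial) retrace
                retraced += 1
    return retraced / big if big else float("nan")
```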
“If there were signals that made a lot of sense that were very strong, they would have long-ago been traded out,” Brown explained. “There are signals that you can’t understand, but they’re there, and they can be relatively strong.” The obvious danger with embracing strategies that don’t make sense: The patterns behind them could result from meaningless coincidences.
Often, the Renaissance researchers’ solution was to place such headscratching signals in their trading system, but to limit the money allocated to them, at least at first, as they worked to develop an understanding of why the anomalies appeared. Over time, they frequently discovered reasonable explanations, giving Medallion a leg up on firms that had dismissed the phenomena. They ultimately settled on a mix of sensible signals, surprising trades with strong statistical results, and a few bizarre signals so reliable they couldn’t be ignored. “We ask, ‘Does this correspond to some aspect of behavior that seems reasonable?’” Simons explained a few years later.
Driving these reliable gains was a key insight: Stocks and other investments are influenced by more factors and forces than even the most sophisticated investors appreciated. For example, to predict the direction of a stock like Alphabet, the parent of Google, investors generally try to forecast the company’s earnings, the direction of interest rates, the health of the US economy, and the like. Others will anticipate the future of search and online advertising, the outlook for the broader technology industry, the trajectory of global companies, and metrics and ratios related to earnings, book value, and other variables. Renaissance staffers deduced that there is even more that influences investments, including forces not readily apparent or sometimes even logical. By analyzing and estimating hundreds of financial metrics, social media feeds, barometers of online traffic, and pretty much anything that can be quantified and tested, they uncovered new factors, some borderline impossible for most to appreciate. “The inefficiencies are so complex they are, in a sense, hidden in the markets in code,” a staffer says. “RenTec decrypts them. We find them across time, across risk factors, across sectors and industries.” Even more important: Renaissance concluded that there are reliable mathematical relationships between all these forces. Applying data science, the researchers achieved a better sense of when various factors were relevant, how they interrelated, and the frequency with which they influenced shares. They also tested and teased out subtle, nuanced mathematical relationships between various shares—what staffers call multidimensional anomalies—that other investors were oblivious to or didn’t fully understand. “These relationships have to exist, since companies are interconnected in complex ways,” says a former Renaissance executive. “This interconnectedness is hard to model and predict with accuracy, and it changes over time. 
RenTec has built a machine to model this interconnectedness, track its behavior over time, and bet on when prices seem out of whack according to these models.” Outsiders didn’t quite get it, but the real key was the firm’s engineering—how it put all those factors and forces together in an automated trading system. The firm bought a certain number of stocks with positive signals, often a combination of more granular individual signals, and shorted, or bet against, stocks with negative signals, moves determined by thousands of lines of source code. “There is no individual bet we make that we can explain by saying we think one stock is going to go up or another down,” a senior staffer says. “Every bet is a function of all the other bets, our risk profile, and what we expect to do in the near and distant future. It’s a big, complex optimization based on the premise that we predict the future well enough to make money from our predictions, and that we understand risk, cost, impact, and market structure well enough to leverage the hell out of it.” How the firm wagered was at least as important as what it wagered on. If Medallion discovered a profitable signal, for example that the dollar rose 0.1 percent between nine a.m. and ten a.m., it wouldn’t buy when the clock struck nine, potentially signaling to others that a move happened each day at that time. Instead, it spread its buying out throughout the hour in unpredictable ways, to preserve its trading signal. Medallion developed methods of trading some of its strongest signals “to capacity,” as insiders called it, moving prices such that competitors couldn’t find them. It was a bit like hearing of a huge markdown on a hot item at Target and buying up almost all the discounted merchandise the moment the store opens, so no one else even realizes the sale took place. “Once we’ve been trading a signal for a year, it looks like something different to people who don’t know our trades,” an insider says. 
Simons summed up the approach in a 2014 speech in South Korea: “It’s a very big exercise in machine learning, if you want to look at it that way. Studying the past, understanding what happens and how it might impinge, nonrandomly, on the future.”
Data Sources
As the researchers worked to identify historic market behavior, they wielded a big advantage: They had more accurate pricing information than their rivals. For years, Straus had collected the tick data featuring intraday volume and pricing information for various futures, even as most investors ignored such granular information. Until 1989, Axcom generally relied on opening and closing data, like most other investors; to that point, much of the intraday data Straus had collected was pretty much useless.
But the more modern and powerful MIPS (million instructions per second) computers in their new offices gave the firm the ability to quickly parse all the pricing data in Straus’s collection, generating thousands of statistically significant observations within the trading data to help reveal previously undetected pricing patterns. “We realized we had been saving intraday data,” Straus says. “It wasn’t super clean, and it wasn’t all the tick data,” but it was more reliable and plentiful than what others were using.
Profits were piling up as Renaissance began digesting new kinds of information. The team collected every trade order, including those that hadn’t been completed, along with annual and quarterly earnings reports, records of stock trades by corporate executives, government reports, and economic predictions and papers. Simons wanted more. “Can we do anything with news flashes?” he asked in a group meeting. Soon, researchers were tracking newspaper and newswire stories, internet posts, and more obscure data—such as offshore insurance claims—racing to get their hands on pretty much any information that could be quantified and scrutinized for its predictive value. The Medallion fund became something of a data sponge, soaking up a terabyte, or one trillion bytes, of information annually, buying expensive disk drives and processors to digest, store, and analyze it all, looking for reliable patterns. “There’s no data like more data,” Mercer told a colleague, an expression that became the firm’s hokey mantra. Renaissance’s goal was to predict the price of a stock or other investment “at every point in the future,” Mercer later explained. “We want to know in three seconds, three days, three weeks, and three months.” If there was a newspaper article about a shortage of bread in Serbia, for example, Renaissance’s computers would sift through past examples of bread shortages and rising wheat prices to see how various investments reacted, Mercer said. Some of the new information, such as quarterly corporate earnings reports, didn’t provide much of an advantage. But data on the earnings predictions of stock analysts and their changing views on companies sometimes helped. Watching for patterns in how stocks traded following earnings announcements, and tracking corporate cash flows, research-and-development spending, share issuance, and other factors, also proved to be useful activities.
The team improved its predictive algorithms by developing a rather simple measure of how many times a company was mentioned in a news feed—no matter if the mentions were positive, negative, or even pure rumors.
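As a toy illustration of such a mention-count feature: count how often each company is named in a feed, with no attempt to judge sentiment. The company names, feed contents, and the bare substring match are all made up here, not Renaissance's actual pipeline.

```python
import re
from collections import Counter

def mention_counts(articles, tickers):
    """Count how often each company name appears in a news feed,
    ignoring whether the mention is positive, negative, or rumor."""
    counts = Counter()
    for text in articles:
        for ticker, name in tickers.items():
            counts[ticker] += len(re.findall(re.escape(name), text, re.IGNORECASE))
    return counts

# Hypothetical two-article feed.
feed = [
    "Acme Corp shares rise on takeover rumor",
    "Analysts doubt Acme Corp guidance; Globex unchanged",
]
counts = mention_counts(feed, {"ACME": "Acme Corp", "GBX": "Globex"})
```

The resulting per-company counts would then be tested like any other candidate signal.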
Next, Dwyer took his visitors downstairs to see Renaissance’s data group, staffed by over thirty PhDs and others. Dwyer’s tour usually concluded back upstairs in Renaissance’s computer room, which was the size of a couple of tennis courts and held stacks of servers in long rows of eight-foot-tall metal cages.
Single Model
Laufer made an early decision that would prove extraordinarily valuable: Medallion would employ a single trading model rather than maintain various models for different investments and market conditions, a style most quantitative firms would embrace. A collection of trading models was simpler and easier to pull off, Laufer acknowledged. But, he argued, a single model could draw on Straus’s vast trove of pricing data, detecting correlations, opportunities, and other signals across various asset classes. Narrow, individual models, by contrast, can suffer from too little data. Just as important, Laufer understood that a single, stable model based on some core assumptions about how prices and markets behave would make it easier to add new investments later on. They could even toss investments with relatively little trading data into the mix if they were deemed similar to other investments Medallion traded with lots of data. Yes, Laufer acknowledged, it’s a challenge to combine various investments, say a currency-futures contract and a US commodity contract. But, he argued, once they figured out ways to “smooth” out those wrinkles, the single model would lead to better trading results.
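One practical prerequisite for a single cross-asset model is putting very different instruments on a common scale so they can feed one dataset. A minimal sketch, assuming simple z-scoring of returns as the normalization (the book never specifies what the actual "smoothing" was):

```python
import statistics

def zscore(series):
    """Standardize one instrument's returns so instruments from
    different asset classes can feed a single shared model."""
    mu = statistics.fmean(series)
    sd = statistics.stdev(series)
    return [(x - mu) / sd for x in series]

# Hypothetical daily returns for two very different instruments;
# corn here happens to move ten times as much as the FX contract.
fx_futures = [0.001, -0.002, 0.003, 0.000, -0.001]
corn       = [0.010, -0.020, 0.030, 0.000, -0.010]

# After standardization the scale difference washes out, and both
# series can be pooled into one dataset for one model.
pooled = zscore(fx_futures) + zscore(corn)
```

This is also why a thinly traded instrument can piggyback on a data-rich one: once normalized, "similar" instruments contribute to the same shared estimates.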
Straus and others had compiled reams of files tracking decades of prices of dozens of commodities, bonds, and currencies. To make it all easier to digest, they had broken the trading week into ten segments—five overnight sessions, when stocks traded in overseas markets, and five day sessions. In effect, they sliced the day in half, enabling the team to search for repeating patterns and sequences in the various segments. Then, they entered trades in the morning, at noon, and at the end of the day. Simons wondered if there might be a better way to parse their data trove. Perhaps breaking the day up into finer segments might enable the team to dissect intraday pricing information and unearth new, undetected patterns. Laufer began splitting the day in half, then into quarters, eventually deciding five-minute bars were the ideal way to carve things up. Crucially, Straus now had access to improved computer processing power, making it easier for Laufer to compare small slices of historic data. Did the 188th five-minute bar in the cocoa-futures market regularly fall on days investors got nervous, while bar 199 usually rebounded? Perhaps bar 50 in the gold market saw strong buying on days investors worried about inflation but bar 63 often showed weakness? Laufer’s five-minute bars gave the team the ability to identify new trends, oddities, and other phenomena, or, in their parlance, nonrandom trading effects. Straus and others conducted tests to ensure they hadn’t mined so deeply into their data that they had arrived at bogus trading strategies, but many of the new signals seemed to hold up. It was as if the Medallion team had donned glasses for the first time, seeing the market anew. One early discovery: Certain trading bands from Friday morning’s action had the uncanny ability to predict bands later that same afternoon, nearer to the close of trading.
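The five-minute-bar bookkeeping can be sketched as follows: bucket each tick into a bar number by time of day, then collect the return of the same bar across many days so it can be checked for a recurring effect. The tick format and bar arithmetic here are illustrative assumptions, not the firm's actual data model.

```python
from collections import defaultdict

def bar_index(seconds_since_open, bar_seconds=300):
    """Map a tick's time-of-day to its five-minute bar number."""
    return seconds_since_open // bar_seconds

def bar_returns(ticks):
    """ticks: list of (day, seconds_since_open, price).
    Returns {bar_number: [per-day returns]} so the same bar
    (say, bar 188 in cocoa futures) can be compared across days."""
    last = defaultdict(dict)          # day -> bar -> last price seen in that bar
    for day, t, price in ticks:
        last[day][bar_index(t)] = price
    out = defaultdict(list)
    for day, bars in last.items():
        ordered = sorted(bars)
        for prev, cur in zip(ordered, ordered[1:]):
            out[cur].append(bars[cur] / bars[prev] - 1.0)
    return out

# Two hypothetical days of a single instrument, two ticks per day.
ticks = [(1, 10, 100.0), (1, 400, 101.0), (2, 5, 200.0), (2, 350, 202.0)]
per_bar = bar_returns(ticks)
```

Each bar's list of returns is then what gets tested for a non-random mean, as in the next passage.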
Laufer’s work also showed that, if markets moved higher late in a day, it often paid to buy futures contracts just before the close of trading and dump them at the market’s opening the next day. The team uncovered predictive effects related to volatility, as well as a series of combination effects, such as the propensity of pairs of investments—such as gold and silver, or heating oil and crude oil—to move in the same direction at certain times in the trading day compared with others. It wasn’t immediately obvious why some of the new trading signals worked, but as long as they had p-values, or probability values, under 0.01—meaning they appeared statistically significant, with a low probability of being statistical mirages—they were added to the system. Wielding an array of profitable investing ideas wasn’t nearly enough, Simons soon realized. “How do we pull the trigger?” he asked Laufer and the rest of the team. Simons was challenging them to solve yet another vexing problem: Given the range of possible trades they had developed and the limited amount of money that Medallion managed, how much should they bet on each trade? And which moves should they pursue and prioritize? Laufer began developing a computer program to identify optimal trades throughout the day, something Simons began calling his betting algorithm. Laufer decided it would be “dynamic,” adapting on its own along the way and relying on real-time analysis to adjust the fund’s mix of holdings given the probabilities of future market moves—an early form of machine learning.
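The p < 0.01 acceptance gate described above amounts to a one-sample test of a signal's mean return against zero. The 0.01 threshold comes from the book; everything else in this sketch is an assumption, including the use of a normal approximation to the t-statistic (adequate for large samples).

```python
import math
import statistics

def pvalue_of_signal(returns):
    """Two-sided p-value for 'mean return is zero', using a normal
    approximation to the t-statistic (fine for large samples)."""
    n = len(returns)
    t = statistics.fmean(returns) / (statistics.stdev(returns) / math.sqrt(n))
    return math.erfc(abs(t) / math.sqrt(2))

def accepted(signals, threshold=0.01):
    """Keep only candidate signals whose backtest p-value clears the bar."""
    return {name: rets for name, rets in signals.items()
            if pvalue_of_signal(rets) < threshold}

# Hypothetical backtest returns for two candidate signals.
candidates = {
    "bar_188_cocoa": [0.011, 0.009] * 25,   # consistently positive
    "coin_flip":     [0.01, -0.01] * 25,    # pure noise, mean zero
}
kept = accepted(candidates)
```

With thousands of candidates, a fixed 0.01 cutoff still admits many mirages by chance, which is presumably why the book also mentions separate tests against overfitting.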
Portfolio Optimization
Brown and Mercer seized on a different approach. They decided to program the necessary limitations and qualifications into a single trading system that could automatically handle all potential complications
Their inputs were the fund’s trading costs, its various leverages, risk parameters, and assorted other limitations and requirements. Given all of those factors, they built the system to solve and construct an ideal portfolio, making optimal decisions, all day long, to maximize returns. The beauty of the approach was that, by combining all their trading signals and portfolio requirements into a single, monolithic model, Renaissance could easily test and add new signals, instantly knowing if the gains from a potential new strategy were likely to top its costs. They also made their system adaptive, or capable of learning and adjusting on its own, much like Henry Laufer’s trading system for futures. If the model’s recommended trades weren’t executed, for whatever reason, it self-corrected, automatically searching for buy-or-sell orders to nudge the portfolio back where it needed to be, a way of solving the issue that had hamstrung Frey’s model. The system repeated on a loop several times an hour, conducting an optimization process that weighed thousands of potential trades before issuing electronic trade instructions. Rivals didn’t have self-improving models.
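A deliberately crude sketch of a cost-aware optimization loop in the spirit described above: nudge each position up or down only while the forecast gain beats the trading cost, subject to a position cap. The greedy scheme, step size, and cap are stand-ins for whatever Renaissance actually solves, which is certainly far more sophisticated.

```python
def optimize(positions, forecasts, cost_per_unit, step=1.0, sweeps=100):
    """Greedy sketch of a cost-aware portfolio loop: adjust each position
    only while the forecast gain exceeds the cost of trading, within a
    crude per-asset risk cap."""
    pos = dict(positions)
    for _ in range(sweeps):
        moved = False
        for asset, mu in forecasts.items():
            for delta in (step, -step):
                gain = mu * delta - cost_per_unit * abs(delta)
                if gain > 0 and abs(pos[asset] + delta) <= 10:  # crude risk cap
                    pos[asset] += delta
                    moved = True
        if not moved:
            break
    return pos

# Hypothetical forecasts: "a" clears its trading costs, "b" does not.
final = optimize({"a": 0, "b": 0}, {"a": 0.5, "b": 0.05}, cost_per_unit=0.1)
```

Note how the cost gate gives the property the passage highlights: evaluating a new signal reduces to running the optimizer with and without it and seeing whether its forecast edge survives the costs (here, "b" is simply never traded).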
Risk Management
Never place too much trust in trading models. Yes, the firm’s system seemed to work, but all formulas are fallible. This conclusion reinforced the fund’s approach to managing risk. If a strategy wasn’t working, or when market volatility surged, Renaissance’s system tended to automatically reduce positions and risk. For example, Medallion cut its futures trading by 25 percent in the fall of 1998.
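The volatility-triggered de-risking rule might be sketched like this. The 25% cut comes from the book's 1998 example; the volatility measure (realized standard deviation of recent returns) and the threshold are assumptions.

```python
import statistics

def scaled_positions(positions, recent_returns, vol_limit, cut=0.25):
    """If realized volatility breaches the limit, cut every position
    by a fixed fraction; otherwise leave the book untouched."""
    vol = statistics.stdev(recent_returns)
    if vol > vol_limit:
        return {k: v * (1 - cut) for k, v in positions.items()}
    return dict(positions)

calm = [0.001, -0.001] * 10   # quiet market: positions unchanged
wild = [0.05, -0.05] * 10     # volatility spike: positions cut 25%
book = {"ES": 100.0}
```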
Slippage
Patterson began helping Laufer with a stubborn problem. Profitable trade ideas are only half the game; the act of buying and selling investments can itself affect prices to such a degree that gains can be whittled away. It’s meaningless to know that copper prices will rise from $3.00 a contract to $3.10, for example, if your buying pushes the price up to $3.05 before you even have a chance to complete your transaction—perhaps as dealers hike the price or as rivals do their own buying—slashing potential profits by half. From the earliest days of the fund, Simons’s team had been wary of these transaction costs, which they called slippage. They regularly compared their trades against a model that tracked how much the firm would have profited or lost were it not for those bothersome trading costs. The group coined a name for the difference between the prices they were getting and the theoretical trades their model made without the pesky costs. They called it The Devil. For a while, the actual size of The Devil was something of a guess. But, as Straus collected more data and his computers became more powerful, Laufer and Patterson began writing a computer program to track how far their trades strayed from the ideal state, in which trading costs barely weighed on the fund’s performance. By the time Patterson got to Renaissance, the firm could run a simulator that subtracted these trading costs from the prices they had received, instantly isolating how much they were missing out. To narrow the gap, Laufer and Patterson began developing sophisticated approaches to direct trades to various futures exchanges to reduce the market impact of each trade.
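Measuring The Devil reduces to comparing executed prices against the model's theoretical fills and summing the shortfall. A minimal version, with an invented fill format (the book describes the idea, not the implementation):

```python
def devil(fills):
    """fills: list of (side, model_price, executed_price, quantity),
    where side is +1 for buys and -1 for sells. Returns the total cost
    of slippage: how much worse the real fills were than the model's
    ideal, cost-free prices. Buying above the model price or selling
    below it both show up as positive cost."""
    return sum(side * (executed - model) * qty
               for side, model, executed, qty in fills)

# The book's copper example: model says buy at $3.00, but our own buying
# pushes the fill to $3.05; later we sell at $3.08 against a $3.10 model.
cost = devil([(1, 3.00, 3.05, 100), (-1, 3.10, 3.08, 100)])
```

Tracking this number per venue and per order type is what lets routing logic like Laufer and Patterson's shrink the gap over time.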
Information Leakage
Simons worried his signals were getting weaker as rivals adopted similar strategies. “The system is always leaking,” Simons acknowledged in his first interview with a reporter. “We keep having to keep it ahead of the game.”
Asset Classes
By 2003, the profits of Brown and Mercer’s stock-trading group were twice those of Laufer’s futures team
RIEF
Once a year, the fund returned its gains to its investors—mostly the firm’s own employees—ensuring that it didn’t get too big.
The size limit meant Medallion sometimes identified more market aberrations and phenomena than it could put to use. The discarded trading signals usually involved longer-term opportunities. Simons’s scientists were more confident about short-term signals, partly because more data was available to help confirm them.
That gave Simons an idea—why not start a new hedge fund to take advantage of these extraneous, longer-term predictive signals
… researchers settled on one that would trade with little human intervention, like Medallion, yet would hold investments a month or even longer. It would incorporate some of Renaissance’s usual tactics, such as finding correlations and patterns in prices, but would add other, more fundamental strategies, including buying inexpensive shares based on price-earnings ratios, balance-sheet data, and other information.
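A crude stand-in for the fundamental leg of such a fund: rank stocks by price-earnings ratio and keep the cheapest slice. The tickers, numbers, and cutoff fraction are illustrative; RIEF's actual value signals are not disclosed.

```python
def cheapest_by_pe(stocks, fraction=0.2):
    """stocks: {ticker: (price, earnings_per_share)}. Rank by P/E and
    return the cheapest slice, skipping loss-makers (no meaningful P/E)."""
    valid = [(t, p / e) for t, (p, e) in stocks.items() if e > 0]
    valid.sort(key=lambda pair: pair[1])
    n = max(1, int(len(valid) * fraction))
    return [t for t, _ in valid[:n]]

# Hypothetical universe: P/Es are A=5, B=100, C=3, D=10; E has negative
# earnings and is excluded.
universe = {"A": (10, 2), "B": (100, 1), "C": (30, 10), "D": (50, 5), "E": (20, -1)}
picks = cheapest_by_pe(universe)
```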