Strategy Intermittently Kaputt
In early 2019, the CME modified how (when!) fills of an iceberg order are disseminated (see here). This eliminated an easy "trick" to infer the full size of a trade based on a private fill message before the market as a whole knew of the trade.
But let's back up a little. On the CME, private fill messages are sent before the trade is disseminated in the public market data feed. But (ideally) the information advantage of this is limited. Suppose your order was at the front of the queue and you receive a fill. All you can (or should be able to) tell is that a trade of the size of your order has happened - nothing more. If your order was further down in the queue, your fill comes later, but you can also infer that any volume ahead of you must have been traded. However, only the first 10 or so fills have a latency advantage over the public feed. This means that for trades filling tens of orders and trading through multiple levels, the true size of the trade should be conveyed first by the public feed.
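To make the "limited information content" point concrete, here is a minimal sketch (hypothetical fields, not actual CME iLink/MDP tags) of what the owner of a resting order can infer from a private fill alone:

```python
# Hypothetical illustration (made-up fields, not actual CME iLink/MDP tags):
# what the owner of a resting order can infer from a private fill alone.

def inferable_volume(queue_ahead: int, own_fill_qty: int) -> int:
    """Lower bound on the aggressor's size implied by a single private fill.

    queue_ahead  -- resting quantity that was queued in front of our order
    own_fill_qty -- quantity of our order that was just filled

    Everything queued ahead of us must have traded before our order was
    touched, so the aggressive order was at least this large - and that is
    all the private fill tells us.
    """
    return queue_ahead + own_fill_qty


# Example: 120 lots were resting ahead of us and we received a 5-lot fill,
# so the aggressor traded at least 125 lots.
print(inferable_volume(queue_ahead=120, own_fill_qty=5))  # 125
```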
And yet, the data shows reactions to large trades in the ES (the S&P 500 E-mini future, the leading instrument) which come too early to have been triggered by the public feed and which are too large to be based on the limited information content of the private fill messages.
Fig. 1 below shows the traded volume in a number of products (all strongly correlated with the ES) from trades which were likely based on a private trigger, where (a) there was at least a 100 ms gap to the preceding ES trade and (b) the ES trade was at least 5 levels deep.
The third filtering condition is related to the secret sauce behind this. Suppose a large aggressive order reaches the matching engine. The matching engine fully processes the event before it starts disseminating any private fills or public data. Private and public updates contain two timestamps: the TransactTime (the gateway-in timestamp of the aggressive order) and the SendingTime (when the update was sent).
For large trades which fill many resting orders over multiple levels, the processing time will be larger than for smaller trades. So there is likely a relationship between the TransactTime-to-SendingTime latency and the number of orders involved. This means that one can infer the total number of filled orders from the fill message of even the first order in the queue. And given the current level-3 order book, one can then also estimate the depth of the trade.
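A minimal sketch of this inference, assuming one has collected historical (latency, order count) pairs; the linear fit and the book-walking helper below are illustrative assumptions, not the actual model used:

```python
import numpy as np

# Illustrative sketch only: fit a simple (assumed linear) relationship between
# the TransactTime-to-SendingTime latency and the number of orders filled in
# the matching event, then use it to estimate how deep a new trade went.

def fit_latency_model(latencies_ns: np.ndarray, n_orders: np.ndarray) -> np.ndarray:
    """Least-squares fit of filled-order count vs. processing latency."""
    A = np.vstack([latencies_ns, np.ones_like(latencies_ns)]).T
    coeffs, *_ = np.linalg.lstsq(A, n_orders, rcond=None)
    return coeffs  # (slope, intercept)

def estimate_filled_orders(latency_ns: float, coeffs: np.ndarray) -> int:
    """Estimate the number of filled resting orders from one fill's latency."""
    slope, intercept = coeffs
    return max(1, round(slope * latency_ns + intercept))

def estimate_depth(book_side: list, est_orders: int) -> int:
    """Walk a level-3 book snapshot (a list of levels, each a list of resting
    order quantities) and count how many price levels est_orders orders span."""
    remaining, depth = est_orders, 0
    for level in book_side:
        if remaining <= 0:
            break
        depth += 1
        remaining -= len(level)
    return depth

# Usage with made-up numbers:
coeffs = fit_latency_model(np.array([4_000.0, 8_000.0, 20_000.0, 45_000.0]),
                           np.array([2.0, 5.0, 14.0, 33.0]))
n_est = estimate_filled_orders(latency_ns=30_000.0, coeffs=coeffs)
depth = estimate_depth([[10, 5, 8], [20, 4], [7, 7, 7], [12], [9, 3]], n_est)
print(n_est, depth)  # ~22 orders, spanning all 5 levels of the toy snapshot
```

In practice the relationship is presumably noisier and venue-specific, but even a crude fit like this would be enough to separate "a few fills" from "tens of fills spanning multiple levels".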
Thus, the third filtering criterion used for Fig. 1 was the TransactTime-to-SendingTime latency of the (public) market data update containing the trade. The condition that a minimum time must have passed since the previous trade was there to rule out situations where a larger-than-normal latency was simply due to the matching engine still processing the previous event.
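For illustration, a sketch of the three filter conditions; the TradeEvent structure and the threshold values (in particular the latency cut-off) are assumptions, as the deployed cut-offs are not published:

```python
from dataclasses import dataclass

# Sketch of the three filtering conditions described above, applied to
# consecutive ES trade summaries. The TradeEvent structure and the exact
# latency threshold are assumptions for illustration.

@dataclass
class TradeEvent:
    transact_time_ns: int       # gateway-in timestamp of the aggressive order
    sending_time_ns: int        # when the public update was sent
    levels_traded_through: int  # price levels the trade went through

MIN_GAP_NS = 100_000_000        # (a) at least 100 ms since the preceding ES trade
MIN_DEPTH = 5                   # (b) ES trade at least 5 levels deep
MIN_LATENCY_NS = 3_000_000      # (c) assumed TransactTime-to-SendingTime cut-off

def is_trigger_candidate(prev: TradeEvent, cur: TradeEvent) -> bool:
    """True if `cur` passes all three Fig. 1 filters relative to `prev`."""
    gap_ok = (cur.transact_time_ns - prev.transact_time_ns) >= MIN_GAP_NS
    depth_ok = cur.levels_traded_through >= MIN_DEPTH
    latency_ok = (cur.sending_time_ns - cur.transact_time_ns) >= MIN_LATENCY_NS
    return gap_ok and depth_ok and latency_ok
```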
A few things stand out:
The narrative for this sequence of events would be: (1) I found a great strategy! (2) It works, let's scale up! (3) Why are we suddenly losing money? Let's scale down and observe. (4) It keeps losing money, let's stop. (5) Backtests show it should work again for some reason. (6) Great then, let's go in again.
Fig. 2 below shows the reason for the intermittent disappearance. The left panel is from a month when the strategy worked. The x-axis is the TransactTime-to-SendingTime latency, the y-axis is the total number of orders involved in the ES trigger trade (incl. the aggressive order, hence the minimum is 2). Each marker represents a case where this trade was followed by a trade in the MES (the Micro E-mini S&P 500 future) before it could have been triggered by the public data.
The color represents the size of the reaction trade, with blue and orange being "sizeable" reactions. Dots mark cases where the actual depth of the trigger trade was less than 5 levels; squares mark ES trades at least 5 levels deep. Fig. 1 above only shows reactions to the right of the shaded region (but the deployed strategy likely had a different filter, and the timestamps in the private fill messages were possibly less noisy).
The left panel shows that all large reactions (yellow markers) followed deep ES trades (square markers) - the strategy "worked". Fast-forward 12 months and the situation has changed, as seen in the right panel: there is now a cluster of large reactions which were not based on deep ES trades and which were hence likely not profitable.
It is unclear what changed. Was it a technical change on the CME side, or a change in participant behavior? A bit of a mystery - possibly even for the parties involved.