Engineering High Performance Financial Systems: Common Misconceptions

"The U.S. stock market was now a class system, rooted in speed, of haves and have-nots. The haves paid for nanoseconds; the have-nots had no idea that a nanosecond had value.”, Michael Lewis, Flash Boys,

Speed leads to competitive advantage. If we can get information before our competitors do, and act on it faster than they can, we have the opportunity to capitalise on it. The famous story of Rothschild making his fortune by learning the outcome of the Napoleonic Wars sooner than anyone else in Britain is not unique in history.

Traders and salespeople have always adopted tools that give them such a competitive advantage. High Frequency Trading (HFT) is the latest weapon in their arsenal. But speed is not the only competitive advantage HFT provides: the volumes HFT generates lead to higher profits even at narrow margins.

Because of this, the need for higher performance propagates to all aspects of financial technology. Higher trading volumes require higher processing throughput in middle and back office operations. Yet performance in financial technology, including sales and trading, is largely an afterthought in application design and implementation. In many other cases, flawed premature optimisations are applied, acting as a medicine that worsens the illness rather than curing it.

The following common misconceptions contribute to a relaxed attitude towards performance in financial technology:

Misconception #1: This is not for me, I am not doing HFT

Engineering for performance is not only about speed, though speed leads to higher throughput for the same infrastructure footprint, which in turn creates growth opportunities without significant additional investment. In the absence of suitable CFRs/NFRs (cross-functional/non-functional requirements), benchmarking a system during development against existing technology and anticipated loads goes a long way towards ensuring it has the expected capacity.
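A minimal sketch of that benchmarking idea: replay an anticipated load against the processing logic and measure sustained throughput. The `process_order` function and the order shape are hypothetical stand-ins, not from the original text.

```python
import time

def process_order(order):
    # Hypothetical stand-in for the real per-message processing logic.
    return order["qty"] * order["price"]

def benchmark(n_messages):
    """Replay an anticipated load and report sustained throughput (msg/s)."""
    orders = [{"qty": 100, "price": 10.5}] * n_messages
    start = time.perf_counter()
    for order in orders:
        process_order(order)
    elapsed = time.perf_counter() - start
    return n_messages / elapsed

throughput = benchmark(100_000)
print(f"throughput: {throughput:,.0f} msg/s")
```

Comparing this number against projected peak load (plus headroom) gives an early capacity check long before any formal NFR exists.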

Misconception #2: We have multi-gigabit networks and clusters of 24-core processors; everything will run faster on them

It is surprising how quickly excessively verbose messages clog gigabit networks. Throwing silicon at a performance problem and expecting it to resolve itself generally leads to disappointment. Neglecting the physical realities of computation and communication while designing and implementing applications results in systems that require substantial infrastructure investment for barely acceptable performance. Any future increase in load then forces an application renovation that is both costly and risky.
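To make the "verbose messages" point concrete, here is a sketch comparing a self-describing JSON market-data quote with a fixed-width binary encoding of the same fields. The field names and the binary layout are illustrative assumptions, not a real protocol.

```python
import json
import struct

# A self-describing JSON quote repeats field names in every message.
quote = {
    "symbol": "EURUSD",
    "bid_price": 1.08452,
    "ask_price": 1.08461,
    "bid_size": 1000000,
    "ask_size": 1500000,
    "timestamp_ns": 1700000000123456789,
}
verbose = json.dumps(quote).encode()

# Hypothetical fixed-width layout: 8-byte symbol, 2 doubles, 3 uint64s = 48 bytes.
compact = struct.pack(
    "<8sddQQQ",
    quote["symbol"].encode(),
    quote["bid_price"],
    quote["ask_price"],
    quote["bid_size"],
    quote["ask_size"],
    quote["timestamp_ns"],
)

print(len(verbose), len(compact))  # the JSON payload is several times larger
```

At millions of messages per second, that size ratio is the difference between a comfortable link and a saturated one.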

Misconception #3: I’ll cache my way out of performance issues

Caching is a double-edged sword. While it can reduce latency substantially, it also adds complexity to the services that employ it, and that complexity comes with its own overheads. Keeping caches coherent in distributed systems is complicated, especially if those services need to be scaled to sustain higher loads. Not all caching strategies therefore work for all performance challenges. Caching also tempts developers to paper over performance issues by caching data downstream instead of resolving them where they occur.

Misconception #4: We will scale when the load increases

It depends. Systems can only scale if they have been designed for scalability. Depending on the architecture, scaling may also solve only specific performance issues and not others. For example, if only the entire processing pipeline can be scaled, scaling will increase throughput but will not lower processing latency. Alternatively, if tasks within an application can be processed asynchronously, and those tasks can be scaled, both latency and throughput can be improved.
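The latency point can be sketched directly: when independent sub-tasks of a single request can run concurrently, the request's latency drops, not just the aggregate throughput. The `enrich` function below is a hypothetical I/O-bound lookup used only for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def enrich(trade_id):
    # Stand-in for an independent I/O-bound lookup (hypothetical), ~50 ms each.
    time.sleep(0.05)
    return trade_id * 2

trades = list(range(8))

# Sequential: one request's latency is roughly 8 x 50 ms.
start = time.perf_counter()
seq = [enrich(t) for t in trades]
seq_time = time.perf_counter() - start

# Concurrent: independent tasks overlap, so the same request finishes sooner.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    par = list(pool.map(enrich, trades))
par_time = time.perf_counter() - start

print(f"sequential {seq_time:.2f}s, concurrent {par_time:.2f}s")
```

By contrast, running more copies of the whole sequential pipeline behind a load balancer serves more requests per second while each request still waits the full 8 x 50 ms.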

Misconception #5: Technology (containers/ lambda/ cloud/ big data etc.) is the saviour

If and only if applications on top of that technology have been designed for performance.

In a nutshell

Applications need to be engineered for performance while they are being engineered for functionality. There is a fine line dividing engineering for performance from premature optimisation. The former relies on established best practices and on targeted optimisations guided by analysis and measurement. The latter is based largely on speculation and tends to address the effects of performance issues rather than their causes.

Ultimately, simple, lightweight and stateless services using efficient communication protocols are inherently faster and more scalable. They also require less infrastructure investment and carry lower development and maintenance costs.
