19 ways to reduce latency in telecom networks
Summary
In this new edition, we list and suggest various ways of reducing latency, this key quality of service indicator. It goes without saying that some parameters are more difficult to address than others, depending on the operating mode. In the end, is it the value of latency or its variation that matters most?
In the next edition, we will look at the various indicators that characterize latency (jitter, stability, volatility, etc.). In the meantime, enjoy reading!
Introduction
Latency in networks is influenced by various factors, including:
1. Distance: the physical distance between the source and the destination.
2. Media type: the transmission medium used (copper, fibre, 4G/5G radio).
3. Network infrastructure: the efficiency and capacity of network components.
4. Network congestion: high levels of network traffic causing delays.
5. Signal propagation: the speed at which signals travel through the medium.
6. Processing time: the time taken by network devices to process and forward data packets.
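To make these factors concrete, here is a minimal, illustrative sketch (in Python, not taken from the article) that decomposes one-way latency into propagation, transmission, processing and queuing delay; all the numeric values are assumptions chosen purely for the example.

```python
# Illustrative decomposition of one-way latency; every figure here is an assumption.

def one_way_latency_ms(distance_km: float,
                       packet_bits: int,
                       link_rate_bps: float,
                       hops: int,
                       per_hop_processing_ms: float = 0.05,
                       per_hop_queuing_ms: float = 0.2,
                       propagation_km_per_ms: float = 200.0) -> float:
    """Sum the classic delay components: propagation + transmission + processing + queuing."""
    propagation_ms = distance_km / propagation_km_per_ms      # light in fibre covers roughly 200 km per ms
    transmission_ms = packet_bits / link_rate_bps * 1000      # serialization delay on the link
    processing_ms = hops * per_hop_processing_ms              # forwarding decisions in each device
    queuing_ms = hops * per_hop_queuing_ms                    # time spent waiting in buffers
    return propagation_ms + transmission_ms + processing_ms + queuing_ms

# Example: a 600 km path, one 1500-byte packet, 1 Gbit/s links, 8 hops.
print(f"estimated one-way latency: {one_way_latency_ms(600, 1500 * 8, 1e9, 8):.2f} ms")
```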
Let’s review some parameters that shape latency values.
A network can therefore be broken down into three physical segments: access, transport and server, plus the associated IT protocols and processing.
Depending on the use case and the underlying medium, these contributors to latency carry different weights.
19 options for reducing latency
Here are some of the parameters that deserve attention in order to help reduce latency:
1. Transmission medium: move from copper to fibre, and from 4G to 5G,
2. Network optimization: reduce the number of hops, use high-performance equipment, and optimize network configurations with tuned routing tables and peering agreements,
3. Data compression: reduce packet size to accelerate transmission (see the compression sketch after this list),
4. Use of reduced buffers: minimize packet processing time in network equipment,
5. QoS-friendly congestion avoidance: implement congestion control mechanisms to avoid saturation of network links,
6. QoS (Quality of Service): QoS mechanisms prioritize certain types of traffic and guarantee low latency for critical applications,
7. Manage transport-layer and traffic-marking protocols: DSCP, SCTP (see the DSCP marking sketch after this list),
8. Implement MEC (Mobile Edge Computing): host applications close to users, reducing the transport and central server workload,
9. 5G: implement the Stand-Alone (SA) architecture and its slicing design. During the last Orange Open Tech Days (November 2023), a 50% gain from 5G NSA (20 ms) to 5G SA (10 ms) architecture was demonstrated,
10. Radio coverage: antennas and beamforming to increase signal level (RSSI and SINR),
11. Packet sizing: keep the MTU at 1500 bytes (IPv4) on routers, leading to fewer collisions,
12. Implement the QUIC protocol (Quick UDP Internet Connections),
13. Implement WebRTC (Web Real-Time Communication),
14. For video, implement the SRT / SST protocols,
15. Implement BBR (see the sketch after this list). BBR (Bottleneck Bandwidth and Round-trip propagation time) is a TCP congestion control algorithm designed to maximize throughput while minimizing latency. Studies and field feedback show that BBR can deliver significant improvements in latency, particularly in unstable or congested network conditions. The gains can range from a few milliseconds to several tens of milliseconds,
16. Implement L4S. L4S (Low Latency, Low Loss & Scalable Throughput) promises a significant reduction in latency in telecom networks. However, it is difficult to give an exact figure in milliseconds for this reduction. Although figures vary, the first full-scale tests have shown latency reductions of the order of a factor of 10,
17. Implement MPLS, which enables dedicated virtual paths (LSPs) to be created between network nodes and can reduce latency variations by avoiding routing calculations at each hop. It is important to understand its mechanisms and adapt it to your specific environment in order to reduce latency,
18. For ISPs, peering: network peering facilities can be a valuable tool for reducing latency. However, the extent of the reduction will depend on various factors, including geographic location, network infrastructure, traffic patterns and the terms of the peering agreements,
19. TCP acceleration enables a network operator to isolate its own part of a TCP connection and optimize it for its own network conditions. TCP acceleration can improve effective subscriber throughput by up to 30% and reduce loading times by up to 25%, even in adverse network conditions.
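For item 3 (data compression), here is a minimal sketch using Python's standard zlib module; the payload and compression level are illustrative assumptions, and real gains depend entirely on how compressible the traffic is (encrypted or already-compressed media will not shrink).

```python
import zlib

# Illustrative payload: repetitive telemetry text compresses well.
payload = b"telemetry,site=paris,rtt_ms=12.4\n" * 200

compressed = zlib.compress(payload, level=6)   # level 6: default speed/ratio trade-off
print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(payload):.1f}% of the original)")

# Smaller packets mean shorter serialization time on every link,
# which is the latency component that compression actually reduces.
assert zlib.decompress(compressed) == payload
```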
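For items 6 and 7, a hedged sketch of DSCP marking from an application socket on Linux: the Expedited Forwarding code point (EF, DSCP 46) is commonly used for low-latency traffic, but routers will only honour it if the operator's QoS policy says so. The destination address and port are placeholders.

```python
import socket

# DSCP EF (Expedited Forwarding) = 46; the IPv4 TOS byte carries DSCP in its top 6 bits.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2   # 0xB8

# Placeholder destination; replace with a real low-latency service endpoint.
DEST = ("192.0.2.10", 5004)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the kernel to mark outgoing packets with the EF code point.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
sock.sendto(b"latency-sensitive probe", DEST)
sock.close()
```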
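For item 15, a hedged sketch of enabling BBR on a per-connection basis through the TCP_CONGESTION socket option (exposed by Python 3.6+ on Linux); it only takes effect if the tcp_bbr module is available in the kernel, and the host and port are placeholders.

```python
import socket

# Placeholder endpoint, for illustration only.
HOST, PORT = "example.com", 443

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Ask the kernel to use BBR for this connection instead of the default (often cubic).
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    sock.connect((HOST, PORT))
    algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print("congestion control in use:", algo.split(b"\x00", 1)[0].decode())
except (AttributeError, OSError) as exc:
    print(f"BBR could not be enabled here: {exc}")
finally:
    sock.close()
```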
Correlation
We can establish a correlation between the two key measurements, latency and bit rate.
The graph below shows several values that sketch the shape of the curve of latency as a function of bit rate.
We can estimate that the curve flattens out around the 25 ms mark, whatever the bit rate. The difficulty therefore lies at the extremes, when latency drops below 25 ms.
Recommendations by Latencetech (beta)
To support network supervisors, Latencetech Inc. has studied how GenAI can help optimize the network. Recommendations are produced by sending a prompt file containing historical latency readings for the various protocols to an LLM. Work is continuing, with a view to integrating other parameters of the monitored network design, such as the location of active equipment and the environment, in a future version in order to provide more precise recommendations.
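Purely as an illustration (this is not Latencetech's actual implementation), here is a minimal sketch of how historical latency readings could be packaged into a prompt file for an LLM; the CSV layout, file names and wording are all assumptions.

```python
import csv

# Hypothetical CSV of historical readings with columns: timestamp, protocol, latency_ms.
READINGS_FILE = "latency_history.csv"
PROMPT_FILE = "prompt.txt"

lines = []
with open(READINGS_FILE, newline="") as f:
    for row in csv.DictReader(f):
        lines.append(f"{row['timestamp']} {row['protocol']} {row['latency_ms']} ms")

prompt = (
    "You are a network optimization assistant.\n"
    "Below are historical latency readings per protocol for one monitored link.\n"
    "Suggest concrete actions to reduce latency and jitter, ordered by expected impact.\n\n"
    + "\n".join(lines)
)

# The resulting prompt file is what would then be sent to the chosen LLM.
with open(PROMPT_FILE, "w") as f:
    f.write(prompt)
```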
“MSLMA”: Multi-Segmented Latency Monitoring Architecture
With its non-intrusive MSLMA (Multi-Segmented Latency Monitoring Architecture), based on lightweight Docker container technology, Latencetech Inc. improves the overall monitoring view by increasing granularity, enabling telcos and end users to better identify and localize QoE bottlenecks.
The diagram below shows the main segments of a telecom network, from user access, transport and aggregation to the server hosting the applications. Each segment, defined by a starting point and an end point, can be given special attention by the addition of a sender agent and a mirror target, enabling the status of this section to be monitored.
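To illustrate the idea behind per-segment monitoring (this is a sketch, not Latencetech's product code), here is a minimal sender-agent / mirror-target pair: the mirror echoes probes straight back, so the sender measures the round-trip time of that segment only. The address and port are placeholders.

```python
import socket
import struct
import time

PROBE_PORT = 9999                           # placeholder port for the mirror target
MIRROR_ADDR = ("192.0.2.20", PROBE_PORT)    # placeholder end point of the segment

def mirror_target() -> None:
    """Run at the end point of a segment: echo every probe straight back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("0.0.0.0", PROBE_PORT))
    while True:
        data, addr = srv.recvfrom(64)
        srv.sendto(data, addr)

def sender_agent(samples: int = 10) -> None:
    """Run at the start point of a segment: measure the RTT of this segment only."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.settimeout(1.0)
    for seq in range(samples):
        sent = time.monotonic()
        cli.sendto(struct.pack("!Id", seq, sent), MIRROR_ADDR)
        try:
            data, _ = cli.recvfrom(64)
            echoed_seq, _ = struct.unpack("!Id", data)
            rtt_ms = (time.monotonic() - sent) * 1000
            print(f"probe {echoed_seq}: {rtt_ms:.2f} ms")
        except socket.timeout:
            print(f"probe {seq}: lost")

if __name__ == "__main__":
    sender_agent()   # run mirror_target() on the far end of the segment
```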
Conclusion
As we have just seen, the latency quality of service indicator depends on a large number of parameters. Controlling, measuring and predicting it requires a good understanding of the network topology, as well as the ability to modify certain parameters (it is not impossible that Pareto's law applies here). The ultimate objective remains a better user experience and its economic impact, as mentioned in the previous article.
By Marc Soulacroup