Simulating Network Conditions with Traffic Control (TC)

Latency, jitter, packet loss, bandwidth restrictions, and more with Linux Traffic Control

Hello friends,

here we are again, and here is a new musical recommendation... "Believer" by Imagine Dragons (2017)... - "I want to stop"... - "We can't"

Traffic Control

In this link → Traffic-Control-HOWTO ← you will find all the necessary information about Traffic Control.

As a miserable summary we can say that Traffic control (TC) in Linux systems refers to the set of mechanisms and tools used to manage and control network traffic on a Linux-based computer. TC is a crucial component for optimizing network performance, ensuring fair bandwidth allocation, and implementing quality of service (QoS) policies.

TC is a subsystem within the Linux kernel that provides traffic shaping, bandwidth management, and QoS functionality. It allows administrators to control the flow of network packets, prioritize traffic, and set policies for different types of network communication.
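If you have never touched TC before, the quickest way to see it in action is simply to list what is already there; on most modern distributions every interface comes with a default qdisc (fq_codel, pfifo_fast...) even before you configure anything:

tc qdisc show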

This is a very good way to improve your skills, for example, working with shaping, policing and QoS in simulated or real scenarios.

In this post we will focus on how traffic control can help us with our labs, simulating network conditions like latency, jitter, packet loss or bandwidth restrictions.

The tests that will be shown here are based on the network implemented in this lab → 58 remote sites and 15K routes. A new PoC: securing communications over the Internet with strongSwan, FRRouting, Keyfactor EJBCA Community and docker

The idea is to apply different network conditions to see how it affects the performance between the end devices.

IPERF3: The Test Tool

iperf3 is an open-source command-line tool used for measuring network bandwidth performance. It allows users to test the maximum TCP and UDP throughput between two devices on a network. iperf3 is particularly useful for assessing network performance, diagnosing issues, and optimizing network configurations. It provides a straightforward way to measure data transfer rates, packet loss, and other metrics, making it an essential tool for network administrators and engineers.

Reference: Throughput between EndHost04 (SITE 004) and EndHost59 (SITE 059)

We will perform some tests between EndHost04, located in SITE004, and EndHost59, located in SITE059, without any constraints. As a reminder: the sites are interconnected through a central site, SITE001, using an overlay network (VPNs). Thus we have the underlay between SITE004 and SITE001 and between SITE059 and SITE001, and the overlay composed of two VPNs, SITE004-SITE001 and SITE059-SITE001:

EndHost04-FW004-FW001-FW059-EndHost59

We will do three tests:

- ping connectivity to get an idea of the RTT (~ latency)

- TCP test from server to client

- UDP test with a target bandwidth of X Mbps

PING

ICMP echo

TCP (1 session)

iperf3 -c 172.30.59.101 -R -M 1379 -w 256K

Here's a breakdown of the command:

1. iperf3: This is the name of the command-line utility for measuring network bandwidth and performance.

2. -c 172.30.59.101: This part of the command specifies the client mode, and it tells iperf3 to connect to a server with the IP address 172.30.59.101. In other words, it's instructing the iperf3 client to initiate a network test with a server located at that IP address.

3. -R: This option stands for "reverse," and it tells iperf3 to perform a reverse test. In a normal iperf3 test, data is sent from the client to the server. However, with the -R option, data will be sent from the server to the client instead. This can be useful for testing the reverse direction of a network connection.

4. -M 1379: This option sets the TCP maximum segment size (MSS) to 1379 bytes. Note that the MSS is the TCP payload size, not the interface MTU (which also includes the IP and TCP headers). Limiting the segment size helps simulate, or adapt to, paths with a reduced effective MTU, such as the VPN overlay used in this lab, where the tunnel encapsulation adds overhead.

5. -w 256K: This option sets the TCP window size to 256 kilobytes (KB). The TCP window size controls how much data can be in transit before an acknowledgment is received. A larger window size can potentially improve network throughput, especially on high-speed and high-latency networks. However, the optimal window size can vary depending on the specific network conditions.
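All the client commands shown in this post assume that an iperf3 server is already listening on the other end; a minimal sketch, run on EndHost59 (172.30.59.101) and using the default port 5201:

iperf3 -s

With -s the tool simply waits for incoming tests, so the same server instance serves both the TCP and the UDP clients shown here.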

UDP

iperf3 -c 172.30.59.101 -u -R -b 100M

Here's an explanation of this command:

1. iperf3: This is the name of the iperf3 command-line utility for measuring network bandwidth and performance.

2. -c 172.30.59.101: This part of the command specifies the client mode and tells iperf3 to connect to a server with the IP address 172.30.59.101, similar to the previous command.

3. -u: This option tells iperf3 to use UDP (User Datagram Protocol) instead of TCP for the network test. Unlike TCP, UDP is connectionless and does not guarantee data delivery or reliability. UDP is often used for testing network capacity and latency under real-time conditions.

4. -R: Similar to the previous command, this option stands for "reverse," and it tells iperf3 to perform a reverse test, where data is sent from the server to the client over the UDP connection.

5. -b 100M: This option sets the bandwidth for the UDP test to 100 megabits per second (Mbps). It specifies the target bandwidth for the test. In a UDP test, iperf3 will attempt to send data at the specified rate to measure how well the network can handle that level of traffic. This can be useful for testing network capacity and identifying potential bottlenecks.
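If you want longer or more granular UDP runs, iperf3 also accepts a test duration and a reporting interval; a small variation of the command above (these exact values were not used in the lab, they are just an example):

iperf3 -c 172.30.59.101 -u -R -b 100M -t 30 -i 5

Here -t 30 runs the test for 30 seconds and -i 5 prints a report every 5 seconds, including jitter and lost/total datagrams, which becomes very interesting once netem starts dropping packets.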

Network constraints: latency, jitter and QoS...like a PRO

It’s time to apply some network constraints in our lab... So that the TC policies affect all sites, they will be applied at 'SITE001', on the underlay interface (eth1) of FW001:

Traffic Control example

Of course, we can apply these constraints to any interface of any router shown in the lab, including VTI interfaces (overlay network).

This sounds good... here is how I apply some delay and jitter to a specific virtual interface on FW001… Te cagas por las bragas!!! → You shit on your panties!!!, a funny Spanish expression that denotes surprise... at least in my mind! Ha, ha, ha, ha….

We will affect only Site004, adding delay and jitter just for that site; to do that, check this magic command:

sudo tc qdisc add dev vtifw4 root netem delay 100ms 60ms        

With this command we configure a delay of 100 milliseconds on the interface vtifw4. The 60ms value refers to variability or jitter, which means the actual delay can vary within a range of +/- 60 milliseconds around the 100-millisecond delay. In summary, this configuration simulates a network with an average delay of 100 ms and a jitter of 60 ms in the delay.
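A few companion commands are handy while playing with netem; they are shown here for vtifw4, and the 150ms/30ms values are just an example, not part of the test:

tc qdisc show dev vtifw4
sudo tc qdisc change dev vtifw4 root netem delay 150ms 30ms
sudo tc qdisc del dev vtifw4 root

The first one lists the qdisc and its current netem parameters, the second adjusts the emulated delay/jitter in place without removing the qdisc, and the last one deletes it and returns the interface to its default behaviour.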

In the following image, you can see the effect of the command on the communication between EndHost4 and EndHost59, and how it does not impact the communication between EndHost20 and EndHost59:

sudo tc qdisc add dev vtifw4 root netem delay 100ms 60ms

As you can imagine, you can apply other policies, like shaping, policing or QoS... at the physical or virtual (overlay) interface level... of course, if it is necessary, or if you just want to test your thingies...

The next image shows how to apply a bandwidth limit (~15 Mbps) to specific traffic from Site059 to Site004 at the central site, on FW001 vtifw4 (the overlay interface towards Site004)… remember that in the normal situation we have ~170 Mbps of throughput:

shape and control network traffic

As you can see, we limited the bandwidth of a specific service in Site059 (source IPv4 172.30.59.101, source TCP port 5201) towards a specific EndHost (172.30.4.101) in Site004.

In this post, we won't delve deep into tc and its features; my wife has threatened divorce again... but here's a brief explanation...These commands are used to configure Quality of Service (QoS) settings using the tc (traffic control) tool in Linux. They shape and control network traffic on the vtifw4 network interface:

tc qdisc add dev vtifw4 root handle 1: htb default 30        

  • This command adds a hierarchical token bucket (htb) queue discipline to the root of the vtifw4 interface.
  • It assigns a handle of 1: to this qdisc and sets the default class to 30. The handle is used as an identifier for this qdisc.

tc class add dev vtifw4 parent 1: classid 1:1 htb rate 100mbit burst 15k ceil 80mbit

  • This command adds a class under the root qdisc (1:) on the vtifw4 interface.
  • The class is identified as 1:1 and uses the htb qdisc with specific parameters.
  • It sets a rate of 100 megabits per second (mbit), the bandwidth guaranteed to the class, with a burst size of 15 kilobytes (k). The ceil parameter sets the upper limit the class is allowed to reach, 80mbit here.

tc class add dev vtifw4 parent 1:1 classid 1:10 htb rate 10mbit burst 15k ceil 15mbit        

  • This command adds a subclass (1:10) under the parent class 1:1.
  • It also uses the htb qdisc and specifies a guaranteed rate of 10mbit, a burst size of 15k, and an upper limit (ceil) of 15mbit for this class, which is the ~15 Mbps cap mentioned above.

tc qdisc add dev vtifw4 parent 1:10 handle 10: sfq perturb 10         

  • This command adds a Stochastic Fair Queueing (SFQ) qdisc under class 1:10 with a handle of 10:.
  • SFQ is used to fairly share bandwidth among packets in the class.
  • The perturb 10 parameter reseeds the SFQ hash every 10 seconds, which helps keep the bandwidth sharing fair between flows over time.

tc filter add dev vtifw4 protocol ip parent 1:0 prio 1 u32 match ip dst 172.30.4.101/32 match ip src 172.30.59.101/32 match ip sport 5201 flowid 1:10        

  • This command adds a filter to the vtifw4 interface that matches specific criteria for outgoing IP packets. It looks for packets with a destination IP address of 172.30.4.101, a source IP address of 172.30.59.101, and a source port of 5201. When packets meeting these criteria are encountered, they are directed to class 1:10.
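To confirm that the shaping is actually hitting the right traffic, the tc statistics counters are very useful; these are companion commands, not part of the configuration itself:

tc -s class show dev vtifw4
tc filter show dev vtifw4
tc qdisc del dev vtifw4 root

The first command shows per-class counters (bytes, packets, drops), so you can verify that the iperf3 traffic is really landing in class 1:10; the second lists the attached filters; and the last one removes the whole htb hierarchy, classes and filters included, when you are done.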

Latency

If you are only interested in testing an increase in latency, here is your command:

tc qdisc add dev eth1 root netem delay 50ms

In this case we apply a latency of 50 ms to the FW001 underlay interface, the interface that is connected to the Internet. The effect is obvious for all remote sites:

tc qdisc add dev eth1 root netem delay 50ms -> Delay

Now it is time to compare the throughput with this latency between Site004 and Site059; remember the original value of ~170 Mbps:

tc qdisc add dev eth1 root netem delay 50ms -> TCP 1 session

As expected, with a higher latency we get a lower bit rate in a single TCP session. We will check again, creating multiple TCP sessions in parallel… for example, 50 sessions (see the iperf3 sketch below):
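iperf3 can open the parallel streams itself with the -P option; a minimal sketch of what such a 50-session run could look like (the exact command used in the lab is not shown here, this just reuses the MSS and window settings from the single-session test):

iperf3 -c 172.30.59.101 -R -P 50 -M 1379 -w 256K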

tc qdisc add dev eth1 root netem delay 50ms -> TCP 50 sessions

As expected, the aggregate bit rate goes up, since it is the sum of all the created sessions. Obviously, reducing latency is better, but the speed of light is not infinite, and my wife's patience isn't either…

Now we will test again but with a lower latency... not so high, only 10 ms on eth1, which adds about 20 ms to the round trip (each direction of the flow leaves FW001 through eth1 once)... remember again, in the normal state ~170 Mbps…

tc qdisc add dev eth1 root netem delay 10ms -> TCP 1 session

From ~170 Mbps to ~70 Mbps with an increase of just 20 ms... in a single TCP session... that is how much the delay matters...

Loss

In this test we will check how the performance is impacted by adding a “little bit” of loss...

tc qdisc add dev eth1 root netem loss 1%
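netem can also correlate consecutive drops, so the loss arrives in small bursts instead of being purely random, which is often closer to how a misbehaving ISP looks; a small sketch, not used in these tests:

tc qdisc add dev eth1 root netem loss 1% 25%

The second value (25%) makes each drop partially dependent on the previous one, producing bursty loss.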

Latency + Jitter + Loss

Now “fully equipped”... the next probe will include latency, jitter and some loss... because ISPs do not always work as expected…

tc qdisc add dev eth1 root netem delay 80ms 20ms loss 1%        

delay 80ms 20ms: This part of the command is configuring network delay. It adds a delay of 80 milliseconds (ms) to the network traffic. The 20ms value represents the variation or jitter in the delay, indicating that the actual delay may fluctuate by up to 20ms around the 80ms baseline. So, it simulates a network with an average delay of 80ms and some variability.

loss 1%: This part of the command introduces packet loss. It specifies that 1% of the packets passing through this network interface will be dropped or lost. This simulates network conditions where a portion of the data packets is not successfully delivered.

tc qdisc add dev eth1 root netem delay 80ms 20ms loss 1%

Conclusion

The use of TC (Linux Traffic Control) can help you understand how things work in specific scenarios. Explaining the effects of delay, jitter, loss, packet corruption, or packet duplication (the last two not included here for... well, you know, my wife is here...) to colleagues, coworkers, customers, service owners, bosses, financial experts, or even to a monkey can be quite challenging. Sometimes, showing a simple throughput test from your miserable laptop speaks for itself... to the purists, please forgive me, life is so short...

Documentation

https://tldp.org/HOWTO/Traffic-Control-HOWTO/index.html

https://www.dhirubhai.net/pulse/58-remote-sites-15k-routes-new-poc-securing-over-asier-gonzalez-diaz

https://iperf.fr/iperf-doc.php
