Leveraging GNS3 for Efficient Testing and Deployment

Problem Statement

Replicating a complete end-customer network in the R&D lab is often challenging due to limited hardware availability. Additionally, setting up services across various devices typically involves configuring multiple control-plane protocols (L2, L3, IP/MPLS), further complicating the process. Without the ability to recreate such setups in the lab, there is a significant risk of defects reaching field trials and deployments, threatening the success of those trials and potentially jeopardizing product prospects.

Solution

To reduce the risk of defects leaking into the field, the live/field network can be simulated or virtualized using the following options:

Option 1

Use multiple instances of software on a Linux environment with multiple NICs for interconnection. While this option is simple, it lacks flexibility in managing multiple topologies and often requires a high number of NICs for larger networks.

Option 2

Utilize GNS3, a free, open-source tool that allows users to create virtual network topologies visually. GNS3 offers several advantages:

  • Platform Independent: Supports Linux, Windows, and macOS.
  • Graphical Interface: Features a user-friendly drag-and-drop interface for adding or removing devices.
  • Save & Restore: Allows users to save multiple topologies and restore them when needed.
  • Multi-Vendor Interoperability: Supports testing with devices from various vendors, such as Cisco and Juniper.
  • Automation Testing: Existing automation suites for control-plane L2 and L3 protocols can be adapted and run in parallel, optimizing regression testing cycles.
  • Quick & Efficient: Facilitates faster defect resolution through debugging, without needing physical hardware (e.g., using GDB with GNS3).
  • Network Replication: Organizations can mirror the real network setup in GNS3 before upgrades, reducing the risk of network outages and ensuring smoother field rollouts—a key step towards customer success.
  • Service Demonstration: Useful for demonstrating new services that often require multiple nodes.

Let's Dive Deeper into GNS3

We will explore how to use GNS3 to build a simple two-node topology simulating the LACP (Link Aggregation Control Protocol).

Step 1: Install GNS3 by following the instructions in the official GNS3 documentation.

Step 2: Refer to the section titled "Your First GNS3 Topology" in the GNS3 documentation and create a blank project.

Step 3: In GNS3, name the new project "LACP".

New Project: LACP

Step 4: Set up a VirtualBox template using any Ubuntu image.

  • Select the "VirtualBox VM templates" option.

New VirtualBox VM template

  • Select the required VirtualBox VM from the drop-down list.

Selecting VirtualBox VM

  • Configure the template with 4 network ports.

Configure 4 network ports

  • Add the template to the list of devices, naming it "LinuxMachine".

Apply the template "LinuxMachine"

  • Similarly, you can add additional templates. Let's create another one and name it "LinuxMachineClone".

Apply the template "LinuxMachineClone"

Step 5: Build a two-node topology by dragging and dropping one instance of each template created earlier. Establish 4 back-to-back connections between the two nodes. You can also add multiple instances of the same template if needed.

Two-node LACP Topology

Step 6: Start both instances by right-clicking on each node and selecting the "Start" option.

Step 7: Configure LACP (using the Linux bonding driver) on both nodes with the settings shown in the configurations below.

Install the Bonding module

sudo modprobe bonding
sudo lsmod | grep bonding
echo 'bonding' | sudo tee -a /etc/modules

Configure Temporary Bonding

sudo ip link set enp0s3 down
sudo ip link set enp0s8 down
sudo ip link set enp0s9 down
sudo ip link set enp0s10 down
sudo ip link add bond0 type bond mode 802.3ad
sudo ip link set dev bond0 type bond ad_actor_system <unique-mac-add>
sudo ip link set enp0s3 master bond0
sudo ip link set enp0s8 master bond0
sudo ip link set enp0s9 master bond0
sudo ip link set enp0s10 master bond0
sudo ip link set bond0 up
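The command sequence above can be wrapped in a small dry-run helper that prints the full sequence for review before applying it with root privileges. This is a sketch, not part of the original setup: the interface names match this example topology, and the MAC address is a placeholder standing in for the unique per-node value.

```shell
#!/bin/sh
# Print the bonding command sequence for a given actor MAC and slave
# interfaces (dry run). Pipe the output to "sudo sh" to actually apply it.
bond_cmds() {
    mac="$1"; shift
    for ifc in "$@"; do
        echo "ip link set $ifc down"
    done
    echo "ip link add bond0 type bond mode 802.3ad"
    echo "ip link set dev bond0 type bond ad_actor_system $mac"
    for ifc in "$@"; do
        echo "ip link set $ifc master bond0"
    done
    echo "ip link set bond0 up"
}

# 00:11:22:33:44:55 is a placeholder -- use a unique MAC per node.
bond_cmds 00:11:22:33:44:55 enp0s3 enp0s8 enp0s9 enp0s10
```

Reviewing the printed commands before running them avoids locking yourself out of a node by downing the wrong interfaces.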

Step 8: Run bonding commands to verify interface bundling and ensure LACP is functioning as expected.

#cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v6.8.0-40-generic

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP active: on
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable

Slave Interface: enp0s3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:f7:b5:5c
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1

Slave Interface: enp0s8
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:e2:6a:36
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1

Slave Interface: enp0s9
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:8b:b1:98
Slave queue ID: 0
Aggregator ID: 3
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1

Slave Interface: enp0s10
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:3b:fa:71
Slave queue ID: 0
Aggregator ID: 4
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1        
#ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 1a:7c:a7:5f:b2:6e brd ff:ff:ff:ff:ff:ff permaddr 08:00:27:f7:b5:5c
3: enp0s8: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 1a:7c:a7:5f:b2:6e brd ff:ff:ff:ff:ff:ff permaddr 08:00:27:e2:6a:36
4: enp0s9: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 1a:7c:a7:5f:b2:6e brd ff:ff:ff:ff:ff:ff permaddr 08:00:27:8b:b1:98
5: enp0s10: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 1a:7c:a7:5f:b2:6e brd ff:ff:ff:ff:ff:ff permaddr 08:00:27:3b:fa:71
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:7c:a7:5f:b2:6e brd ff:ff:ff:ff:ff:ff        
#ip link xstats type bond
bond0          
                    LACPDU Rx 65
                    LACPDU Tx 172
                    LACPDU Unknown type Rx 0
                    LACPDU Illegal Rx 0
                    Marker Rx 0
                    Marker Tx 0
                    Marker response Rx 0
                    Marker response Tx 0
                    Marker unknown type Rx 0        
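For regression runs, the same status can be checked programmatically instead of read by eye. The snippet below is an illustrative sketch (not part of the original article): it uses awk to list each slave interface together with its MII status from bonding driver status text such as the /proc/net/bonding/bond0 dump above.

```shell
#!/bin/sh
# Print "<slave> <MII status>" pairs from bonding driver status text.
# Usage: slave_status < /proc/net/bonding/bond0
slave_status() {
    awk '/^Slave Interface:/ { ifc = $3 }
         ifc && /^MII Status:/ { print ifc, $3; ifc = "" }'
}
```

The `ifc &&` guard skips the bond's own "MII Status" line at the top of the dump, so only per-slave status is reported.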

Step 9: Analyze LACP PDUs by capturing traffic on one of the links.

Start the packet capture
LACP PDUs capture by Wireshark

Step 10: Assign IP addresses to the bond0 interfaces (20.20.20.10/24 and 20.20.20.11/24) on the two instances and verify connectivity between them.

#sudo ifconfig bond0 20.20.20.10 netmask 255.255.255.0 up
#ping 20.20.20.11
PING 20.20.20.11 (20.20.20.11) 56(84) bytes of data.
64 bytes from 20.20.20.11: icmp_seq=1 ttl=64 time=3.15 ms
64 bytes from 20.20.20.11: icmp_seq=2 ttl=64 time=2.00 ms
64 bytes from 20.20.20.11: icmp_seq=3 ttl=64 time=1.61 ms
64 bytes from 20.20.20.11: icmp_seq=4 ttl=64 time=2.63 ms
64 bytes from 20.20.20.11: icmp_seq=5 ttl=64 time=5.10 ms
64 bytes from 20.20.20.11: icmp_seq=6 ttl=64 time=1.28 ms
64 bytes from 20.20.20.11: icmp_seq=7 ttl=64 time=4.23 ms
64 bytes from 20.20.20.11: icmp_seq=8 ttl=64 time=3.07 ms
64 bytes from 20.20.20.11: icmp_seq=9 ttl=64 time=1.13 ms
64 bytes from 20.20.20.11: icmp_seq=10 ttl=64 time=1.58 ms
64 bytes from 20.20.20.11: icmp_seq=11 ttl=64 time=9.20 ms
^C
--- 20.20.20.11 ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 10012ms
rtt min/avg/max/mdev = 1.130/3.179/9.200/2.247 ms        
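When this connectivity check is folded into an automation suite (one of the advantages listed earlier), the ping result can be asserted on rather than read manually. A minimal sketch, assuming the standard iputils ping statistics format shown above:

```shell
#!/bin/sh
# Extract the packet-loss percentage from iputils ping statistics output.
# Prints e.g. "0" for 0% loss, so a script can assert full connectivity.
loss_pct() {
    sed -n 's/.* \([0-9.]*\)% packet loss.*/\1/p'
}

# Example (requires the topology to be up):
#   ping -c 5 20.20.20.11 | loss_pct
```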

Following these steps, complex Layer-3 and IP/MPLS topologies with multiple nodes can also be created (as shown below), allowing for protocol validation from a control plane perspective. This method provides a flexible and cost-effective alternative to physical hardware, making it easier to test, develop, and demonstrate network services.

8-Node MPLS VPLS Topology

It is worth reiterating that this tool can also be used for Multi-Vendor Interoperability validations. For instance, through the "Edit->Preferences" option, you can create the following templates and follow the steps above to add a device instance from them:

  • A template for an IOS router using any IOS image.
  • A template for QEMU using a QEMU VM image.

New template for IOS router

Conclusion

Simulating networks in the lab is critical for identifying and resolving defects before they impact live deployments. With tools like GNS3, organizations can replicate complex customer networks without the need for extensive hardware, reducing costs and minimizing risks. GNS3’s flexibility, multi-vendor support, and automation capabilities make it an ideal solution for efficient network testing, service demonstrations, and troubleshooting. By simulating real-world environments in a virtual space, companies can ensure smoother field rollouts, enhance customer satisfaction, and drive successful deployments.
