Build your lab: Containers + GNS3
Hi friends,
Here we are with a new post, but before that, can we have some music, please? I recommend listening to the piece "La Música Notturna delle Strade di Madrid - Op. 30 n. 6 (G. 324)" by Luigi Boccherini, composed in 1780, from start to finish. I'm sure you'll love it.
Now, let's get down to business. In this post, I will show you a "simple" way to use FRRouting, containers, and GNS3 to build both simple and complex network topologies. There are numerous options in this field, but what matters most is to research which tool best suits your needs and how to use it effectively.
GNS3 allows me to run and connect various devices, such as virtual machines or containers, easily. Of course, you can also connect GNS3 to the "real" world to provide connectivity to other resources. To make it clearer, here's an example: Working and monitoring some aspects of RPKI in the lab with full Internet routing table, FRRouting and python.
In that lab, the following has been created:
- An SRv6 IPv4/IPv6 L3VPN operator network composed of 6 P nodes and 3 PE nodes.
- 4 routers simulating internet access.
- 1 ExaBGP+mrtparse node that loads the internet routing table (over 1 million IPv4/IPv6 routes) and advertises it to the client.
- 4 firewalls in the client's network.
- 2 RPKI validators.
- A monitoring system (Nagios Core) and automation.
But things can get crazier. Check out this lab: IEC 62439 High availability automation networks → HSR/PRP: I'm crazy and bored ...HSR Ring as a Critical WAN/Core to achieve 0 loss
Here, I present to you 6 virtual machines that form an HSR ring. This ring represents the core of a network with the goal of achieving a zero-loss rate in case of failures. Yes, it may sound a bit crazy, but my passion for the industrial sector leads me to embrace these stupidities whenever I can. In addition to the core nodes, we have deployed various containers representing customer routers and end devices.
The use of GNS3 along with containers and virtual machines can be considered as a first step towards implementing more automated labs. The "advantage" of doing it this way, at least for now, lies in the ability to create connections between devices graphically, similar to connecting an Ethernet cable or a pair of fibers. Once you feel comfortable with these tools, you'll be in a stronger position to initiate semi-automated deployments, where the lab can be defined in different files.
In summary, we will have containers and virtual machines that can run "inside" or "outside" of GNS3. By using these containers and virtual machines, we can "simulate" practically any application or service, allowing us to perform a fairly comprehensive analysis of its behavior.
But let's focus: what are we going to do here? To keep this post from getting too long, we will create a "SUPER-ROUTER" image and use it to deploy containers that will function as infrastructure devices, such as switches, routers, and firewalls, in our lab. The steps are as follows:
1. Create an image of the "SUPER-ROUTER" based on Debian + FRRouting.
2. Import the image into GNS3.
3. Create a VXLAN + BGP EVPN lab.
4. Bonus: Use a tool that facilitates image and container management.
1. Create an image of the "SUPER-ROUTER."
On Docker Hub, you can find FRRouting images that are probably suitable, but I like to build what I'm going to use. Our image may be larger or less optimized, but as the song goes, "I don't care, I love it."
For our installation, we will use the official Debian image as a base (also available on Docker Hub). Through a simple and somewhat "rudimentary" Dockerfile, we will create our own "SUPER-ROUTER."
Remember that before proceeding, you need to have Docker installed on your machine.
A personal piece of advice: whenever I have the opportunity, I choose Debian. It's a stable system with which I have gained experience and feel comfortable, somewhat akin to how I feel about Cisco. However, just like in the world of celebrities where we find Brad Pitt and Charlize Theron, there's also room for Robert Pattinson and Gal Gadot. In other words, there are many quality options available... ha, ha, ha.
Install Docker on Debian.
To install Docker on Debian, follow the steps outlined here: Install Docker Engine on Debian
The summary would be this:
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
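Once the packages are installed, you can verify that the Docker Engine is working with the test image from the official documentation:

```shell
# Print the installed client version
sudo docker --version
# Pull and run the hello-world test image; it prints a
# confirmation message and exits if the daemon is healthy
sudo docker run hello-world
```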
Dockerfile
I'm attaching the Dockerfile that I use to build the image, but before that, I advise you to take a look here → Best practices for writing Dockerfiles ← The idea is for you to generate a Dockerfile better than this one:
FROM debian:latest
# ENV
ENV CONTAINER_IMAGE_VER="v1.0.0"
ENV TZ="Europe/Madrid"
ENV USER_NAME="frruser"
ENV USER_PASSWORD="frruser123"
ENV ROOT_PASSWORD="root123"
# ECHOS FOR TESTING PURPOSES, THE INFO WILL BE PRINTED, TAKE CARE IF YOUR WIFE/HUSBAND IS WATCHING...
RUN echo $CONTAINER_IMAGE_VER
RUN echo $TZ
RUN echo $USER_NAME
RUN echo $USER_PASSWORD
RUN echo $ROOT_PASSWORD
# THE PARTY STARTS: ADD USERS
RUN echo "root:${ROOT_PASSWORD}" | chpasswd
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN adduser --quiet --disabled-password --shell /bin/zsh --home /home/$USER_NAME --gecos "User" $USER_NAME
RUN echo "${USER_NAME}:${USER_PASSWORD}" | chpasswd && usermod -aG sudo $USER_NAME
# UPDATE THE BASE AND INSTALL REQUIRED PACKAGES BEFORE FRR
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
curl \
sudo \
gnupg2 \
lsb-release
# FRR SOURCES
RUN curl -s https://deb.frrouting.org/frr/keys.gpg | sudo tee /usr/share/keyrings/frrouting.gpg > /dev/null
ENV FRRVER="frr-stable"
RUN echo deb '[signed-by=/usr/share/keyrings/frrouting.gpg]' https://deb.frrouting.org/frr $(lsb_release -s -c) $FRRVER | sudo tee -a /etc/apt/sources.list.d/frr.list
# HERE WE GOOOOO!!!!
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
nftables \
libcap2-bin \
conntrack \
apt-utils \
openssh-server \
vim \
dnsutils \
whois \
mtr-tiny \
wget \
python3-pip \
oping \
zsh \
git \
iputils-ping \
tcpdump \
net-tools \
kitty \
kitty-terminfo \
traceroute \
iproute2 \
bridge-utils \
ifupdown \
ifupdown-extra \
lldpd \
atop \
htop \
nmap \
iperf3 \
frr \
frr-pythontools \
frr-rpki-rtrlib \
&& rm -rf /var/lib/apt/lists/*
# COPY THE FILES THAT YOU CONSIDER INTERESTING FOR YOUR PURPOSES...
COPY files/firewall.nft /etc/frr/firewall.nft
COPY files/frr_daemons /etc/frr/daemons
COPY files/frr /etc/pam.d/frr
COPY files/90-router.conf /etc/sysctl.d/90-router.conf
COPY files/wg0.conf /etc/wireguard/wg0.conf
COPY files/gre.sh /etc/frr/gre.sh
COPY files/wg.sh /etc/frr/wg.sh
COPY files/router.sh /etc/frr/router.sh
COPY files/vrf.sh /etc/frr/vrf.sh
COPY files/vxlan.sh /etc/frr/vxlan.sh
COPY files/startup_sp_debian.sh /etc/frr/startup_sp_debian.sh
# PERMISSIONS
RUN chmod a+x /etc/frr/*.sh
# PERMIT CONNTRACK TO USERs
RUN setcap cap_net_admin=eip /usr/sbin/conntrack
# ADD SERVICE USER TO THE GROUPS OF YOUR INTEREST
RUN usermod -aG frr,frrvty $USER_NAME
# SOME PERSONAL MODIFICATIONS...ADDING ZSH
USER $USER_NAME
ENV TERM=xterm
ENV ZSH_THEME=robbyrussell
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
USER root
ENV TERM=xterm
ENV ZSH_THEME=robbyrussell
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
ENTRYPOINT /etc/frr/startup_sp_debian.sh && /bin/zsh
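With the Dockerfile and its files/ directory in place, building the image is a one-liner (the tag name here is my own illustrative choice; pick whatever suits you):

```shell
# Build from the directory containing the Dockerfile and files/
docker build -t super-router:v1.0.0 .
# Verify the image exists
docker images super-router
# Optional smoke test outside GNS3: the startup scripts create kernel
# objects (VRFs, bridges...), so extra capabilities are needed here
docker run -it --rm --cap-add NET_ADMIN super-router:v1.0.0
```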
As you can see, among other actions, various files are copied into the image. This is done in a somewhat "rudimentary" manner to recreate different "non-persistent" objects on each container restart. For example, I like to configure the eth0 interface of the "SUPER-ROUTER" to belong to a specific management VRF. In the vrf.sh file, I can define the VRFs that the container needs. It's important to note that not everything can be created using FRRouting and its vtysh access: system elements like VRFs, interfaces, GRE, WireGuard, and bridges, among others, cannot be configured "natively" with FRRouting.
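To make the vrf.sh idea concrete, here is a minimal sketch, assuming you want a management VRF for eth0 (the VRF name and routing-table number are my own illustrative choices, not taken from the original file):

```shell
# Create a management VRF bound to kernel routing table 1000,
# bring it up, and enslave eth0 to it (requires CAP_NET_ADMIN)
ip link add dev vrf-mgmt type vrf table 1000
ip link set dev vrf-mgmt up
ip link set dev eth0 master vrf-mgmt
```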
Continuing with the example of vrf.sh, this file will run on each restart, creating the necessary structure defined within it. I apply this same logic to other aspects like bridges, MPLS, VXLAN, etc. These files could be consolidated into one, but in this case, I've created several that are executed from a general one called "startup_sp_debian.sh."
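The dispatcher logic of such a general startup script can be sketched like this (a hedged sketch of my own, not the actual startup_sp_debian.sh: it runs each helper script in a fixed order, skipping any that are missing):

```shell
#!/bin/bash
# Run each helper script found in the given directory, in a fixed order.
run_startup_scripts() {
    local dir="$1"
    local script
    for script in vrf.sh gre.sh wg.sh vxlan.sh router.sh; do
        if [ -x "$dir/$script" ]; then
            echo "running $script"
            "$dir/$script"
        fi
    done
}
# In the real container this would be something like:
#   run_startup_scripts /etc/frr && service frr start
```

FRR is started last, once all the kernel objects (VRFs, bridges, VXLAN interfaces) that its configuration references already exist.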
Depending on the interest generated by the post, I can describe in detail the different files. The following image illustrates the image generation process:
2. Import the image into GNS3.
As I mentioned, I don't want to extend this post too much, so I won't go into details on how to install and use GNS3. You can find all the necessary information here:
Regarding the details of the GNS3 solution architecture, you can find them here: Architecture
When I use GNS3, I employ a Debian server as the host, which runs the GNS3 GUI (Qt) and the controller. Additionally, I run a compute server inside a virtual machine on that host, also Debian, under libvirt QEMU/KVM. In this setup, I typically do not run a compute server on the host itself; instead, I delegate that task to the virtual machine. With this configuration, I can connect the containers and virtual machines running inside the GNS3 virtual machine to each other (nested virtualization), as well as to other virtual machines and containers running on the host (outside the GNS3 environment). This includes servers like RADIUS, TACACS, LDAP, MFA, PKI, RPKI validators, firewalls, etc. I can even connect them to external "real" devices... yes, yes, this is awesome.
To import the image of our "SUPER-ROUTER" into GNS3, we just need to go to: Edit → Preferences → Docker → Docker Containers → and click New.
The next window displays a wizard-like dialog that lets you import the image as a Docker container template. The only thing to keep in mind is to select the GNS3 compute resource where you built the image. In my case, it's the Debian server called debian-ilab, but it could have been the host itself. Once you've finished, it will look like this:
By selecting Edit, you can change various settings of the template. In my case, I always modify the number of interfaces (I usually leave it at 10), as well as the additional persistent volumes; I usually include "/etc" and "/home." These options are a matter of personal preference or your specific requirements.
3. Create a VXLAN + BGP EVPN lab.
In order to visualize the results of our work, I have configured a VXLAN BGP EVPN lab with a CLOS topology. This lab consists of 2 SPINE nodes (Route Reflectors), 6 LEAF nodes, 2 CE nodes, and 14 end hosts. The infrastructure nodes are Debian containers running FRRouting (the image we created in this post), while the end hosts are Alpine containers.
In the SPINE-LEAF network, I have implemented OSPFv2 as the underlay and MP-iBGP as the overlay control plane, with the SPINE nodes serving as Route Reflectors for ASN 64512.
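As a rough sketch of what this means in FRR terms (router IDs, neighbor addresses, and the peer-group name are my own illustrative values, not taken from the lab), the relevant part of a SPINE's configuration could look like this:

```
router ospf
 router-id 10.255.0.1
 network 10.0.0.0/16 area 0
!
router bgp 64512
 bgp router-id 10.255.0.1
 neighbor LEAFS peer-group
 neighbor LEAFS remote-as 64512
 neighbor 10.255.0.11 peer-group LEAFS
 !
 address-family l2vpn evpn
  neighbor LEAFS activate
  neighbor LEAFS route-reflector-client
 exit-address-family
```

The l2vpn evpn address family carries the EVPN routes, and route-reflector-client on the SPINEs means the LEAF nodes only need to peer with the two Route Reflectors instead of with each other.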
The following screenshots display certain relevant status information:
Below are the results of an ICMP test conducted between containers c_VRF101-101 (172.18.1.101) and c_VRF101_2 (172.18.2.101), along with a traffic capture between nodes c_LEAF01 and c_SPINE01:
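Captures like that one are easier to read once you remember that VXLAN is encapsulated in UDP port 4789 by default, so a filter along these lines isolates the overlay traffic on a LEAF-SPINE link (the interface name is illustrative):

```shell
# Show VXLAN-encapsulated packets (default UDP port 4789) on the uplink,
# without name resolution and without putting output through a pager
tcpdump -ni eth1 udp port 4789
```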
The typical overlay and underlay can be clearly observed in a network of this nature (with symmetric IRB):
We can also perform the same exercise for flows within the same VLAN (Layer 2), for instance, between c_VRF101-03 and c_VRF101-06:
In this case, the ICMP request is sent over the interface between c_LEAF03 and c_SPINE02, but the ICMP reply is received through the interface between c_LEAF03 and c_SPINE01. As you know, in this type of network we can make use of Equal-Cost Multi-Path (ECMP)...
While I primarily use the labs for concept testing, often focusing on the control plane, we can also conduct load tests, taking into account the limitations of our environment. For instance, using iperf3, we evaluated the performance of the path between containers c_VRF101-101 (via c_LEAF01) and c_VRF101-101 (via c_LEAF06):
We achieved a result of over 300 Mbps, which is quite satisfactory.
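If you want to reproduce this kind of test, the basic iperf3 pattern is a server on one end host and a client on the other (the address below is the c_VRF101-101 address mentioned earlier; adjust to your own endpoints):

```shell
# On the receiving container:
iperf3 -s
# On the sending container, a 30-second TCP test toward the server:
iperf3 -c 172.18.1.101 -t 30
```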
4. Bonus: Use a tool that facilitates image and container management.
Working with containers can generate a large amount of information, which in many cases we need to clean up. The Docker command line is very useful, but sometimes a graphical interface gives us a view that makes it easier to see what is happening. There are several tools that can help, depending on what you need. If you're not sure where to start, consider Portainer CE (Community Edition). There is also a BE (Business Edition) version, and the choice between them will depend on your specific needs. In their own words: "Portainer is your container management platform to deploy, troubleshoot, and secure applications across cloud, datacenter, and Industrial IoT use cases."
The following image displays the environment of Portainer on debian-ilab (the host where the lab containers in GNS3 run). Keep in mind that GNS3 does not create containers with the same names as those used in the lab...
Once you close the lab, the containers are destroyed (deleted)... but don't worry, the information contained in /etc will remain safe, so the FRRouting configuration and all the interfaces you've defined in the *.sh files in /etc/frr will be restored... We're the shit!!!!
Conclusion
In this specific case, customizing your own images will allow you to deploy everything you need, including tools for traffic control, firewalling, data capture, and more. Below is a capture of the iperf3 TCP session (port 5201) between c_VRF101-101 and c_VRF101-102, taken on c_LEAF01:
Currently, anyone has access to a wealth of information on virtually any topic, along with a multitude of tools at the click of a button. There are no excuses for not making an effort to learn how things work... Come on!!!
Documentation