Analyzing the traffic between containers

Hi friends,

On this occasion, I want to share a brief post about networks and security. But before that, allow me to recommend a musical piece: "Safe and Sound" by Capital Cities (2011). It's a song I personally enjoy a lot, and I think it's worth taking a few minutes to listen to it and dance.

Now, let's focus on the main topic. Lately, I've been working on deploying containers for various purposes. So far, everything has gone smoothly. However, I've come to realize how important it is to understand the traffic flows between these containers: analyzing them and then integrating them into our monitoring systems has become a pressing need.

To obtain this valuable information (or at least enough of it to satisfy my curiosity), I've revisited the NetFlow deployment I already have in place. That deployment isn't confined to network infrastructure devices such as routers, switches, and firewalls; it also extends to servers, both physical and virtual. Yes, indeed! All my servers are equipped with NetFlow export. You know, always striving to keep a strong focus on security and performance. You can take a look at these earlier posts -> "Netflow also in my home... I'm talking about N-E-T-F-L-O-W, not about N-E-T-F-L-I-X" and "Analyzing flows with NfSen, nfdump, fprobe, netflow5/9 and IPFIX... all of this is open source".

Now, the "trick" to sending all traffic information, including container flows (or at least a portion of it), is to enable fprobe on all interfaces of the hosts where the containers are running (check the links above).

Let me illustrate this with an example…

ANALYZING TRAFFIC BETWEEN CONTAINERS

I have recently deployed a new service called MISP. You can take a look here → MISP – The Open Source Threat Intelligence Sharing Platform. Threat intelligence, so as not to be quite so foolish. The service runs in two containers, misp_db and misp_web, on a virtual server, debiansrv03.home.arpa.

[Image: MISP service containers: misp_db and misp_web]
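
For reference, a quick way to confirm that both containers are up is a filtered docker ps. This is only a sketch: the image names and uptimes below are illustrative, and only the container names come from my deployment:

    $ docker ps --filter "name=misp" --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"
    # example output (illustrative)
    NAMES      IMAGE                  STATUS
    misp_web   misp/misp-web:latest   Up 3 days
    misp_db    mariadb:10.11          Up 3 days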

Let's perform some checks on the network resources in use, both inside the containers and on the host. First, which network resources have been allocated to the containers:

[Image: Containers: Addresses and NetworkIDs]

Let's pay particular attention to the IPv4 addresses and the first 12 characters of the NetworkID (for instance, 172.18.0.X and 1b4df9191708):

[Image: Containers: Addresses and NetworkIDs]
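
If you prefer to pull just those fields from the command line, a Go template does the trick. A sketch: the last octets in the output are illustrative (the screenshot only shows 172.18.0.X), while the 1b4df9191708 prefix is the network the containers share:

    # Print each container's name, IPv4 address and the first 12 characters of its NetworkID
    $ docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}} {{printf "%.12s" .NetworkID}}{{end}}' misp_db misp_web
    /misp_db 172.18.0.2 1b4df9191708
    /misp_web 172.18.0.3 1b4df9191708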

Now, let's turn our focus to the host, debiansrv03.home.arpa. Its network interfaces are as follows:

[Image: debiansrv03.home.arpa links]

The assigned addressing is as follows:

[Image: debiansrv03.home.arpa addresses]

It's evident that on the host, there is a bridge named br-1b4df9191708, to which the containers are connected, and its IPv4 address corresponds to the default gateway of the containers.
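
In brief form, the host side looks roughly like this. A sketch only: the LAN address, the veth names and the .1 gateway octet are assumptions; the bridge name is the real one:

    $ ip -br addr show
    lo               UNKNOWN  127.0.0.1/8 ::1/128
    ens18            UP       192.168.1.53/24      # host LAN address (hypothetical)
    br-1b4df9191708  UP       172.18.0.1/16        # Docker bridge = containers' default gateway (assumed .1)
    veth3a1b2c4@if5  UP                            # one veth per running container (names made up)
    veth9d8e7f6@if7  UP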

Another way to see this is by using the command "ip neigh show dev br-1b4df9191708" (provided the ARP table entries have not expired):

[Image: debiansrv03.home.arpa "ip neigh ..."]
[Image: HL topology]
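
Something along these lines; the MAC addresses are made up and the last octets are illustrative:

    $ ip neigh show dev br-1b4df9191708
    172.18.0.2 lladdr 02:42:ac:12:00:02 REACHABLE
    172.18.0.3 lladdr 02:42:ac:12:00:03 REACHABLE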

If you have taken a look at the previous post, Analyzing flows with NfSen, nfdump, fprobe, netflow5/9 and IPFIX... all of this is open source, you'll see that enabling the export of traffic statistics from a Linux host (in this case, a Debian server) to a NetFlow collector using the Debian fprobe package is quite straightforward.

To enable the “transmission” of traffic statistics from all interfaces, follow the installation guide from the post. Then, edit the file /etc/default/fprobe and replace the network interface name with the term "any." The configuration should look like this:

[Image: debiansrv03.home.arpa: /etc/default/fprobe]
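
For reference, the Debian package keeps its settings in a couple of shell variables, so the file ends up looking roughly like this (the collector address and port are hypothetical; point them at your own nfcapd/NfSen box):

    # /etc/default/fprobe -- fprobe default configuration file
    # Listen on all interfaces so that bridge/container traffic is exported too
    INTERFACE="any"
    # NetFlow collector address:port (hypothetical)
    FLOW_COLLECTOR="192.168.1.100:9995"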

Once you have made the changes, don't forget to restart the service:

[Image: fprobe service restart]
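
On a recent Debian this is simply (service name assumed to be fprobe, as shipped by the package):

    $ sudo systemctl restart fprobe
    $ systemctl status fprobe --no-pager    # quick sanity check that the exporter is running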

Great, we're on the right track! Now it's time to review the traffic flows between containers, but first, a word of caution: I can't guarantee that all the data will be collected...

[Image: NfSen: basic analysis data input]
[Image: NfSen: processed results]

As you can see, the traffic flows between the containers are clearly visible in this case, specifically TCP traffic with destination port 3306. Port 3306 is the default port for the classic MySQL protocol.
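
The same check can be run straight from the command line with nfdump against the stored captures. A sketch: the profile path is an assumption about my NfSen layout, and the /16 mask is just the Docker default:

    # Top 10 flow records by bytes for container traffic towards MySQL
    $ nfdump -R /var/nfsen/profiles-data/live/debiansrv03 \
             -s record/bytes -n 10 \
             'net 172.18.0.0/16 and proto tcp and dst port 3306'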

Of course, you can also achieve partial automation. Here's a script that captures this information, using a time window of the last 15 minutes:

[Image: Python script to analyze data]
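
Since the script only appears as an image, here is a minimal sketch of the same idea: ask nfdump for the last 15 minutes of container-to-container MySQL traffic and print whatever comes back. The capture directory and the network range are assumptions about this lab; adapt them to your own setup.

    #!/usr/bin/env python3
    """Minimal sketch: query nfdump for container-to-container MySQL flows
    seen in the last 15 minutes. The paths and the 172.18.0.0/16 range are
    assumptions about this particular lab."""

    import subprocess
    from datetime import datetime, timedelta

    NFCAPD_DIR = "/var/nfsen/profiles-data/live/debiansrv03"  # hypothetical NfSen source directory
    FILTER = "net 172.18.0.0/16 and proto tcp and dst port 3306"

    def last_15_minutes_window() -> str:
        """Build an nfdump -t time window string covering the last 15 minutes."""
        end = datetime.now()
        start = end - timedelta(minutes=15)
        fmt = "%Y/%m/%d.%H:%M:%S"
        return f"{start.strftime(fmt)}-{end.strftime(fmt)}"

    def query_flows() -> str:
        """Run nfdump over the capture directory and return its CSV output."""
        cmd = [
            "nfdump",
            "-R", NFCAPD_DIR,                  # read all capture files below this directory
            "-t", last_15_minutes_window(),    # restrict to the last 15 minutes
            "-o", "csv",                       # machine-friendly output
            FILTER,
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        for line in query_flows().splitlines():
            print(line)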

Conclusion

Using nfdump and NfSen offers significant benefits from an operational and cybersecurity standpoint: network monitoring, traffic analysis, threat detection, forensic investigation, performance optimization, and integration with other security tools. Together, these capabilities help strengthen your security posture and keep the network more secure and efficient… Bla, bla, bla... and now containers too, with hardly any effort.

Documentation

https://docs.docker.com/engine/reference/commandline/network_inspect/

https://github.com/phaag/nfdump

https://nfdump.sourceforge.net

https://github.com/phaag/nfsen

https://nfsen.sourceforge.net

https://meetings.ripe.net/ripe-50/presentations/ripe50-plenary-tue-nfsen-nfdump.pdf

https://github.com/mbolli/nfsen-ng

https://docs.kernel.org/trace/fprobe.html

https://packages.debian.org/bookworm/fprobe
