High performance container networking

Researchers and engineers from Microsoft Research and Microsoft Azure have contributed nine scientific papers to the technical program of the 16th USENIX Symposium on Networked Systems Design and Implementation (NSDI '19), to be held in Boston, Massachusetts, from February 26 to February 28, 2019. Our papers cover some of the latest technologies Microsoft has developed in networked systems.

While I would love to discuss all of our papers in detail, that would make this post far too long. Since I have previously written a couple of posts about cloud reliability and availability, today I'll focus instead on another topic near and dear to me: network performance.

The world of containers

Recently, a lightweight and portable application-sandboxing mechanism called containers has become popular among developers who build applications for a wide variety of targets, ranging from IoT Edge to planet-scale distributed web applications for multi-national enterprises. A container is an isolated execution environment on a Linux host, with its own file system, processes, and network stack. A single host machine can support a significantly larger number of containers than standard virtual machines, providing attractive cost savings. Running an application inside a container isolates it from the host and from applications running in other containers. Even when those applications run with superuser privileges, they cannot access or modify the files, processes, or memory of the host or of other containers. There is more to say, but this is not intended to be a tutorial on containers; the short sketch below just illustrates the isolation idea before we turn to networking between containers.
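To make that isolation concrete, here is a minimal sketch of my own (not taken from any of the papers) that assumes a Linux host, Python 3, and root privileges. It uses the kernel's network-namespace primitive, the same mechanism container runtimes build on, to show that a process dropped into a fresh network namespace no longer sees the host's network interfaces.

```python
# Illustrative sketch only: container runtimes give each container its own
# network stack by placing it in a separate Linux network namespace.
# Assumes a Linux host and root privileges (CAP_NET_ADMIN).
import ctypes
import os

CLONE_NEWNET = 0x40000000  # unshare/clone flag from <sched.h>
libc = ctypes.CDLL("libc.so.6", use_errno=True)

def visible_interfaces():
    # /sys/class/net lists the interfaces visible in the caller's namespace.
    return sorted(os.listdir("/sys/class/net"))

print("host namespace sees:", visible_interfaces())   # e.g. ['eth0', 'lo']

pid = os.fork()
if pid == 0:
    # Child: detach into a brand-new, empty network namespace.
    if libc.unshare(CLONE_NEWNET) != 0:
        raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWNET) failed")
    print("isolated namespace sees:", visible_interfaces())  # just ['lo']
    os._exit(0)
os.waitpid(pid, 0)
```

Real runtimes combine this with mount, PID, and user namespaces plus cgroups, which is why even a superuser inside a container cannot reach the host's files, processes, or memory.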

As it turns out, many container-based applications are developed, deployed, and managed as groups of containers that communicate with one another to deliver the desired service. Unfortunately, until recently, container networking solutions offered either poor performance or poor portability, which undermined some of the advantages of containerization.

Enter Microsoft FreeFlow. Jointly developed by researchers at Microsoft Research and Carnegie Mellon University, FreeFlow is an inter-container networking technology that achieves both high performance and good portability through a new software element we call the Orchestrator. The Orchestrator knows where each container is located and, by exploiting the fact that containers belonging to the same application do not require strict isolation from one another, it is able to speed communication up. FreeFlow uses a variety of cool techniques such as shared memory and Remote Direct Memory Access (RDMA) to improve network performance: higher throughput, lower latency, and less CPU overhead. It does this while maintaining full portability and in a manner that is transparent to application developers.
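To give a flavor of why a shared-memory path pays off, here is a small, self-contained Python sketch. It illustrates the general idea only and is not FreeFlow's design or API: two processes on the same host hand a payload over through a shared memory region, so the bytes never traverse a virtual network device or the TCP/IP stack.

```python
# Conceptual sketch of a shared-memory fast path between two co-located
# endpoints (not FreeFlow's implementation): the payload is written once
# into a shared region and read in place, with no per-packet processing.
from multiprocessing import Process
from multiprocessing import shared_memory

def producer(region_name):
    shm = shared_memory.SharedMemory(name=region_name)
    shm.buf[:5] = b"hello"                  # write payload directly into shared memory
    shm.close()

def consumer(region_name):
    shm = shared_memory.SharedMemory(name=region_name)
    print("received:", bytes(shm.buf[:5]))  # read it in place; no copy over a network path
    shm.close()

if __name__ == "__main__":
    region = shared_memory.SharedMemory(create=True, size=1024)
    for worker in (producer, consumer):     # run the producer, then the consumer
        p = Process(target=worker, args=(region.name,))
        p.start()
        p.join()
    region.close()
    region.unlink()
```

The hard part, which FreeFlow's Orchestrator takes on, is deciding transparently when two containers are co-located and can safely take such a shortcut, and when their traffic must instead go over RDMA or the regular virtual network between hosts.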

