Workload metrics such as throughput and latency reveal the overhead introduced by virtualization. This overhead stems from the underlying virtualization technology (Docker, KVM).
- Containers and VMs impose almost no overhead on CPU- and memory-bound performance.
- They mainly affect I/O and OS interaction. The overhead comes from extra CPU cycles spent on every I/O operation, so it grows as the I/O rate grows: I/O latency rises and fewer CPU cycles remain for useful work. It can be mitigated by issuing fewer I/O operations, but that is rarely an option for real-world workloads (the first sketch after this list shows one way to measure it).
- Docker provides flexible networking through its own DNS and NAT. The drawback is a longer transmission path, because packets must traverse extra bridging and NAT layers. NAT overhead can be eliminated by using --net=host, but this gives up the isolation benefits of network namespaces (see the networking sketch after this list).
- Applications that are filesystem- or disk-intensive should bypass AUFS* by mounting volumes with the -v option instead of storing data on Docker's internal layered filesystem (see the volume example after this list).
- While KVM can provide very good performance, its need for careful configuration is a weakness. Good CPU performance requires large memory pages to avoid TLB misses, vCPU pinning so that vCPUs are never left waiting for a physical CPU, and exposing the cache topology** to the guest (a sample invocation follows this list).
- KVM delivers only about half as many IOPS because each I/O operation must pass through QEMU***. While the VM's absolute performance is still quite high, it spends more CPU cycles per I/O operation, leaving less CPU available for application work (see the virtio sketch after this list).
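
One way to quantify the container I/O overhead described above is to run the same disk benchmark on the host and inside a container and compare the reported IOPS and latency. A minimal sketch using fio; the image name `fio-image` is a hypothetical image with fio installed:

```sh
# Baseline: random 4 KB reads against the host filesystem
fio --name=randread --rw=randread --bs=4k --iodepth=32 \
    --size=1g --runtime=30 --time_based --direct=1 \
    --ioengine=libaio --filename=/tmp/fio.test

# Same benchmark inside a container ("fio-image" is a hypothetical
# image with fio installed); the gap in IOPS/latency is the overhead
docker run --rm fio-image \
    fio --name=randread --rw=randread --bs=4k --iodepth=32 \
        --size=1g --runtime=30 --time_based --direct=1 \
        --ioengine=libaio --filename=/tmp/fio.test
```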
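To illustrate the networking trade-off, a sketch of the two modes (nginx is used only as an example workload):

```sh
# Default bridge networking: traffic crosses the docker0 bridge and a
# NAT (iptables) layer on its way to and from the host port
docker run -d --name web-nat -p 8080:80 nginx

# Host networking: no bridge or NAT hop, shorter transmission path, but
# the container shares the host's network namespace (no network isolation)
docker run -d --name web-host --net=host nginx
```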
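A sketch of bypassing the layered filesystem with a volume; the host path /srv/pgdata and the postgres image are illustrative choices:

```sh
# Bind-mount a host directory: writes to /var/lib/postgresql/data go
# straight to the host filesystem, bypassing AUFS
docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data postgres

# Anti-pattern for disk-heavy workloads: with no volume, every write is
# routed through the union filesystem's copy-on-write layers
docker run -d --name db-slow postgres
```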
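A sketch of the KVM tuning mentioned above, assuming hugetlbfs is mounted at /dev/hugepages and the guest image is guest.img; the vCPU thread IDs passed to taskset are placeholders (they can be read from the QEMU monitor or /proc):

```sh
# Back guest RAM with huge pages (fewer TLB misses) and pass the host
# CPU model through so the guest sees a realistic cache topology
qemu-system-x86_64 -enable-kvm \
    -m 4096 -mem-path /dev/hugepages \
    -smp 4 -cpu host \
    -drive file=guest.img,if=virtio \
    -nographic

# Pin each vCPU thread to its own physical core so a runnable vCPU is
# never left waiting for a CPU; <vcpu0-tid>/<vcpu1-tid> are placeholders
taskset -cp 2 <vcpu0-tid>
taskset -cp 3 <vcpu1-tid>
```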
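Along the same lines, the per-I/O cost inside QEMU depends heavily on the emulated device model; a sketch contrasting a fully emulated disk with a paravirtualized virtio disk (guest.img is again a placeholder):

```sh
# Fully emulated IDE disk: every I/O traps into QEMU's device emulation
qemu-system-x86_64 -enable-kvm -m 2048 -drive file=guest.img,if=ide

# Paravirtualized virtio disk, bypassing the host page cache: fewer
# emulation round-trips and fewer CPU cycles per I/O operation
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=guest.img,if=virtio,cache=none,aio=native
```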
*AUFS (Another UnionFS): the layered union filesystem Docker uses for container storage.
**Cache topology: the layout of the CPU's cache hierarchy (L1/L2/L3 caches and which cores share them). Exposing it to the guest lets the guest scheduler make cache-aware placement decisions.
***QEMU: a machine emulator that can run operating systems and programs for one machine on a different machine; here, it runs the VM on top of the host OS and performs its device emulation, including I/O.