Docker: Why Use It and How It Works (Docker vs. VMs)
With the rise of DevOps, a wave of new technologies aims to make development seamless across different platforms and machines. One of the most important platforms every developer should know about is Docker.
What is Docker?
Typically, an application that runs fine on your machine can be a hassle to get into the same working state on another: you have to replicate the same configuration (environment variables, etc.), the same network settings, and the same dependencies and installed libraries. Setting all of this up on every new machine is slow and error-prone.
Docker aims to make all of this easier for you. Put simply, Docker is software that packages your code, dependencies, configuration settings, and libraries into neat, isolated containers. These containers can then be seamlessly moved onto any other developer's machine, so your application runs on your team's machines exactly the same way it runs on yours, without the configuration time and cost it would otherwise take.
Why use it?
You might be thinking, “aren’t VMs also made for this purpose?” And you’re right. However, there are differences between the two that make Docker more compelling. To understand them, let’s look at the virtual machine architecture first.
With virtual machines, the hardware (storage, network, compute) is carved into dedicated chunks for each VM; for example, you might reserve 20% of all resources just for your VMs. A hypervisor (such as Microsoft's Hyper-V) sits between the hardware and the VMs: to the host OS it looks like just another piece of software requesting a chunk of resources, when in reality it is responsible for running an entire guest operating system inside each VM. In short, VMs have to virtualize your hardware, and provisioning and configuring full guest operating systems takes both time and money.
Docker does things differently. It virtualizes your OS rather than your hardware: containers share the host OS kernel instead of each booting a full guest OS. This is better in several ways:
- Containers are far lighter, typically measured in megabytes rather than gigabytes.
- They start in seconds, since there is no guest OS to boot.
- Far more containers than VMs can run on the same hardware, because resources are shared on demand rather than pre-allocated in fixed chunks.
- Container images are portable: the same image runs identically on any machine with Docker installed.
Now that you’ve seen why Docker is so relevant and important in the industry these days, let’s take a look at how it works.
How does it work?
At the core of Docker is the Docker Engine. It is installed on the host machine and is responsible for running the Docker Containers. The Docker Engine uses a client–server architecture: the Docker Client sends commands to the server (the Docker Daemon) through a REST API, and the daemon translates those commands into calls to the OS to create and manage containers.
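You can see this client–server split on any machine with Docker installed. The sketch below assumes a default Linux setup where the daemon listens on the `/var/run/docker.sock` Unix socket; the API version in the URL may differ on your installation:

```
# The CLI is the client; dockerd is the server (daemon).
# `docker version` prints separate "Client:" and "Server:" sections.
docker version

# Under the hood, each CLI command becomes a REST call to the daemon.
# For example, `docker ps` corresponds to GET /containers/json:
curl --unix-socket /var/run/docker.sock http://localhost/v1.43/containers/json
```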
Let’s look at the Docker Engine in a bit more detail.
The Docker Engine comprises four main components: the Docker Client and Docker Daemon (the client–server pair described above), Docker Images, Docker Containers, and the Docker Registry.
Docker Images are read-only templates from which Docker Containers are built. They are defined by Dockerfiles, whose instructions bundle the application code, environment variables, configuration settings, libraries, and other dependencies. A Docker Container is a running instance of an image, much as a running VM is an instance of a VM snapshot (the analogy works the same way). Images also carry metadata about the containers they produce, such as their requirements and capabilities. Images are built in layers, with each layer corresponding to an instruction in the Dockerfile; when the code is updated and the image is rebuilt, a new layer is added on top while unchanged layers are reused from cache.
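As a concrete sketch (the app, file names, and port here are hypothetical), a Dockerfile for a small Python web app might look like this. Each instruction produces one layer, so ordering the dependency install before the code copy means a code-only change reuses the cached dependency layer:

```dockerfile
# Each instruction below creates one read-only image layer.
FROM python:3.12-slim            # base image layers
WORKDIR /app
COPY requirements.txt .          # dependency list first, so the next layer caches well
RUN pip install -r requirements.txt
COPY . .                         # application code last: only this layer rebuilds on code edits
ENV APP_ENV=production           # configuration recorded in the image
CMD ["python", "app.py"]         # default command stored in image metadata
```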
The Docker Registry is a collection of images uploaded by different developers. Developers can share an image using the push command, or fetch one from the Registry with the pull command. Registries can be self-hosted and private, or public (for example, Docker Hub).
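In practice, sharing through a registry looks like the sketch below (the image name and account namespace are placeholders, and the commands require a Docker installation and a registry account):

```
# Download an image from the public Docker Hub registry
docker pull nginx:latest

# Tag a local image under your account's namespace, then upload it
docker tag my-app:1.0 myaccount/my-app:1.0
docker push myaccount/my-app:1.0
```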
To create an image, we run the docker build command against our Dockerfile, and the image is built from the source code. We can then create a container from it with docker create, which prepares the container without starting it, and start it with docker start. Alternatively, docker run performs both steps, creating and starting a new container in one command.
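The full workflow can be sketched as follows (the `my-app` names are placeholders; these commands require a running Docker daemon):

```
# Build an image from the Dockerfile in the current directory
docker build -t my-app:1.0 .

# Create a container from the image (prepared, but not yet started)
docker create --name my-app-container my-app:1.0

# Start the prepared container
docker start my-app-container

# Or combine create + start into a single step:
docker run --name my-app-run my-app:1.0
```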
A small note on Multi-Container Applications
For applications that require multiple containers, we use Docker Compose. This tool lets us define and share multi-container applications through a YAML file, which describes all the services and their details. Generally speaking, it is recommended to define a single service for each process.
With Docker Compose, you pre-define all the services in the YAML file instead of relying on long bash commands. For more information on Docker Compose, visit https://www.baeldung.com/ops/docker-compose.
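A minimal compose file for a web app plus a database might look like this (the service names, port, and password are illustrative placeholders, one service per process as recommended above):

```yaml
# docker-compose.yml: two services, one process each
services:
  web:
    build: .                 # build the image from the local Dockerfile
    ports:
      - "8000:8000"          # host:container port mapping
    depends_on:
      - db                   # start the database service first
  db:
    image: postgres:16       # pull a ready-made image from the registry
    environment:
      POSTGRES_PASSWORD: example
```

Running `docker compose up` in the same directory then builds and starts both containers together.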
Conclusion
So to sum up: we’ve learned what Docker is, why it is so important in the industry, and why people prefer it over VMs. We’ve also taken a brief look at its architecture. Hopefully, after reading this article, you have a basic idea of Docker and are ready to start learning it. I’m also sharing some additional resources you can use to dig further into Docker.