Major Docker Components


As promised in my previous post, we will now look into the Docker components and the individual roles they play in making Docker so awesome.

Before moving on, a quick recap.

Docker is a server virtualization technology that came into being because virtual machines were no longer efficient enough in terms of resource utilization. Docker uses a concept known as “Docker containers”: lightweight, isolated, secure user spaces that provide a run-time environment for your application.

Containers are spun up using Docker images, which include the specifications for the application environment, dependencies, source code and everything else the application needs to run. A single image can be used to spin up as many containers as required.
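
For example, assuming Docker is installed and using the public nginx image from Docker Hub (the container names web1 and web2 are just arbitrary examples), the very same image can back any number of containers:

docker run -d --name web1 nginx   # first container from the nginx image
docker run -d --name web2 nginx   # second container from the same image
docker ps                         # lists both running containers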

The previous post, “Docker and its Containers”, simply introduced the concept of Docker containers and how Docker provides the required isolation, security and lightweight footprint. In this post, we will look into the other major components that make Docker a trending technology in terms of server virtualization.

The Docker Engine makes it all possible!

Docker Engine is undoubtedly the most important component in Docker, because it provides the environment required to build Docker images and to create and run containers.

To make things more interesting, let us imagine a Docker container (the standard container that includes everything that makes your application work) to be a normal shipping container. In this context, the Docker Engine is basically the dockyard!

Just like a dockyard provides the environment and infrastructure required to build and ship physical containers, the Docker Engine gives the user access to all of Docker’s services. Every service required to build a standard container, from access to root file systems, process trees, network stacks and kernel features, to resource allocation, creating images, spinning up containers and ultimately stopping them, is handled by the Docker Engine in a standardized manner.

 So like I said, the Docker Engine makes it all possible.
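
If Docker is already installed, you can peek at the Engine yourself. As a quick sketch (the exact output depends on your installation):

docker version   # shows the client and the server (daemon) that make up the Docker Engine
docker info      # shows the storage driver, kernel version, and how many containers and images exist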

Docker Images

Once the infrastructure is ready, the dockyard needs to be filled with containers. And in Docker’s context, containers are created using Docker images.

An image is similar to a template that is used to build containers. In other words, images are similar to the shipping manifests or the cargo documents of physical containers. A manifest lists the cargo included in each container, thus providing more clarity about the isolated spaces that we call containers.

A Docker image has a layered architecture, with each layer representing an instruction that contributes to a specific Docker container. Each of these layers is also an image in the Docker context. These image layers pile up together to create one big template that containers can be spun up from!

The layers in a Docker image tell us what is included in the container. For example, the most basic layer in an image could be an Ubuntu base image. On top of it there could be some dependency installations. Another layer could represent a set of files or even the source code. It is all of these layers together that create the complete container required to run our application.
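
You can actually see these layers for any image you have pulled locally. As a small example, assuming the ubuntu image is available on your machine:

docker history ubuntu   # one row per layer, with the instruction that created it and its size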

Layered Images

Docker images are without doubt very powerful. They are the creators of the containers that we spent a whole post talking about! (No image, no container.) The image tells the Docker Engine exactly what needs to be done to ensure that, regardless of the environment or the number of times the image is used, the output is a successfully running application environment.

Docker images are super interesting! Like I said, an image that we use to create a container has multiple sub images, or layers, in itself. Each layer is read only once the image is built. Since the layers are read only, the sub images can be shared among containers without downloading or recreating them again! No wonder containers can be started so easily!

When a container is spun up from an image, an additional read/write layer is added on top of the existing layers so that content can be changed. The layered image approach allows you to create the base image for containers as required, and it makes configuration changes super easy. There is no need to crack open the base image to make changes; we can just add a new layer with the updates. Each layer has its own unique ID, and the full image includes these IDs along with some metadata telling the Docker Engine the order in which these layered images should be piled on one another.
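
If you are curious, you can ask Docker for this layer information directly. As a sketch (assuming the ubuntu image is present locally), the following prints the content digests of the layers that make up the image:

docker image inspect --format '{{.RootFS.Layers}}' ubuntu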

Ever wondered how all this is possible? Well……

Union Mounts To the Rescue!

In an image with multiple layers of sub images inside, the metadata (layer IDs and their order) is used to pile the layers upon one another into a single view. The metadata tells the Docker Engine which layers should be piled up in which order, and ultimately the layers are combined into a single unified layer.

In Docker, higher layers always override lower layers when there are conflicts. That is why updates can simply be added as a brand new layer on top.

This process of combining file systems is achieved through union mounts. Union mounts have the ability to mount several file systems on top of each other and combine them into a single file system, and this is what enables layering. All of these layers are read only; when a container is spun up, a new read/write layer is added on top.

When a file saved in a lower layer is updated, Docker first copies that file up into the R/W layer and then makes the changes in the R/W layer itself. Since the union mount combines all the layers together and the top layers override the bottom layers when in conflict, this single R/W layer is all that is required to make updates to our container.
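
You can watch this read/write layer at work with a few commands. As a sketch (my-container is just a hypothetical name, and the ubuntu image is assumed to be available):

docker run -d --name my-container ubuntu sleep 300   # keep a container running for a while
docker exec my-container touch /tmp/hello.txt        # create a file inside the container
docker diff my-container                             # lists the changes made in the container's R/W layer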

Docker Registries and Repositories

Docker images can be either user defined or pulled in from an online Docker image registry. You can create your own Docker images to suit your application environment using Dockerfiles (which will be discussed later). But initially, to get some exposure to Docker images and containers, you can use the images available on Docker Hub at https://hub.docker.com.

Docker Hub has hundreds of official repos for many popular pieces of software like mongodb, ubuntu, elasticsearch, ruby, nginx etc. These repos are maintained by the official developers together with Docker, so the content inside can be trusted.
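
You can even search Docker Hub straight from the command line; the output marks which repositories are official:

docker search nginx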

For instance, you can pull an ubuntu image with the simple command

docker pull ubuntu

If you want a specific version, let us say 16.04, you have to include it as a tag when pulling the image from Docker Hub.

docker pull ubuntu:16.04

If no tag is given, the image tagged latest will be pulled from Docker Hub.
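
Once pulled, you can list the images available locally along with their tags:

docker images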

Interesting as this is, the real power of Docker lies in creating our own Docker images and spinning up containers using them. With this basic understanding of how Docker makes it all possible using the Docker Engine, and of what Docker images are capable of, we can later look into HOW to create our own Dockerfiles.

Till then,

Happy Coding!
