Isolating Services with Docker: Simplifying Infrastructure and Deployment
Muhammad Umer Saleem
Sr. Software Engineer | JavaScript | TypeScript | Node.js | React | React Native | Web3.0 | Solidity |
Docker isolates services, such as Tomcat, Nginx, Apache, and MySQL, from the underlying operating system. When setting up an infrastructure or application stack, it's common practice to deploy web services on separate machines. For instance, Apache may run on one machine, Nginx on another, and MySQL on yet another.
If all services were to run on a single large machine, they wouldn't be isolated from each other. This lack of isolation can lead to interference between services due to shared libraries, binaries, and configuration resources, potentially causing performance issues. Therefore, the traditional approach is to deploy each service on a separate server to ensure high availability and prevent interference between components.
To host our applications, we require infrastructure. In cloud computing, we utilize virtual machines (VMs) to set up this infrastructure. Each VM has its own operating system (OS), providing isolation for the services running on it. However, this isolation leads to the necessity of setting up multiple VM instances, which can become costly from both a capital expenditure (CapEx) and operational expenditure (OpEx) perspective.
VMs are expensive because each VM requires its own OS, incurring maintenance costs, licensing fees, and time for booting up services. This over-provisioning of VMs can significantly increase costs.
Isolation without an OS can be achieved through containers. Containers allow multiple services to run on the same OS but remain isolated. Each container can be allocated its own CPU and memory resources, and it has its own set of libraries and binaries, minimizing interference between services.
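Per-container CPU and memory allocation is done at run time with standard `docker run` flags. A minimal sketch (the image name `my-service` is a placeholder for illustration; running it requires a local Docker daemon):

```shell
# Run a container capped at 1.5 CPUs and 512 MB of RAM.
# "my-service" is a placeholder image name.
docker run -d --name my-service --cpus="1.5" --memory="512m" my-service

# Inspect the limits Docker recorded for the container.
docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' my-service
```

Under the hood, Docker enforces these limits through the kernel's cgroups mechanism, so one container cannot starve its neighbors of resources.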
Containers are ordinary processes, isolated from one another by kernel features such as namespaces and cgroups, each with its own view of the filesystem. They share the host machine's OS kernel and do not require a separate OS per application. Containers package up code and dependencies, offering isolation without full virtualization.
While virtual machines use hardware virtualization and require individual OS installations, containers utilize OS-level virtualization, leveraging the host OS's compute resources. Docker is a tool that manages containers, serving as a container runtime environment. With Docker, developers can easily create, deploy, and manage containers for their applications.
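The create, deploy, and manage workflow mentioned above maps onto a handful of Docker CLI commands. A brief sketch (the container name `web` is a placeholder; the commands assume a running Docker daemon):

```shell
docker pull nginx:alpine               # fetch an image from a registry
docker run -d --name web nginx:alpine  # create and start a container from it
docker ps                              # list running containers
docker stop web                        # stop the container
docker rm web                          # remove it
```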
Docker containers run on the Docker Engine, which is a lightweight, standardized, and secure platform for containerization. Lightweight, because containers share the host kernel instead of bundling a full OS; standardized, because an image runs the same way in every environment; and secure, because each container runs isolated from the others by default.
Overall, Docker containers offer a convenient and efficient way to package, distribute, and run applications in a standardized and secure manner.
Docker's architecture revolves around a client-server model: the Docker client (the docker CLI) sends commands over a REST API to the Docker daemon (dockerd), which does the actual work of building images and creating, running, and managing containers. Registries such as Docker Hub store and distribute images.
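You can observe the client-server split directly from the terminal: `docker version` reports the client (CLI) and the server (the daemon) as separate components, each with its own version. A quick sketch, assuming the daemon is running:

```shell
docker version   # prints separate Client and Server sections
docker info      # daemon-side details: storage driver, container counts, etc.
```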
Let's make this practical. First, create a project directory, for example "hello-world", in the terminal. Navigate into that folder and open it in Visual Studio Code. Add a new file to the project, for example app.js. You don't need to be a JavaScript developer; just follow along. Write a single line of code: console.log("Hello World").
mkdir hello-world
cd hello-world
touch app.js
code .
This simple file prints "Hello World" when run with node app.js.
console.log("Hello World");
Our next step is to create a file named Dockerfile, with no extension. Visual Studio Code will prompt you to install the Docker extension, which adds syntax highlighting and file recognition for Dockerfiles. In the Dockerfile, we'll write the instructions to package our application.
touch Dockerfile
Typically, we start from a base image. A base image already contains a bunch of files, and we add our own files on top of it, much like inheritance in programming. So, what base image should we use? We'll take the official Node image, which is built on top of Linux. These images are published on Docker Hub; if you go to the Docker Hub website and search for "node", you'll find it there. Docker Hub is a registry of Docker images. The Node image comes in several variants based on different Linux distributions, so we specify which one we want. I'm going to use Alpine, a very small Linux distribution.
Next, we need to copy our application files into the image. For that, we use the COPY instruction, copying everything from the current directory into a directory in the image, such as /app. Then we use the CMD instruction to specify what runs when the container starts. What command executes here? Yes, you're right: "node /app/app.js". Alternatively, we can set WORKDIR /app first, so that CMD can simply run "node app.js"; all subsequent instructions then treat /app as the current directory. Either way, the instructions in the Dockerfile clearly document our deployment process.
Next, go to the terminal and tell Docker to package the application with "docker build -t hello-world ." (note the trailing dot, which tells Docker to use the current directory as the build context). Afterwards, you might expect an image file to appear in the current directory, but there won't be one: an image is not a single file, and it isn't stored in your project folder. To see all the images saved on your computer, run "docker images" or "docker image ls". The output lists each image's repository, tag, image ID, creation time, and size; find your hello-world image there. Now you can run the image with "docker run hello-world", and you'll see the output "Hello World". Yahoo, you made it! The next step is to publish your image to Docker Hub so anyone can use it.
# Use a base image
FROM node:alpine
# Set working directory
WORKDIR /app
# Copy application files
COPY . .
# Specify the command to run the application
CMD ["node", "app.js"]
In summary, Docker revolutionizes the deployment and management of applications by isolating services within lightweight, portable containers. By sharing the host operating system's kernel, Docker containers eliminate the need for separate OS installations, making them more resource-efficient and faster to start compared to traditional virtual machines.

Docker's architecture, built around a client-server model, streamlines the process of creating, deploying, and managing containers. With Dockerfiles guiding the packaging of applications into Docker images, developers can ensure consistency and portability across different environments. Docker Hub further facilitates collaboration and distribution by providing a cloud-based registry service for storing and sharing Docker images.

Practically, Docker simplifies the deployment process, allowing developers to package their applications with ease and run them on any computer. By following a few simple steps outlined in the article, developers can create Docker images and deploy their applications seamlessly.
In conclusion, Docker offers a convenient, efficient, and standardized approach to packaging, distributing, and running applications, making it an indispensable tool in modern software development and infrastructure management.