Docker for DevOps

Hi Folks,

In this article, we will learn Docker from scratch, covering the essential Docker commands along the way.

First, we will create an AWS EC2 instance of type t2.micro and then install Docker on it. For the installation commands, we can refer to the official documentation: https://docs.docker.com/engine/install/ubuntu/

Alternatively, on Ubuntu we can skip the documentation steps and install Docker directly from the distribution repositories with sudo apt install docker.io

Then we will add the ubuntu user to the docker group, so it can run Docker without sudo, using the command sudo usermod -aG docker ubuntu (log out and back in for the group change to take effect).

Once all the above steps are done, we run docker --version to check the Docker version and verify the installation.
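The installation and setup steps above can be sketched as the following shell session (assuming an Ubuntu EC2 instance; docker.io is Ubuntu's packaged Docker):

```shell
# Install Docker from Ubuntu's repositories
sudo apt-get update
sudo apt-get install -y docker.io

# Allow the ubuntu user to run docker without sudo
sudo usermod -aG docker ubuntu
# Log out and back in (or run `newgrp docker`) for the group change to apply

# Verify the installation
docker --version
```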

Basic Docker Commands:

docker search image_name: This command searches for images in the default Docker image registry, Docker Hub.

docker pull image_name: This command pulls an image from the Docker Hub registry to the local system. If the image already exists locally, the Docker engine will not fetch it again.

docker images: This command is used to check all the available images in the system.

docker ps: This command is used to check the running containers.

docker ps -a: This command lists all containers, not only the running ones.

docker logs container_id: This command shows the logs of a container, whether it is running or has exited; it is especially useful to debug why a container exited.

docker rmi image_id: This command removes an image. Before removing an image, we first need to stop and then remove any container that uses it.

docker system prune: This command is used to clean up unused data in your Docker environment. It removes stopped containers, unused networks, dangling images, and more.

docker rm container_id: This command is used to remove the container with the help of container_id.

docker stop container_id: Before removing a container we need to stop it first; this command stops a running container.

docker run -it --name container_name image_name /bin/bash: This command creates and starts a container. -it makes the session interactive (keeps STDIN open and allocates a terminal), --name assigns a name of our choosing to the container, image_name is the image (pulled from Docker Hub) the container is created from, and /bin/bash is the command to run inside it.

history: This is a shell command (not Docker-specific) that shows the commands we have run so far.
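Putting the commands above together, a typical container lifecycle looks like this (an illustrative session; the image and container names are arbitrary):

```shell
docker search ubuntu                        # find images on Docker Hub
docker pull ubuntu                          # download the image locally
docker images                               # list local images
docker run -it --name demo ubuntu /bin/bash # start an interactive container
# ... type `exit` inside the container to leave it ...
docker ps -a                                # the container now shows status "Exited"
docker logs demo                            # inspect its output
docker rm demo                              # remove the container
docker rmi ubuntu                           # now the image can be removed
```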


Below is a sample walkthrough of the commands discussed above:

usermod -aG docker ubuntu: Since we used an Ubuntu AMI to create the EC2 instance, we add the ubuntu user to the docker group.

docker ps

docker images

docker ps -a

docker search ubuntu: We searched for the ubuntu image on Docker Hub and got multiple ubuntu images we can pull from the registry.

docker search nginx

docker pull nginx: to pull the image from docker hub

docker run -it --name kamalpreetcontainer nginx /bin/bash: In this command we are creating a container named kamalpreetcontainer from the nginx image.

docker images: Using this command we will see the images pulled from Docker Hub, such as nginx and ubuntu in our case.

docker ps: This command will show us the created container like kamalpreetcontainer.

docker ps -a: This command will show all the containers not only the running containers.

docker logs 3a9d8788a598: This we can use to debug why the container is not running. Here 3a9d8788a598 is the container ID which we obtained using the command docker ps -a.

exit: Once we run the container interactively we are inside it; to get out of the container we use the exit command.

docker logs kamalpreetcontainer: Used to get logs for a container using the container name.

docker run -it --name kamalpreetcont nginx /bin/bash

docker images

docker ps

docker ps -a

docker start kamalpreetcont: Used to start the container

docker ps

docker tag ubuntunew kamalpreet1313/ubuntunew:latest: This command tags the local image ubuntunew with a repository name and tag so that it can be pushed to Docker Hub.

First, we need to log in to the docker hub registry using the docker login command.

Once we are logged in to the Docker hub registry using our credentials we can further push the images to the Dockerhub repository of our own using the below command.

docker push kamalpreet1313/ubuntunew:latest: Here kamalpreet1313/ubuntunew is the Docker Hub repository and latest is the tag.
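The tag-and-push flow can be sketched as follows (the account name kamalpreet1313 and the image name ubuntunew come from the example above; substitute your own):

```shell
docker login                                          # authenticate with Docker Hub
docker tag ubuntunew kamalpreet1313/ubuntunew:latest  # tag the local image for the registry
docker push kamalpreet1313/ubuntunew:latest           # upload it to your repository
```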

Docker Volume

docker volume create --name=django-todo-volume --opt type=none --opt device=/root/volume/insvol --opt o=bind: In this command, --name names the volume and --opt passes driver options: the type of mount, the host directory to map to the volume (device), and the mount options (o=bind for a bind mount). The host directory must already exist.

docker volume ls command will show the created volumes.

docker run -it --name cont -p 8000:8000 --mount source=django-todo-volume,target=/data django-todo-app:latest: While running docker run we can specify the source and target with the --mount flag. The source is the Docker volume we created, and the target is the directory inside the container (for example, the WORKDIR created in the Dockerfile).
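A minimal sketch of this volume flow, using the directory and names from the example above (the host directory must exist before the volume is created, and django-todo-app:latest is assumed to be already built):

```shell
# Host directory backing the volume
sudo mkdir -p /root/volume/insvol

# Create a bind-mounted named volume
docker volume create --name=django-todo-volume \
  --opt type=none --opt device=/root/volume/insvol --opt o=bind

docker volume ls                          # the new volume appears here
docker volume inspect django-todo-volume  # shows the bind-mount options

# Mount the volume at /data inside the container
docker run -it --name cont -p 8000:8000 \
  --mount source=django-todo-volume,target=/data \
  django-todo-app:latest
# files written to /data in the container persist in /root/volume/insvol
```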


Below we will be using one project from a GitHub repository. Special thanks to Shubham (TrainWithShubham) for providing his repository so that we can experiment with it. I forked his repository and tried to create Dockerfiles for all the projects using the readme.md file.

git clone https://github.com/LondheShubham153/node-todo-cicd.git

Once we clone the repository, we move inside it using cd node-todo-cicd. A Dockerfile is already present there, so I will first remove it using the git rm Dockerfile command.


vim Dockerfile: to create the docker file again we use the vim editor.

# Base image for the project
FROM node:12.2.0-alpine

# Working directory inside the image; this can later be mapped to a volume
WORKDIR app

# Copy the code from the build context into the working directory created above
COPY . .

# RUN executes a command at image build time
RUN npm install
RUN npm run test

# Document the port the application listens on
EXPOSE 8000

# CMD is the command executed when the container starts
CMD ["node","app.js"]

(Note that Dockerfile comments use # at the start of a line; // is not valid Dockerfile syntax.)

Once we save and quit the Dockerfile using Esc then :wq, we need to build an image from it.

docker build -t newcont .: This command builds an image from the Dockerfile in the current directory (the trailing dot is the build context). The image will be tagged newcont.

docker run -d -p 8000:8000 --name dockercont newcont: This is similar to the docker run commands used earlier; -d runs the container in detached (background) mode. Note that we do not append /bin/bash here, because that would override the CMD from the Dockerfile and the app would never start.

Now we will see how to containerize a website. First, we pick a template from tooplate.com and download it using the wget command. We then unzip the file, and re-pack it as a .tar.gz so that we can use the ADD instruction (which auto-extracts tar archives) instead of COPY.

We can use the below docker file.

# Base image
FROM ubuntu:latest

# Update the package index, then install git and apache2
RUN apt update && apt install git -y
RUN apt install apache2 -y

# Working directory (Apache's document root)
WORKDIR /var/www/html

# Declare a volume for the Apache logs
VOLUME /var/log/apache2

# ADD auto-extracts the tar.gz into the document root
ADD nano.tar.gz /var/www/html

# Document the port Apache listens on
EXPOSE 80

# Run Apache in the foreground; this command is provided by the developers of the code
CMD ["/usr/sbin/apache2ctl","-D","FOREGROUND"]


Once we are done with the Dockerfile, we will build the image, and from the image we will create the container.

Once the container is running, we can open the instance's public IP with the mapped port to check the containerized site.
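The end-to-end flow can be sketched as follows (the template URL, file names, and image name here are placeholders; substitute the template you actually picked from tooplate.com, and make sure the mapped port is open in the instance's security group):

```shell
# Download and repack a template (hypothetical file names)
wget https://www.tooplate.com/zip-templates/some-template.zip
unzip some-template.zip
cd some-template
tar -czf ../nano.tar.gz .      # repack so the Dockerfile can use ADD
cd ..

# Build the image from the Dockerfile above and run the container
docker build -t mysite .
docker run -d --name mysitecont -p 80:80 mysite

# The site is now reachable at http://<instance-public-ip>/
```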



Docker Compose File:

Consider a two-tier app deployment using Docker images. Without a docker-compose.yml file, we need to run commands manually for both the frontend and backend images: first build the backend image and run it, then build the frontend image and run it. In some cases we also need to create a network, so this is a tedious task, and in a PROD environment an incorrect command can even bring the site down.

So to resolve this issue and use only a single command to make the project live we can use the docker-compose.yml file.

In the docker-compose.yml file, we write services (And services contain docker containers and their details).


Sample docker-compose.yml file to deploy a two-tier Flask app with a MySQL backend:

version: '3'            # Compose file format version; version 3 is commonly used

services:               # each service describes one container
  backend:              # first service: the Flask backend
    build:
      context: .        # build the image from the Dockerfile in the current directory
    ports:
      - "5000:5000"     # host:container port mapping
    environment:        # environment variables consumed by the Flask app
      MYSQL_HOST: mysql
      MYSQL_USER: admin
      MYSQL_PASSWORD: admin
      MYSQL_DB: myDB
    depends_on:         # start the mysql service before this one
      - mysql

  mysql:                # second service: MySQL
    image: mysql:5.7    # no build section configured, so we use the image directly
    ports:
      - "3306:3306"     # port mapping
    environment:        # variables recognized by the mysql image
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: admin
      MYSQL_PASSWORD: admin
      MYSQL_DATABASE: myDB
    volumes:
      # files in docker-entrypoint-initdb.d run on the first startup,
      # so message.sql creates our table automatically
      - ./message.sql:/docker-entrypoint-initdb.d/message.sql
      - mysql-data:/var/lib/mysql   # persist the database data in a named volume

volumes:
  mysql-data:

(Note that the key for port mappings is ports, YAML comments use #, and the mysql image expects MYSQL_DATABASE rather than MYSQL_DB; the backend service keeps MYSQL_DB because that is the variable the Flask app reads.)


Now we can use just a single command, docker-compose up, to run both services. With one command both services come up, and the table is created automatically from the message.sql file.
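Typical usage of the compose file looks like this (run from the directory containing docker-compose.yml; the -d flag runs the services in the background):

```shell
docker-compose up -d          # build (if needed) and start both services in the background
docker-compose ps             # check the state of the backend and mysql services
docker-compose logs backend   # inspect logs of a single service
docker-compose down           # stop and remove the containers (add -v to also drop volumes)
```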


Multistage Dockerfile:

As we know, Docker images are built in layers from the Dockerfile. If a Dockerfile has 7 steps, all 7 run the first time we build; on subsequent builds, Docker reuses cached layers. This means if we change step 4, steps 1 to 3 come from the cache and steps 4 onward are executed again. A plain single-stage Dockerfile has a disadvantage: the resulting image can be around 1 GB, which is huge, because it carries the whole build toolchain. To overcome this we use multistage Dockerfiles, where the AS keyword names a stage so that later stages can copy just the needed artifacts from it.

For example, below is a simple single-stage Dockerfile; the image built from it can be around 1 GB. If we convert it to a multistage Dockerfile with two stages, where the first stage is named via the AS keyword so the second can refer to it, the size of the resulting image is drastically reduced.


FROM python:3.9

WORKDIR /app

COPY ./backend /app

RUN pip install -r requirements.txt

EXPOSE 5000

CMD ["python","app.py"]



Multistage Dockerfile:

# ------------------- Stage 1: Build Stage ------------------------------

FROM python:3.9 AS backend-builder

WORKDIR /app

COPY backend/ .

RUN pip install --no-cache-dir -r requirements.txt


In the stage above we reuse the single-stage Dockerfile, adding only the AS keyword to name the stage backend-builder. We set the working directory to /app, copied the contents of the backend directory into it, and ran pip install to install everything from requirements.txt.

# ------------------- Stage 2: Final Stage ------------------------------

FROM python:3.9-slim

WORKDIR /app

COPY --from=backend-builder /usr/local/lib/python3.9/site-packages/ /usr/local/lib/python3.9/site-packages/

COPY --from=backend-builder /app /app

EXPOSE 5000

CMD ["python", "app.py"]

In the second stage, we start from the slim python:3.9-slim base, copy the installed site-packages and the application code from the backend-builder stage, expose the port, and run the python app.py command. The final image contains only the runtime artifacts, not the build tooling, which is how a multistage Dockerfile reduces the size of the image we create.
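To see the effect, we can build both variants and compare their sizes (the file names Dockerfile.single and Dockerfile.multi and the image names are hypothetical; the exact sizes depend on your dependencies):

```shell
docker build -f Dockerfile.single -t flask-app:single .
docker build -f Dockerfile.multi  -t flask-app:multi  .
docker images | grep flask-app
# the :multi image, based on python:3.9-slim, is typically several times smaller
```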



Docker commands are powerful tools that streamline the process of containerization, making it easier for developers to build, deploy, and manage applications across different environments. Throughout this article, we've explored various Docker commands, from building images to running containers, managing volumes, and networking.

By mastering these commands, you'll be better equipped to leverage Docker's capabilities and optimize your workflow. Whether you're a seasoned Docker user or just getting started, understanding these commands is essential for efficiently managing your containerized applications.

As you continue your journey with Docker, don't hesitate to experiment with these commands and explore additional features offered by the Docker ecosystem. Keep learning, stay curious, and embrace the power of containerization to drive innovation in your projects.

Thank you for joining me on this Docker command exploration. I encourage you to share your experiences with Docker and continue the conversation in the comments below. Happy containerizing!
