Day 19 of #90DaysOfDevOps Challenge

Docker Volumes

When we are working with Docker, any data stored inside a container is lost when the container is destroyed. So, to create persistent storage, keep the data safe, and also share data between containers, we can create Docker volumes.

Docker volumes are part of the host filesystem and are managed by Docker under the /var/lib/docker/volumes/ path on Linux systems.

  • The data doesn't persist when the container no longer exists, and it can be difficult to get the data out of the container if another process needs it.
  • A container's writable layer is tightly coupled to the host machine where the container is running. The data cannot easily be moved somewhere else.
  • Writing into a container's writable layer requires a storage driver to manage the filesystem.

There are two ways of storing files in Docker -

  1. Volumes, which are part of the host filesystem and managed by Docker.
  2. Bind mounts, which can be stored anywhere in the host filesystem.

One volume can be attached to multiple containers, which is how it can also be used as a file share between containers.

Volumes also support volume drivers, which allow you to store data on remote hosts or with cloud providers, among other possibilities.

Bind mounts, on the other hand, can be stored anywhere on the host system and can be modified by non-Docker processes as well. The files and directories being mounted do not need to exist on the host machine in advance; they can be created at run time. However, bind mounts have less functionality than volumes.

Options:

--mount		Attach a filesystem mount to the container

--volume, -v		Bind mount a volume

There are two options: -v (--volume) and --mount.

-v

  • The -v syntax combines all the options in one field, while the --mount syntax separates them.
  • -v consists of 3 fields in general, separated by a colon (:) -

  1. The name of the volume, if it is a named volume. If it is an anonymous volume, this field is omitted.
  2. The path where the volume is mounted in the container.
  3. An optional, comma-separated list of options.

Example -

docker run -d --name C1 -v myvol:/app nginx:latest
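To make the field splitting concrete, here is a Docker-free shell sketch (the spec string is illustrative) that breaks a -v argument into its three colon-separated parts, the same way the CLI interprets it:

```shell
# An example -v spec: named volume "myvol", mounted at /app, read-only.
spec="myvol:/app:ro"

# Split on the colons into the three fields described above.
IFS=: read -r name target opts <<EOF
$spec
EOF

echo "volume name : $name"
echo "mount point : $target"
echo "options     : $opts"
```

Running this prints the volume name myvol, the container path /app, and the option list ro.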

--mount

  • --mount is more verbose.
  • --mount consists of multiple key-value pairs, separated by commas, each consisting of a <key>=<value> tuple.

  1. The type of mount can be bind, volume, or tmpfs.
  2. The source of the mount. For named volumes, this is the name of the volume. For anonymous volumes, this field is omitted. It can be specified as source or src.
  3. The destination takes as its value the path where the file or directory is mounted in the container. It can be specified as destination, dst, or target.
  4. The readonly option, if present, causes the mount to be mounted into the container as read-only. It can be specified as readonly or ro.
  5. The volume-opt option, which can be specified more than once, takes a key-value pair consisting of the option name and its value.

Example -

docker run -itd --name C1 --mount source=myvol,target=/app nginx:latest
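In the same spirit, here is a Docker-free sketch of how the comma-separated <key>=<value> pairs decompose (the spec string is illustrative; a bare flag like readonly simply has no value part):

```shell
# An example --mount spec: a named volume, a target path, and a read-only flag.
spec="type=volume,source=myvol,target=/app,readonly"

# Walk the comma-separated pairs and pull each key and value apart.
result=""
old_ifs=$IFS
IFS=,
for pair in $spec; do
  key=${pair%%=*}    # text before the first "="
  value=${pair#*=}   # text after the first "=" (bare flags pass through unchanged)
  result="$result $key=$value"
done
IFS=$old_ifs
result=${result# }   # drop the leading space

echo "$result"
```

This prints `type=volume source=myvol target=/app readonly=readonly`, showing each tuple recovered from the single argument.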

Docker Network

To establish communication among Docker containers, and between containers and the Docker host, Docker networks are used. Docker's default network driver is bridge, and the default bridge network is created on the host with the interface name docker0.

There are several types of network drivers -

  1. bridge: The default network driver. If you don't specify a driver, this is the type of network you are creating. Bridge networks are commonly used when your application runs in a container that needs to communicate with other containers on the same host.
  2. host: Removes network isolation between the container and the Docker host; the container shares the host's network.
  3. overlay: Overlay networks connect multiple Docker daemons and enable Swarm services and containers to communicate across nodes.
  4. none: Completely isolates a container from the host and other containers. none is not available for Swarm services.
  5. macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses.

  • Macvlan networks are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.

  6. ipvlan: IPvlan networks give users total control over IPv4 and IPv6 addressing. The VLAN driver builds on top of that, giving operators complete control of layer 2 VLAN tagging and even IPvlan L3 routing for users interested in underlay network integration.
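As a small illustration of the bridge driver, here is a minimal docker-compose sketch (the service and network names are made up for the example) that puts two containers on a user-defined bridge network so they can reach each other by service name:

```yaml
version: '3.3'
services:
  web:
    image: nginx:latest
    networks:
      - appnet
  db:
    image: mysql
    networks:
      - appnet
networks:
  appnet:
    driver: bridge
```

On a user-defined bridge like this, the web container can reach the database simply by the hostname db, via Docker's embedded DNS.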


Tasks:

Task - 1

  • Create a multi-container docker-compose file that will bring UP and bring DOWN containers in a single shot (Example - create an application and a database container)

I created a Dockerfile built on an Apache image. It will host my webpage on Apache and expose port 80 on the container -

My webpage -

docker-compose.yml file to build the above image and then use it to create my web container and also the DB container from the public image -

version: '3.3'
services:
  db:
    image: mysql
    container_name: mysql_db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: pass
  web:
    image: apache
    build: ./webapp
    depends_on:
      - db
    restart: always
    ports:
      - "8090-8095:80"

  • Use the docker-compose up command with the -d flag to start a multi-container application in detached mode.

First, build the image to be used in our container.

docker-compose build        

The image has been built and tagged as apache:latest.

Now run -

docker-compose up -d        

Now check both the containers are running -

  • Use the docker-compose scale command to increase or decrease the number of replicas for a specific service. You can also add replicas in the deployment file for auto-scaling.

The docker-compose scale command is now deprecated.

Use the --scale flag with the docker-compose up command.

Syntax:

docker-compose up -d --scale service=num
docker-compose up -d --scale web=2        
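As mentioned above, the replica count can also live in the compose file itself. Here is a sketch based on the web service from this task (note that honoring deploy.replicas outside swarm mode depends on your Compose version):

```yaml
services:
  web:
    image: apache
    build: ./webapp
    deploy:
      replicas: 2
    ports:
      - "8090-8095:80"
```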

I had to remove the container_name and provide a range of host ports, since multiple containers cannot be created with the same name or mapped to the same host port.

Now, verify that it has created 2 web containers.

  • Use the docker-compose ps command to view the status of all containers, and docker-compose logs to view the logs of a specific service.

Use the docker-compose ps command to see all the containers running from your docker-compose file.

Syntax:

docker compose ps [OPTIONS] [SERVICE...]        

Use the docker-compose logs command to view the logs of a specific service.

Syntax:

docker compose logs [OPTIONS] [SERVICE...]        

  • Use the docker-compose down command to stop and remove all containers and networks associated with the application (add the -v flag to also remove the volumes)

Task - 2

  • Learn how to use Docker Volumes and Named Volumes to share files and directories between multiple containers.

Docker Volumes:

There are 2 ways of attaching a volume to a container: either map a directory from the host machine into the container using the -v flag, or create a volume and attach it to an absolute path inside the container.

If the directory you have mapped to already has some data, it will be copied into the volume, and if that volume is later attached to another container, that data will be shared with it. That is how a shared volume is created.

If the mapped path does not exist, Docker will create it.

Example:

To create the volume, use-

docker volume create --name dockervol        

To map the volume, use-

docker run -itd -v dockervol:/var/www/html/ --name=C1 nginx        

Now let's add some files to the /var/www/html/ path in the C1 container.

I will create a new container C2 with the same volume; the data added above should be reflected in container C2.

  • Create two or more containers that read and write data to the same volume using the docker run --mount command.

  • Verify that the data is the same in all containers by using the docker exec command to run commands inside each container.

We have verified that in the above screenshots.

docker exec -it C3 bash        

Here, the -t flag allocates a terminal (a pseudo-TTY) for us, and the -i flag keeps STDIN open so we can interact with the container through that terminal.

  • Use the docker volume ls command to list all volumes and the docker volume rm command to remove the volume when you're done.

You can also use the docker-compose.yml file to create the volume and attach it to the container.

version : "3.3
services:
? web:
? ? image: varsha0108/local_django:latest
? ? deploy:
? ? ? ? replicas: 2
? ? ports:
? ? ? - "8001-8005:8001"
? ? volumes:
? ? ? - my_django_volume:/app
? db:
? ? image: mysql
? ? ports:
? ? ? - "3306:3306"
? ? environment:
? ? ? - "MYSQL_ROOT_PASSWORD=test@123"
volumes:
? my_django_volume:
? ? external: true"        

Note:

If you have multiple containers and you want to remove all of them in one go, then use the below command -

docker rm -f $(docker ps -a -q)        

The -f flag is used when containers are running; it force-removes them. If they are all stopped, you can drop -f.

docker ps -a -q lists all the container IDs.
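The $(...) part is ordinary shell command substitution: the inner command's output is pasted in as arguments to the outer command. A Docker-free sketch of the same pattern (printf stands in for docker ps -a -q, and the IDs are made up):

```shell
# Fake container IDs, one per line, as `docker ps -a -q` would print them.
ids=$(printf '%s\n' 1a2b3c 4d5e6f)

# Unquoted, $ids expands to one argument per ID,
# exactly how `docker rm -f $(docker ps -a -q)` receives them.
set -- $ids
echo "removing $# containers: $*"
```

Here the outer command sees two separate arguments, one per container ID.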

Thank you for reading!

Happy Learning!
