Docker Decoded: A Beginner's Guide to Container Magic

What is Docker?

Docker is an open-source platform designed to create, deploy, and run applications.

Docker runs applications in containers that share the host operating system's kernel.


Docker Architecture


Docker follows a client-server architecture, with three main components: the Docker Client, the Docker Host, and the Docker Registry

1. Docker Client

  • The Docker client uses commands and REST APIs to communicate with the Docker daemon (the server)
  • When you run a docker command in the client terminal, the client sends it to the Docker daemon
  • The Docker daemon receives these instructions from the client as CLI commands and REST API requests
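
To see this client/daemon split in action, you can send a request straight to the daemon's REST API yourself. A minimal sketch, assuming Docker is installed and the daemon is listening on its default Unix socket:

```shell
# List running containers via the CLI (the client sends this to the daemon)
docker ps

# The same request, sent directly to the daemon's REST API over its Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

Both commands return the same information; the CLI is simply a friendlier front end to the daemon's API.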

2. Docker Host

  • The Docker Host provides the environment to run applications. It contains the Docker daemon, images, containers, networks, and storage

3. Docker Registry

  • The Docker Registry stores and manages Docker images
  • There are two types of registries in Docker
  • Public Registry - the public registry is also known as Docker Hub

  • Private Registry - used to share images within an enterprise

4. Docker objects

  • Docker images and containers are known as Docker objects


What is a Docker Image?

  • An image is a read-only template with instructions for creating a Docker container
  • Often, an image is based on another image, with some additional customization
  • To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it
  • Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only the layers that have changed are rebuilt
  • This is part of what makes images so lightweight, small, and fast compared to other virtualization technologies
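
As an illustration of layering, each instruction below produces its own layer; when only app.py changes, the earlier layers are reused from the build cache on rebuild (a sketch with a hypothetical app.py):

```dockerfile
# Base image layer
FROM python:3.8-slim
# Dependency layer; cached across rebuilds while dependencies are unchanged
RUN pip install flask
# Code layer; only this layer (and later ones) is rebuilt when app.py changes
COPY app.py /app/app.py
# Metadata layer defining the start command
CMD ["python", "/app/app.py"]
```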


What is a Container?

  • A container is a way to package an application with all its necessary dependencies and configuration
  • Portable artifact, easily shared and moved around
  • Makes development and deployment more efficient


Where do containers live?

  • Container Repository
  • Private repositories
  • Public repository for Docker (Docker Hub)


How containers improved the application development and deployment process

Before Containers

  • The installation process was different in each OS environment
  • Setup involved many steps, and something could go wrong at any of them
  • Configuration was needed on the server, which led to dependency version conflicts
  • Deployment followed textual guides, which led to misunderstandings


After Containers

  • Own isolated environment
  • Packaged with all needed configuration
  • One command to install the app
  • Run two different versions of the same app side by side
  • Developers and operations work together to package application in a container
  • No environmental configuration needed on server (except Docker runtime)
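
For example, because each container is isolated, two versions of the same image can run side by side on one host, mapped to different ports (a sketch using the public nginx image; assumes Docker is installed):

```shell
docker run -d -p 8081:80 nginx:1.24   # version 1, served on host port 8081
docker run -d -p 8082:80 nginx:1.25   # version 2, served on host port 8082
```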

Advantages of Docker

  • It starts containers in seconds instead of minutes.
  • It uses less memory.
  • It provides lightweight virtualization.
  • It does not require a full operating system to run applications.
  • It packages an application's dependencies with it, reducing the risk of environment mismatches.
  • Docker allows you to use a remote repository to share your container with others.
  • It provides a consistent environment for continuous deployment and testing.

Disadvantages of Docker

  • It increases complexity due to an additional layer.
  • It is difficult to manage a large number of containers in Docker.
  • Some features, such as container self-registration, container self-inspection, and copying files from the host to a container, are missing or limited in Docker.
  • Docker is not a good solution for applications that require a rich graphical interface.
  • Docker does not provide cross-platform compatibility: an application designed to run in a Docker container on Windows cannot run on Linux, and vice versa.

Basic Docker Commands

  • docker pull - This command is used to pull a Docker image from a registry
  • docker run - This command is used to run a container from a Docker image
  • docker start - This command starts one or more stopped containers
  • docker stop - This command stops one or more running containers
  • docker images - This command lists the Docker images available locally on your machine
  • -d - This is an option used with docker run to run a container in detached mode, meaning it runs in the background
  • -p - This is an option used with docker run to map ports between the container and the host
  • docker ps - This command displays a list of running containers
  • -a - This is an option used with commands like docker ps to display all containers, including those that are stopped
  • docker logs - This command is used to view the logs of a running container
  • docker exec -it <container ID/name> /bin/bash - This command opens an interactive shell (here /bin/bash) inside a running container
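
Put together, a typical session with these commands might look like this (a sketch assuming Docker is installed; nginx is just an example image):

```shell
docker pull nginx                           # fetch the image from Docker Hub
docker run -d -p 8080:80 --name web nginx   # run detached, map host port 8080 to 80
docker ps                                   # list running containers
docker logs web                             # view the container's logs
docker exec -it web /bin/bash               # open a shell inside the container
docker stop web                             # stop the container
docker ps -a                                # list all containers, including stopped ones
```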

Docker Network


  • Docker provides a networking model that allows containers to communicate with each other and with the outside world
  • Docker networking enables different containers to be connected and communicate in various ways
  • Docker networks are used to provide complete isolation to docker containers


Network Drivers

  • Networking drivers are components responsible for managing the communication and connectivity between containers
  • Docker provides a modular and extensible networking architecture that allows users to choose or define different network drivers based on their specific requirements
  • Docker provides the following network drivers:
  • Bridge - the default network driver for containers. It is used when multiple containers communicate on the same Docker host
  • Host - used when network isolation between the container and the host is not needed.
  • None - disables all networking.
  • Overlay - allows Swarm services to communicate with each other. It enables containers to run on different Docker hosts.
  • Macvlan - used when you want to assign MAC addresses to containers.
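
As a quick illustration, a user-defined bridge network lets containers reach each other by name (a sketch; assumes Docker is installed, and myapp is a hypothetical application image):

```shell
docker network create mynet                       # create a user-defined bridge network
docker run -d --name db --network mynet redis     # attach a database container to it
docker run -d --name app --network mynet myapp    # "app" can now reach "db" by name
docker network ls                                 # list available networks
```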

Dockerfile

  • A Dockerfile is a text document that contains commands that are used to assemble an image
  • Docker builds images automatically by reading the instructions from the Dockerfile
  • The docker build command is used to build an image from the Dockerfile
  • You can use the -f flag with docker build to point to a Dockerfile anywhere in your file system (docker build -f /path/to/a/Dockerfile)


How to build your own Docker image?

Step 1: Write Your Application Code

Start by preparing the application code you want to containerize. For example, create a simple web application using a framework like Flask. Place your code in a directory, say myapp.
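
For instance, myapp/app.py could be a minimal Flask application (a sketch; assumes Flask is installed inside the image, and note that app.run must bind to 0.0.0.0 so the server is reachable through the container's mapped port):

```python
# myapp/app.py - minimal Flask app used in this build example
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from inside a Docker container!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the app is reachable from outside the container
    app.run(host="0.0.0.0", port=5000)
```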


Step 2: Create a Dockerfile

Craft a Dockerfile, a script that contains instructions for building your Docker image. In your project directory, create a file named Dockerfile without any file extension.

# Use an official base image
FROM python:3.8-slim

# Set the working directory
WORKDIR /app

# Copy the local code and the dependency list into the container
COPY ./myapp /app
COPY requirements.txt /app

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Expose the application's port
EXPOSE 5000

# Define the command to run your application
CMD ["python", "app.py"]


Step 3: Create a requirements.txt File

If your application has dependencies, create a requirements.txt file listing them. Place this file in the same directory as your Dockerfile.

Flask==2.0.1        


Step 4: Build the Docker Image

Open a terminal in the directory containing your Dockerfile. Use the following command to build your Docker image, tagging it with a name and version.

docker build -t mydockerapp:v1 .


Step 5: Verify the Image

Confirm that your Docker image is created successfully by listing the available images.

docker images        


Step 6: Run a Container

Now, run a container using the image you just built.

docker run -p 5000:5000 mydockerapp:v1        


Step 7: Access Your Application

Visit http://localhost:5000 in your web browser to see your application running inside a Docker container.


How to push a Docker image to a private repository?

Step 1: Set Up AWS CLI

Ensure that you have the AWS Command Line Interface (CLI) installed and configured with the necessary credentials. The aws configure command prompts you to enter your AWS access key, secret key, default region, and output format.

aws configure        


Step 2: Create an ECR Repository

Create an Elastic Container Registry (ECR) repository on AWS using either the AWS Management Console or the CLI. Replace <your-repository-name> with your desired repository name.

aws ecr create-repository --repository-name <your-repository-name>        


Step 3: Login to ECR Registry

Authenticate Docker to your ECR registry using the aws ecr get-login-password command. This generates a token for Docker to log in to the ECR registry securely.

aws ecr get-login-password --region <your-region> | docker login --username AWS --password-stdin <your-account-id>.dkr.ecr.<your-region>.amazonaws.com        


Step 4: Build Docker Image

Build your Docker image using the docker build command. This command takes the Dockerfile in the current directory (denoted by .) and tags the image with the name mydockerrepo:latest.

docker build -t mydockerrepo:latest .        


Step 5: Tag the Image

Tag the Docker image with the ECR repository URI. Replace <your-account-id> and <your-region> with your AWS account ID and desired AWS region.

docker tag mydockerrepo:latest <your-account-id>.dkr.ecr.<your-region>.amazonaws.com/mydockerrepo:latest        


Step 6: Push Image to ECR

Push the Docker image to your ECR repository. This command uploads the image to your private AWS ECR registry.

docker push <your-account-id>.dkr.ecr.<your-region>.amazonaws.com/mydockerrepo:latest        


Step 7: Verify Image Push

Verify the successful push by checking your ECR repository. This command describes the images in the specified repository.

aws ecr describe-images --repository-name mydockerrepo        


Docker volume

A Docker volume is a persistent data storage mechanism used to manage and store data in Docker containers. Volumes allow data to persist beyond the lifecycle of a single container and provide a way to share and manage data between containers and between the host machine and containers.


Key aspects of Docker volumes

  • Data persistence - Docker volumes enable persistent storage of data, allowing it to survive the container lifecycle. This is essential for scenarios where data needs to persist even if the container is stopped, removed, or replaced
  • Shared storage - volumes allow multiple containers to share and access the same data. This is useful for scenarios where multiple containers need to read or write to a common dataset
  • Separation of concerns - volumes separate the concerns of application logic (contained in the container) and data storage. This separation makes it easier to manage and update containers without affecting the underlying data.
  • Mount points - volumes are mounted into containers as directories, which act as mount points. Data written to these directories inside the container is stored on the volume.
  • Volume types - Docker supports various volume types, including named volumes, host-mounted volumes, and anonymous volumes
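
A typical volume lifecycle with the Docker CLI looks like this (a sketch; assumes Docker is installed, and myapp is a hypothetical image):

```shell
docker volume create mydata                             # create a named volume
docker run -d -v mydata:/var/lib/data --name app1 myapp # mount it into a container
docker volume inspect mydata                            # show its mount point and metadata
docker volume ls                                        # list volumes
docker volume rm mydata                                 # remove it once no container uses it
```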


Different types of Docker volumes

Named volumes

  • Named volumes are explicitly created and managed by Docker. They have a user-friendly name assigned by the user or Docker itself

Creation code - docker volume create mynamedvolume          
Usage in container - docker run -v mynamedvolume:/app mycontainer        

  • Advantages of named volumes include user-friendly naming, explicit creation and management, persistence, and ease of use

Host mounted volumes

  • Host-mounted volumes reference a directory on the host machine. Changes made in the container are reflected on the host, and vice versa

Usage in container - docker run -v /host/path:/container/path mycontainer        

  • Advantages of host-mounted volumes include direct access to host files, usefulness during development (changes in code are immediately reflected in the container), and ease of understanding and implementation.


Anonymous Volumes

  • Anonymous volumes are automatically created by Docker and are typically used for temporary or cache storage. They are not given user-friendly names and are difficult to manage manually

Usage in container - docker run -v /container/path mycontainer        

  • Advantages of anonymous volumes include automatic creation and management, suitability for temporary data, and less required user intervention


Docker Security

There are four major areas to consider when reviewing Docker security:

The intrinsic security of the kernel and its support for namespaces and cgroups

  • Kernel Security: Docker relies on the underlying Linux kernel for security. The Linux kernel provides robust security features, including user and process isolation.
  • Namespaces: Docker utilizes Linux namespaces to isolate various aspects of the system, such as process IDs, network, and file systems. Each container operates in its own namespace, preventing interference with other containers.
  • cgroups (Control Groups): cgroups enable resource limitation and prioritization. Docker leverages cgroups to control and allocate system resources like CPU, memory, and I/O for each container, ensuring fair resource usage.
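
Docker exposes these cgroup controls directly on docker run; for example, the flags below cap a container's memory and CPU share (a sketch; assumes Docker is installed):

```shell
# Limit the container to 256 MB of RAM and half a CPU core
docker run -d --memory=256m --cpus=0.5 --name limited nginx
```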


The attack surface of the Docker daemon itself

  • Daemon Security: The Docker daemon is a critical component that manages containers. Reducing the attack surface of the Docker daemon is crucial for overall security.
  • Restricting Access: Limit access to the Docker daemon to authorized users. Use RBAC mechanisms and enforce secure communication with the daemon, such as enabling TLS.
  • Regular Updates: Keep the Docker daemon up to date with the latest security patches to mitigate vulnerabilities.


Loopholes in the container configuration profile, either by default, or when customized by users

  • Default Profiles: Docker uses default container configuration profiles, and these should be secure by default. However, it's essential to review and understand these defaults to ensure they align with security best practices.
  • User Customization: When users customize container configurations, potential security loopholes may arise. Educate users on secure configuration practices and regularly audit container configurations for vulnerabilities.


The "hardening" security features of the kernel and how they interact with containers

  • Kernel Hardening: Linux kernels can be configured with security features like AppArmor or SELinux, enhancing overall system security.
  • AppArmor and SELinux: Docker supports security profiles provided by AppArmor and SELinux, which enforce fine-grained access controls on containers. These profiles restrict container capabilities and actions, enhancing security.
  • Seccomp Profiles: Docker allows the use of Seccomp profiles to restrict the system calls available to a container, reducing the risk of privilege escalation.
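
These hardening options can be combined on the command line; the run below drops all Linux capabilities, makes the container's root filesystem read-only, and applies a custom seccomp profile (a sketch; assumes Docker is installed and profile.json is a seccomp profile you provide):

```shell
# Drop all capabilities, mount the root filesystem read-only,
# and restrict the available system calls with a custom seccomp profile
docker run -d \
  --cap-drop ALL \
  --read-only \
  --security-opt seccomp=profile.json \
  nginx
```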

