Docker in action book summary

This is my first attempt to summarize a book. I chose this one because I found it insightful, and I think it will interest people in the industry who use Docker to containerize their applications. I have used Docker myself, helped along by the blogs and stories of wonderful people on the internet, but I never felt that I fully grasped its whole functionality. This book helped me dive deep into Docker's inner workings and exposed me to different use cases.

Ch 5

This chapter is mainly about the networking functionality of Docker: how containers on a network communicate with each other. So let's start with an introduction to networking!

You can think of a network interface as a mailbox with an address called an "IP address". A computer typically has two interfaces: the ethernet interface, which you are probably most familiar with and which handles connections to other computers, and the loopback interface, which handles communication between programs on the same computer. We can't skip ports either: a port is just a number, part of the transmission control protocol (TCP) or user datagram protocol (UDP), that identifies the sending or receiving program.

Docker container networking

A container attached to a docker network will get a unique IP address that is reachable by any container on the same network.

If you write in the terminal

docker network ls

this will list the three networks that Docker creates by default: "bridge", which handles connections between containers; "host", which attaches a container directly to its host's network stack; and "none", the null network — any container connected to it has no connection outside itself; in other words, nothing outside can connect to the container and the container can reach nothing outside.

you can create a network using

docker network create \
  --driver bridge \
  --label project=dockerinaction \
  --label chapter=5 \
  --attachable \
  --scope local \
  --subnet 10.0.42.0/24 \
  --ip-range 10.0.42.128/25 \
  user-network

This creates a local network named user-network using the bridge driver. Marking the new network as attachable allows you to attach and detach containers to it at any time. Finally, a custom subnet, 10.0.42.0/24, was defined together with an assignable address range taken from the upper half of the last octet (10.0.42.128/25). This means that as you add containers to this network, they will receive IP addresses in the range from 10.0.42.128 to 10.0.42.255.
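To confirm the subnet and assignable range, you can inspect the network (a quick check, assuming the network was created with the command above):

```shell
# Show the configuration of the user-defined network;
# the IPAM section lists the subnet and IP range
docker network inspect user-network
```

The output's IPAM config should show the 10.0.42.0/24 subnet and the 10.0.42.128/25 address range.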

Now let's attach a container to this network and see what IP it has,

docker run -it \
  --network user-network \
  --name network-explorer \
  alpine:3.8 \
    sh

This command creates and runs an Alpine container, attaches it to the network you created earlier, and opens an interactive sh shell. Inside that shell, run the following command to list the IP addresses available in the container:

ip -f inet -4 -o addr

The results should look something like this:

1: lo    inet 127.0.0.1/8 scope host lo\ ...
18: eth0    inet 10.0.42.129/24 brd 10.0.42.255 scope global eth0\ ...

As you can see, it has a loopback IP and an ethernet IP, and the ethernet IP falls inside our network's assignable range. You can add as many containers as you wish to the network, and they will all be connected to each other.
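To see that connectivity in action, you can start a second container on the same network and ping the first one by name (a sketch; it assumes the network-explorer container from above is still running):

```shell
# Start a second container on user-network and ping the first by name;
# user-defined bridge networks resolve container names as hostnames
docker run --rm \
  --network user-network \
  alpine:3.8 \
  ping -c 2 network-explorer
```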

If you want to let your container access the host's network services directly, you can attach the container to the host network:

docker run --rm \
    --network host \
    alpine:3.8 ip -o addr

As you can see, we used the --rm flag to remove the container when it stops — we are just experimenting, so it's not something we want lingering on our computers — and "ip -o addr" lists the IP addresses visible to the container, which in this case are the host's own interfaces.

To manage connections between a container and its host, you use port publishing. So when you run,

docker run --rm \
  -p 8088:8080/udp \
  alpine:3.8 echo "host UDP 8088 -> container UDP 8080"

traffic arriving at host UDP port 8088 will be forwarded to container UDP port 8080.
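Since the echo container exits immediately, nothing actually listens on the published port. A slightly more realistic sketch pairs the publish with a netcat listener (nc ships with alpine):

```shell
# Publish host UDP 8088 to a netcat listener on container UDP 8080
docker run --rm \
  -p 8088:8080/udp \
  alpine:3.8 nc -u -l -p 8080
```

From the host, `nc -u localhost 8088` would then deliver datagrams into the container.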

However, if we attach our container to the none network, as follows:

docker run --rm \
    --network none \
    alpine:3.8 ip -o addr

This means the following:

  • The container has only a loopback interface
  • Any program running in the container can connect to that loopback interface
  • Nothing outside the container can connect to that interface
  • No program running inside the container can reach anything outside the container

Now you might ask: why would I attach my container to the none network and create what is called a closed container? It's for programs that don't require any network connection — for example, programs that generate passwords. Such programs should have no connections at all to protect them from tampering.

DNS is a protocol for mapping hostnames to IP addresses

Can you configure DNS for a Docker container?

It seems that yes, you can. Let's take an example. If you type,

docker run --rm \
    --hostname barker \     
    alpine:3.8 \
    nslookup barker  

You started a container and set its hostname to barker, which maps that name to the container's bridge-network IP address. So the output of this command, which looks up "barker", shows the IP of the DNS server that performed the mapping and then the container's bridge-network IP. This is useful for containers that need to look up their own names; if you use an external DNS server, you can share those names across the network.

Server:    10.0.2.3
Address 1: 10.0.2.3

Name:      barker
Address 1: 172.17.0.22 barker
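Beyond --hostname, you can also point a container at an explicit DNS server with the --dns flag (8.8.8.8 here is just an example resolver):

```shell
# Resolve names through a specific DNS server instead of Docker's default
docker run --rm \
    --dns 8.8.8.8 \
    alpine:3.8 \
    nslookup docker.com
```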

Resource allocation optimization with Docker (Ch 6)

Physical system resources such as memory and time on the CPU are scarce, and sometimes they need to be supervised. Docker provides a way to partition them among containers, as follows:

docker container run -d --name ch6_mariadb \
    --memory 256m \                             
    --cpu-shares 1024 \
    --cap-drop net_raw \
    -e MYSQL_ROOT_PASSWORD=test \
    mariadb:5.5

The --memory flag limits the container's memory usage to 256 megabytes.

CPU time is as scarce as memory. With the Docker command line you can assign a share of CPU resources to each container. So, for example, when you type,

docker container run -d -P --name ch6_wordpress \
--memory 512m \
--cpu-shares 512 \           
--cap-drop net_raw \
--link ch6_mariadb:mysql \
-e WORDPRESS_DB_PASSWORD=test \
wordpress:5.0.0-php7.2-apache

The most important setting here is --cpu-shares. We gave the MariaDB container a CPU share of 1024 and the WordPress container 512, which means that under contention MariaDB gets 2 CPU cycles for every 1 cycle WordPress gets.

Another way to manage how much CPU a container may consume:

docker container run -d -P --name ch6_wordpress \
--memory 512m \
--cpus 0.75 \                
--cap-drop net_raw \
--link ch6_mariadb:mysql \
-e WORDPRESS_DB_PASSWORD=test \
wordpress:5.0.0-php7.2-apache

See '--cpus 0.75': this means the container may consume at most 75% of one CPU core.

Access to devices

If, for example, you are running a computer vision project, you might want your container to have access to the webcam of the host device:

docker container run -it --rm \
    --device /dev/video0:/dev/video0 \     
    ubuntu:16.04 ls -al /dev

As you can see, there's a mapping between the device file on the host operating system and its location inside the new container.

Users in Docker

To find the default user on a specific container

docker container run --rm --entrypoint "" busybox:1.29 whoami

or

docker container run --rm --entrypoint "" busybox:1.29 id 

Run a container as the nobody user:

docker container run --rm \
    --user nobody \           
    busybox:1.29 id           

A Docker image is just a stack of layers, so if you want to see those layers, run

docker image history name-of-the-image

Building images with dockerfiles(Ch7-8)

A Dockerfile is a text file that contains instructions for building an image. The Docker image builder executes the Dockerfile from top to bottom.

After creating your Dockerfile — for example:

# An example Dockerfile for installing Git on Ubuntu
FROM ubuntu:latest
LABEL maintainer="[email protected]"
RUN apt-get update && apt-get install -y git
ENTRYPOINT ["git"]

build the image:

docker image build --tag ubuntu-git:auto .

then run a container from it:

docker container run --rm ubuntu-git:auto

If you run the build again, you will notice that some steps are cached; you can disable the cache with the --no-cache flag.

When building an image, Docker sends the full build context to the daemon, and instructions like COPY can pull any of it into the image. However, it's not always appropriate for some files to end up there — your virtual environment, for example — so such files should be listed in a ".dockerignore" file.

Dockerfiles can be named anything; the default is "Dockerfile". If you name yours, say, "apple.df", you must point the build command at it with the -f flag (the "." at the end remains the build context).
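For example (a sketch; the tag and file name are hypothetical):

```shell
# Build from a custom-named Dockerfile; "." is still the build context
docker image build --tag ubuntu-git:auto -f apple.df .
```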

Both ENTRYPOINT and CMD provide the startup command for a container. The difference is that CMD supplies default arguments to ENTRYPOINT; when no ENTRYPOINT is set, a shell-form CMD is executed with "/bin/sh -c". Remember that "docker inspect name" is used to view the metadata of either an image or a container.
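A minimal sketch of how the two interact (image contents are my own example, not from the book):

```dockerfile
FROM alpine:3.8
# ENTRYPOINT is the fixed executable
ENTRYPOINT ["echo"]
# CMD supplies default arguments; any arguments passed to
# `docker run image ...` replace the CMD, not the ENTRYPOINT
CMD ["hello from the default CMD"]
```

Running the image with no arguments echoes the CMD text; `docker run <image> goodbye` echoes "goodbye" instead.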

ADD behaves similarly to COPY, with two differences: ADD fetches remote source files if a URL is specified, and it extracts the files of any local source determined to be an archive file.
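A sketch of the difference (the archive and URL are hypothetical):

```dockerfile
FROM alpine:3.8
# COPY places the archive in the image as-is
COPY site.tar.gz /tmp/site.tar.gz
# ADD extracts a local archive into the target directory
ADD site.tar.gz /var/www/
# ADD can also fetch a remote file by URL (URL downloads are not extracted)
ADD https://example.com/config.json /etc/app/config.json
```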

Another interesting instruction is ONBUILD. It determines what should happen when another image is built FROM the current one — i.e., it injects build-time behavior into downstream images. In other words, when you define an ONBUILD in a Dockerfile as follows,

FROM busybox:latest
WORKDIR /app
RUN touch /app/base-evidence
ONBUILD RUN ls -al /app

and then build the image as follows :

docker image build -t dockerinaction/ch8_onbuild -f base.df .

Then creating an image based on the previous image as follows:

FROM dockerinaction/ch8_onbuild
RUN touch downstream-evidence
RUN ls -al .

then build it

docker image build -t dockerinaction/ch8_onbuild_down -f downstream.df .

output should look like this:

Sending build context to Docker daemon  3.072kB
Step 1/3 : FROM dockerinaction/ch8_onbuild
# Executing 1 build trigger
 ---> Running in 591f13f7a0e7
total 8
drwxr-xr-x    1 root     root          4096 Jun 18 03:12 .
drwxr-xr-x    1 root     root          4096 Jun 18 03:13 ..
-rw-r--r--    1 root     root             0 Jun 18 03:12 base-evidence
Removing intermediate container 591f13f7a0e7
 ---> 5b434b4be9d8
Step 2/3 : RUN touch downstream-evidence
 ---> Running in a42c0044d14d
Removing intermediate container a42c0044d14d
 ---> e48a5ea7b66f
Step 3/3 : RUN ls -al .
 ---> Running in 7fc9c2d3b3a2
total 8
drwxr-xr-x    1 root     root          4096 Jun 18 03:13 .
drwxr-xr-x    1 root     root          4096 Jun 18 03:13 ..
-rw-r--r--    1 root     root             0 Jun 18 03:12 base-evidence
-rw-r--r--    1 root     root             0 Jun 18 03:13 downstream-evidence
Removing intermediate container 7fc9c2d3b3a2
 ---> 46955a546cd3
Successfully built 46955a546cd3
Successfully tagged dockerinaction/ch8_onbuild_down:latest

you can see that the ONBUILD instruction from the base image executes during the build of the downstream image.

Init process in docker

UNIX-based computers usually start an initialization (init) process first; the init process is responsible for starting all other system services. You can run an init process inside your Docker container using

docker container run -it --init alpine:3.6 nc -l -p 3000

Now if you inspect the processes running in your container with ps -ef, you will see that Docker ran "/dev/init -- nc -l -p 3000" inside the container instead of just nc. The interesting part is that the init process supervises the others: it is useful for cleaning up orphaned (zombie) processes and for monitoring and restarting failed ones.

Health checks

Another way of monitoring the process in your container is a health check — for example, including a HEALTHCHECK instruction in an NGINX server image:

FROM nginx:1.13-alpine

HEALTHCHECK --interval=5s --retries=2 \
  CMD nc -vz -w 2 localhost 80 || exit 1

The command’s exit status will be used to determine the container’s health. Docker has defined the following exit statuses:

  • 0: success— The container is healthy and ready for use.
  • 1: unhealthy— The container is not working correctly.
  • 2: reserved— Do not use this exit code.

Now let's run this container and see what the health check brings us:

docker image build -t dockerinaction/healthcheck .
docker container run --name healthcheck_ex -d dockerinaction/healthcheck

If you inspect the running containers with "docker ps", you'll see the health state in the STATUS column; for a tidier message, you can provide a format:

docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'

You should now see a message stating that your container is "healthy". By default, health checks run every 30 seconds.
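You can also read the health status directly from the container metadata (a quick check, assuming the healthcheck_ex container from above is running):

```shell
# Print just the health status field of the running container
docker inspect --format '{{.State.Health.Status}}' healthcheck_ex
```

It reports "starting" until the first check completes, then "healthy" or "unhealthy".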

Linting Dockerfile

Linters can parse a Dockerfile and flag common mistakes. hadolint, for example, reads the Dockerfile from standard input:

    docker container run --rm -i hadolint/hadolint:v1.15.0 < Dockerfile

Multistage Build

A multistage Dockerfile is a Dockerfile with multiple FROM instructions. Each FROM instruction marks a new build stage, and a later stage can be a downstream of an earlier one (a stage that depends on it). Let's take an example:

#################################################
# Define a Builder stage and build app inside it
FROM golang:1-alpine as builder

# Install CA Certificates
RUN apk update && apk add ca-certificates

# Copy source into Builder
ENV HTTP_CLIENT_SRC=$GOPATH/src/dia/http-client/
COPY . $HTTP_CLIENT_SRC
WORKDIR $HTTP_CLIENT_SRC

# Build HTTP Client
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    go build -v -o /go/bin/http-client

#################################################
# Define a stage to build a runtime image.
FROM scratch as runtime
ENV PATH="/bin"
# Copy CA certificates and application binary from builder stage
COPY --from=builder \
    /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=builder /go/bin/http-client /http-client
ENTRYPOINT ["/http-client"]

As you can see, this build has two stages. The first is the builder stage. The second starts FROM scratch — when you do that, you get an image with no filesystem at all, so the stage contains only the files you copy into it. That's exactly what happens here: we copy certain files from the builder stage and define an ENTRYPOINT for the container.
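Building this file is an ordinary docker build; only the last stage ends up in the final image (the tags below are my own examples):

```shell
# Build the multistage Dockerfile; the result contains only the runtime stage
docker image build -t dockerinaction/http-client .

# Optionally stop at a named stage, e.g. to debug the builder
docker image build --target builder -t dockerinaction/http-client:build .
```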

Image pipelines

This chapter talks about design patterns for building images. There are three well-known patterns:

  1. All in one - A single image contains both the build tools and the runtime.
  2. Build plus runtime - You use a build image with a separate, slimmer runtime image to build a containerized application.
  3. Build plus multiple runtimes - You use a slim runtime image with variations for debugging and other supplemental use cases in a multi-stage build.

All-in-one-images

This type of image includes both the build components and the runtime components — packages, dependencies, and binary tools. For example:

FROM maven:3.6-jdk-11

ENV WORKDIR=/project
RUN mkdir -p ${WORKDIR}
COPY . ${WORKDIR}
WORKDIR ${WORKDIR}
RUN mvn -f pom.xml clean verify
RUN cp ${WORKDIR}/target/ch10-0.1.0.jar /app.jar

ENTRYPOINT ["java","-jar","/app.jar"]


This approach has downsides: the image includes more tools than needed, so it is oversized and more vulnerable to attackers.

Separate build and runtimes images

By creating separate build and runtime images, you can build the application in one container, share the build artifacts with the host through a volume, and then copy those artifacts into another image — decreasing your final image size tremendously.

You can build the application with a Maven container:

docker container run -it --rm \
  -v "$(pwd)":/project/ \
  -w /project/ \
  maven:3.6-jdk-11 \
  mvn clean verify

Maven compiles and packages the application artifact into the project's target directory. Then create the Dockerfile for the runtime image:

FROM openjdk:11-jdk-slim

COPY target/ch10-0.1.0.jar /app.jar

ENTRYPOINT ["java","-jar","/app.jar"]

The runtime image just copies the artifact produced by the build container and defines an ENTRYPOINT.

This approach saves storage and decreases the attack surface.

Variations of runtime image via multi-stage builds

You might need to support different runtime variations of your image — for example, a "debugging runtime" and a "production runtime".

In this pattern, there is an optional final stage based on the "app-image" stage: either default or debug. The book gives an example of what the Dockerfile for such a scenario looks like:

# The app-image build target defines the application image
FROM openjdk:11-jdk-slim as app-image                        

ARG BUILD_ID=unknown
ARG BUILD_DATE=unknown
ARG VCS_REF=unknown

LABEL org.label-schema.version="${BUILD_ID}" \
      org.label-schema.build-date="${BUILD_DATE}" \
      org.label-schema.vcs-ref="${VCS_REF}" \
      org.label-schema.name="ch10" \
      org.label-schema.schema-version="1.0rc1"

COPY multi-stage-runtime.df /Dockerfile

COPY target/ch10-0.1.0.jar /app.jar

ENTRYPOINT ["java","-jar","/app.jar"]

FROM app-image as app-image-debug
#COPY needed debugging tools into image
ENTRYPOINT ["sh"]

FROM app-image as default          

Some takeaways from this Dockerfile: it's always important to name a build stage ("FROM app-image as default") because you can use the name later in your build command as a build target. As you can see, for debugging purposes we simply made "sh" the ENTRYPOINT of the debug stage, and the last line ensures that the stage named "default" is built when no target is specified. Now let's see how we would build this image in debugging mode:

docker image build -t dockerinaction/ch10:multi-stage-runtime-debug \
    -f multi-stage-runtime.df \
    --target=app-image-debug .

Ch 11 begins by explaining what a service could mean. It states that any process or functionality that happens to be available over the network is called a service. Docker's implementation of this idea is swarm; with swarm you can create services that easily interact with one another, as follows:

docker swarm init                     
docker service create \               
    --publish 8080:80 \
    --name hello-world \
   dockerinaction/ch11_service_hw:v1

Docker services are available only when Docker is running in swarm mode. Services happen to have a set of important properties, so let's discover them together, shall we?

Automated resurrection and replication

If you suddenly remove the container backing a service after creating it, it will indeed be removed — but if you check again after a few seconds, you will notice that your service has come back alive. That resurrection is a crucial characteristic. You can see which services you have running with

"docker service ls", and remove a specific container with "docker rm -f <container-name>".

Now, if you remove your container and run "docker service ps hello-world" a few seconds later, you will see two instances of your image: one stopped and another brought back to life. It's just as easy to make replicas of your application by running

docker service scale hello-world=3
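You can verify the scaling by listing the service's tasks (a quick check against the service created above):

```shell
# List the tasks (replicas) of the hello-world service
docker service ps hello-world
```

You should see three running tasks.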

Automated rollout

Consider this command to update the hello-world service created earlier in this chapter: 

docker service update \
    --image dockerinaction/ch11_service_hw:v2 \
    --update-order stop-first \
    --update-parallelism 1 \
    --update-delay 30s \
hello-world

This tells Docker to update the hello-world service to the image dockerinaction/ch11_service_hw:v2, stopping each replica before starting its replacement, updating one replica at a time, and waiting 30 seconds between updates. But what if the update fails for some reason? You remember from earlier that the service keeps trying to restart itself, but if the image is broken, the container will never start. We can fix that with:

docker service update \
    --rollback \
    hello-world

With this, the service simply rolls back to the most recent stable version. You can also make rollback automatic:

docker service update \
    --update-failure-action rollback \
    --update-max-failure-ratio 0.6 \
    --image dockerinaction/ch11_service_hw:start-failure \
    hello-world

Suppose we are running around 100 replicas of a service. We might want to tell Docker that we will be OK with only a fraction of them updating successfully, because some of them will likely crash. That is what the "--update-max-failure-ratio 0.6" flag in the previous command does: you accept up to 60% of the replica updates failing, and if failures exceed that ratio, we roll back, as the "--update-failure-action" flag specifies.

Docker compose

Compose files describe a stack of services that run in the same environment and sometimes depend on each other. For example:

version: "3.7"
services:
    hello-world:
        image: dockerinaction/ch11_service_hw:v1
        ports:
            - 8080:80
        deploy:
            replicas: 3

That example defines a single service with its replicas and port mapping. A Compose file can also define more than one service:

version: "3.7"
services:
    postgres:
        image: dockerinaction/postgres:11-alpine
        environment:
            POSTGRES_PASSWORD: example

    mariadb:
        image: dockerinaction/mariadb:10-bionic
        environment:
            MYSQL_ROOT_PASSWORD: example

    adminer:
        image: dockerinaction/adminer:4
        ports:
            - 8080:8080

Now use this Compose file to create a stack: put it in a file named databases.yml and then run

docker stack deploy -c databases.yml my-databases

When you run this command, Docker will display output like this:

Creating network my-databases_default
Creating service my-databases_postgres
Creating service my-databases_mariadb
Creating service my-databases_adminer

At this point, you can test the services by using your browser to navigate to http://localhost:8080

Now let's look at another Compose file, one that uses a volume to preserve the data and schema of a database in case the service is removed.

version: "3.7"
volumes:
    pgdata: # empty definition uses volume defaults
services:
    postgres:
        image: dockerinaction/postgres:11-alpine
        volumes:
            - type: volume
              source: pgdata # The named volume above
              target: /var/lib/postgresql/data
        environment:
            POSTGRES_PASSWORD: example
    adminer:
        image: dockerinaction/adminer:4
        ports:
            - 8080:8080
        deploy:
            replicas: 1 # Scale down to 1 replica so you can test

The file defines a volume named pgdata, and the postgres service mounts that volume at /var/lib/postgresql/data — the location where the PostgreSQL software stores any database schema or data.

Now let's run this compose file

docker stack deploy \
  -c databases.yml \
  --prune \
  my-databases

then list the current volumes with "docker volume ls". You should see this:

DRIVER        VOLUME NAME
local         my-databases_pgdata

then remove the service

docker service remove my-databases_postgres

Then restore the service by using the Compose file:

docker stack deploy \
  -c databases.yml \
  --prune \
  my-databases

Because the data is stored in a volume, Docker is able to attach the new database replica to the original pgdata volume.
