Docker 101
Gabriel M.
Linux Systems Engineer | IT Infrastructure | Security | Virtualization | Automation | AI | C and Shell Scripting
A (tiny) introduction to Docker.
If you want to get your hands dirty with some Docker content, this quick introduction may help to point you in the right direction. I will assume you are already familiar with Linux command-line and administration. Let's focus only on Docker.
Just for the sake of context, Docker is a powerful way to containerize applications. It is built out of different components, such as: A) Docker Engine (the core of Docker, which manages the containers); B) Docker Compose (manages multi-container applications); C) Volume Management (handles persistence outside the containers); D) Docker CLI (the command-line interface to the Docker environment). To see all the components of the entire ecosystem, check the Docker website.
In this article, we'll focus on installing the latest Docker on Debian, and performing some basic introductory tasks that may give you a good grasp of how it works. Keep in mind that we are not installing the packaged docker version available from the Debian repository. That version is a bit behind, so let's get the latest version from the official Docker website. Let's go!
Install the system dependencies:
sudo apt update
sudo apt install -y ca-certificates curl gnupg
Then, add the Docker repository to your system:
# Docker official GPG key.
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo tee /etc/apt/keyrings/docker.asc > /dev/null
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Docker official APT repository:
echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Now, let's update the packages metadata and install Docker:
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Then, let's start Docker:
# This will enable AND start the docker service:
sudo systemctl enable --now docker
Once that is done, you should be able to check the installed Docker version.
Also, docker info gives you an overview of your Docker setup:
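For instance (the exact output will vary with the version you installed):

```shell
# Show the installed Docker version:
docker --version

# Detailed overview of the Docker setup (storage driver, number of
# containers and images, cgroup driver, and so on):
docker info
```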
Optionally, in case you want to allow non-root users to run Docker, just add the desired user to the docker group, then log out and log back in. In this example, I'll add my own user to that group:
sudo usermod -aG docker gabriel
Testing the Docker installation.
As with every programming language, your first program should be the famous "Hello, world!". Why would that be different with Docker? ;) Let's test it using the hello-world image.
docker run hello-world
You should see something similar to this:
In the terminal output above, we can see that you asked Docker to run a container named "hello-world". Since it was the first time running that container, the system let you know by printing "Unable to find image 'hello-world:latest' locally". To fix that, it then downloaded that image from Docker Hub.
Once that image was downloaded, it could then be run as a container. In other words, a container is roughly a running instance of an image.
What happens now?
Well, that image is still available on your disk. To list the downloaded images, you can use docker images. To list the containers, use docker ps -a (if you omit the -a, it only shows the running containers):
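A quick sketch of both listing commands:

```shell
# List the images stored locally on this host:
docker images

# List all containers, running or stopped:
docker ps -a

# List only the running containers:
docker ps
```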
Now, let's learn how to delete an image from disk, since we don't need that hello-world image anymore. We will first remove the container (which references that image), so we can finally remove the image itself:
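The cleanup can be sketched like this (the --filter expression assumes the only containers we want gone are the ones created from the hello-world image):

```shell
# Remove every container created from the hello-world image:
docker rm $(docker ps -aq --filter ancestor=hello-world)

# Now that nothing references the image, remove it from disk:
docker rmi hello-world
```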
As we can see, after the reference was removed, the local image files were deleted.
Now, let's try running a container that is closer to real-world applications: nginx.
docker run -d --name my-nginx -p 8088:80 nginx
The above command is asking Docker to create a new container that is an instance of the nginx image. The container will be named "my-nginx", will run in detached mode (-d), and will have its 80/TCP port mapped to the Docker host's 8088/TCP port. This way, when we browse to http://docker-server-ip:8088/ we should see Nginx's welcome page.
And the container now shows up in the docker ps output:
That is neat! What if we want to log into that container and check its local file system?
Well, that can be achieved with the following command:
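One way to do it is with docker exec, which runs a command inside a running container; here, an interactive bash shell:

```shell
# Open an interactive shell inside the my-nginx container:
docker exec -it my-nginx /bin/bash

# Once inside, inspect Nginx's document root, for example:
#   ls -l /usr/share/nginx/html
# and type 'exit' to leave the container.
```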
What happens if you make any changes and then restart that container? Well, you will lose your data, because there is no persistence! Let's re-create that same nginx container, but this time adding persistence. To achieve that, we will create a volume in the docker host and map it to the container.
# Stop the container:
docker stop my-nginx
# Remove the container
docker rm my-nginx
# Recreate the container, but now using persistence (-v : volume):
docker run -d --name my-nginx -p 8088:80 -v my-nginx-data:/usr/share/nginx/html nginx
The last command above is creating the same my-nginx container, but this time it is telling Docker to map the my-nginx-data volume to /usr/share/nginx/html inside the container.
Optionally, this could also be achieved with the use of a docker-compose file. Let's assume the following docker-compose.yml file inside a ~/my-docker-files/my-nginx directory:
version: '3.8'
services:
  nginx:
    image: nginx
    container_name: my-nginx
    ports:
      - "8088:80"
    volumes:
      - my-nginx-data:/usr/share/nginx/html
    restart: always
volumes:
  my-nginx-data:
In case you are wondering where the persistence is stored on the Docker server, the my-nginx-data volume can be found under the following path:
/var/lib/docker/volumes/my-nginx-data/_data/
Now that we have created our compose file, here is how we use it:
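Assuming the file lives in ~/my-docker-files/my-nginx, a minimal sketch:

```shell
cd ~/my-docker-files/my-nginx

# Create and start everything described in docker-compose.yml, detached:
docker compose up -d
```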
Now, let's check the persistence and the mapping. For that, we will have to log into the container and modify the index.html file:
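For example (the replacement content here is just an illustration):

```shell
# Overwrite the default index.html inside the container:
docker exec my-nginx sh -c 'echo "Hello from my persistent volume!" > /usr/share/nginx/html/index.html'
```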
Then, let's restart the container and check the page again:
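A sketch, assuming you are testing from the Docker host itself:

```shell
docker restart my-nginx

# The modified page should survive the restart, thanks to the volume:
curl http://localhost:8088/
```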
And then we can see the file still persists:
Now, let's assume we need a solution that is made of two different containers: a WordPress server and a MySQL database server. In order to achieve that, we can use another docker-compose.yml file. Let's also make sure persistence is in place, so we don't lose any important data across restarts.
To do that, let's create the compose file at ~/my-docker-files/my-wp-mysql/docker-compose.yml. For the sake of this exercise, yes, the passwords are really weak.
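A minimal sketch of what such a compose file could look like; the container names, database names, credentials, and volume names below are illustrative assumptions (only the host port, 1978, comes from this article):

```yaml
version: '3.8'
services:
  db:
    image: mysql:8.0
    container_name: my-mysql
    environment:
      MYSQL_ROOT_PASSWORD: root123   # weak on purpose, lab use only
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wpuser
      MYSQL_PASSWORD: wppass
    volumes:
      - my-mysql-data:/var/lib/mysql
    restart: always
  wordpress:
    image: wordpress
    container_name: my-wordpress
    ports:
      - "1978:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: wppass
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - my-wp-data:/var/www/html
    restart: always
    depends_on:
      - db
volumes:
  my-mysql-data:
  my-wp-data:
```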
Now, we start it as we did with the my-nginx container:
cd ~/my-docker-files/my-wp-mysql
docker compose -f docker-compose.yml up -d
Now, before we browse to the container's welcome page, let's "reset" the configuration and force a new WordPress setup, so we can actually see it in action. To achieve that, we'll enter the container and move the wp-config.php file out of the way:
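A sketch, assuming the WordPress container is named my-wordpress and WordPress lives in the image's default /var/www/html:

```shell
# Move the generated wp-config.php out of the way to force a fresh setup:
docker exec my-wordpress mv /var/www/html/wp-config.php /var/www/html/wp-config.php.bak
```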
Then, we can finally browse the container IP:PORT (1978 this time), and see the container asking for a new configuration for WordPress:
Now, let's go through the setup and make sure the communication between WP and MySQL is working fine.
After you hit the "Run the installation" button, it will let you create a username and password to access the WP interface. Once you are done, you should see something like this:
I hope this was useful to give you a (quick) introduction to Docker and its capabilities.
Happy "containerizing"! =)