Cruising the Docker Learning Curve: Your Roadmap to Container Mastery
Designed by Ddhruv Arora

Introduction to Docker

Docker is a powerful platform for developing, shipping and running applications inside containers. Containers are lightweight, isolated environments that package applications and their dependencies together. Here's more information about Docker, along with its benefits:

What is Docker?

Docker is an open-source platform that automates the deployment of applications within containers. Containers are a form of virtualization but differ from traditional virtual machines (VMs) in several key ways:

  1. Lightweight: Containers share the host OS kernel, which makes them much smaller and more resource-efficient than VMs. They start quickly and use fewer system resources.
  2. Isolation: Containers provide process and file system isolation, ensuring that applications running in one container do not interfere with applications in another.
  3. Consistency: Docker containers package an application and its dependencies into a single unit, guaranteeing consistency across different environments, such as development, testing, and production.
  4. Portability: Docker containers can run on any system that supports Docker, making it easy to move applications between different cloud providers or on-premises infrastructure.
  5. Scalability: Containers are designed to be easily scalable. You can replicate and scale containers to meet application demand quickly.

Benefits of Docker:

  1. Consistency: Docker ensures that the same environment is used throughout the entire software development lifecycle, from development to testing and production. This reduces "it works on my machine" issues.
  2. Efficiency: Containers use fewer resources than traditional VMs, allowing you to maximize server utilization and reduce infrastructure costs.
  3. Portability: Docker containers are platform-agnostic. You can build an image on your local machine and run it anywhere that supports Docker, from your laptop to a cloud data center.
  4. Version Control: Docker images can be versioned, making it easy to roll back to previous versions if issues arise.
  5. Isolation: Containers provide process and file system isolation, enhancing security and preventing application conflicts.
  6. Rapid Deployment: Containers can be started in milliseconds, enabling rapid application deployment and scaling to meet changing demands.
  7. Microservices: Docker is well-suited for microservices architectures, where applications are broken down into smaller, independently deployable components.
  8. Ecosystem: Docker has a rich ecosystem of tools and services, such as Docker Compose for managing multi-container applications, and Kubernetes for container orchestration.
  9. Community and Support: Docker has a large and active community, which means there are plenty of resources, documentation, and community-contributed images available. Docker also offers commercial support options.
  10. DevOps and Continuous Integration: Docker facilitates DevOps practices by providing a consistent environment for development and operations teams. It integrates seamlessly with CI/CD pipelines.
  11. Resource Isolation: Docker uses cgroups and namespaces to isolate containers, ensuring that resource-intensive containers don't negatively impact others on the same host.
  12. Security: Docker has built-in security features, such as seccomp, AppArmor, and capabilities, that help protect the host and other containers.

Overall, Docker simplifies the process of building, shipping, and running applications, making it easier for developers and operations teams to collaborate effectively, reduce infrastructure costs, and deliver software reliably and quickly across different environments. Its numerous benefits have made it a cornerstone technology in modern software development and deployment.

Agenda

In this comprehensive guide, we will embark on a journey to master Docker, the powerful containerization platform that's transforming the world of software development and deployment. Below is a sneak peek of what lies ahead:

Section 1: Setting Sail with Docker - Installing Docker on RHEL 9

  • Installing Docker
  • Starting and enabling Docker services
  • Verifying Docker installation

Section 2: Crafting Your Docker Vessel - Configuring a Web Server Inside a CentOS 7 Container

  • Creating a Dockerfile
  • Building and running your web server container
  • Accessing your custom web page

Section 3: Navigating the Docker Hub Waters - Creating an Account on hub.docker.com

  • Logging in to your Docker Hub account from the command line
  • Tagging and pushing your Docker image to Docker Hub
  • Verifying your image on Docker Hub

Let's Dive into the First Hands-On Activity: Installing Docker on RHEL 9

Welcome aboard, fellow container enthusiast! If you're reading this, you're on a voyage to becoming a Docker master, and I'm here to be your captain on this exciting journey. Docker, the game-changing technology that has redefined how we deploy applications, is at your fingertips. We'll break down the intricacies of Docker into three hands-on activities, each one bringing you closer to Docker mastery.

Our first port of call is to get Docker up and running on a Red Hat Enterprise Linux 9 (RHEL 9) system. Think of this as setting sail on our Docker adventure. In this segment, we'll explore the ins and outs of Docker installation on RHEL 9, step by step. Whether you're a seasoned developer or just dipping your toes into the world of containers, you're in the right place.

So, grab your captain's hat, and let's embark on our Docker journey by diving headfirst into the first hands-on activity: Installing Docker on RHEL 9.

Step 1: Install a file editor like `Vim`

Use the following command to install:

sudo yum install vim -y        
[Image: Installing Vim on RHEL9]

Step 2: Create a yum repo to install Docker

Use the following command to create a custom yum repo for installing Docker Community Edition:

sudo vim /etc/yum.repos.d/docker-ce.repo        

Once the Vim editor is opened, add the following lines to the file:

[docker-ce]
baseurl=https://download.docker.com/linux/centos/9/x86_64/stable
gpgcheck=0        

The file should look something like this, after adding the above lines to it:

[Image: The yum repo for Docker-CE]

Let's break down what each line is doing:

  1. [docker-ce]: This line defines a repository section or repository ID, enclosed in square brackets. It's a label used to identify this specific repository configuration. In this case, it's named "docker-ce."
  2. baseurl=https://download.docker.com/linux/centos/9/x86_64/stable: This line specifies the base URL where YUM should look for Docker CE packages for CentOS 9 on x86_64 architecture. It points to the stable release of Docker CE packages provided by Docker. When you run yum install docker-ce, YUM will fetch packages from this URL.
  3. gpgcheck=0: This line sets the "gpgcheck" option to 0, meaning YUM will not verify GPG signatures on packages downloaded from this repository. This keeps a lab setup simple; for production use, you would set gpgcheck=1 and import Docker's GPG key instead.
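After saving the repo file, you can sanity-check that yum has picked it up before installing anything. The repo id shown should match the section header we defined (`docker-ce`):

```shell
# List enabled repositories and confirm the new docker-ce repo appears
sudo yum repolist | grep docker-ce
```

If nothing is printed, double-check the file path (/etc/yum.repos.d/docker-ce.repo) and its contents.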

Step 3: Install the Docker-CE

The installation is straightforward; use the following command:

sudo yum install docker-ce --nobest --skip-broken -y        

Let's break down what each part of this command does:

  1. sudo: This is used to run the command with superuser (administrator) privileges, which are often required for installing software and making system-level changes.
  2. yum: This is the package manager for Red Hat-based Linux distributions, including CentOS. It's used to manage the installation, removal, and updating of software packages.
  3. install docker-ce: This part of the command specifies the action to install the "docker-ce" package. Docker CE is the Docker Community Edition, which is a popular containerization platform.
  4. --nobest: This is an option that tells YUM not to install the "best" version of the package if multiple versions are available. It will install the version that meets the dependencies but might not necessarily be the newest or most feature-rich version.
  5. --skip-broken: This option tells YUM to skip packages with unresolved dependencies rather than failing the installation. It's useful when you have conflicting or missing dependencies for certain packages, and you want to proceed with the installation of other packages that don't have these issues.


[Image: Installation process!]
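Once the installation finishes, a quick way to confirm the Docker client landed on your PATH is to print its version (the exact version string will differ on your system):

```shell
# Print the installed Docker client version
docker --version
```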

Step 4: Start and Enable Docker

Once Docker is installed successfully, start the Docker service and check its status to verify it is running, using the following commands:

sudo systemctl start docker
sudo systemctl status docker        

The expected output:

[Image: Docker is UP and Running!]
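As this step's title suggests, you'll usually want to enable the service as well: `systemctl start` only starts Docker for the current session, so enable the unit to have it come up automatically after a reboot:

```shell
# Enable the Docker service so it starts automatically at boot
sudo systemctl enable docker

# Or start and enable in a single command:
# sudo systemctl enable --now docker
```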

Congratulations, you have successfully installed and configured docker on your system.

Step 5: Add Your User to the Docker Group [Optional]

By default, Docker commands require root privileges. To allow your user to run Docker commands without using sudo, you can add your user to the docker group. Replace your-username with your actual username:

sudo usermod -aG docker your-username        
[Image: Adding User to Docker Group]

Use the following command to apply the group changes without logging out:

su - your-username        

Now, you should be able to run Docker commands without prefixing each one with sudo.
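To confirm the group change took effect, run any Docker command without sudo from the new shell. If the daemon socket is accessible, you'll get a (possibly empty) container list instead of a permission-denied error:

```shell
# Should list running containers without a permission-denied error
docker ps
```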

Moving on to the Second Activity: Crafting Your Docker Vessel

We've set sail on our Docker journey, and it's time to steer our course toward the second exciting activity on our list. Our current destination: crafting your very own Docker vessel by configuring a web server inside a CentOS 7 container.

In our first activity, we successfully installed Docker on a Red Hat Enterprise Linux 9 system, and now we're ready to take the next step. In this hands-on segment, we'll dive deep into the heart of Docker, where you'll learn how to create and customize containers to suit your needs.

Imagine Docker containers as vessels, each capable of carrying a unique cargo—your applications, dependencies, and configurations—all neatly packaged and ready to sail. Now, we'll build one such vessel by configuring a web server inside a CentOS 7 container using a Dockerfile.

So, secure your life jacket and prepare to embark on our second activity: Crafting Your Docker Vessel.

But before diving into the activity of configuring an Apache HTTP Server (httpd) within a Docker container using a Dockerfile, it's essential to understand what a Dockerfile is and its significance in containerization.

Understanding the Dockerfile

A Dockerfile is like the blueprint for a Docker container. It's a plain-text configuration file that contains a set of instructions, in a specific format, for building a Docker image. When you build an image using a Dockerfile, these instructions are executed one by one, creating layers that form the final image. Let's break down the key elements of a Dockerfile:

  1. Base Image: Every Dockerfile starts with a base image, typically from a public registry like Docker Hub. This base image provides the foundation for your container. For example, you might use an image like centos:7 as your starting point.
  2. Instructions: Dockerfiles consist of a series of instructions, each on a separate line. Some common instructions include:

  • FROM: Specifies the base image.
  • RUN: Runs commands during the image build process. For example, installing software or configuring settings.
  • COPY or ADD: Copies files from your local machine into the image.
  • WORKDIR: Sets the working directory inside the container.
  • EXPOSE: Informs Docker that the container listens on specific ports at runtime.
  • CMD or ENTRYPOINT: Defines the command that runs when the container starts.

  3. Layering: Docker uses a layered filesystem for images. Each instruction in the Dockerfile creates a new layer. Layers are cached, so if you change an instruction, only the affected layer and those above it need rebuilding. This speeds up image building and reduces redundancy.
  4. Best Practices: Dockerfiles should follow best practices, like minimizing the number of layers, cleaning up temporary files, and keeping images as small as possible. This makes your containers more efficient and secure.
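As a tiny illustration of the layer-minimizing practice, two package-manager commands can be merged into a single RUN instruction so that the cache cleanup happens in the same layer as the install and the cache is never baked into an intermediate layer. (This is a generic sketch, not part of this activity's Dockerfile.)

```dockerfile
# One RUN layer: install httpd and clean the yum cache in the same step,
# so the package cache never persists in any image layer
RUN yum -y install httpd && \
    yum clean all
```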

Why Dockerfiles are Important

Dockerfiles play a crucial role in standardizing container creation and ensuring consistency across different environments. Here's why they are essential:

  1. Reproducibility: Dockerfiles provide a clear and repeatable process for building an image. Anyone with the Dockerfile can recreate the same container, ensuring consistency between development, testing, and production environments.
  2. Version Control: Dockerfiles can be versioned alongside your code, allowing you to track changes and roll back to previous configurations if needed.
  3. Customization: Dockerfiles enable you to customize the container image to suit your application's requirements. You can install software, configure settings, and add files.
  4. Security: By controlling what goes into your container via the Dockerfile, you can minimize the attack surface and keep containers up to date with security patches.
  5. Collaboration: Dockerfiles make it easy to collaborate with others. Team members can contribute to the Dockerfile, ensuring that the container is built consistently.

Now that we have a good understanding of Dockerfiles, let's apply this knowledge in our second activity by creating a Dockerfile to configure the httpd web server inside a CentOS 7 container and add a custom web page.

Step 1: Create the Dockerfile and Web Page

Note: Two things to keep in mind before creating the Dockerfile:

  1. File Name: The Dockerfile should be named Dockerfile, with a capital "D" and no file extension. Docker looks for this exact name by default when building an image.
  2. Location: The Dockerfile should be located in the same directory where you run the docker build command. This directory is referred to as the "build context." All files in the build context, including the Dockerfile, can be used in the image build process.
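Putting those two notes together, the build context might look like this (the directory name my-web-server is just an example):

```shell
# Create an example build-context directory holding both files
mkdir -p my-web-server
cd my-web-server
touch Dockerfile index.html

# Both files sit side by side; docker build is run from this directory
ls
```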

The code that we will be utilizing for Dockerfile:

# Use the official CentOS 7 base image from Docker Hub
FROM centos:7

# Install the Apache HTTP Server
RUN yum -y install httpd

# Copy the custom web page into Apache's document root
COPY index.html /var/www/html/

# Document that the container listens on port 80
EXPOSE 80

# Start the Apache service when the container runs
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]

Let's break down the Dockerfile code step by step:

  • FROM: This is the starting point of your Dockerfile. It specifies the base image that you're building upon. In this case, you're using the official CentOS 7 base image from Docker Hub. This image provides the CentOS 7 operating system as the foundation for your container.
  • RUN: This instruction executes a command during the image build process. Here, you are using the yum package manager to install the Apache HTTP Server (httpd) inside the container. The -y flag is used to automatically answer "yes" to any prompts, allowing for an unattended installation.
  • COPY: This instruction copies files or directories from your local machine into the image. In this line, you are copying an index.html file from your local directory into the /var/www/html/ directory within the container. This is where Apache serves its web content by default.
  • EXPOSE: This instruction informs Docker that the container will listen on port 80 at runtime. It doesn't actually publish the port; it's a way to document that the container will use this port. To actually publish and map the port to the host, you'd use the -p option when running the container.
  • CMD: This instruction defines the default command to run when the container starts. Here, you're starting the Apache HTTP Server with the FOREGROUND option. This option tells Apache to run in the foreground, which is necessary for Docker containers because they need a foreground process to keep them running. The /usr/sbin/httpd path is the location of the Apache binary inside the container.

The code for the index.html file is given below; it simply displays "Hello World" in the center of the web page.

<!DOCTYPE html>
<html>
<head>
    <title>Hello World</title>
    <style>
        body {
            background-color: black;
            color: white;
            text-align: center;
            display: flex;
            justify-content: center;
            align-items: center;
            height: 100vh;
            margin: 0;
        }
    </style>
</head>
<body>
    <div>
        <h1>Hello World</h1>
    </div>
</body>
</html>

Step 2: Build the Dockerfile

To build the Docker image using the Dockerfile you've created, follow these steps:

  1. Open a terminal window.
  2. Navigate to the directory where your Dockerfile is located.
  3. Once you are in the correct directory, use the docker build command to build the Docker image. Specify a name for your image using the -t flag (replace my-web-server-image with your preferred name):

docker build -t my-web-server-image .        

  • -t my-web-server-image: This assigns the name my-web-server-image to your Docker image.
  • The dot . at the end of the command specifies the build context, which is the current directory where the Dockerfile is located.

Docker will execute the instructions in your Dockerfile and build the image layer by layer. You will see output messages for each step as Docker progresses. Once the build process is complete, you should see a message indicating that the image was successfully built.

It should look something like this:

[Image: The build process for the new Docker image]
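You can also confirm the image now exists locally by listing images filtered by the name you chose (here, the my-web-server-image name from the build step):

```shell
# List local images matching the name we just built
docker images my-web-server-image
```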

Step 3: Start the container!

Once the image is built successfully, start the container using the following command:

docker run -d -p 8080:80 <image_name>        

In my case, it looks like this:

[Image: Image name: d-apache-server]

Let's break down the command:

  • docker run: This is the command used to run a Docker container.
  • -d: This is a flag that stands for "detached mode." When you run a container in detached mode, it runs in the background, and you get your command prompt back in the terminal. This is useful when you don't want the container's output to be displayed in your terminal, and you want to continue using the terminal for other tasks.
  • -p 8080:80: This flag is used to publish, or map, a port from the host machine to the container. In this case, it's mapping port 8080 on the host to port 80 in the container. This means that if you access port 8080 on your host machine, it will be forwarded to port 80 inside the container where your web server is running.
  • <image_name>: This is where you specify the name of the Docker image you want to use to create the container. Replace <image_name> with the actual name of your Docker image, such as my-web-server-image in your case.
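With the container running in detached mode, two quick checks confirm everything is wired up: docker ps should show the port mapping, and curl should fetch the page through the mapped host port (assuming you published port 8080 as above):

```shell
# Confirm the container is running; look for the 0.0.0.0:8080->80/tcp mapping
docker ps

# Fetch the page through the published port; the response should contain "Hello World"
curl -s http://localhost:8080 | grep "Hello World"
```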

Step 4: Visit the Web Page

Head to `http://localhost:8080` in your browser to see the results; it should load a web page displaying "Hello World" in the center.

The output:

[Image: Hello World]

Congratulations! 🎉

You've successfully set up and configured Apache HTTP Server inside a Docker container. It's no small feat, and you've taken a big step towards mastering Docker and web server management. Your custom web page is now ready to be accessed, and you've unlocked the power of containerization.

Note: Installing the Apache HTTP Server (httpd) on a CentOS 7 image can be a resource-intensive task, and the time it takes to complete varies with the hardware and resources available on your system. If you're working on an AWS EC2 instance with limited resources, this process can take considerably longer.

For the sake of simplicity and exploration, we recommend utilizing Docker Desktop for Windows if you have access to it. Docker Desktop provides a seamless and efficient containerization environment, and it's well-suited for development and testing purposes. During our exploration, we found it to be an excellent choice for setting up and experimenting with containers.

With Docker Desktop, you can enjoy a smoother experience and faster container deployment, allowing you to focus more on learning and experimenting with Docker and less on resource constraints.

Finally, We Have the Last Activity: Sharing Your Docker Creation with the World

You've come a long way on this Docker journey, and now it's time for the grand finale. In our last activity, we'll navigate the waters of Docker Hub, the online hub for Docker images, and learn how to share your Docker creation with the world.

Docker Hub is where the global Docker community gathers to share, collaborate, and discover containers. It's your gateway to showcasing your containerized masterpiece to fellow developers and enthusiasts worldwide.

But before we embark on this final leg of our journey, ensure you've completed the previous activities. You should have Docker up and running on your system and a Docker container with Apache web server and your custom web page ready to go.

So, fasten your seatbelts, and let's set sail into the third and last activity: Sharing Your Docker Creation with the World.

Step 1: Create a Docker Hub account

[Image: Docker Hub Website]

  • Click on the "Sign Up" button in the upper right corner of the page.
  • Select "Sign up with email" and provide your email address, password, and desired username.
  • After providing the required information, click the "Sign Up" button.
  • Docker Hub will send a verification email to the address you provided. Go to your email inbox and look for an email from Docker Hub. If you don't see it in your inbox, check your spam or junk folder.
  • Once your email is verified, you will be redirected to Docker Hub, and your account will be active and ready to use.

If you already have an account, proceed to log in.

[Image: Login with username]

Once login/signup is complete, you should see the Docker Hub dashboard:

[Image: My Docker Hub Dashboard]

Step 2: Log in to Docker Hub

Open your terminal or command prompt and run the following command to log in to Docker Hub using your Docker Hub credentials (username and password):

docker login         

You'll be prompted to enter your Docker Hub username and password.

[Image: Logging in to Docker Hub]

Note: The terminal does not display the password as you type (or paste) it; this is normal behavior for password prompts.

Step 3: Tag Your Docker Image

Before you can push an image to Docker Hub, you need to tag it with your Docker Hub username and the desired repository name. Suppose you have an image named my-web-server-image that you want to push to Docker Hub:

docker tag my-web-server-image your-docker-hub-username/my-web-server-image        

Replace your-docker-hub-username with your actual Docker Hub username.

So, after all the necessary changes, it should look similar to this:

[Image: The docker tag command]

Step 4: Push the Docker Image

After tagging your image, you can push it to Docker Hub using the docker push command:

docker push your-docker-hub-username/my-web-server-image         

Docker will upload your image to your Docker Hub account.

[Image: Uploading in process!]

Note: The push process may take some time, depending on the size of your image and your internet connection speed. Docker will display progress information in the terminal.
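Once the push finishes, anyone (including you, from another machine) can pull the image back down by its fully qualified name, which makes for a good end-to-end check that the upload worked:

```shell
# Pull the image from Docker Hub by its fully qualified name
docker pull your-docker-hub-username/my-web-server-image
```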

Step 5: Verification

After the push is complete, you can visit your Docker Hub account on the Docker Hub website (https://hub.docker.com/) to verify that your image has been successfully pushed and is visible in your repository.

The result:

[Image: The image was pushed successfully]

Congratulations! 🎉

You've achieved a significant milestone on your Docker journey by successfully pushing your Docker image to Docker Hub. Your containerized creation is now accessible to the global Docker community, ready to inspire and assist developers worldwide.

By sharing your image on Docker Hub, you've not only showcased your skills but also contributed to the thriving ecosystem of containerized applications and services. Your dedication to learning and mastering Docker is truly commendable.

In Conclusion: Navigating the Docker Way!

As we approach the end of our Docker journey, it's time to reflect on the incredible adventure we've embarked upon. From setting up Docker, to crafting custom containers and sharing your creations with the world, you've taken significant steps towards Docker mastery. We hope this guide has been an invaluable resource in your quest to become a Docker virtuoso.

We'd like to extend our heartfelt thanks to you, the reader, for joining us on this voyage. Your curiosity and determination are the driving forces behind every successful Docker deployment, and we're thrilled to have been part of your learning journey.

In addition, I want to express my deepest gratitude to Vimal Daga Sir, whose guidance and mentorship have been a guiding light in the world of containerization. His commitment to knowledge sharing and dedication to empowering learners have inspired countless individuals, including us. Thank you, Sir, for your exceptional mentorship.

As you continue your exploration of Docker and container technologies, remember that the Dockerverse is ever-evolving. New challenges and opportunities await, and we encourage you to keep pushing the boundaries of what's possible with containers.

We bid you fair winds and following seas on your Docker adventures. May your containers run smoothly, your applications thrive, and your journey towards technical excellence continue to flourish.

#DockerMastery #ContainerizationExperts #DockerJourney #ContainerDeployment #DockerLearning #ContainerizationSkills #DockerAdventures
