Configuring Docker-Compose for Container-Based Dev Environment

When we start working on a new project, one of the preliminary tasks is figuring out which tech stacks are needed for the project and how to set up and manage different versions of those tech stacks so that the project can continue to perform as intended.

Q. So, what if we could instantly access the resources each project needs, without going through the installation process for every tech stack on our local or virtual machines?

Q. What if we could just work on our projects in an environment that already supported Java or Go, and everything would function as expected? Or, for a tech stack like Python, where we may have to juggle several versions, what if we could switch between them quickly without worrying about installing or maintaining each one?

Q. What if, once we finished configuring our environment, we could share that setup with others, for example on GitHub? Our team could then work on the same project in an identical development environment, with all the same versions.

Q. What if we could import the infrastructure remotely on any local or virtual machine and keep running our databases?

That’s exactly what we are going to do today with the help of Docker.

Docker?is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.

Today we’ll run a PostgreSQL database with pgAdmin 4 and JupyterLab, without installing the database or an IDE locally on our machine.

What is Docker Compose?

Docker Compose helps us run and manage multiple containers with a single command. Docker Compose files are written in YAML. So let’s start writing the YAML file.

Create a docker-compose.yml file in a new folder and copy the code below. We’ll walk through it in a moment.

version: '3.8'
services:
  pgadmin:
    image: dpage/pgadmin4:latest
    container_name: pgadmin
    ports:
      - "80:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: your_password
    volumes:
      - pgadmin:/var/lib/pgadmin
  postgres:
    image: postgres:15.1
    container_name: postgresql
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: your_username
      POSTGRES_PASSWORD: your_password
      POSTGRES_DB: docker_postgres_db
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - postgres:/var/lib/postgresql/data/pgdata
  jupyterlab:
    image: jupyter/base-notebook
    container_name: jupyterlab
    ports:
      - "8888:8888"
    environment:
      - JUPYTER_ENABLE_LAB=1
    volumes:
      - ./notebooks/:/notebooks
    command: start-notebook.sh --NotebookApp.notebook_dir=/notebooks --NotebookApp.token='' --NotebookApp.password=''
volumes:
  postgres:
  pgadmin:

Open PowerShell or Terminal, and go to the docker-compose.yml location.

docker-compose up

Run the “docker-compose up” command from PowerShell, and it will start all the services defined in docker-compose.yml. Let’s go through the code.

  1. First, we define the Compose file format version. Note that this is the schema version of the file itself (such as '2' or '3.8'), not the version number reported by “docker-compose version” on your machine.
  2. After that, we list the services that we’re going to use.
  3. On the third line, we define the name of the service.
  4. image: The Docker image of the service. You can search for images on Docker Hub. Choose one with many downloads and good ratings.
  5. container_name: The name of the container.
  6. ports: As per the pgAdmin documentation, it listens on port 80. So we’re mapping the container’s port to our localhost port.
  7. environment: We need to define environment variables such as the email and password. See their documentation to learn more about the available variables.
  8. volumes: The pgAdmin container stores its files and directories under the /var/lib/pgadmin path. We can attach volumes in different ways (see the documentation) and reuse the same volume. We’re using named volumes with pgAdmin 4 and PostgreSQL because volumes are completely managed by Docker, while bind mounts depend on the host machine's directory structure and OS.
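To make that difference concrete, here is a minimal sketch of the pgadmin service showing both mount styles side by side. The ./pgadmin-data host path is a hypothetical example for comparison only; it is not part of this article’s setup.

```yaml
services:
  pgadmin:
    image: dpage/pgadmin4:latest
    volumes:
      # Named volume: created and managed entirely by Docker (used in this article).
      - pgadmin:/var/lib/pgadmin
      # Bind mount alternative: ties the data to a host path such as ./pgadmin-data.
      # - ./pgadmin-data:/var/lib/pgadmin
volumes:
  pgadmin:
```

With the named volume, the data survives container re-creation and can be listed with “docker volume ls”, without depending on any particular host directory layout.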

With that configuration in place, the pgAdmin 4 container is ready to run. The PostgreSQL service follows the same pattern, so I won’t repeat it. Moving on to the next service: the notable differences in the JupyterLab configuration are the “volumes” and “command” entries. Let me explain these changes.

  1. volumes: We are mounting a local directory onto a container directory, so any change in the local directory will also show up in the container, and vice versa. We can then share the files on GitHub.
  2. command: The Jupyter documentation describes the options that configure the environment. By setting an empty token and password in the command itself, we save ourselves the trouble of looking up the token each time the container restarts: JupyterLab will no longer ask for a token or a password.

Now, open two new tabs in the browser of your choice. In one tab, go to “localhost:80”, and in the other, “localhost:8888”.

You can see the GUI of pgAdmin 4 at port 80

and JupyterLab at port 8888.

The final step is to establish a connection between the PostgreSQL database and the PgAdmin4 server.

  1. Log in to pgAdmin 4 using the email and password defined in the docker-compose file.
  2. Right-click on “Servers”.
  3. Select “Register” → “Server” (or “Create” → “Server” in older versions).
  4. Enter the host name, port, username, and password. The host name is the Postgres service name from the docker-compose file (“postgres”), since containers on the same Compose network reach each other by service name; the port, username, and password are also the ones defined there.
  5. Click on “Save”.

As you can see, the database has been successfully connected.

We’re now running JupyterLab, pgAdmin 4, and PostgreSQL in containers without installing any of them locally. We can share the docker-compose file on GitHub so others can reproduce the same environment. Moreover, the containers talk to each other: we can write to the PostgreSQL database from JupyterLab and query it from pgAdmin 4.
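As a sketch of that interaction, here is how a notebook cell could build a connection string for the Postgres container. The build_dsn helper is an illustrative function, not part of any library, and the driver (psycopg2-binary) would need to be pip-installed inside the notebook; the credentials are the placeholders from docker-compose.yml.

```python
# Sketch: connecting to the PostgreSQL container from a JupyterLab notebook cell.
# Inside the Compose network, containers reach each other by service name,
# so the host is "postgres" (the service name), not "localhost".

def build_dsn(user, password, host="postgres", port=5432,
              dbname="docker_postgres_db"):
    """Assemble a libpq-style connection string from the compose settings."""
    return (f"host={host} port={port} dbname={dbname} "
            f"user={user} password={password}")

dsn = build_dsn("your_username", "your_password")
print(dsn)

# With psycopg2 installed in the notebook (pip install psycopg2-binary),
# the connection itself would look like:
# import psycopg2
# conn = psycopg2.connect(dsn)
# with conn.cursor() as cur:
#     cur.execute("SELECT version();")
#     print(cur.fetchone())
```

Any row written through this connection is immediately visible in pgAdmin 4, since both clients talk to the same postgres container.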

I hope this article helps you realize the power of containers and how to deploy the containers successfully.

Thank you for reading.
