Node.js Dev series (1 of 6)

Tackling a new challenge is thrilling—it’s what developers live for, after all. On the other hand, starting from scratch comes with the maddening realization that you don’t even know what you don’t know. So here I am, writing this. Think of it as a project journal, written without getting overly technical, so it can help beginners as much as seasoned developers.

The topic I’ve chosen is Node.js backend development, a field I’ve been focused on heavily in recent years. So let’s dive into developing our Node.js microservices project.

Getting Started with Our Node.js Microservices Project

Before we start coding, you’ll need to install a few essential tools:

  • Git – for version control and collaboration
  • Docker – for managing development environments and dependencies
  • Postman – for testing API endpoints
  • Visual Studio Code – a simple yet powerful IDE for coding your project

You might think this is a lot, but in reality, it’s a minimal setup. Thanks to Docker (which we’ll cover in detail later), we can skip the hassle of manually installing and configuring many additional tools and platforms.

Create a Git repository: we’re going to set up our development project there and run a Node.js server inside a Docker container.

First Things First: Setting Up Branches

If you haven’t already created a develop branch, now’s the time to do it. Branch out from main (or master, depending on what your first branch is called):

git checkout -b develop main        

Next, create a feature branch from develop—let’s call it dev_docker, but feel free to choose any meaningful name that makes sense to you. The format of the branch name isn’t as important as its purpose—this branch is where we’ll be working with Docker.

git checkout -b dev_docker develop        

Note: if you’re not sure how to work with Git, let me know in a message. I’ll do my best to write a post about version control and Git workflow.

Configuring Visual Studio Code for Node.js Development

Now, let's set up Visual Studio Code for our Node.js development environment.

1. Open Visual Studio Code.

2. Navigate to the File menu and select "Open Folder".

3. Choose the folder where your project repository is located.


Installing Prettier: A Must-Have Extension for Visual Studio Code

Alright, time to install our first extension in Visual Studio Code!

Wait, didn’t I say that Docker eliminates the need for "weird" installations? True for most cases, but this extension is totally worth installing.

Here’s the one we’re going to install: Prettier - Code Formatter.

As developers, we all know how important it is to maintain code readability. Clean, consistent formatting makes it easier to understand and work with code, whether it’s yours or someone else’s. However, maintaining that consistency can be... challenging.

It’s not just a matter of discipline; it’s genuinely hard to stay on top of formatting when you’re focused on solving problems or meeting deadlines. However, if you configure it correctly, Prettier does two amazing things every time you save a file:

  1. Parses your code (it refuses to format a file that has syntax errors).
  2. Automatically formats your code for readability.

This saves you time and mental effort, so you can focus on actual coding instead of worrying about formatting. Additionally, when working in a team, using Prettier also:

  • Streamlines code reviews: Reviewers don’t have to waste time checking formatting and can focus on functionality instead.
  • Reduces friction: Everyone’s code looks the same, which eliminates debates over formatting preferences.

To get Prettier up and running, you need to enable it and set it as the default formatter for your workspace. Here's how:

1. In Visual Studio Code go to the File menu.

2. Select Preferences → Settings.

3. In the Settings tab, make sure to switch to the Workspace scope instead of User so the extension applies only to your project.

4. Navigate to Text Editor and scroll down until you find the Default Formatter option.

5. Select Prettier as your default formatter.

A new folder called .vscode should now be created in your project directory. Inside it, you'll find a file named settings.json.
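As a reference, here's a minimal sketch of what that file typically contains (the identifier esbenp.prettier-vscode is the Prettier extension's Marketplace ID; the editor.formatOnSave line is an optional extra, not set by the steps above, that makes Prettier run on every save):

{
    "editor.defaultFormatter": "esbenp.prettier-vscode",
    "editor.formatOnSave": true
}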

You can go ahead and commit this file to your repository. Doing so ensures that anyone working on this project in the future will automatically use Prettier as their default formatter, maintaining code consistency across the team.

Anyway, with that in place, we’re ready to start setting up our Node.js server inside a Docker container.

Streamlining Local Development with Docker

In the realm of software development, ensuring that applications run consistently across various environments can be a significant challenge. Docker is a platform designed to simplify the development, shipping, and deployment of applications by creating consistent environments. Developing an application often requires installing multiple dependencies, such as libraries, frameworks, and specific configurations needed to run the application. Managing these dependencies manually can be time-consuming and error-prone, but Docker simplifies this process using Docker images.

A Docker image is a read-only template that contains all the instructions needed to set up an application, including its dependencies and configurations. These images are created using a Dockerfile, a simple yet powerful script that defines how the image should be built.

Once an image is created, it can be stored in a repository, shared, and reused across different environments. In fact, there are many preconfigured images available on Docker Hub, the public registry, for a wide range of purposes, such as web servers, database servers, and development environments. We’ll be leveraging some of these in our project to streamline our setup.

You can instruct Docker to build a Docker image from a Dockerfile using the following command:

docker build        

When you run this command, Docker will automatically download any required base images and dependencies and cache them locally for future use, making it much easier to manage your application's environment.

Now, let's talk about Docker containers. A container is essentially a running instance of a Docker image.

When you execute an image, it becomes a container. In simple terms, the command:

docker run        

creates and starts a container from a Docker image. This command takes the image and runs it as an isolated environment, allowing you to interact with the application within the container.


Containers are isolated environments, each with its own filesystem, memory, and process space. Think of a container as similar to a virtual machine, but unlike VMs, containers share the host operating system's kernel. This makes them much more lightweight and efficient in terms of performance.

This isolation is one of Docker's key benefits: it ensures that applications behave consistently, regardless of where they're deployed—whether it's on a developer's laptop, in a testing environment, or in production. For organizations, this consistency is a major selling point.

For our use case, containers help us avoid the hassle of installing numerous dependencies directly on our personal computers, allowing us to work in a clean, controlled environment without cluttering up our system.

Creating a Docker Image

Let's jump into a practical example. Inside your project directory, create a folder called services—this will be the home for all our microservices.

Next, inside services, create a subfolder named products. This will be the first microservice we develop.

Within the products folder, create a file named Dockerfile (not a folder). This file will define the image for our Node.js server, specifying all the necessary dependencies and configurations to run the service inside a container.
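Based on the line-by-line breakdown in the next section, the Dockerfile ends up looking like this (a reconstruction from the explanation below; adjust the Node.js version or the entry point if yours differ):

# Build on top of the official Node.js 14 image from Docker Hub
FROM node:14

# Run all subsequent commands inside /app in the container
WORKDIR /app

# Copy package.json first so dependencies can be installed
COPY package.json .

# Install the dependencies listed in package.json inside the container
RUN npm install

# Copy the rest of the project files into /app
COPY . .

# Default command executed when the container starts
CMD ["node", "app.js"]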


Explaining the Dockerfile

Let's break down what each line in our Dockerfile does:

  1. FROM node:14 Remember how I mentioned there are thousands of preconfigured images available in the Docker Hub? This one is an official Node.js 14 image. The FROM instruction tells Docker that we are building our image based on this existing one. Similar to class inheritance in object-oriented programming, we can extend and customize an existing image while benefiting from its base functionality.
  2. WORKDIR /app The WORKDIR instruction sets the working directory inside the container for all subsequent RUN, CMD, ENTRYPOINT, COPY, and ADD commands. This ensures that when we copy or install files, they are placed inside the /app directory.
  3. COPY package.json . The package.json file is essential for configuring dependencies and metadata for our Node.js application. This command copies the package.json file from our project directory into the /app directory inside the container.
  4. RUN npm install This runs npm install inside the container, installing all dependencies listed in package.json. Since it's happening within the containerized environment, we avoid cluttering our local machine with unnecessary dependencies—one of the main benefits of using Docker!
  5. COPY . . This command copies all the files from our local project directory into the container’s /app directory. This includes our application code, configuration files, and any other necessary assets.
  6. CMD ["node", "app.js"] The CMD instruction defines the default command to run once the container starts. In this case, it tells Docker to execute node app.js, which will launch our Node.js application inside the container.

Before we can actually run anything, we need to create the necessary Node.js code files and a package.json file. That’s exactly what we’ll be doing next. Create the package.json file in the same products folder where you created the Dockerfile.

Understanding package.json

package.json is the configuration file for a Node.js application. It defines metadata, dependencies, and scripts required to run the project.

Unlike a JavaScript object literal, package.json is a pure JSON file. This means it must follow strict JSON formatting (e.g., using double quotes for keys and values where necessary).

Key Fields in package.json

1. name & version These two fields are required. If the package is ever published, name and version together act as a unique identifier in the npm registry.

2. main This field specifies the entry point of the application. By default, it’s often index.js or app.js. While we aren't explicitly using it in this setup, it helps in defining which file should be executed when the package is imported elsewhere.

3. scripts Although we’re not including custom scripts here, this field allows defining multiple execution commands for different environments. For example, you could define separate scripts for development, testing, and production. Example:

? "scripts": {
     "start": "node app.js",
     "dev": "nodemon app.js",
     "test": "jest"
  }        

4. dependencies This section lists all the external libraries that our project requires (a sketch of the full file follows this list). In this case, we’re adding:

  • mysql: Since our microservice will be communicating with a MySQL server, we need this package to handle database connections.
  • express: Express is the go-to framework for building backend applications in Node.js. In most Node apps, you'll likely find it included.
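Putting those fields together, a minimal package.json for this service might look like the sketch below. The name and the version ranges are placeholders, and since package.json is strict JSON, no comments are allowed inside the file itself:

{
  "name": "products",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.17.1",
    "mysql": "^2.18.1"
  }
}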

Starting Script of Our Node.js (Sort of) "Hello World" App

Let's work on the starting script of our Node.js application. Create a file called app.js in the same products folder.

Let's break down the key components of our app.js file step by step:

1. Requiring Dependencies

const express = require("express");        

  • require is a Node.js function that allows us to import external modules into our code.
  • Here, we import the Express framework, which simplifies handling HTTP requests and responses.
  • We assign the imported module to a constant named express, which will contain all the functionality we need from Express.

2. Initializing the Express Application

const app = express();        

  • Calling express() creates an instance of an Express application and assigns it to the app constant.
  • This app will act as our main server and handle incoming HTTP requests.

3. Enabling JSON Parsing

app.use(express.json());        

  • This tells the Express app to automatically parse JSON in incoming HTTP requests.
  • Before Express version 4.16, we had to use an external middleware called body-parser for this functionality.
  • Since then, Express has included JSON parsing natively, making this step more straightforward (see the example right after this list).
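To see what this enables, here's a small hypothetical route (not something we add to app.js in this post; real endpoints come in the next one) that relies on the parsed body:

// Hypothetical example: echo back the JSON body of a POST request.
// Without app.use(express.json()), req.body would be undefined here.
app.post("/echo", (req, res) => {
  res.json(req.body);
});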

4. Setting the Port

const PORT = process.env.PORT || 3000;        

  • We define a constant PORT and assign it a value.
  • The value is taken from the environment variable PORT (e.g., set by the hosting service).
  • If no environment variable is found, we use port 3000 as the default.
  • This approach makes our app more flexible and easier to deploy in different environments.

5. Starting the Server

app.listen(PORT, () => { 
   console.log(`Server is running on port ${PORT}`);
});        

  • This tells Express to start listening for incoming requests on the specified PORT.
  • When the server is successfully running, an arrow function (() => {}) is executed, printing a message to the console:

Server is running on port 3000

  • This log message helps us confirm that our server has started correctly.

Summary

This simple Express setup does the following (the complete file is shown right after this list):

  • Imports the Express framework
  • Initializes an Express application
  • Enables automatic JSON request parsing
  • Sets a flexible port number
  • Starts the server and logs its status
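Putting those pieces together, the complete app.js is simply the snippets above in order:

// Import the Express framework
const express = require("express");

// Create an instance of an Express application
const app = express();

// Automatically parse JSON bodies of incoming requests
app.use(express.json());

// Use the PORT environment variable if set, otherwise default to 3000
const PORT = process.env.PORT || 3000;

// Start listening for incoming requests and log a confirmation message
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});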

Running the Node.js Server with Docker

Now that we’ve set up our Dockerfile and Node.js application, it's time to run the server inside a Docker container.

Step 1: Build the Docker Image

Open a terminal and navigate to the directory where your Dockerfile and Node.js files are located. Then, run the following command:

docker build --tag service-products .        

This command builds a Docker image using the Dockerfile and assigns it a tag name (service-products).

Step 2: Verify the Built Image

To check all the available images in Docker, use:

docker image ls        

This will display a list of all locally stored Docker images.

If you no longer need an image (to free up disk space), you can remove it using:

docker image rmi [IMAGE ID]        

Replace [IMAGE ID] with the actual ID of the image you want to remove. If you encounter issues removing an image, try using the --force flag:

docker image rmi --force [IMAGE ID]        

Step 3: Run the Docker Container

Now, let’s run our container using:

docker run service-products        

Once the container starts, you should see the following output in the terminal:

Server is running on port 3000        

At this point, your Node.js server is running inside a Docker container. It’s not handling any requests yet—that’s something we’ll cover in the second post of this series.

Step 4: Stopping the Container

To stop the server, press Ctrl + C in the terminal (the same shortcut applies on macOS).

However, the container may still be running in the background. To check running containers, use:

docker container ls        


This will list all active containers. To stop a specific container, run:

docker stop [CONTAINER ID]        

Replace [CONTAINER ID] with the actual ID of the container. Then, remove it using:

docker container rm [CONTAINER ID]        

Step 5: Clean Up

Since we’ll be using Docker Compose in the next section, we no longer need this standalone image. You can remove it by running:

docker image rmi service-products --force        

Before moving on to Docker Compose, you can now commit the Dockerfile, package.json, and app.js files to your local repository.

Managing Multi-Container Applications with Docker Compose

Modern applications often consist of multiple services working in tandem. For instance, a web application might include a frontend service, a backend API, and a database. Managing these interconnected services is where Docker Compose comes into play.

Docker Compose is a tool that allows developers to define and manage multi-container Docker applications. If you tried to manage multiple containers manually without Docker Compose, you’d have to start each one individually, configure their networks, and set up shared volumes to ensure they work together properly. This process can quickly become tedious and error-prone.

The best workaround would probably be writing a bash script to automate container startup, but even that depends on your operating system and can get messy.

Docker Compose simplifies this by providing a standardized YAML configuration file (docker-compose.yml), where you can define multiple containers, their dependencies, network settings, and storage configurations all in one place. This makes it much easier to spin up an entire multi-container environment with a single command.

Time to Practice: Creating a docker-compose.yml File

Let's set up docker-compose.yml to manage our Docker containers efficiently. In the base directory of your project, create the file and open it in your editor.

YAML (which officially stands for "YAML Ain't Markup Language") follows an indentation-based structure. Each section is defined using a name followed by a colon (:), and all indented lines beneath it belong to that section.


We will define two main sections in our docker-compose.yml file:

  1. Networks – We'll create a network called local-network, where all our containers will communicate.
  2. Services – This will include:


mysql: A MySQL database container that our service will interact with. We'll work with this in the third post of this series. For the moment, we're just defining it. Anyway, let's take a closer look at the properties of the MySQL service in our docker-compose.yml file:

  1. image: "mysql:5.7". Unlike our products service, which uses a Dockerfile, this property directly specifies an image from Docker Hub. In this case, mysql:5.7 tells Docker Compose to pull and use MySQL version 5.7 from the public Docker Hub repository.
  2. environment: This section allows us to define environment variables for the MySQL container, such as MYSQL_ROOT_PASSWORD, which sets the root password, and MYSQL_DATABASE, which defines the initial database to be created.
  3. networks: this ensures the MySQL container operates within our predefined local-network, allowing seamless communication with other services (such as our Node.js products service).
  4. expose vs ports:

  • expose: [3306]: This makes port 3306 available only to other services within the same Docker network (i.e., local-network).
  • ports: ["3306:3306"]: This explicitly maps port 3306 from the Docker container to the host machine, allowing external access using that port number.



products: Our Node.js microservice.

The products service in our docker-compose.yml file contains several interesting properties that help define how our Node.js microservice interacts with Docker. Let's break them down:

  • build Property Instead of pulling an existing image from the public Docker Hub, we use the build property to specify a path where Docker can find a Dockerfile to build the image.
  • depends_on Property This property ensures that the products service only starts after the mysql service is up and running. However, note that depends_on only ensures container startup order; it does not guarantee that MySQL is fully ready to accept connections. Alternatives include using a health check or a wait-for-it script if needed. We'll implement those approaches for some services in a future post.
  • volumes Property Volumes allow us to sync files between our host machine and the container. This eliminates the need to rebuild the container every time we modify code. Example:

First volume:

./services/products:/app:cached

Maps our local services/products directory to /app inside the container. Any changes made locally are instantly reflected inside the container.

Second volume:

/app/node_modules

This creates an anonymous volume that keeps node_modules out of the sync with the host machine. Since dependencies are installed inside the container, we don’t want the host’s (possibly empty) node_modules folder to overwrite them, and we don’t need to see them on our local filesystem.
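Putting all of the above together, here's a sketch of the complete docker-compose.yml (reconstructed from the properties described in this section; the Compose file version, root password, and database name are illustrative values to adapt to your own project, and the products service is attached to local-network so both containers can talk to each other):

version: "3.8"   # may be optional depending on your Docker Compose version

networks:
  local-network:

services:
  mysql:
    image: "mysql:5.7"
    environment:
      MYSQL_ROOT_PASSWORD: example-password   # placeholder value
      MYSQL_DATABASE: products_db             # placeholder value
    networks:
      - local-network
    expose:
      - "3306"
    ports:
      - "3306:3306"

  products:
    build: ./services/products
    depends_on:
      - mysql
    networks:
      - local-network
    volumes:
      - ./services/products:/app:cached
      - /app/node_modules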



That’s it for the Docker Compose configuration! Now, you can use the command:

docker-compose build        

to build the images and

docker-compose up        

to start the containers.

Go ahead and commit your latest updates, then push them to the remote repository. Now is a good time to merge your branch into the develop branch.
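If you'd like a starting point for that, the commands might look something like this (a sketch assuming the branch names used earlier in this post and a remote named origin):

git checkout develop        # switch to the develop branch
git merge dev_docker        # bring the Docker feature branch into develop
git push origin develop     # push the updated develop branch to the remote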

In the next post, we’ll:

  • Dive deeper into Node.js and build a complete Mock API.
  • Use Postman to manually test our API endpoints.
