Node.js Dev series (1 of 6)
Tackling a new challenge is thrilling—it’s what developers live for, after all. On the other hand, starting from scratch comes with the maddening realization that you don’t even know what you don’t know. So here I am, writing this. Think of it as a project journal, written with the goal of not being overly technical, so it can help beginners as much as seasoned developers.
The topic I’ve chosen is Node.js backend development, a field I’ve been focused on heavily in recent years. So let’s dive into developing our Node.js microservices project.
Getting Started with Our Node.js Microservices Project
Before we start coding, you’ll need to install a few essential tools: Git, Visual Studio Code, and Docker (Docker Desktop on Windows and macOS).
You might think this is a lot, but in reality, it’s a minimal setup. Thanks to Docker (which we’ll cover in detail later), we can skip the hassle of manually installing and configuring many additional tools and platforms.
Create a Git repository: we’re going to set up our development project there and run a Node.js server inside a Docker container.
First Things First: Setting Up Branches
If you haven’t already created a develop branch, now’s the time to do it. Branch out from main (or master, depending on what your first branch is called):
git checkout -b develop main
Next, create a feature branch from develop—let’s call it dev_docker, but feel free to choose any meaningful name that makes sense to you. The format of the branch name isn’t as important as its purpose—this branch is where we’ll be working with Docker.
git checkout -b dev_docker develop
Note: if you’re not comfortable working with Git, let me know in a message. I’ll do my best to write a post about version control and Git workflow.
Configuring Visual Studio Code for Node.js Development
Now, let's set up Visual Studio Code for our Node.js development environment.
1. Open Visual Studio Code.
2. Navigate to the File menu and select "Open Folder".
3. Choose the folder where your project repository is located.
Installing Prettier: A Must-Have Extension for Visual Studio Code
Alright, time to install our first extension in Visual Studio Code!
Wait, didn’t I say that Docker eliminates the need for "weird" installations? True, for most cases, but this extension is totally worth installing.
Here’s the one we’re going to install: Prettier - Code Formatter.
As developers, we all know how important it is to maintain code readability. Clean, consistent formatting makes it easier to understand and work with code, whether it’s yours or someone else’s. However, maintaining that consistency can be... challenging.
It’s not just a matter of discipline—it’s genuinely hard to stay on top of formatting when you’re focused on solving problems or meeting deadlines. Fortunately, if you configure it correctly, Prettier reformats your code to a consistent style every time you save a file, automatically.
This saves you time and mental effort, so you can focus on actual coding instead of worrying about formatting. When working in a team, Prettier also keeps formatting uniform across everyone’s contributions, which makes diffs and code reviews much cleaner.
To get Prettier up and running, you need to enable it and set it as the default formatter for your workspace. Here's how:
1. In Visual Studio Code, go to the File menu.
2. Select Preferences → Settings.
3. In the Settings tab, make sure to switch to the Workspace scope instead of User so the extension applies only to your project.
4. Navigate to Text Editor and scroll down until you find the Default Formatter option.
5. Select Prettier as your default formatter.
A new folder called .vscode should now be created in your project directory. Inside it, you'll find a file named settings.json with the formatter configuration.
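Here's a minimal sketch of what that settings.json typically contains. The esbenp.prettier-vscode identifier is Prettier's extension ID, and editor.formatOnSave is an optional extra I'm assuming here so formatting actually runs on every save:

{
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.formatOnSave": true
}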
You can go ahead and commit this file to your repository. Doing so ensures that anyone working on this project in the future will automatically use Prettier as their default formatter, maintaining code consistency across the team.
Anyway, with that in place, we’re ready to start setting up our Node.js server inside a Docker container.
Streamlining Local Development with Docker
In the realm of software development, ensuring that applications run consistently across various environments can be a significant challenge. Docker is a platform designed to simplify the development, shipping, and deployment of applications by creating consistent environments. Developing an application often requires installing multiple dependencies, such as libraries, frameworks, and specific configurations needed to run the application. Managing these dependencies manually can be time-consuming and error-prone, but Docker simplifies this process using Docker images.
A Docker image is a read-only template that contains all the instructions needed to set up an application, including its dependencies and configurations. These images are created using a Dockerfile, a simple yet powerful script that defines how the image should be built.
Once an image is created, it can be stored in a registry, shared, and reused across different environments. In fact, there are many preconfigured images available on Docker Hub for a wide range of purposes, such as web servers, database servers, and development environments. We’ll be leveraging some of these in our project to streamline our setup.
You can instruct Docker to build an image from a Dockerfile using the docker build command, pointing it at the directory that contains the Dockerfile (the build context):
docker build .
When you run this command, Docker pulls any base images it needs, executes the instructions in the Dockerfile, and caches the resulting layers for future builds, making it much easier to manage your application's environment.
Now, let's talk about Docker containers. A container is essentially a running instance of a Docker image.
When you execute an image, it becomes a container. In simple terms, the command:
docker run
creates and starts a container from a Docker image. This command takes the image and runs it as an isolated environment, allowing you to interact with the application within the container.
Containers are isolated environments, each with its own filesystem, memory, and process space. Think of a container as similar to a virtual machine, but unlike VMs, containers share the host operating system's kernel. This makes them much more lightweight and efficient in terms of performance.
This isolation is one of Docker's key benefits: it ensures that applications behave consistently, regardless of where they're deployed—whether it's on a developer's laptop, in a testing environment, or in production. For organizations, this consistency is a major selling point.
For our use case, containers help us avoid the hassle of installing numerous dependencies directly on our personal computers, allowing us to work in a clean, controlled environment without cluttering up our system.
Creating a Docker Image
Let's jump into a practical example. Inside your project directory, create a folder called services—this will be the home for all our microservices.
Next, inside services, create a subfolder named products. This will be the first microservice we develop.
Within the products folder, create a file named Dockerfile (not a folder). This file will define the image for our Node.js server, specifying all the necessary dependencies and configurations to run the service inside a container.
Explaining the Dockerfile
Let's break down what each line in our Dockerfile does:
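Here's a minimal sketch of what this Dockerfile might look like, with a comment over each instruction; the node:18 base image tag is an assumption:

# Base image: an official Node.js image from Docker Hub (tag is an assumption)
FROM node:18

# Run all subsequent commands inside /app in the image
WORKDIR /app

# Copy the dependency manifest first so the install layer can be cached
COPY package.json ./

# Install the dependencies declared in package.json
RUN npm install

# Copy the rest of the application source code into the image
COPY . .

# Document that the server listens on port 3000
EXPOSE 3000

# Start the Node.js server when a container is launched from this image
CMD ["node", "app.js"]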
Before we can actually run anything, we need to create the necessary Node.js code files and a package.json file. That’s exactly what we’ll be doing next. Create the package.json file in the same products folder where you created the Dockerfile.
Understanding package.json
package.json is the configuration file for a Node.js application. It defines metadata, dependencies, and scripts required to run the project.
Unlike a JavaScript object literal, package.json is a pure JSON file. This means it must follow strict JSON formatting (e.g., using double quotes for keys and values where necessary).
Key Fields in package.json
1. name & version: These two fields are required. If the package is ever published, name and version together act as a unique identifier in the npm registry.
2. main: This field specifies the entry point of the application. By default, it’s often index.js or app.js. While we aren't explicitly using it in this setup, it helps in defining which file should be executed when the package is imported elsewhere.
3. scripts: Although we’re not including custom scripts here, this field allows defining multiple execution commands for different environments. For example, you could define separate scripts for development, testing, and production:
"scripts": {
  "start": "node app.js",
  "dev": "nodemon app.js",
  "test": "jest"
}
4. dependencies: This section lists all the external libraries that our project requires. In this case, we’re adding Express, the web framework our server is built on.
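Putting these fields together, a minimal package.json for our products service might look like the following; the name, version, and Express version range are assumptions:

{
  "name": "service-products",
  "version": "1.0.0",
  "main": "app.js",
  "dependencies": {
    "express": "^4.18.2"
  }
}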
Starting Script of Our Node.js (Sort of) "Hello World" App
Let's work on the starting script of our Node.js application. Create a file called app.js in the same products folder.
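Here's the complete file, assembled from the pieces we'll walk through next:

// Import the Express framework
const express = require("express");

// Create the Express application
const app = express();

// Parse incoming JSON request bodies
app.use(express.json());

// Use the PORT environment variable if set, otherwise default to 3000
const PORT = process.env.PORT || 3000;

// Start listening for connections and log a message once the server is up
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});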
Let's break down the key components of our app.js file step by step:
1. Requiring Dependencies
const express = require("express");
This imports the Express framework, the only external dependency we need for now.
2. Initializing the Express Application
const app = express();
Calling express() creates the application object we’ll use to configure and start our server.
3. Enabling JSON Parsing
app.use(express.json());
This built-in middleware parses incoming JSON request bodies so handlers can read them from req.body.
4. Setting the Port
const PORT = process.env.PORT || 3000;
The server uses the PORT environment variable if one is set and falls back to 3000 otherwise.
5. Starting the Server
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
app.listen starts the HTTP server. Once it’s up, you’ll see this message in the console:
Server is running on port 3000
Summary
This simple Express setup does the following: it loads Express, enables JSON parsing for incoming requests, and starts an HTTP server listening on port 3000 (or whatever the PORT environment variable specifies).
Running the Node.js Server with Docker
Now that we’ve set up our Dockerfile and Node.js application, it's time to run the server inside a Docker container.
Step 1: Build the Docker Image
Open a terminal and navigate to the directory where your Dockerfile and Node.js files are located. Then, run the following command:
docker build --tag service-products .
This command builds a Docker image using the Dockerfile in the current directory (that's what the trailing dot means) and assigns it the tag service-products.
Step 2: Verify the Built Image
To check all the available images in Docker, use:
docker image ls
This will display a list of all locally stored Docker images.
If you no longer need an image (to free up disk space), you can remove it using:
docker image rmi [IMAGE ID]
Replace [IMAGE ID] with the actual ID of the image you want to remove. If you encounter issues removing an image, try using the --force flag:
docker image rmi --force [IMAGE ID]
Step 3: Run the Docker Container
Now, let’s run our container using:
docker run service-products
Once the container starts, you should see the following output in the terminal:
Server is running on port 3000
At this point, your Node.js server is running inside a Docker container. It’s not handling any requests yet—that’s something we’ll cover in the second post of this series.
Step 4: Stopping the Container
To stop the server, press Ctrl + C in the terminal (this is Ctrl, not Command, on macOS as well).
However, the container may still be running in the background. To check running containers, use:
docker container ls
This will list all active containers. To stop a specific container, run:
docker stop [CONTAINER ID]
Replace [CONTAINER ID] with the actual ID of the container. Then, remove it using:
docker container rm [CONTAINER ID]
Step 5: Clean Up
Since we’ll be using Docker Compose in the next section, we no longer need this standalone image. You can remove it by running:
docker image rmi service-products --force
Before moving on to Docker Compose, you can now commit the Dockerfile, package.json, and app.js files to your local repository.
Managing Multi-Container Applications with Docker Compose
Modern applications often consist of multiple services working in tandem. For instance, a web application might include a frontend service, a backend API, and a database. Managing these interconnected services is where Docker Compose comes into play.
Docker Compose is a tool that allows developers to define and manage multi-container Docker applications. If you tried to manage multiple containers manually without Docker Compose, you’d have to start each one individually, configure their networks, and set up shared volumes to ensure they work together properly. This process can quickly become tedious and error-prone.
The best workaround would probably be writing a bash script to automate container startup, but even that depends on your operating system and can get messy.
Docker Compose simplifies this by providing a standardized YAML configuration file (docker-compose.yml), where you can define multiple containers, their dependencies, network settings, and storage configurations all in one place. This makes it much easier to spin up an entire multi-container environment with a single command.
Time to Practice: Creating a docker-compose.yml File
Let's set up docker-compose.yml to manage our Docker containers efficiently. In the base directory of your project, create the file and open it in your editor.
YAML (officially "YAML Ain't Markup Language") follows an indentation-based structure. Each section is defined using a name followed by a colon (:), and all indented lines beneath it belong to that section.
We will define two services in our docker-compose.yml file:
mysql: A MySQL database container that our service will interact with. We'll work with this in the third post of this series; for the moment, we're just defining it. Its properties (shown in the sketch below) are mainly the image to pull and the environment variables that configure the database.
products: Our Node.js microservice.
The products service in our docker-compose.yml file contains several interesting properties that help define how our Node.js microservice interacts with Docker. Let's break them down:
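To anchor the discussion, here's a minimal sketch of the whole file; the MySQL image tag, the placeholder credentials, and the host port mapping are assumptions for local development:

services:
  mysql:
    # Official MySQL image from Docker Hub (tag is an assumption)
    image: mysql:8
    environment:
      # Placeholder credentials for local development only
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: products
  products:
    # Build the Node.js image from the Dockerfile in services/products
    build: ./services/products
    ports:
      # Map the container's port 3000 to the same port on the host
      - "3000:3000"
    volumes:
      # Sync local source code into the container
      - ./services/products:/app:cached
      # Keep node_modules inside the container only
      - /app/node_modules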
First volume:
./services/products:/app:cached
This maps our local services/products directory to /app inside the container, so any changes made locally are instantly reflected inside the container.
Second volume:
/app/node_modules
This excludes the node_modules folder from being synced with the host machine. Since dependencies are installed inside the container, we don’t need to see them on our local filesystem.
That’s it for the Docker Compose configuration! Now, you can use the command:
docker-compose build
to build the images and
docker-compose up
to start the containers. (On recent Docker installations, Compose is built in, so docker compose build and docker compose up, with a space, work as well.)
Go ahead and commit your latest updates, then push them to the remote repository. Now is a good time to merge your branch into the develop branch.
In the next post, we’ll start handling HTTP requests in our products microservice.