Node.js Dev series (5 of 6)

In our last session, we built a microservice that interacts with MongoDB. This time, we’re shifting our focus to another NoSQL database: Amazon DynamoDB.

As always, I remind you that if you want to check out the full GitHub repository for this series, you can find it here: GitHub Repo.

What is DynamoDB?

DynamoDB is a fully managed, serverless NoSQL database service provided by Amazon Web Services (AWS). It’s designed for high-performance, scalable applications that require fast transactions and real-time data retrieval.

Key Characteristics of DynamoDB

  • Optimized for Speed & Scalability
      ◦ DynamoDB excels at single-record lookups and updates.
      ◦ It’s ideal for use cases requiring high-speed transactions at scale.
  • Not Designed for Complex Data Analysis
      ◦ Unlike relational databases or MongoDB, DynamoDB isn't optimized for analytics.
      ◦ It lacks powerful querying and aggregation features.
  • Amazon’s Recommended Approach: Single-Table Design
      ◦ Amazon encourages using a single table for an entire application.
      ◦ Instead of multiple relational tables, data is structured hierarchically using partition keys and sort keys.



Setting Up the Clients microservice with DynamoDB

In the previous post, we built a Suppliers service backed by MongoDB. We'll follow a similar approach here, this time building a Clients service that interacts with Amazon DynamoDB.


1. Setting Up the Folder

  • Navigate to the services directory.
  • Create a new folder called clients.
  • Copy the contents of the suppliers folder into clients.
  • We'll modify only what’s necessary instead of writing everything from scratch.

2. Updating package.json

Modify services/clients/package.json to reflect the new service:


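Here's a minimal sketch of what the updated package.json could look like. The versions and scripts are assumptions; the important points are the new service name and swapping the MongoDB driver for the AWS SDK:

```json
{
  "name": "clients-service",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "aws-sdk": "^2.1500.0",
    "express": "^4.18.2"
  }
}
```
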
Changes made:

  • Updated the name to clients-service.
  • Replaced the MongoDB dependencies with the AWS SDK to work with DynamoDB.

3. Configuring DynamoDB Connection

Modify services/clients/config.js with the following:


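A minimal sketch of what config.js could contain. The region, table name, and credential values below are assumptions; the only hard requirement is that the endpoint matches the DynamoDB Local container we'll define in docker-compose later:

```js
// services/clients/config.js
module.exports = {
  region: 'us-east-1',                     // any valid region works with DynamoDB Local
  tableName: 'Clients',                    // the table our routes will read from and write to
  accessKeyId: 'fake-access-key',          // dummy credentials: never commit real ones
  secretAccessKey: 'fake-secret-key',
  endpoint: 'http://dynamodb-local:8000'   // the Local DynamoDB container we'll run in Docker
};
```
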
What’s happening here?

  • region: Defines the AWS region. An AWS region is a geographically distinct data center location where Amazon Web Services (AWS) hosts its infrastructure. Each region consists of multiple, isolated Availability Zones (AZs) to ensure redundancy and high availability.

  • tableName: Specifies the DynamoDB table we will interact with.

  • accessKeyId & secretAccessKey: Dummy credentials for local development (we won’t use real AWS credentials). If you want to understand how a real AWS application manages credentials securely, look into AWS Identity and Access Management (IAM). IAM is a critical service that controls access to AWS resources. However, since this course is meant to cover the basics, I won’t be diving deep into IAM, as it is a more AWS-specific topic.

  • endpoint: Points to our Local DynamoDB instance, which we’ll set up in Docker later.

4. Setting Up the Database Connection

Now that we’ve configured DynamoDB in config.js, it's time to set up the actual connection and update our main application file. Navigate to services/clients/models/connection.js, remove all the existing code, and replace it with:


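Here's a sketch of that connection file, assuming the config module shown above:

```js
// services/clients/models/connection.js
const AWS = require('aws-sdk');
const config = require('../config');

// Show the configuration being loaded (handy while developing locally).
console.log('Dynamo DB config', config);

// DocumentClient lets us work with plain JavaScript objects instead of raw DynamoDB attribute maps.
const dynamodb = new AWS.DynamoDB.DocumentClient({
  region: config.region,
  endpoint: config.endpoint,
  accessKeyId: config.accessKeyId,
  secretAccessKey: config.secretAccessKey
});

module.exports = dynamodb;
```
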
What's going on here?

  • The log statement "Dynamo DB config" shows the configuration being loaded.
  • DocumentClient is then initialized.
  • dynamodb is exported for use in other parts of the service.

Note: What about logs? Logging is essential for debugging, monitoring, and maintaining a healthy production system. Any serious application should have a robust logging strategy, with tools that capture and analyze logs in real-time. Proper logging includes different log levels (e.g., debug, info, warning, error) and environment-specific logs to distinguish between development, staging, and production data. Tracking logs effectively can help identify issues, improve system performance, and even prevent failures before they happen.

Modern logging tools like Datadog or AWS CloudWatch are widely used in production environments to centralize logs and provide valuable insights.

Unfortunately, since this is a basic course, we won’t cover logging in depth, but I strongly encourage you to explore it on your own—it’s a skill that will save you a lot of headaches down the road.

5. Updating app.js in services/clients

Now, navigate to services/clients/app.js and replace everything with this:


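A sketch of the resulting app.js, assuming the same structure as the suppliers service with names and paths swapped:

```js
// services/clients/app.js
const express = require('express');
const clientsRouter = require('./routes/clients');

const app = express();
const port = process.env.PORT || 5002;

app.use(express.json());            // parse JSON request bodies
app.use('/clients', clientsRouter); // mount the clients routes

app.listen(port, () => {
  console.log(`Clients service listening on port ${port}`);
});
```
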
We are updating all names and paths from supplier to client.

6. Setting up routes

Before touching any code, rename the file in the routes folder from suppliers.js to clients.js. Then you can proceed with updating the routing code. Let's start with the declarations in routes/clients.js:


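The declarations might look roughly like this (the file layout mirrors the suppliers service):

```js
// services/clients/routes/clients.js
const express = require('express');
const crypto = require('crypto');             // used to generate random UUIDs for client IDs

const config = require('../config');          // table name, region, endpoint, credentials
const dynamodb = require('../models/connection');

const router = express.Router();
```
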
Here, we import the necessary modules for setting up our Express routes. We also require the DynamoDB configuration and connection.

Additionally, we use the crypto module to generate random UUIDs, which serve as unique string-based IDs for documents in the DynamoDB table.

Helper Function

Lastly, we define the helper function mapValidKeysToString. This function processes object keys and maps them to the correct update format required by DynamoDB. Since "name" is a reserved keyword in DynamoDB, the function prepends # to it, ensuring it can still be used as a field name. You might have noticed that I use helper functions in every post. That's intentional! My goal is to help you get comfortable with modern JavaScript syntax. By using arrow functions, string interpolation, and powerful array methods like map(), filter(), and join(), you can make your code more concise and efficient.
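One plausible implementation of that helper (the exact filtering details are an assumption):

```js
// Builds the SET portion of a DynamoDB UpdateExpression from an object's keys.
// "name" is a reserved word in DynamoDB, so it is referenced through the "#name" alias.
const mapValidKeysToString = (obj) =>
  Object.keys(obj)
    .filter((key) => obj[key] !== undefined)
    .map((key) => (key === 'name' ? '#name = :name' : `${key} = :${key}`))
    .join(', ');
```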

6.1. Creating a New Client (POST /clients/)


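A sketch of the handler, using async/await with the SDK's .promise() interface (the error payload shape is an assumption):

```js
router.post('/', async (req, res) => {
  try {
    const { name, email } = req.body;
    const id = `ID|${crypto.randomUUID()}`;   // unique, prefixed string ID

    const params = {
      TableName: config.tableName,
      Item: { id, name, email }
    };

    await dynamodb.put(params).promise();     // store the new client
    res.status(201).json(params.Item);        // 201 Created with the new client data
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```
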
What does this do?

  • This endpoint creates a new client in the DynamoDB table.
  • It extracts name and email from the request body.
  • A unique id is generated using crypto.randomUUID() and prefixed with "ID|".
  • A params object is created, specifying the DynamoDB table name and the client data to store.
  • The dynamodb.put() method is used to store the new client in the table.
  • If successful, it responds with 201 Created and the newly created client data.
  • If an error occurs, it sends a 500 Internal Server Error response.


DynamoDB design patterns

If you're not experienced with DynamoDB, you might not have noticed that I'm only using a Partition Key (PK) here, even though it's very common to use both Partition Keys and Sort Keys (SKs) in DynamoDB. I'm keeping it simple for now, but it's important to understand that PKs and SKs together allow for more efficient querying and data organization. In DynamoDB's Single Table Design (STD), you structure your Primary Key (PK) and Sort Key (SK) combinations to support multiple entity types and query patterns efficiently. The goal is to store different types of data in a single table while ensuring quick access based on your access patterns.

PK            | SK
--------------|--------------------------
USER|12345    | COMMAND#67890
USER|12345    | COMMAND#67891
COMMAND|67890 | ITEM#001
COMMAND|67890 | ITEM#002
CLIENT|98765  | ORDER#2024-03-01#12345
CLIENT|98765  | ORDER#2024-03-02#67890

More than with any other database, the first thing you need to consider with DynamoDB is your access patterns. Unlike relational databases, where you design your schema first and then optimize queries, DynamoDB requires you to design your table around how you will query the data from the start.

Once you define your access patterns, a common best practice is to use entity type prefixes (e.g., USER| or USER#) in your Partition Keys (PKs) and Sort Keys (SKs). This helps keep your data well-structured and easily queryable.


6.2. Retrieving All Clients (GET /clients/)


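A sketch of the handler:

```js
router.get('/', async (req, res) => {
  try {
    const params = { TableName: config.tableName };
    const result = await dynamodb.scan(params).promise();  // reads every item in the table
    res.json(result.Items);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```
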
What's happening here?

  • This endpoint fetches all clients from the DynamoDB table.
  • The params object specifies the table name.
  • The dynamodb.scan() method retrieves all records.
  • If successful, it responds with the list of clients.
  • If an error occurs, it sends a 500 Internal Server Error response.

Note: You shouldn't rely too much on DynamoDB Scan operations in real-world applications. Scan retrieves every item in a table, making it inefficient and costly, especially as your dataset grows.

DynamoDB is designed for fast, targeted reads of single records or small batches of records, leveraging Partition Keys (PKs) and Sort Keys (SKs) for efficient lookups. This is why I said defining your access patterns upfront is crucial.

6.3. Retrieving a Single Client by ID (GET /clients/:id)

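A sketch of the handler:

```js
router.get('/:id', async (req, res) => {
  try {
    const params = {
      TableName: config.tableName,
      Key: { id: req.params.id }              // partition key lookup
    };
    const result = await dynamodb.get(params).promise();
    if (!result.Item) {
      return res.status(404).json({ error: 'Client not found' });
    }
    res.json(result.Item);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```
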
What's going on?

  • This endpoint retrieves a specific client by its unique ID.
  • The params object includes the table name and the client ID as the primary key.
  • The dynamodb.get() method is used to fetch the client from the table.
  • If the client is found, it responds with the client data.
  • If the client does not exist, it sends a 404 Not Found response.
  • If an error occurs, it returns a 500 Internal Server Error response.

6.4. Updating a Client by ID (PUT /clients/:id)

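A sketch of the handler, reusing the mapValidKeysToString helper defined earlier. The exact expression-attribute shapes are assumptions, and the sketch assumes both fields are present in the request body:

```js
router.put('/:id', async (req, res) => {
  try {
    const { name, email } = req.body;

    const params = {
      TableName: config.tableName,
      Key: { id: req.params.id },
      UpdateExpression: `SET ${mapValidKeysToString({ name, email })}`,
      ExpressionAttributeNames: { '#name': 'name' },                 // alias for the reserved word "name"
      ExpressionAttributeValues: { ':name': name, ':email': email },
      ReturnValues: 'ALL_NEW'                                        // return the item as it looks after the update
    };

    const result = await dynamodb.update(params).promise();
    res.json(result.Attributes);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```
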
What does this do?

  • This endpoint updates an existing client based on its ID.
  • It extracts the id, name, and email from the request.
  • The UpdateExpression dynamically updates the fields while ensuring "name" (a reserved word in DynamoDB) is handled correctly.
  • The dynamodb.update() method performs the update operation.
  • If successful, it returns the updated client data.
  • If an error occurs, it sends a 500 Internal Server Error response.

6.5. Deleting a Client by ID (DELETE /clients/:id)

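A sketch of the handler; the module.exports line at the end closes the routes file:

```js
router.delete('/:id', async (req, res) => {
  try {
    const params = {
      TableName: config.tableName,
      Key: { id: req.params.id }
    };
    await dynamodb.delete(params).promise();
    res.status(204).end();                    // 204 No Content on success
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;
```
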
What's happening here?

  • This endpoint deletes a client by its ID.
  • The params object specifies the table name and the ID of the client to delete.
  • The dynamodb.delete() method removes the item from the database.
  • If the deletion is successful, it returns a 204 No Content response.
  • If an error occurs, it sends a 500 Internal Server Error response.

7. Set up the database fixtures

Now that the code is complete, we need to create the database fixtures.

7.1. Define the Database Schema

  1. Navigate to the project root and go to the setup directory.
  2. Inside setup, create a new subfolder called dynamodb.
  3. Inside dynamodb, create another subfolder called schemas.
  4. In the schemas folder, create a file named Clients.json and add the following content:


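The schema file should look roughly like this; it matches the points described below:

```json
{
  "TableName": "Clients",
  "KeySchema": [
    { "AttributeName": "id", "KeyType": "HASH" }
  ],
  "AttributeDefinitions": [
    { "AttributeName": "id", "AttributeType": "S" }
  ],
  "ProvisionedThroughput": {
    "ReadCapacityUnits": 5,
    "WriteCapacityUnits": 5
  }
}
```
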
  • This defines a DynamoDB table called Clients.
  • The partition key (id) is set as the HASH key, meaning it uniquely identifies each client.
  • The AttributeDefinitions specify that id is a string (S).
  • The ProvisionedThroughput section sets up the read and write capacity, allowing 5 read and 5 write operations per second. You don’t need to worry too much about Provisioned Throughput when working in a local development environment, as it doesn’t impact performance there. However, in production, it's crucial to monitor your database usage and adjust throughput settings accordingly. If you're using on-demand capacity mode, DynamoDB automatically scales based on traffic, but this can lead to unpredictable costs. AWS can be quite expensive if you're not careful.

7.2. Insert a Sample Document

Now, let's add a test document to the database.

  1. Inside setup/dynamodb, create a new subfolder called data.
  2. Inside data, create a file named Clients.json with the following content:

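A sketch of that file; the name and email values are placeholders, while the id matches the test ID we'll use later in Postman:

```json
{
  "id": { "S": "ID|675c2056919dd884bfe9496a" },
  "name": { "S": "Test Client" },
  "email": { "S": "test.client@example.com" }
}
```
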
What does this do?

  • This is a sample client entry that will be inserted into the Clients table.
  • The id follows the format "ID|<UUID>", which aligns with how we generate unique IDs in our application.
  • The name and email fields are stored as string attributes (S), following DynamoDB's required format.

8. Setting Up Docker Compose

Before testing in Postman, we need to set up our Docker environment.

  1. Open the docker-compose.yml file.
  2. Add the following services to the configuration:


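A sketch of the DynamoDB Local service definition; the exact port mapping and network name are assumptions based on how the rest of the compose file is described:

```yaml
  dynamodb-local:
    image: amazon/dynamodb-local
    user: root                      # run as root to avoid permission issues
    ports:
      - "8000:8000"
    healthcheck:
      # DynamoDB Local answers an empty request with HTTP 400, so a 400 means the service is up.
      test: ["CMD-SHELL", "curl -s -o /dev/null -w '%{http_code}' http://localhost:8000 | grep -q 400"]
      interval: 10s
      timeout: 10s
      retries: 10
    networks:
      - local-network
```
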
Explanation of the Configuration

  • Uses the amazon/dynamodb-local image, which runs a local version of Amazon DynamoDB.
  • Runs as root to avoid potential permission issues.
  • Includes a health check: it uses curl to check whether the service responds with HTTP 400 (which is expected for an empty request), runs every 10 seconds with a 10-second timeout, and retries up to 10 times.


Docker healthcheck

Oh ho ho! What’s this healthcheck thing? First time we’re using it? Well, I thought it was about time to introduce an important Docker concept: healthchecks.

In Docker Compose, the healthcheck tag allows us to define a way to monitor whether a service inside a container is running correctly. Instead of just assuming a service is ready as soon as the container starts, Docker can perform periodic checks to confirm it's actually healthy and operational. When you define a healthcheck, you specify a command that runs inside the container at regular intervals. The result of this command determines the container's health status:

  • "healthy" – The service is working as expected.
  • "unhealthy" – The service is not responding correctly.
  • "starting" – The service is still in its startup phase.


How are we going to leverage the healthcheck?

We are going to create a service that checks if DynamoDB is healthy and, once confirmed, executes the data initialization process.

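A sketch of that setup service; the volume paths and environment variable names are assumptions, and the command itself is the one-liner shared a bit further below:

```yaml
  dynamodb-setup:
    image: amazon/aws-cli
    depends_on:
      dynamodb-local:
        condition: service_healthy   # wait until the healthcheck reports "healthy"
    volumes:
      - ./setup/dynamodb/schemas:/tmp/dynamoschemas   # table definitions
      - ./setup/dynamodb/data:/tmp/dynamodata         # preloaded test data
    environment:
      AWS_ACCESS_KEY_ID: fake-access-key              # fake credentials, required by the CLI
      AWS_SECRET_ACCESS_KEY: fake-secret-key
      AWS_DEFAULT_REGION: us-east-1
    entrypoint: ["bash"]
    # command: the full bash one-liner (create tables, then insert data) is shared below
    networks:
      - local-network
```
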
What does this do?

  • Ensures DynamoDB is ready before running by using depends_on with condition: service_healthy.
  • Uses the amazon/aws-cli image to interact with DynamoDB from within Docker using AWS Command Line Interface.
  • Mounts two volumes:

  1. schemas: links the JSON files in setup/dynamodb/schemas that define DynamoDB table structures inside the container.
  2. data: links the JSON files in setup/dynamodb/data with preloaded test data inside the container.

  • Provides fake AWS credentials, as they are required for DynamoDB, even when running locally.
  • Command runs a shell script (bash -c) that:

  1. Creates tables from JSON schema files in /tmp/dynamoschemas/.
  2. Inserts test data from JSON files in /tmp/dynamodata/, using the table name derived from the filename.

The command line is too long to display comfortably, so here it is in full:

'-c "for f in /tmp/dynamoschemas/*.json; do aws dynamodb create-table --endpoint-url "http://dynamodb-local:8000" --cli-input-json file://"$${f#./}"; done && for f in /tmp/dynamodata/*.json; do aws dynamodb put-item --endpoint-url "http://dynamodb-local:8000" --table-name $(basename "$${f%.*}") --item file://"$${f#./}"; done"'

Clients Service definition

The final step before running the application is configuring the clients service in docker-compose.yml.


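A sketch of that service definition; the Dockerfile details, container paths, and supervisor invocation are assumptions based on the explanation that follows:

```yaml
  clients:
    build:
      context: ./services
      args:
        SERVICE: clients
    expose:
      - "5002"
    ports:
      - "5002:5002"
    environment:
      - NODE_ENV=development
      - PORT=5002
    depends_on:
      - dynamodb-local
    volumes:
      - ./services/clients:/app:cached   # mount the source for live reloads
      - /app/node_modules                # keep container-installed dependencies
    command: npx supervisor app.js       # restart the app automatically on changes
    networks:
      - local-network
```
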
If you have been following this series' previous posts, nothing here should surprise you. Still, I'll explain it just in case:

  • build.context: ./services. Tells Docker to build the service from the services directory and passes SERVICE=clients as a build argument.
  • expose and ports: Exposes port 5002 internally and maps it to 5002 externally.
  • environment: Defines environment variables: NODE_ENV=development (sets the environment to development mode) and PORT=5002 (ensures the app runs on port 5002).
  • depends_on: Ensures DynamoDB Local is started before the clients service.
  • volumes: Mounts the clients service directory inside the container with cached mode for performance, and includes /app/node_modules to persist dependencies.
  • command: Uses supervisor to watch for changes and restart the app automatically.
  • networks: Connects the service to local-network, allowing communication with the dynamodb container.

Once everything is configured, you can build and start the services.

9. Testing in Postman

To test the Clients API, you can duplicate the Suppliers API requests but make the following changes:

  • Change the port to 5002, the port we configured for the clients service.
  • Change all occurrences of suppliers to clients in the API endpoints.
  • Update the id when performing GET (by ID), UPDATE, and DELETE requests: use the test ID ID|675c2056919dd884bfe9496a, as defined in the database fixture.
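If you prefer the command line, equivalent checks with curl might look like this (URLs and payloads are assumptions based on the routes above; note the | in the ID must be URL-encoded as %7C):

```bash
# Create a new client
curl -X POST http://localhost:5002/clients/ \
  -H "Content-Type: application/json" \
  -d '{"name": "Jane Doe", "email": "jane@example.com"}'

# Fetch the fixture client by its ID
curl http://localhost:5002/clients/ID%7C675c2056919dd884bfe9496a
```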


With this, we’ve reached the end of this lesson. Next time, we’ll build a microservice for handling file storage using AWS S3. That will be the final post in this series.

I'd love to keep talking about more topics; here are some examples:

  • API Gateway – Managing and securing API requests.
  • Serverless with AWS Lambda – Running microservices without managing infrastructure.
  • Inter-service communication – Exploring techniques like queues, notification systems, and event-driven architectures.
  • Authorization & Authentication – Implementing secure access control using OAuth, JSON Web Tokens (JWT), and other industry best practices.
  • Logging & Monitoring – using logs to identify issues, improve system performance, and even prevent failures before they happen.
  • Design Patterns...
  • GraphQL...

The truth is, I hesitate to continue writing a series unless I’m confident it’s meaningful to a significant number of people. If I find the time in the future, I might write more, but for now, I’ll focus on other projects. However, if any of these topics interest you, let me know!
