Node.js Dev series (5 of 6)
In our last session, we built a microservice that interacts with MongoDB. This time, we’re shifting our focus to another NoSQL database: Amazon DynamoDB.
As always, I remind you that if you want to check out the full GitHub repository for this series, you can find it here: GitHub Repo.
What is DynamoDB?
DynamoDB is a fully managed, serverless NoSQL database service provided by Amazon Web Services (AWS). It’s designed for high-performance, scalable applications that require fast transactions and real-time data retrieval.
Key Characteristics of DynamoDB
- DynamoDB excels at single-record lookups and updates.
- It's ideal for use cases requiring high-speed transactions at scale.
- Unlike relational databases or MongoDB, DynamoDB isn't optimized for analytics.
- It lacks powerful querying and aggregation features.
- Amazon encourages using a single table for an entire application.
- Instead of multiple relational tables, data is structured hierarchically using partition keys and sort keys.
Setting Up the Clients microservice with DynamoDB
In our previous session, we developed a Suppliers service backed by MongoDB. We'll follow the same approach here, this time building a Clients service that interacts with Amazon DynamoDB.
1. Setting Up the Folder
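The original steps aren't shown here, but since the new service mirrors the suppliers one, a reasonable starting point (assuming the suppliers service lives in services/suppliers) is to copy the folder and rename it:

```sh
# Hypothetical starting point: clone the suppliers service as clients
cp -r services/suppliers services/clients
```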
2. Updating package.json
Modify services/clients/package.json to reflect the new service:
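The file itself isn't reproduced here; based on the changes listed below, it would look roughly like this (version numbers are illustrative, and I'm assuming the v3 AWS SDK packages; the original may use the older aws-sdk package instead):

```json
{
  "name": "clients-service",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.2",
    "@aws-sdk/client-dynamodb": "^3.500.0",
    "@aws-sdk/lib-dynamodb": "^3.500.0"
  }
}
```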
Changes made:
- Updated the name to clients-service.
- Replaced the MongoDB dependencies with the AWS SDK to work with DynamoDB.
3. Configuring DynamoDB Connection
Modify services/clients/config.js with the following:
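The original file isn't shown; a minimal sketch matching the fields explained below (every value here is a local-development placeholder) could be:

```js
// config.js - all values are local-development placeholders
module.exports = {
  region: 'us-east-1',                     // any valid region works for DynamoDB Local
  tableName: 'clients',                    // the table this service reads and writes
  accessKeyId: 'fakeAccessKeyId',          // dummy credentials: DynamoDB Local
  secretAccessKey: 'fakeSecretAccessKey',  // accepts anything
  endpoint: 'http://dynamodb-local:8000',  // our local DynamoDB container
};
```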
What’s happening here?
- region: Defines the AWS region. An AWS region is a geographically distinct data center location where Amazon Web Services (AWS) hosts its infrastructure. Each region consists of multiple, isolated Availability Zones (AZs) to ensure redundancy and high availability.
- tableName: Specifies the DynamoDB table we will interact with.
- accessKeyId & secretAccessKey: Dummy credentials for local development (we won't use real AWS credentials). If you want to understand how a real AWS application manages credentials securely, you should look into AWS Identity and Access Management (IAM). IAM is a critical service that controls access to AWS resources. However, since this course is meant to cover the basics, I won't be diving deep into IAM, as it is a more AWS-specific topic.
- endpoint: Points to our local DynamoDB instance (DynamoDB Local), which we'll set up in Docker later.
4. Setting Up the Database Connection
Now that we’ve configured DynamoDB in config.js, it's time to set up the actual connection and update our main application file. Navigate to services/clients/models/connection.js, remove all the existing code, and replace it with:
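The replacement code isn't reproduced here; a minimal sketch, assuming the v3 AWS SDK and its DocumentClient wrapper:

```js
// models/connection.js - a sketch assuming AWS SDK v3
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient } = require('@aws-sdk/lib-dynamodb');
const config = require('../config');

// Low-level client pointed at our local endpoint with dummy credentials
const client = new DynamoDBClient({
  region: config.region,
  endpoint: config.endpoint,
  credentials: {
    accessKeyId: config.accessKeyId,
    secretAccessKey: config.secretAccessKey,
  },
});

// The DocumentClient wrapper lets the routes work with plain JavaScript
// objects instead of DynamoDB's attribute-value format.
module.exports = DynamoDBDocumentClient.from(client);
```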
What's going on here?
Note: What about logs? Logging is essential for debugging, monitoring, and maintaining a healthy production system. Any serious application should have a robust logging strategy, with tools that capture and analyze logs in real-time. Proper logging includes different log levels (e.g., debug, info, warning, error) and environment-specific logs to distinguish between development, staging, and production data. Tracking logs effectively can help identify issues, improve system performance, and even prevent failures before they happen.
Modern logging tools like Datadog or AWS CloudWatch are widely used in production environments to centralize logs and provide valuable insights.
Unfortunately, since this is a basic course, we won’t cover logging in depth, but I strongly encourage you to explore it on your own—it’s a skill that will save you a lot of headaches down the road.
5. Updating app.js in services/clients
Now, navigate to services/clients/app.js and replace everything with this:
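Something along these lines (the port is a placeholder; use whichever one your setup expects):

```js
// app.js - minimal Express wiring for the clients service
const express = require('express');
const clientsRouter = require('./routes/clients');

const app = express();
app.use(express.json());              // parse JSON request bodies
app.use('/clients', clientsRouter);   // mount the routes we define next

const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Clients service listening on port ${port}`));
```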
We are updating all names and paths from supplier to client.
6. Setting up routes
Before making any code updates, rename the file in the routes folder from suppliers.js to clients.js. Now you can proceed with updating the routing code. Let's start with all the declarations in routes/clients.js:
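A sketch of what those declarations might look like (again assuming SDK v3; the command classes are used by the route handlers below):

```js
// routes/clients.js - declarations
const express = require('express');
const crypto = require('crypto');
const {
  PutCommand,
  ScanCommand,
  GetCommand,
  UpdateCommand,
  DeleteCommand,
} = require('@aws-sdk/lib-dynamodb');
const config = require('../config');
const connection = require('../models/connection');

const router = express.Router();
```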
Here, we import the necessary modules for setting up our Express routes. We also require the DynamoDB configuration and connection.
Additionally, we use the crypto module to generate random UUIDs, which serve as unique string-based IDs for the items in the DynamoDB table.
Helper Function
Lastly, we define the helper function mapValidKeysToString. This function processes object keys and maps them to the update format required by DynamoDB. Since "name" is a reserved keyword in DynamoDB, the function prepends # to it, ensuring it can still be used as a field name.

You might have noticed that I use helper functions in every post. That's intentional! My goal is to help you get comfortable with modern JavaScript syntax. By using arrow functions, string interpolation, and powerful array methods like map(), filter(), and join(), you can make your code more concise and efficient.
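Here's a minimal sketch of mapValidKeysToString, reconstructed from that description (the original may differ):

```js
// Builds the SET clause of a DynamoDB UpdateExpression from an object's
// keys. "name" is a reserved word in DynamoDB, so it is referenced through
// the #name placeholder and resolved later via ExpressionAttributeNames.
const mapValidKeysToString = (obj) =>
  Object.keys(obj)
    .map((key) => (key === 'name' ? `#${key} = :${key}` : `${key} = :${key}`))
    .join(', ');
```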
6.1. Creating a New Client (POST /clients/)
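The handler isn't shown here; a sketch, assuming the table's partition key is simply called id:

```js
// POST /clients/ - create a new client with a generated UUID as its key
router.post('/', async (req, res) => {
  try {
    const item = { id: crypto.randomUUID(), ...req.body };
    await connection.send(
      new PutCommand({ TableName: config.tableName, Item: item })
    );
    res.status(201).json(item);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```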
What does this do?
DynamoDB design patterns
If you're not experienced with DynamoDB, you might not have noticed that I'm only using a Partition Key (PK) here, even though it's very common to use both Partition Keys and Sort Keys (SKs) in DynamoDB. I'm keeping it simple for now, but it's important to understand that PKs and SKs together allow for more efficient querying and data organization. In DynamoDB's Single Table Design (STD), you structure your Primary Key (PK) and Sort Key (SK) combinations to support multiple entity types and query patterns efficiently. The goal is to store different types of data in a single table while ensuring quick access based on your access patterns.
PK | SK
-------------------------|-------------------
USER|12345 | COMMAND#67890
USER|12345 | COMMAND#67891
COMMAND|67890 | ITEM#001
COMMAND|67890 | ITEM#002
CLIENT|98765 | ORDER#2024-03-01#12345
CLIENT|98765 | ORDER#2024-03-02#67890
More than with any other database, the first thing you need to consider with DynamoDB is your access patterns. Unlike relational databases, where you design your schema first and then optimize queries, DynamoDB requires you to design your table around how you will query the data from the start.
Once you define your access patterns, a common best practice is to use entity type prefixes (e.g., USER| or USER#) in your Partition Keys (PKs) and Sort Keys (SKs). This helps keep your data well-structured and easily queryable.
6.2. Retrieving All Clients (GET /clients/)
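A sketch of the Scan-based listing (see the note below on why Scan should be a last resort):

```js
// GET /clients/ - return every item in the table via a Scan
router.get('/', async (req, res) => {
  try {
    const result = await connection.send(
      new ScanCommand({ TableName: config.tableName })
    );
    res.json(result.Items);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```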
What's happening here?
Note: You shouldn't rely too much on DynamoDB Scan operations in real-world applications. Scan retrieves every item in a table, making it inefficient and costly, especially as your dataset grows.
DynamoDB is designed for fast, targeted reads of single records or small batches of records, leveraging Partition Keys (PKs) and Sort Keys (SKs) for efficient lookups. This is why I said defining your access patterns upfront is crucial.
6.3. Retrieving a Single Client by ID (GET /clients/:id)
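A sketch of the lookup, again keyed on the assumed id attribute:

```js
// GET /clients/:id - fetch a single item by its partition key
router.get('/:id', async (req, res) => {
  try {
    const result = await connection.send(
      new GetCommand({
        TableName: config.tableName,
        Key: { id: req.params.id },
      })
    );
    if (!result.Item) {
      return res.status(404).json({ error: 'Client not found' });
    }
    res.json(result.Item);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```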
What's going on?
6.4. Updating a Client by ID (PUT /clients/:id)
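This is where mapValidKeysToString earns its keep; a sketch:

```js
// PUT /clients/:id - update whichever fields arrive in the request body
router.put('/:id', async (req, res) => {
  try {
    const result = await connection.send(
      new UpdateCommand({
        TableName: config.tableName,
        Key: { id: req.params.id },
        UpdateExpression: `SET ${mapValidKeysToString(req.body)}`,
        // Map the #name placeholder only when the body actually contains it
        ...('name' in req.body && {
          ExpressionAttributeNames: { '#name': 'name' },
        }),
        // Turn { age: 30 } into { ':age': 30 } to match the expression
        ExpressionAttributeValues: Object.fromEntries(
          Object.entries(req.body).map(([key, value]) => [`:${key}`, value])
        ),
        ReturnValues: 'ALL_NEW',
      })
    );
    res.json(result.Attributes);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```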
What does this do?
6.5. Deleting a Client by ID (DELETE /clients/:id)
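And the final route; since it closes the file, we export the router afterwards:

```js
// DELETE /clients/:id - remove an item by its partition key
router.delete('/:id', async (req, res) => {
  try {
    await connection.send(
      new DeleteCommand({
        TableName: config.tableName,
        Key: { id: req.params.id },
      })
    );
    res.status(204).end();
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;
```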
What's happening here?
7. Set up the database fixtures
Now that the code is complete, we need to create the database fixtures.
7.1. Define the Database Schema
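The schema file isn't reproduced here. Since the initialization command we'll see later feeds every file in /tmp/dynamoschemas to aws dynamodb create-table --cli-input-json, a plausible clients.json (the key name matches the id field assumed in the routes) would be:

```json
{
  "TableName": "clients",
  "AttributeDefinitions": [
    { "AttributeName": "id", "AttributeType": "S" }
  ],
  "KeySchema": [
    { "AttributeName": "id", "KeyType": "HASH" }
  ],
  "BillingMode": "PAY_PER_REQUEST"
}
```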
7.2. Insert a Sample Document
Now, let's add a test document to the database.
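Because the initialization command derives the table name from the fixture's file name, this item would live in a file named clients.json in the data folder, written in DynamoDB's attribute-value JSON format (the attributes themselves are illustrative):

```json
{
  "id": { "S": "11111111-1111-1111-1111-111111111111" },
  "name": { "S": "Test Client" },
  "email": { "S": "test.client@example.com" }
}
```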
What does this do?
8. Setting Up Docker Compose
Before testing in Postman, we need to set up our Docker environment.
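The compose file itself isn't reproduced here; a minimal sketch of the DynamoDB Local service, including the healthcheck we're about to discuss (the curl probe assumes curl is available inside the image; swap in another command if it isn't):

```yaml
services:
  dynamodb-local:
    image: amazon/dynamodb-local
    ports:
      - "8000:8000"
    healthcheck:
      # DynamoDB Local answers any bare HTTP request once it's up (even
      # with a 400), so getting a response at all means it's healthy.
      test: ["CMD-SHELL", "curl -s http://localhost:8000 > /dev/null || exit 1"]
      interval: 5s
      timeout: 5s
      retries: 10
```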
Explanation of the Configuration
Docker healthcheck
Oh ho ho! What’s this healthcheck thing? First time we’re using it? Well, I thought it was about time to introduce an important Docker concept: healthchecks.
In Docker Compose, the healthcheck tag allows us to define a way to monitor whether a service inside a container is running correctly. Instead of just assuming a service is ready as soon as the container starts, Docker can perform periodic checks to confirm it's actually healthy and operational. When you define a healthcheck, you specify a command that runs inside the container at regular intervals. The result of this command determines the container's health status: starting (no check has completed yet), healthy (the last check passed), or unhealthy (the check has failed more times than the configured retries).
How are we going to leverage the healthcheck?
We are going to create a service that checks if DynamoDB is healthy and, once confirmed, executes the data initialization process.
What does this do?
The command line is too long to show in the image, so I'll share it here:
'-c "for f in /tmp/dynamoschemas/*.json; do aws dynamodb create-table --endpoint-url "https://dynamodb-local:8000" --cli-input-json file://"$${f#./}"; done && for f in /tmp/dynamodata/*.json; do aws dynamodb put-item --endpoint-url "https://dynamodb-local:8000" --table-name $(basename "$${f%.*}") --item file://"$${f#./}"; done"'
Clients Service definition
The final step before running the application is configuring the clients service in docker-compose.yml.
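By analogy with the suppliers service, the definition probably resembles this (the build path and port mapping are placeholders):

```yaml
  clients:
    build: ./services/clients
    ports:
      - "3002:3000"   # host port is a placeholder; use the one your setup assigns
    depends_on:
      dynamodb-local:
        condition: service_healthy
```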
If you have been following this series' previous posts, nothing here should surprise you. Anyway, I'll explain it just in case:
Once everything is configured, you can build and start the services.
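For example:

```sh
docker-compose up --build
# or, with the newer Docker CLI plugin:
docker compose up --build
```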
9. Testing in Postman
To test the Clients API, you can duplicate the Suppliers API requests but make the following changes:
- Change the port:
- Change all occurrences of suppliers to clients in the API endpoints.
- Update the id when performing GET (by ID), UPDATE, and DELETE requests:
With this, we’ve reached the end of this lesson. Next time, we’ll build a microservice for handling file storage using AWS S3. That will be the final post in this series.
I'd love to keep talking about multiple topics; here are some examples:
The truth is, I hesitate to continue writing a series unless I’m confident it’s meaningful to a significant number of people. If I find the time in the future, I might write more, but for now, I’ll focus on other projects. However, if any of these topics interest you, let me know!