Node.js Dev series (6 of 6)

In this final post of the series, we’re going to dive into building a file management service using AWS S3.

If you want to check out the full GitHub repository for this series, you can find it here: GitHub Repo

As I mentioned last time, there are always more topics to explore, such as inter-service communication, serverless technologies, or authentication and authorization. I could even cover TypeScript, GraphQL, or various frameworks. But writing these posts takes time, so I’d only continue if I have free time and it’s truly useful for others.



What is AWS S3?

Amazon Simple Storage Service (S3) is a scalable object storage service that allows you to store and retrieve data from anywhere on the web. It’s widely used for:

  • Static file storage (e.g., images, videos, backups).
  • Hosting static websites.
  • Data lakes and big data analytics.
  • Disaster recovery and backup solutions.

For the final post, I wanted to do something a little different. This will introduce you to a new type of cloud service while also showcasing a powerful tool for AWS local development: LocalStack.

Let's start working on the code.


Setting Up the Documents Service

First, go to the services directory and create a new folder called documents. As before, we can copy the contents of a previously developed service for convenience.

However, unlike before, we won’t need a models folder, because the connection configuration for AWS S3 is simple enough to be handled in a single line of code. (I did keep a models folder in the other services, even though those files aren’t really “models” in the strict sense.)

As I mentioned in a previous post, in a larger project, handling data access and conversion within route files would be a poor design choice. Ideally, we would follow a clean architecture with properly structured service layers. For the sake of simplicity, however, I’m keeping the familiar models and routes folder structure, even though it doesn’t strictly follow the right design pattern.

Updating package.json

Now, let’s update the package.json file:
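The exact file in the repository may differ, but a minimal package.json for this service could look something like this (the version numbers are illustrative):

```json
{
  "name": "documents",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "aws-sdk": "^2.1500.0",
    "express": "^4.18.2",
    "multer": "^1.4.5-lts.1"
  }
}
```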


Here’s what’s new:

  • aws-sdk: This allows us to interact with AWS S3 for file storage.
  • multer: A middleware for handling multipart/form-data, which is essential for file uploads.

Setting Up config.js for AWS S3 Communication

Update config.js with the following:


Here's what each property represents:

  • region: The AWS region where our S3 bucket is hosted (in this case, "us-east-1"). In our previous post, I explained that an AWS region is a geographically distinct data center location where AWS hosts its infrastructure.
  • bucketName: The name of the S3 bucket where we’ll store files ("documents-bucket"). A bucket is roughly analogous to a folder on your computer.
  • accessKeyId & secretAccessKey: Fake credentials for local development. When deploying to AWS, we use IAM roles instead.
  • endpoint: Points to LocalStack, which simulates AWS services locally. This allows us to develop and test without needing a real AWS account.



What is LocalStack?

LocalStack is a fully functional local AWS cloud stack that allows developers to spin up and interact with AWS services on their local machine, mimicking AWS cloud services like S3, DynamoDB, Lambda, and more. It's an open-source tool that simplifies the development and testing of cloud applications without the need for an active internet connection or the expense of using live AWS services.

Why Use LocalStack?

  1. Cost-Efficiency: Using AWS services directly can quickly become expensive, especially for small-scale projects, testing, or development. LocalStack lets you simulate a variety of AWS services locally, which reduces the costs associated with API calls, storage, and other AWS resources during development.
  2. Speed: Developing against real AWS resources can be slow due to network latency. With LocalStack, all interactions happen locally on your machine, providing much faster response times. This makes it ideal for rapid iteration during development.
  3. Offline Development: LocalStack allows you to continue developing your application without needing an internet connection. This is especially useful when you're working in environments with limited or unreliable connectivity.
  4. Easy Integration with Docker: LocalStack can be run easily within a Docker container, which allows you to integrate it into your local development environment seamlessly. This also ensures that your setup remains portable across different systems.


Updating app.js

Now, let’s configure our main Express app. If you copied it from a previous service (like clients), simply update all mentions of client to document.
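If you’d rather write it from scratch, a minimal app.js looks something like this (port 5004 matches the Postman examples later in the post; the router path is an assumption based on the folder structure):

```javascript
// app.js — entry point for the documents service (a sketch).
const express = require("express");
const documentsRouter = require("./routes/documents");

const app = express();
app.use(express.json());

// All document endpoints live under /documents.
app.use("/documents", documentsRouter);

const PORT = process.env.PORT || 5004;
app.listen(PORT, () => {
  console.log(`Documents service listening on port ${PORT}`);
});
```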


Implementing Document Management Endpoints

Now that we have set up our documents service, it's time to configure the routes. Navigate to the routes subfolder and rename the existing file to documents.js to reflect the new service name. Open the file, and let’s start with the code that initializes Express, AWS S3, and Multer for handling file uploads:
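A sketch of that initialization code (the require paths assume the folder structure described above):

```javascript
// routes/documents.js — initialization (a sketch; adjust paths to your repo).
const express = require("express");
const AWS = require("aws-sdk");
const multer = require("multer");
const config = require("../config");

const router = express.Router();

// s3ForcePathStyle makes the SDK build path-style URLs, which is
// required for compatibility with LocalStack.
const s3 = new AWS.S3({
  region: config.s3.region,
  accessKeyId: config.s3.accessKeyId,
  secretAccessKey: config.s3.secretAccessKey,
  endpoint: config.s3.endpoint,
  s3ForcePathStyle: true
});

// Keep uploaded files in memory so we can pass them straight to S3.
const upload = multer({ storage: multer.memoryStorage() });
```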


Explanation of New Components

  • AWS SDK – Enables interaction with AWS S3.
  • multer – Middleware for handling file uploads.
  • config.js – Stores the S3 connection settings.
  • s3ForcePathStyle: true – Ensures compatibility with LocalStack, which emulates AWS services locally.
  • multer.memoryStorage() – Stores files in memory before uploading them to S3.

This setup prepares us to define upload, retrieve, and delete operations for document management.

Now that we've set up AWS S3 and configured Multer for file uploads, we can define the API endpoints to handle uploading, retrieving, and deleting documents.

1. Upload a Document
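Continuing in routes/documents.js, the upload endpoint might look like this (the response shape and error handling are illustrative):

```javascript
// POST /documents — upload a single file (form field name: "file").
router.post("/", upload.single("file"), async (req, res) => {
  if (!req.file) {
    return res.status(400).json({ error: "No file provided" });
  }
  const params = {
    Bucket: config.s3.bucketName,
    Key: req.file.originalname, // the filename doubles as the S3 key
    Body: req.file.buffer       // file contents held in memory by multer
  };
  try {
    const result = await s3.upload(params).promise();
    res.status(201).json({ key: params.Key, location: result.Location });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```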

Explanation:

  • upload.single("file") – Multer middleware extracts the uploaded file from the request.
  • Bucket: config.s3.bucketName – Specifies the S3 bucket where the file will be stored. A bucket is like a folder in your file system.
  • Key: req.file.originalname – The filename is used as the unique identifier in S3. We can retrieve the document later using this.
  • Body: req.file.buffer – The file's contents are read from memory and sent to S3. (In Node.js, a Buffer is the standard representation of raw binary data in memory.)
  • If successful, S3 returns the file's URL and metadata.

2. Retrieve a List of All Documents
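The listing endpoint can be as simple as this (again, a sketch continuing the same file):

```javascript
// GET /documents — list the objects stored in the bucket.
router.get("/", async (req, res) => {
  try {
    const result = await s3
      .listObjectsV2({ Bucket: config.s3.bucketName })
      .promise();
    // Contents holds per-object metadata: Key, Size, LastModified, etc.
    res.json(result.Contents);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});
```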

Explanation:

  • This fetches all documents stored in the S3 bucket.
  • s3.listObjectsV2(params) – Retrieves up to 1,000 objects in a single call, which is plenty for our test.
  • The response includes metadata such as file names, sizes, and last modified dates.

3. Download a Specific Document
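A sketch of the download endpoint (the 404 handling is an assumption):

```javascript
// GET /documents/:key — download one document by its key (filename).
router.get("/:key", async (req, res) => {
  const params = { Bucket: config.s3.bucketName, Key: req.params.key };
  try {
    const result = await s3.getObject(params).promise();
    res.set("Content-Type", result.ContentType || "application/octet-stream");
    res.send(result.Body); // raw binary data
  } catch (err) {
    if (err.code === "NoSuchKey") {
      return res.status(404).json({ error: "Document not found" });
    }
    res.status(500).json({ error: err.message });
  }
});
```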

Explanation:

  • req.params.key – Retrieves the requested document using its unique key (filename).
  • s3.getObject(params) – Fetches the document from the bucket.
  • The response contains the document’s raw binary data, which is sent back as the response body.

4. Delete a Document
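And finally the delete endpoint, which also closes out the file by exporting the router:

```javascript
// DELETE /documents/:key — remove one document from the bucket.
router.delete("/:key", async (req, res) => {
  const params = { Bucket: config.s3.bucketName, Key: req.params.key };
  try {
    await s3.deleteObject(params).promise();
    res.status(204).send(); // No Content on success
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;
```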

Explanation:

  • Deletes a document from the S3 bucket using its filename (key).
  • s3.deleteObject(params) – Removes the specified object from S3.
  • Returns HTTP 204 (No Content) when successful.

With these endpoints in place, we now have a fully functional file management microservice that leverages AWS S3 for cloud storage. You might be wondering why there’s no update endpoint this time. The reason is simple: S3 objects are immutable, so “updating” a file really means uploading a new version under the same key (which replaces the old object), or deleting the old file and uploading a new one. This ensures consistency and avoids potential issues with partial updates or file corruption.


Setting up some Fixtures

With the code ready, it's time to set up some fixtures to help us with testing. We'll create sample data and an initialization script to ensure our local S3 storage is correctly configured before running our service.

Step 1: Create the Necessary Folders

  1. Navigate to the setup folder in your project.
  2. Inside setup, create a new subfolder named s3.
  3. Within s3, create two subfolders:

  • data/ → This will store test files.
  • initscript/ → This will contain the script to initialize S3.

Step 2: Add a Sample Document

  • Inside setup/s3/data/, create a text file named initial-doc.txt
  • This file will act as a sample document that gets uploaded when we initialize S3.

Step 3: Create the Initialization Script

  • Inside setup/s3/initscript/, create a file called init-s3.sh and add the following Bash script:
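A minimal version of the script (this assumes the data folder is mounted at /data inside the container, as described in the Docker Compose section):

```shell
#!/bin/bash
# init-s3.sh — runs automatically when LocalStack starts.
# awslocal is LocalStack's wrapper around the AWS CLI.

# Create the bucket our service expects.
awslocal s3api create-bucket --bucket documents-bucket

# Seed it with the sample document.
awslocal s3 cp /data/initial-doc.txt s3://documents-bucket/initial-doc.txt
```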


  • awslocal s3api create-bucket --bucket documents-bucket – Creates a local S3 bucket named documents-bucket (used for testing).
  • awslocal s3 cp /data/initial-doc.txt s3://documents-bucket/initial-doc.txt – Uploads the sample initial-doc.txt file into the S3 bucket.

By setting up these fixtures, we ensure that:

  • The S3 bucket is available for our service.
  • There's already a sample document in the bucket for testing.
  • Our API can interact with S3 storage right away.

Note: While I believe I've configured this correctly in the repository, if you're encountering issues with the bash script on a local Windows environment, it might be due to your file using Windows line endings. This can cause various problems in Docker, as the container runs in a Unix environment. To avoid this, ensure that your file uses Unix line endings. In Visual Studio Code, you can easily check this in the bottom-right corner of the window. It should display "LF" (Line Feed) rather than "CRLF" (Carriage Return + Line Feed). If it says "CRLF," you can change it by clicking on that label and selecting "LF."



Setting Up Docker Compose

Before testing our service, we need to configure Docker Compose to orchestrate our environment. This includes:

  1. LocalStack – A local AWS services simulator.
  2. The Node.js document service – Our API for file management.

LocalStack mocks AWS services so we can develop and test locally without needing a real AWS account. Below is the docker-compose.yml configuration:
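Here’s a sketch of the LocalStack service definition. Note that the mount point for init scripts depends on your LocalStack version: recent versions run scripts from /etc/localstack/init/ready.d/, while older ones used /docker-entrypoint-initaws.d/ — check your repository and LocalStack image tag:

```yaml
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"   # LocalStack's single edge port for all emulated services
    environment:
      - SERVICES=s3,sqs,sns,lambda,apigateway
    volumes:
      - ./setup/s3/initscript:/etc/localstack/init/ready.d  # runs init-s3.sh on startup
      - ./setup/s3/data:/data                               # sample files
```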


Explanation

  • SERVICES: s3,sqs,sns,lambda,apigateway

  1. This defines the AWS services available in LocalStack.
  2. We are mainly using S3, but other services (SQS, SNS, Lambda, API Gateway) are included for possible future use.

  • volumes

  1. Mounts our initialization script (init-s3.sh) so that it runs automatically when LocalStack starts.
  2. Mounts the /data folder, which contains our sample file.

Next, we define the document service, which runs our API:
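A sketch of that service definition, following the same pattern as the earlier services in this series (the build path and port are assumptions based on the project structure):

```yaml
  documents:
    build: ./services/documents
    ports:
      - "5004:5004"
    depends_on:
      - localstack
```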


Explanation

Well, there's nothing particularly new here. After building three Node.js services, the configuration in this one remains quite similar.

So, after adding this to docker-compose.yml, we can start everything with:

docker-compose up --build

Postman

Now, we are ready to test the document service in Postman:

  • Upload a document → POST http://localhost:5004/documents
  • List documents → GET http://localhost:5004/documents
  • Download a document → GET http://localhost:5004/documents/{key}
  • Delete a document → DELETE http://localhost:5004/documents/{key}

A note on uploading a document using Postman: Since this is a new action, I thought it would be helpful to add an extra note on configuring Postman.

First, you'll need to set a custom header. In the Headers section, add a key called Content-Type with the value multipart/form-data. (Note: recent versions of Postman set this header automatically, including the required boundary, when you select form-data in the Body tab, so you may be able to skip this step.)


Then, in the Body section, select form-data. Add a key called file, set the type to file, and choose a file from your computer for the value.



With this setup, we have a fully functional document storage microservice running locally using AWS S3 via LocalStack.


Final notes

And with that, we’ve reached the end of this series! Writing these posts has been a great experience, but I have to admit it took a lot of time and effort. Since many concepts kept repeating, I found myself speeding through the last few posts, but I still tried to cover everything as thoroughly as possible. Here and there I tried to add a bit of extra context to spice things up, and I think it turned out reasonably well.

At the end of the day, my goal was to provide practical, real-world insights into building microservices with Node.js, Docker, and AWS—and I hope you found it valuable.

Now, it’s time for me to get back to coding! I can’t stay in writing mode forever.

Again, if you want to check out the full GitHub repository for this series, you can find it here: GitHub Repo

If you have any questions, comments, or feedback, feel free to reach out! I can’t promise an instant response, but I’ll do my best to help.

Thanks for following along, and happy coding!
