In this article, we’ll explore how to deploy three microservices on an Amazon ECS cluster using Terraform for infrastructure creation and GitHub Actions for deployment automation. We’ll leverage ECS Service Discovery to enable seamless communication between microservices. Additionally, we’ll design an efficient repository structure and demonstrate how to create the infrastructure effectively.
Required AWS Infrastructure
To deploy an application on ECS, the following AWS resources are essential (a Terraform sketch of the networking layer follows the list):
- VPC (Virtual Private Cloud): Provides networking for your resources, with public and private subnets for segregation.
- ECS Cluster: Manages and schedules tasks for services. A single ECS cluster per environment is sufficient for cost-effectiveness and manageability.
- ECS Services: Ensure high availability and scaling of microservices running in ECS.
- Application Load Balancer (ALB): Distributes incoming traffic to ECS services.
- Target Groups: Define routing rules for traffic directed to ECS tasks by the ALB.
- ALB Security Group: Controls access to the ALB for secure communication.
- CloudMap: Enables service discovery using DNS names for microservice communication.
- IAM Roles: Grant ECS tasks, ALB, and other services appropriate permissions.
- ECS Security Groups: Ensure secure access control for ECS tasks.
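To make the networking piece concrete, here is a minimal Terraform sketch, assuming two availability zones, an illustrative 10.0.0.0/16 CIDR, and an environment variable; the names and address ranges are assumptions rather than values from the article:

```hcl
# Minimal networking sketch: one VPC with two public and two private subnets.
# The CIDR ranges, tags, and the "environment" variable are illustrative.
variable "environment" { type = string }

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = { Name = "${var.environment}-vpc" }
}

resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 10)
  availability_zone = data.aws_availability_zones.available.names[count.index]
}
```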
Shared vs. Service-Specific Resources
Now that we have identified the resources required for deploying any service, let’s distinguish between shared resources and service-specific resources.
Shared resources are common across all microservices within an environment, reducing redundancy and ensuring consistency:
- VPC: A single VPC is created per environment to provide networking for all resources, with public and private subnets for segregation.
- ECS Cluster: A single ECS cluster is created per environment (e.g., dev, prod) to host all microservices. While there is no direct cost to create multiple clusters, managing one per environment is a best practice for simplicity and scalability.
- ALB: A single ALB per environment reduces costs significantly. By using path-based routing, traffic can be routed to different microservices, avoiding multiple ALBs.
- CloudMap: A single CloudMap namespace is created per environment, giving every service a consistent DNS naming scheme and enabling seamless inter-service communication.
Shared infrastructure resources like these are designed once, stored in a shared repository, and deployed independently of microservices.
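A minimal sketch of these shared pieces, assuming the VPC and subnets above; the resource names, the .local namespace domain, and the listener's fixed 404 default are illustrative assumptions:

```hcl
# Shared ECS cluster, Cloud Map namespace, ALB security group, ALB, and a
# default HTTP listener. Service-specific listener rules are added later.
resource "aws_ecs_cluster" "main" {
  name = "${var.environment}-cluster"
}

resource "aws_service_discovery_private_dns_namespace" "main" {
  name        = "${var.environment}.local"
  description = "Service discovery namespace for ${var.environment}"
  vpc         = aws_vpc.main.id
}

resource "aws_security_group" "alb" {
  name   = "${var.environment}-alb-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_lb" "main" {
  name               = "${var.environment}-alb"
  load_balancer_type = "application"
  subnets            = aws_subnet.public[*].id
  security_groups    = [aws_security_group.alb.id]
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "fixed-response"

    fixed_response {
      content_type = "text/plain"
      message_body = "Not found"
      status_code  = "404"
    }
  }
}
```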
Designing the Repository for Shared Infrastructure
To efficiently manage and deploy shared infrastructure, we need to design our repository and set up GitHub Actions for deployment automation. Here’s a refined approach:
Repository Structure
First, we design our repository to avoid redundancy and ensure consistency across environments. We’ll create a shared infrastructure repository with the following components:
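One possible layout, shown here as an illustrative assumption, keeps the Terraform configuration for the shared resources, per-environment variable files, and the deployment workflow together:

```
shared-infrastructure/
├── terraform/
│   ├── vpc.tf          # VPC, public and private subnets
│   ├── ecs-cluster.tf  # single ECS cluster per environment
│   ├── alb.tf          # ALB, ALB security group, default listener
│   ├── cloudmap.tf     # Cloud Map namespace for service discovery
│   ├── variables.tf
│   └── outputs.tf
├── environments/
│   ├── dev.tfvars
│   └── prod.tfvars
└── .github/
    └── workflows/
        └── deploy.yml  # GitHub Actions workflow (matrix over environments)
```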
Now that we have designed our repository, let's discuss how to deploy it using GitHub Actions. Here are the basic requirements to run a GitHub workflow (a condensed workflow sketch follows the list):
- AWS Credentials: Store the IAM user's Access Key and Secret Key as GitHub secrets so the workflow can authenticate with AWS.
- Matrix Deployment: This allows us to run the same workflow across multiple environments.
- Terraform Commands: Use terraform init, terraform plan, and terraform apply.
- Concurrency Group: Ensure that if a build is running, other builds will be in a pending state to avoid conflicts.
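The sketch below ties these four requirements together; the trigger, file paths, region, and environment names are assumptions, and each environment is assumed to use its own Terraform state (for example, a per-environment backend key):

```yaml
name: Deploy shared infrastructure

on:
  push:
    branches: [main]

# Queue new runs behind the one in progress instead of running them in parallel.
concurrency:
  group: shared-infrastructure
  cancel-in-progress: false

jobs:
  terraform:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [dev, prod]
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: us-east-1
    steps:
      - uses: actions/checkout@v4

      - uses: hashicorp/setup-terraform@v3

      - name: Terraform init
        run: terraform init
        working-directory: terraform

      - name: Terraform plan
        run: terraform plan -var-file=../environments/${{ matrix.environment }}.tfvars
        working-directory: terraform

      - name: Terraform apply
        run: terraform apply -auto-approve -var-file=../environments/${{ matrix.environment }}.tfvars
        working-directory: terraform
```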
With these requirements in place, we have successfully deployed all the shared infrastructure in our AWS account using GitHub Actions.
Advantages of Using This Repository Design
This approach is similar to a monolithic setup where all your AWS resources are tied to a single Terraform state file and deployed simultaneously. Here are the key benefits and drawbacks:
Advantages
- Unified Deployment: All infrastructure is deployed at the same time, which simplifies the setup process.
- Quick Setup: With AWS credentials stored in GitHub Secrets, you can provision the infrastructure and deploy services on the ECS cluster in minutes.
- Efficiency: This approach is highly efficient for small applications with fewer resources, as everything is managed in one place.
Drawbacks
- Monolithic Nature: If there’s an issue with one resource, it can impact the entire deployment. This can be problematic as the number of resources grows.
- Scalability Concerns: For larger setups, such as those including RDS instances, the monolithic approach can become cumbersome. Issues with one resource, like RDS, can affect the entire infrastructure.
Alternative Approach
To mitigate the drawbacks, some prefer a modular approach with separate folders and Terraform state files for each resource. This way, resources are independent of each other, and issues with one (e.g., RDS) do not impact others. Here’s a comparison:
- Monolithic Approach: Best for small applications with fewer resources. Quick and easy setup.
- Modular Approach: Better for larger applications with more complex infrastructure. Provides isolation and reduces the risk of widespread impact from individual resource issues.
Service-Specific Resources
These resources are unique to each microservice. Since we have three microservices, we'll use a monorepo setup for deployment. Here's what each service needs to deploy on the ECS cluster (a resource-level sketch follows the list):
- ECS Task Definition: Defines the container specifications and resources for your microservice.
- ECS Service: Manages the deployment and scaling of your ECS tasks.
- Target Groups: Direct traffic to the appropriate ECS tasks based on routing rules.
- ECS IAM Role: Grants necessary permissions for ECS tasks to interact with other AWS services.
- ECS Security Groups: Control inbound and outbound traffic to ensure secure communication.
- ALB Listener Rules: Route incoming traffic to the correct target groups based on specified conditions.
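Here is a resource-level sketch for a single, illustrative orders service, assuming Fargate and the shared cluster, subnets, and Cloud Map namespace sketched earlier; the task IAM role, service security group, target group, and listener rule follow the same pattern and are passed in as variables:

```hcl
# Minimal sketch of one service's core resources. The service name, port,
# image location, and counts are illustrative assumptions.
variable "container_version" { type = string }
variable "execution_role_arn" { type = string }
variable "service_security_group_id" { type = string }
variable "target_group_arn" { type = string }

resource "aws_ecs_task_definition" "orders" {
  family                   = "orders"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = var.execution_role_arn

  container_definitions = jsonencode([{
    name         = "orders"
    image        = "public.ecr.aws/example/orders:${var.container_version}"
    portMappings = [{ containerPort = 8080, protocol = "tcp" }]
  }])
}

# Cloud Map entry so other services can reach this one at orders.<environment>.local
resource "aws_service_discovery_service" "orders" {
  name = "orders"

  dns_config {
    namespace_id = aws_service_discovery_private_dns_namespace.main.id

    dns_records {
      type = "A"
      ttl  = 10
    }
  }
}

resource "aws_ecs_service" "orders" {
  name            = "orders"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.orders.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = aws_subnet.private[*].id
    security_groups = [var.service_security_group_id]
  }

  load_balancer {
    target_group_arn = var.target_group_arn
    container_name   = "orders"
    container_port   = 8080
  }

  # Registers each task's IP in Cloud Map for service discovery.
  service_registries {
    registry_arn = aws_service_discovery_service.orders.arn
  }
}
```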
To reduce duplication and simplify maintenance, we’ll use reusable Terraform modules. This approach allows us to use the same code across all microservices, making future updates easier. Here’s how we’ll structure it:
- Modules Folder: This folder contains all the code needed to deploy a service on the ECS cluster, using data sources to fetch the shared AWS resources. The module code is driven by variables, so different values can be passed on each call; this keeps it flexible and reusable across services with varying configurations.
- Microservices Folders: We'll create a folder for each microservice. Inside each folder, a Terraform configuration calls the module, passing all the values needed for that service to be deployed on the ECS cluster; an illustrative module call is shown after this list.
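Each microservice folder then needs only a small module call. The module path and variable names below are assumptions about the module interface, not the article's exact code:

```hcl
# orders/terraform/main.tf (illustrative): everything service-specific is a
# variable; the module itself looks up shared resources with data sources.
module "ecs_service" {
  source = "../../modules/ecs-service"

  environment       = var.environment
  service_name      = "orders"
  container_port    = 8080
  container_version = var.container_version
  path_pattern      = "/orders/*"
  desired_count     = 2
}
```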
GitHub Actions for Deployment
Next, let's set up GitHub Actions for deployment. We'll start with Continuous Integration (CI) for our Java project built using Maven (a condensed workflow sketch follows the list):
- Build the Maven Project: Use GitHub Actions to build the Maven project and specify the Java version.
- Upload the Artifact: Upload the artifact with a retention period so it isn't kept indefinitely; it is only needed temporarily to build the Docker image that will be pushed to ECR.
- Download the Artifact: Download the artifact for use in the Docker build process.
- Docker Build: Build the Docker image.
- Push to ECR: Push the Docker image to public ECR using a version tag based on github.sha. This practice ensures that we always deploy using a specific version rather than the latest tag.
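A condensed CI sketch covering these steps; the Java version, artifact name, registry alias, and image name are illustrative assumptions:

```yaml
name: Build and push orders

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'

      - name: Build with Maven
        run: mvn -B package

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: app-jar
          path: target/*.jar
          retention-days: 1   # only needed long enough to build the image

  docker:
    needs: build
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: us-east-1   # public ECR authentication happens in us-east-1
    steps:
      - uses: actions/checkout@v4

      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: app-jar
          path: target

      - name: Login to public ECR
        uses: aws-actions/amazon-ecr-login@v2
        with:
          registry-type: public

      - name: Build and push image tagged with the commit SHA
        run: |
          IMAGE=public.ecr.aws/example/orders:${{ github.sha }}
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
```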
For deployment, we’ll use Terraform commands:
- Terraform Init: Initialize Terraform.
- Terraform Plan: Plan the deployment.
- Terraform Apply: Apply the deployment, passing the container version to Terraform as a TF_VAR environment variable set in GitHub Actions (see the deploy job sketch below).
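The deploy half can extend the same workflow. The TF_VAR_container_version environment variable is what Terraform reads as var.container_version; the working directory is an assumption based on the monorepo layout:

```yaml
  # Continues the jobs: map of the CI workflow sketched above.
  deploy:
    needs: docker
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      TF_VAR_container_version: ${{ github.sha }}   # read by Terraform as var.container_version
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3

      - name: Terraform init
        run: terraform init
        working-directory: orders/terraform

      - name: Terraform plan
        run: terraform plan
        working-directory: orders/terraform

      - name: Terraform apply
        run: terraform apply -auto-approve
        working-directory: orders/terraform
```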
By following these steps, we ensure that all shared infrastructure is successfully deployed in your AWS account, and each microservice is efficiently managed and deployed using reusable Terraform modules and GitHub Actions.
Advantages of This Approach
- Automation: Everything is automated, ensuring a smooth and efficient deployment process.
- Effective Management: This design allows you to manage your shared infrastructure properly and separately from your microservices.
- Quick Deployment: With this setup, everything can be deployed successfully in just 2–3 minutes. Any new microservice can be integrated into this process with minimal effort and time.
- Rapid Environment Setup: Any new environment deployment can be set up in 2–3 minutes, making it easy to scale and adapt to new requirements.