Containerizing and Deploying Applications with Docker and Terraform on AWS

This hands-on project focused on modernizing, containerizing, and deploying the HumanGov application to AWS, using Docker, AWS ECS, and Terraform to build a scalable, cloud-native solution. The goal was to ensure high availability and resilience by applying containerization best practices and infrastructure as code. The diagram below illustrates the architecture, showing how the Docker containers, AWS services, and the infrastructure provisioned with Terraform interact.

[Architecture diagram: Docker containers on AWS ECS, supporting AWS services, and the Terraform-provisioned infrastructure]
Project Breakdown

Step 1: Preparing the Infrastructure

The first step was to prepare the AWS infrastructure for deploying the containers. I began by creating the necessary IAM roles that allow ECS tasks to interact with other AWS services like S3 and DynamoDB.

  • IAM Role: HumanGovECSExecutionRole
  • Trusted Entity: AWS Service (Elastic Container Service Task)
  • Permissions:
      ◦ AmazonS3FullAccess – To access the S3 bucket for storage operations.
      ◦ AmazonDynamoDBFullAccess – To interact with the DynamoDB table for application state management.
      ◦ AmazonECSTaskExecutionRolePolicy – To allow ECS tasks to pull container images from Amazon ECR.

resource "aws_iam_role" "humangov_ecs_execution_role" {
  name = "HumanGovECSExecutionRole"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Principal = {
        Service = "ecs-tasks.amazonaws.com"
      }
      Effect = "Allow"
      Sid = ""
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_execution_policy" {
  role       = aws_iam_role.humangov_ecs_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"

        
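
The attachment above covers only the S3 policy. The DynamoDB and ECS task execution policies listed earlier can be attached the same way; a minimal sketch (the resource labels here are my own):

# Allow the ECS tasks to read/write the DynamoDB state table
resource "aws_iam_role_policy_attachment" "dynamodb_policy" {
  role       = aws_iam_role.humangov_ecs_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}

# AWS managed execution policy: lets ECS pull images from ECR and write logs
resource "aws_iam_role_policy_attachment" "ecs_task_execution_policy" {
  role       = aws_iam_role.humangov_ecs_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}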

Step 2: Containerizing the HumanGov Application

Next, I focused on containerizing the HumanGov application, starting with the Flask app. I created a Dockerfile that installs the dependencies and configures Gunicorn as the application server.

Create Dockerfile for Flask App:

# Use official Python 3.8 image as base
FROM python:3.8-slim-buster

# Set working directory
WORKDIR /app

# Copy the requirements file and install dependencies
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the container
COPY . /app

# Run Gunicorn as the WSGI server, binding to 0.0.0.0:8000 to listen on all network interfaces
CMD ["gunicorn", "--workers", "1", "--bind", "0.0.0.0:8000", "humangov:app"]
        

  1. Build and Push Docker Image to AWS ECR: Created a public ECR repository for the Flask app, then built and pushed the Docker image to it using the Docker CLI.


# Authenticate Docker to AWS ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com

# Build Docker image
docker build -t humangov-app .

# Tag image
docker tag humangov-app:latest <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/humangov-app:latest

# Push the image to ECR
docker push <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/humangov-app:latest
        
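One optional sanity check (not shown in the original steps) is to run the image locally before pushing it:

# Run the container locally, mapping port 8000
docker run --rm -p 8000:8000 humangov-app

# In another terminal, confirm Gunicorn responds
curl http://localhost:8000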

Next, I containerized NGINX, which serves as a reverse proxy in front of the Flask app, by creating a Dockerfile and an NGINX configuration file.

NGINX Configuration File (nginx.conf)

server {
    listen 80;
    server_name humangov www.humangov;

    # Proxy requests to the Flask app running on port 8000
    location / {
        include proxy_params;
        proxy_pass http://localhost:8000;
    }
}        

NGINX Proxy Parameters (proxy_params)

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        

Create Dockerfile for NGINX:

# Use official NGINX image as base
FROM nginx:alpine

# Remove the default NGINX configuration
RUN rm /etc/nginx/conf.d/default.conf

# Copy custom configuration files into the container
COPY nginx.conf /etc/nginx/conf.d
COPY proxy_params /etc/nginx/proxy_params

# Expose port 80 for HTTP traffic
EXPOSE 80

# Run NGINX in the foreground
CMD ["nginx", "-g", "daemon off;"]
        

  1. Build and Push NGINX Docker Image to ECR: Created a public ECR repository named humangov-nginx, then built and pushed the NGINX image to it using the same workflow as the Flask image (sketched below).
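
The original post doesn't list these commands; assuming the same account, region, and Docker/ECR authentication as the Flask image and a repository named humangov-nginx, they would look roughly like this:

# Build the NGINX image from the directory containing its Dockerfile
docker build -t humangov-nginx .

# Tag the image for the ECR repository
docker tag humangov-nginx:latest <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/humangov-nginx:latest

# Push the image to ECR
docker push <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/humangov-nginx:latest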


Step 3: Provisioning AWS Resources Using Terraform

To automate infrastructure provisioning, I used Terraform to create several AWS resources, including an S3 bucket, DynamoDB table, and ECS cluster with task definitions.

Provisioning AWS S3 and DynamoDB with Terraform

resource "aws_s3_bucket" "humangov_bucket" {
  bucket = "humangov-application-bucket"
  acl    = "private"
}

resource "aws_dynamodb_table" "humangov_table" {
  name           = "humangov-state"
  hash_key       = "StateId"
  read_capacity  = 5
  write_capacity = 5
  attribute {
    name = "StateId"
    type = "S"
  }
}        
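
One note: recent versions of the AWS Terraform provider (v4 and later) deprecate the inline acl argument on aws_s3_bucket in favor of a dedicated resource. If that line produces a warning or error, the equivalent is roughly:

resource "aws_s3_bucket_acl" "humangov_bucket_acl" {
  # References the bucket defined above
  bucket = aws_s3_bucket.humangov_bucket.id
  acl    = "private"
}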

ECS Cluster and Task Definitions

resource "aws_ecs_cluster" "humangov_cluster" {
  name = "humangov-cluster"
}

resource "aws_ecs_task_definition" "humangov_task" {
  family                   = "humangov-task"
  container_definitions    = jsonencode([{
    name      = "flask-app"
    image     = "<aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/humangov-app:latest"
    essential = true
    memory    = 512
    cpu       = 256
  }])
}        
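
The definition above lists only the Flask container. Since the NGINX image from Step 2 fronts the Flask app, one approach (my sketch, not shown in the original) is to add it as a second entry in the same container_definitions list; containers in an awsvpc-mode task share a network namespace, which is what lets nginx.conf proxy to localhost:8000. The task-level cpu/memory may need to be raised to fit both containers.

  # Replaces the container_definitions argument shown above
  container_definitions = jsonencode([
    {
      name      = "flask-app"
      image     = "<aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/humangov-app:latest"
      essential = true
      portMappings = [{ containerPort = 8000, protocol = "tcp" }]
    },
    {
      # NGINX reverse proxy in front of the Flask container
      name      = "nginx"
      image     = "<aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/humangov-nginx:latest"
      essential = true
      portMappings = [{ containerPort = 80, protocol = "tcp" }]
    }
  ])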

Step 4: Deploying the HumanGov Application to AWS ECS

After provisioning the infrastructure, I deployed the HumanGov application to AWS ECS using Fargate for serverless container management. Fargate eliminates the need to manage EC2 instances directly, allowing focus on scaling and application architecture.

resource "aws_ecs_service" "humangov_service" {
  name            = "humangov-service"
  cluster         = aws_ecs_cluster.humangov_cluster.id
  task_definition = aws_ecs_task_definition.humangov_task.id
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets = ["subnet-12345"]
    security_groups = ["sg-12345"]
    assign_public_ip = true
  }
}
        
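To confirm the service came up, one way (not shown in the original post; values in angle brackets are placeholders) is to query ECS for the running task and then look up the public IP on its network interface:

# List the running tasks for the service
aws ecs list-tasks --cluster humangov-cluster --service-name humangov-service

# Describe a task to find the ID of its elastic network interface (ENI)
aws ecs describe-tasks --cluster humangov-cluster --tasks <task_arn>

# Look up the public IP attached to that ENI
aws ec2 describe-network-interfaces --network-interface-ids <eni_id>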

Step 5: Cleaning Up Resources

Once the deployment was successful, I made sure to clean up the resources to avoid incurring additional costs. This was done by stopping the ECS service and deleting the resources using Terraform commands.

# Destroy resources with Terraform
terraform destroy        
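
Since the ECR repositories were created outside Terraform, terraform destroy won't remove them. If they should go as well, something like the following works for repositories in the private registry used in the push commands above (for a public ECR repository, the aws ecr-public equivalent applies):

# Delete the ECR repositories and any images they contain
aws ecr delete-repository --repository-name humangov-app --force
aws ecr delete-repository --repository-name humangov-nginx --force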

Lessons Learned

  • Containerization Best Practices: Docker was instrumental in containerizing both the Flask and NGINX applications. This ensured that they could run efficiently and consistently in the cloud.
  • Automating Infrastructure: Terraform provided an effective way to automate the provisioning of infrastructure, making the deployment process repeatable and manageable.
  • ECS and Fargate: AWS Fargate simplified the ECS deployment, enabling me to focus on application development and scaling without worrying about managing EC2 instances.
  • Security and IAM Roles: Proper IAM roles and policies were essential for enabling secure communication between AWS services like S3, DynamoDB, and ECS.

This project was a comprehensive learning experience in cloud architecture, containerization, and automation using AWS services like ECS, S3, DynamoDB, and Terraform.
