Automating AWS 3-tier Infrastructure Deployment with Terraform

Hello everyone, I hope you're all doing well. Thank you for taking the time to read my blog. I'm Premsagar PC, and I'm a Cloud and DevOps enthusiast. In this blog post, I will take you through the process of automating the deployment of a 3-tier infrastructure on AWS using Terraform.

What is 3-tier infrastructure?

A 3-tier architecture in AWS refers to the deployment of an application across three separate layers, each with a specific purpose:

  1. Presentation tier (also known as the front-end): This is the user-facing layer that provides the interface for users to interact with the application. It typically consists of a web server or load balancer that handles incoming requests from users.
  2. Application tier (also known as the middle tier): This layer is responsible for processing requests from the presentation tier and performing the necessary business logic to generate a response. It typically consists of one or more application servers that run the application code and communicate with a database or other backend services to retrieve and store data.
  3. Data tier (also known as the backend): This layer is responsible for storing and managing the application's data. It typically consists of one or more databases that store data in a structured format, such as a relational database or NoSQL database.

By separating the application into these three distinct layers, it becomes easier to manage and scale the application. Each layer can be scaled independently, allowing you to handle more traffic and users as needed. Additionally, the separation of concerns makes it easier to develop and maintain the application over time. AWS provides a range of services that can be used to implement each of these layers, such as Amazon EC2 for hosting application servers, Amazon RDS for hosting databases, and Amazon S3 for storing static assets.

What is Terraform and why Terraform?

Terraform is an open-source infrastructure as code (IaC) tool that allows you to manage your cloud infrastructure and resources as code. With Terraform, you can write configuration files in a declarative language that describe the desired state of your infrastructure, and then use Terraform to automatically create and manage those resources on various cloud providers, including AWS, Microsoft Azure, Google Cloud, and others.

Here are a few reasons why you might choose to use Terraform:

  1. Infrastructure as code: Terraform allows you to define your infrastructure as code, which means you can version-control your infrastructure just like you version-control your application code. This makes it easier to track changes, collaborate with other team members, and maintain a history of your infrastructure over time.
  2. Multi-cloud support: Terraform is provider-agnostic, which means you can use it to manage resources on multiple cloud providers, including AWS, Azure, Google Cloud, and others. This allows you to build a hybrid or multi-cloud architecture without being locked into a specific cloud provider.
  3. Automation: Terraform automates the process of provisioning and managing infrastructure resources, reducing the risk of human error and ensuring that your infrastructure is consistent and reproducible. It also allows you to define complex infrastructure topologies with ease, using reusable modules and templates.
  4. Scalability: Terraform allows you to manage infrastructure resources at scale, making it easier to manage large deployments and complex infrastructure topologies. It also supports modularization, allowing you to break down your infrastructure into smaller, reusable components that can be managed independently.

Overall, Terraform provides a powerful tool for managing infrastructure as code, making it easier to build, maintain, and scale complex cloud infrastructure.


Having gained an understanding of 3-tier architecture, as well as the importance and benefits of using Terraform, we can now delve into the Terraform configuration process and create an AWS 3-tier architecture, step-by-step.

Step 1: The first step in automating the deployment of a 3-tier infrastructure using Terraform is to install Terraform on your local machine. Once installed, configure AWS as your cloud provider by setting up your AWS credentials (for example, with the AWS CLI) so Terraform can create resources in your account.

(Image: Terraform with AWS)

Step 2: Create a new directory for your Terraform project and open it in VS Code.

Step 3: Write a configuration file for Terraform, which specifies the AWS provider version constraint and the backend storage for the terraform.tfstate file. This file is commonly referred to as the Terraform configuration file, which uses the "*.tf" extension and is written in HashiCorp Configuration Language (HCL). The configuration file contains all the necessary information for Terraform to create and manage resources in AWS.

terraform.tfstate is a file that Terraform uses to keep track of the current state of the infrastructure it manages. It contains information about the resources that Terraform has created, as well as their current configuration.


The state file is stored in a backend, which is a remote storage location for the state file. One of the most commonly used backends is Amazon S3, which is a highly available and scalable object storage service provided by AWS. Storing the state file in S3 enables collaboration among multiple users and helps maintain the state file's integrity.

Using locking is essential to prevent conflicts when multiple users are working on the same infrastructure simultaneously. It ensures that only one user at a time can make changes to the infrastructure. Terraform provides a feature called "state locking," which is used to lock the state file while changes are being made. It works by creating a lock on the state file in the backend, preventing any other users from modifying the state file until the lock is released. This feature ensures that changes made by one user don't interfere with changes made by another user, minimizing the risk of conflicts and data corruption.
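Since the original configuration appears only as a screenshot, here is a minimal sketch of what such a provider and backend block might look like. The bucket name, state file key, and DynamoDB lock table name are placeholders, and the provider version constraint is an assumption:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }

  # Remote backend: the state file lives in S3, and a DynamoDB table
  # provides state locking so concurrent runs cannot corrupt the state.
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder bucket name
    key            = "3tier/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"      # placeholder lock table
  }
}

provider "aws" {
  region = var.aws_region
}
```

Note that the S3 bucket and DynamoDB table must already exist before `terraform init` is run, because the backend is initialized before any resources are created.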

Step 4: Create a Virtual Private Cloud (VPC) with 6 subnets (2 public subnets for the web tier, 2 private subnets for the app tier, and 2 private subnets for the database tier)



This Terraform code creates a Virtual Private Cloud (VPC) on Amazon Web Services (AWS) with two public subnets and four private subnets distributed across two availability zones. Here is a breakdown of what each resource does:

  • The aws_vpc resource creates a new VPC with the specified CIDR block and instance tenancy. It also assigns a Name tag to the VPC for easy identification.
  • The aws_subnet resource creates two public subnets with public IP addressing enabled, one in each of the two availability zones specified in the locals block. These subnets are used for resources that need to be publicly accessible, such as a web server or load balancer.
  • The aws_subnet resource also creates four private subnets, two in each of the two availability zones. These subnets are used for resources that should not be publicly accessible, such as a database server or backend service.

Note that the count parameter is used to create multiple subnets with a single resource block, based on the length of the local.subnet_cidrs list. If the list is empty, the resources are not created. Additionally, the element function is used to select a specific subnet CIDR block and availability zone based on the current count index.
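Since the original code is shown only as a screenshot, the following is a minimal sketch of what the VPC and subnet resources might look like. The resource names (`main`, `public`, `private`) and the `local.azs` list of availability zones are assumptions:

```hcl
resource "aws_vpc" "main" {
  cidr_block       = local.vpc_cidr
  instance_tenancy = "default"

  tags = { Name = "terraform_vpc" }
}

# Two public subnets (web tier), one per availability zone,
# with public IP addressing enabled at launch.
resource "aws_subnet" "public" {
  count                   = length(local.subnet_cidrs) > 0 ? 2 : 0
  vpc_id                  = aws_vpc.main.id
  cidr_block              = element(local.subnet_cidrs, count.index)
  availability_zone       = element(local.azs, count.index)
  map_public_ip_on_launch = true

  tags = { Name = "public-subnet-${count.index + 1}" }
}

# Four private subnets (app and database tiers), two per availability zone.
resource "aws_subnet" "private" {
  count             = length(local.subnet_cidrs) > 0 ? 4 : 0
  vpc_id            = aws_vpc.main.id
  cidr_block        = element(local.subnet_cidrs, count.index + 2)
  availability_zone = element(local.azs, count.index % 2)

  tags = { Name = "private-subnet-${count.index + 1}" }
}
```

The `count.index % 2` expression alternates the private subnets between the two availability zones, so each zone ends up with one app-tier and one database-tier subnet.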

Step 5: Create an Internet Gateway and NAT Gateways


The Terraform code for this step creates an Internet Gateway and NAT Gateways for the private subnets. The first resource block creates an Internet Gateway named "terraform_igw" and attaches it to the VPC created in the previous steps.

The second resource block creates two NAT Gateways, one for each Availability Zone. The count parameter is used to create two NAT Gateways only if there are private subnets available. The allocation ID for each NAT Gateway is taken from the Elastic IP created earlier. The subnet_id parameter is set to the corresponding public subnet's ID, which allows the NAT Gateway to access the internet.

The third resource block creates two Elastic IPs, one for each NAT Gateway. The count parameter is used here as well to create two Elastic IPs only if there are private subnets available. These Elastic IPs are used to assign a static IP address to the NAT Gateway.

The depends_on parameter is used in all three resource blocks to create dependencies between resources. This ensures that the Internet Gateway is created before the NAT Gateways and Elastic IPs are created.
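As the original code is available only as a screenshot, here is a hedged sketch of what these three resource blocks might look like (resource names other than `terraform_igw` are assumptions, and the `vpc = true` argument reflects AWS provider v4; on v5+ it becomes `domain = "vpc"`):

```hcl
resource "aws_internet_gateway" "terraform_igw" {
  vpc_id = aws_vpc.main.id

  tags = { Name = "terraform_igw" }
}

# One Elastic IP per NAT Gateway, created only if private subnets exist.
resource "aws_eip" "nat" {
  count      = length(aws_subnet.private) > 0 ? 2 : 0
  vpc        = true
  depends_on = [aws_internet_gateway.terraform_igw]
}

# One NAT Gateway per availability zone, placed in a public subnet so
# instances in the private subnets can reach the internet outbound.
resource "aws_nat_gateway" "nat" {
  count         = length(aws_subnet.private) > 0 ? 2 : 0
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
  depends_on    = [aws_internet_gateway.terraform_igw]
}
```

Placing one NAT Gateway in each availability zone keeps outbound traffic within a zone and avoids a single point of failure if one zone goes down.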


Step 6: Create Route Tables and Associate Them with Subnets


The code for this step creates route tables for both the public and private subnets. The two public subnets are associated with the public route tables, and the four private subnets are associated with the private route tables. The count parameter is used to conditionally create resources based on the length of the corresponding subnet list: if the length is greater than 0, the resource is created with a count of 2 for the public subnet route tables and 4 for the private subnet route tables. The association between subnets and route tables is driven by count.index, which identifies the current resource instance being created.
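Because the routing code is only shown as a screenshot, the following is one common arrangement it might take; the exact table counts and names in the original may differ. Here, public subnets route to the Internet Gateway and each private subnet routes through the NAT Gateway in its own availability zone:

```hcl
# Public route tables: default route points at the Internet Gateway.
resource "aws_route_table" "public" {
  count  = length(aws_subnet.public) > 0 ? 2 : 0
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.terraform_igw.id
  }
}

resource "aws_route_table_association" "public" {
  count          = length(aws_subnet.public)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public[count.index].id
}

# Private route tables: default route goes through the NAT Gateway
# in the matching availability zone.
resource "aws_route_table" "private" {
  count  = length(aws_subnet.private) > 0 ? 4 : 0
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat[count.index % 2].id
  }
}

resource "aws_route_table_association" "private" {
  count          = length(aws_subnet.private)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}
```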

Step 7: Create Load Balancer and Auto Scaling


This Terraform code creates an autoscaling group of EC2 instances and configures a load balancer to distribute traffic to those instances across two Availability Zones. The code also defines scaling policies and CloudWatch alarms to automatically increase or decrease the number of instances based on CPU utilization.

The code creates an AWS launch configuration for the EC2 instances, specifying an Ubuntu AMI, t2.micro instance type, and a security group that allows inbound traffic on port 80. It also specifies a user data script that will be executed when each instance starts.

Next, the code creates an autoscaling group with a minimum of 2 instances, a maximum of 4 instances, and a desired capacity of 2 instances. The launch configuration created earlier is used to create new instances as needed. The autoscaling group is associated with a target group for the load balancer and a health check type is specified.

Two scaling policies are defined: one to increase the number of instances when CPU utilization is above 80%, and another to decrease the number of instances when CPU utilization is below 20%. CloudWatch alarms are created to trigger these scaling policies when the CPU utilization metric meets certain thresholds.

Finally, the code creates an application load balancer and a target group for the load balancer. The autoscaling group is attached to the target group and a listener is defined to forward incoming HTTP traffic to the target group.
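With the original code visible only as a screenshot, here is a condensed sketch of how these pieces might fit together. Resource names, the `data.aws_ami.ubuntu` data source, and the `userdata.sh` script are assumptions; the scale-down policy and its low-CPU alarm are omitted for brevity since they mirror the scale-up pair with the opposite sign and a 20% threshold:

```hcl
resource "aws_launch_configuration" "web" {
  image_id        = data.aws_ami.ubuntu.id        # assumed Ubuntu AMI data source
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.public_sg.id]
  user_data       = file("userdata.sh")           # assumed bootstrap script
}

resource "aws_autoscaling_group" "web" {
  min_size             = 2
  max_size             = 4
  desired_capacity     = 2
  launch_configuration = aws_launch_configuration.web.name
  vpc_zone_identifier  = aws_subnet.public[*].id
  target_group_arns    = [aws_lb_target_group.web.arn]
  health_check_type    = "ELB"
}

# Scale up by one instance; a matching "scale_down" policy would use -1.
resource "aws_autoscaling_policy" "scale_up" {
  name                   = "scale-up"
  autoscaling_group_name = aws_autoscaling_group.web.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1
  cooldown               = 300
}

# Alarm fires when average CPU exceeds 80% and triggers the scale-up policy.
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "cpu-above-80"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 120
  statistic           = "Average"
  threshold           = 80
  alarm_actions       = [aws_autoscaling_policy.scale_up.arn]
  dimensions          = { AutoScalingGroupName = aws_autoscaling_group.web.name }
}

resource "aws_lb" "web" {
  load_balancer_type = "application"
  security_groups    = [aws_security_group.lb_sg.id]
  subnets            = aws_subnet.public[*].id
}

resource "aws_lb_target_group" "web" {
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

# Listener forwards incoming HTTP traffic to the target group.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.web.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}
```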

Step 8: Create Security groups for Load Balancer, EC2 instances and RDS DB Instance


This Terraform code creates three AWS security groups: one for the load balancer, one for the public EC2 instances, and one for the RDS instances.

The lb_sg security group allows inbound traffic on ports 80 and 443 from any IP address, and allows outbound traffic to any IP address on any port.

The public_sg security group allows inbound SSH traffic on port 22 from any IP address, and inbound HTTP and HTTPS traffic on ports 80 and 443 respectively, but only from the security group ID of the lb_sg security group. It also allows outbound traffic to any IP address on any port.

The rds_private_sg security group allows inbound traffic on port 3306 from the security group ID of the public_sg security group, and allows outbound traffic to any IP address on any port.

Overall, these security groups are intended to allow appropriate communication between the ELB, public instances, and RDS instances while limiting access from unauthorized sources.
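Since the security group code exists only as a screenshot, here is a minimal sketch consistent with the description above. The resource names match those mentioned in the text; the port 443 ingress rule on `public_sg` is omitted for brevity since it mirrors the port 80 rule:

```hcl
resource "aws_security_group" "lb_sg" {
  vpc_id = aws_vpc.main.id

  ingress { # HTTP from anywhere
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress { # HTTPS from anywhere
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress { # all outbound traffic allowed
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "public_sg" {
  vpc_id = aws_vpc.main.id

  ingress { # SSH from anywhere
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress { # HTTP only from the load balancer's security group
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.lb_sg.id]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "rds_private_sg" {
  vpc_id = aws_vpc.main.id

  ingress { # MySQL only from the public instances' security group
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.public_sg.id]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Referencing a security group ID (rather than a CIDR range) in an ingress rule is what chains the tiers together: the database only accepts connections from instances that carry the `public_sg` group.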

Step 9: Create Variables


variables.tf is a file in Terraform that is used to define input variables for the Terraform module or configuration. Variables are used to pass values or parameters to the Terraform configuration at runtime, which makes the configuration reusable, flexible, and easier to maintain.

These variables and locals are used to define the CIDR blocks for the VPC and subnets that will be created.

The aws_region variable sets the default AWS region to us-east-1.

The cidr_blocks variable is a list of CIDR blocks that will be used for the VPC and subnets. The first CIDR block in the list (var.cidr_blocks[0]) will be used for the VPC, while the remaining CIDR blocks will be used for the subnets. Note that the end index of Terraform's slice function is exclusive, so slice(var.cidr_blocks, 1, length(var.cidr_blocks)) yields everything after the first element.

The locals block creates two local variables based on the cidr_blocks variable. vpc_cidr is set to the first CIDR block in the list, which will be used for the VPC. subnet_cidrs is set to a subset of the cidr_blocks list that excludes the first element, which will be used for the subnets. subnet_count is set to the number of subnets that will be created, which is the length of the subnet_cidrs list.
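As the variables file is only shown as a screenshot, here is a sketch of what it might contain. The specific CIDR values are placeholders chosen to match the 1-VPC-plus-6-subnets layout described above:

```hcl
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "cidr_blocks" {
  type = list(string)
  default = [
    "10.0.0.0/16", # VPC
    "10.0.1.0/24", # public subnet 1 (web tier)
    "10.0.2.0/24", # public subnet 2 (web tier)
    "10.0.3.0/24", # private subnet 1 (app tier)
    "10.0.4.0/24", # private subnet 2 (app tier)
    "10.0.5.0/24", # private subnet 3 (db tier)
    "10.0.6.0/24", # private subnet 4 (db tier)
  ]
}

locals {
  vpc_cidr     = var.cidr_blocks[0]
  # slice's end index is exclusive, so this keeps every element after the first.
  subnet_cidrs = slice(var.cidr_blocks, 1, length(var.cidr_blocks))
  subnet_count = length(local.subnet_cidrs)
}
```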

Step 10: Create Output


These are output blocks in a Terraform configuration file. They define values that Terraform should output after creating or modifying infrastructure, which can be helpful for understanding what was created or modified and for passing information to other scripts or tools.

In this particular example, there are three output blocks defined:

  • ami: This outputs the ID of an Ubuntu Amazon Machine Image (AMI) that was obtained from the data.aws_ami data source.
  • rds_primary_instance_endpoint: This outputs the endpoint URL of a primary RDS instance that was created using the aws_db_instance resource.
  • load_balancer_dns: This outputs the DNS name of a public-facing load balancer that was created using the aws_lb resource.

These outputs could be used by other scripts or tools in your infrastructure to connect to the RDS instance or load balancer, or to modify or destroy the infrastructure.
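Since output.tf is only shown as a screenshot, here is a sketch of what those three output blocks might look like; the referenced resource names (`data.aws_ami.ubuntu`, `aws_db_instance.primary`, `aws_lb.web`) are assumptions:

```hcl
output "ami" {
  description = "ID of the Ubuntu AMI used for the EC2 instances"
  value       = data.aws_ami.ubuntu.id
}

output "rds_primary_instance_endpoint" {
  description = "Endpoint of the primary RDS instance"
  value       = aws_db_instance.primary.endpoint
}

output "load_balancer_dns" {
  description = "DNS name of the public-facing Application Load Balancer"
  value       = aws_lb.web.dns_name
}
```

After terraform apply completes, these values are printed to the terminal and can also be retrieved at any time with terraform output.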

I have created a GitHub repository for the source code of this project, which you can access and download from the following link: https://github.com/Premsagarpc1/terraform-aws-3tier-project.git


After writing the above configuration file, run the terraform init command, which downloads the necessary provider plugins. Then, the terraform validate command should be run to validate the syntax of the configuration file. Next, run terraform plan to get a preview of the infrastructure changes that will be made, and finally, run terraform apply to provision the infrastructure in the AWS cloud.

When the terraform init command is executed, a .terraform folder will be created within the working directory, which holds information about the provider plugins.

After running terraform apply, a terraform.tfstate file will be created in the backend S3 bucket, which holds information about the current infrastructure.


Conclusion: This project showcases the successful implementation of an infrastructure for a 3-tier web application on AWS using Terraform. The infrastructure consists of a VPC, subnets across multiple availability zones, an RDS instance with a standby, an auto-scaling group with EC2 instances running a web server, and an Application Load Balancer for traffic distribution.

Moreover, we have implemented security groups to control inbound and outbound traffic and utilized meta-arguments to enhance the code's modularity and simplicity.

The outputs defined in output.tf enable us to easily retrieve vital information about the infrastructure, such as the AMI utilized for the EC2 instance, the RDS instance's endpoint, and the load balancer's DNS name.

This project effectively demonstrates the efficacy and adaptability of Terraform in the creation and management of infrastructure as code on AWS.


In my upcoming blogs, I will be exploring more projects related to popular DevOps tools such as Docker, Jenkins, Kubernetes, Ansible, Terraform, Grafana, GitHub, and various cloud platforms. As a Cloud and DevOps enthusiast, I'm always excited to share my knowledge and experience with others while seeking to learn continuously. So stay tuned for more informative content, and thank you for your time.

Regards,

Premsagar PC
