Automating web application on AWS with EFS using Terraform


In this article, we will look at how to host a website on AWS using Terraform. I used AWS services such as EC2, VPC, Internet Gateway, Route Table, EFS, S3, and CloudFront. Terraform is an open-source "infrastructure as code" tool: it lets you build and provision data center infrastructure using a declarative configuration language called HashiCorp Configuration Language (HCL), and it helps with multi-cloud setups by providing one workflow for all clouds.

Why EFS (Elastic File System)?

In my previous article, I used EBS to provide persistent storage to a web server running on an EC2 instance. EBS, however, can be attached to only one EC2 instance at a time and lives in a single Availability Zone, whereas EFS can be accessed by multiple instances, and even from other regions, at the same time. Some additional benefits of EFS:

  • Performance that scales with any workload: EFS offers the throughput that changing workloads need.
  • Elasticity: the file system automatically scales storage up or down as you add or remove files, without disturbing applications.
  • Accessible file storage: on-premises servers and EC2 instances can access shared file systems concurrently. EC2 instances can also access EFS file systems in other AWS regions through VPC peering.
  • Fully managed service: with EFS your firm never has to deploy, patch, or maintain the file system.
  • Cost savings: you pay only for the storage you actually use; there is no advance provisioning, up-front fee, or commitment.
  • Security and compliance: you can securely access the file system with your current security solution and control access to EFS file systems.
  • EFS also offers data encryption and different access levels per user, which increases the security of your data.

Problem Statement:

  1. Create a VPC, subnet, internet gateway, and route table.
  2. Create a key pair and a security group that allows port 80 for HTTP, port 22 for SSH, and port 2049 for NFS.
  3. Launch an EC2 instance.
  4. Create a volume using the EFS service, attach it in your VPC, then mount that volume onto /var/www/html.
  5. The developer has uploaded the code into a GitHub repo, which also contains some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permissions to publicly readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

First, download the AWS CLI and Terraform from the hyperlinks given below:

AWS CLI

Terraform CLI

Follow these steps to first set up the environment on your machine:

  1. In the AWS management console, go to IAM under Services and add a new user. After adding the user, we get access credentials; save that CSV file of credentials.
  2. Now open a command prompt and configure the AWS CLI as shown below (put the access and secret key from the CSV file, set whatever region you are going to use for the infrastructure, and set the output format to json).
  3. Create a workspace for your Terraform code, cd into it, and use Notepad or Visual Studio Code to write the code.
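For example, the CLI configuration looks roughly like this (a sketch; the profile name shinde is simply the one used in the provider block later, and ap-south-1 is the region used throughout this article):

> aws configure --profile shinde
AWS Access Key ID [None]: <access key from the CSV file>
AWS Secret Access Key [None]: <secret key from the CSV file>
Default region name [None]: ap-south-1
Default output format [None]: json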

After this, follow these steps to create the infrastructure in AWS and launch a website through Terraform:

Step 1: Create an IAM user with administrator access and configure the AWS CLI

Use the credentials of the IAM user you created earlier and the region you are going to use for this infrastructure.


Step 2: Creating a Key-pair

Here we create a key-pair. Basically a key pair is used to control login access to EC2 instances.

provider "aws" {
	region  = "ap-south-1"
	profile = "shinde"
}
resource "tls_private_key" "t1key" {
  algorithm   = "RSA"
}
resource "aws_key_pair" "gen_key" {
  key_name   = "t1key" 
  public_key = "${tls_private_key.t1key.public_key_openssh}"
}
resource "local_file" "key-file" {
  	content  = "${tls_private_key.t1key.private_key_pem}"
  	filename = "t1key.pem"
}

This will create the key pair in AWS.


Step 3: Create the VPC, subnet, internet gateway, and route table

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range.

resource "aws_vpc" "newvpc" {
  cidr_block = "10.1.0.0/16"
  instance_tenancy = "default"
  
  tags = {
    Name = "newvpc"
  }
}
resource "aws_subnet" "newsub" {
  vpc_id     = aws_vpc.newvpc.id
  availability_zone = "ap-south-1a"
  cidr_block = "10.1.0.0/24"
  map_public_ip_on_launch = true


  tags = {
    Name = "newsub"
  }
}
resource "aws_internet_gateway" "newgw" {
  vpc_id = aws_vpc.newvpc.id


  tags = {
    Name = "newgw"
  }
}
resource "aws_route_table" "newroute" {
  vpc_id = aws_vpc.newvpc.id


  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.newgw.id
  }


  tags = {
    Name = "newroute"
  }
}


resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.newsub.id
  route_table_id = aws_route_table.newroute.id
}

A subnet is a range of IP addresses inside your VPC where you place your resources; it is effectively your own lab/datacenter segment inside AWS.


An internet gateway establishes the connection between your VPC and the internet.


Route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed.


Each subnet in your VPC must be associated with a route table. A subnet can be explicitly associated with a custom route table, or implicitly or explicitly associated with the main route table.

Step 4: Creating a security group to allow SSH on port 22, HTTP on port 80, and NFS on port 2049

resource "aws_security_group" "tasksecgrp" {
  name        = "tasksecgrp"
  description = "sec group for ssh and httpd"
  vpc_id      = aws_vpc.newvpc.id


  ingress {
    description = "SSH Port"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    description = "HTTP Port"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "tasksecgrp"
  }
}
  
  • Here we allow the SSH and HTTP ports so that we can log in remotely and serve pages from the web server; I have named the group 'tasksecgrp'.
  • Network File System (NFS) is a distributed file system protocol that lets users access files over a network similar to the way they access local storage.

The rules of a security group control the inbound traffic that's allowed to reach the instances that are associated with the security group. The rules also control the outbound traffic that's allowed to leave them.

Step 5: Creating the EC2 instance

  • The next step is to launch the EC2 instance using the security group and key created above. We have to give the AMI ID and the instance type to launch the instance.
  • I am using a free-tier Amazon Linux AMI for this task with the 't2.micro' instance type. The AMI ID and instance type are mentioned in the code below.

resource "aws_instance"  "task2instance"  {
 	ami = "ami-0447a12f28fddb066" 
  	instance_type = "t2.micro"
  	key_name = "t1key"
  	security_groups = [ "${aws_security_group.tasksecgrp.id}" ]
        availability_zone = "ap-south-1a"
	subnet_id = "${aws_subnet.newsub.id}"
   tags = {
    	  Name = "terraos" 
  	}
        
	
}

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications.
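The AMI ID above is hard-coded for the ap-south-1 region. If you prefer not to pin it, one optional approach (a sketch, not part of the original code) is to look up the latest Amazon Linux 2 AMI with a data source:

data "aws_ami" "amazon_linux2" {
  most_recent = true
  owners      = ["amazon"]

  # filter on the Amazon Linux 2 AMI naming pattern
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

The instance block would then use ami = data.aws_ami.amazon_linux2.id instead of the fixed ID.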


Step 6: Create an EFS Volume

resource "aws_efs_file_system" "newefs" {
  creation_token = "newefs"
  performance_mode = "generalPurpose"


  tags = {
    Name = "new-efs"
  }
}


resource "aws_efs_mount_target" "alpha" {
  file_system_id = aws_efs_file_system.newefs.id
  subnet_id      = aws_subnet.newsub.id
  security_groups = ["${aws_security_group.tasksecgrp.id}"]
}

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Then we launch the EFS volume and mount it onto /var/www/html.

Also, since we are setting up a web server, we need the httpd server running in the instance, plus some HTML files to host/deploy on the server. I created one HTML file and uploaded it to GitHub, and we will pull those files and copy them into the /var/www/html folder.


Step 7: Adding a connection and a provisioner to the configuration file

resource "null_resource" "mount_efs_volume" {

	connection {
  	  type     = "ssh"
   	  user     = "ec2-user"
   	  private_key = "${tls_private_key.t1key.private_key_pem}" 
   	  host = "${aws_instance.task2instance.public_ip}"
  }


 	provisioner "remote-exec" {
    inline = [
      "sudo yum update -y",
      "sudo yum install httpd php git amazon-efs-utils nfs-utils -y",
      "sudo setenforce 0",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
      "sudo echo '${aws_efs_file_system.newefs.id}:/ /var/www/html efs defaults,_netdev 0 0' >> /etc/fstab",
      "sudo mount ${aws_efs_file_system.newefs.id}:/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/snehal3099/web3.git /var/www/html/"
	]
  }
}

We used a "connection" block to log in remotely and a provisioner block to execute commands inside the instance. The above code installs httpd, git, and the EFS utilities, mounts the EFS file system onto the html folder, and clones the GitHub repository there.

Step 8: Creating the S3 bucket and uploading the images to it

resource "aws_s3_bucket" "taskbucket3095" {
  bucket = "taskbucket3095"
  acl    = "private"
 tags = {
    Namterre        = "taskbucket3095"
  }
 
}


resource "aws_s3_bucket_public_access_block" "access_to_bucket" {
  bucket = aws_s3_bucket.taskbucket3095.id


  block_public_acls   = true
  block_public_policy = true
  restrict_public_buckets = true
}
resource "aws_s3_bucket_object" "taskobject" {
  for_each		 = fileset("C:/Users/Snehal/Desktop/terraform_code/task2", "**/*.jpg")
  bucket                 = "${aws_s3_bucket.taskbucket3095.bucket}"
  key                    = "cloud.jpg"
  source                 = "C:/Users/Snehal/Desktop/terraform_code/task2/cloud.jpg"
  content_type 		 = "image/jpg"


}
locals {
	s3_origin_id = "tasks3origin"
}

This will create the S3 bucket named taskbucket3095. Make sure the bucket name you choose is all lowercase and globally unique. Note that direct public access to the bucket is blocked here; the images will be served through CloudFront instead.

Also check that the images were uploaded to your S3 bucket.


See my GitHub repo link at the end of this article, where I have uploaded the image and the HTML code for the web server.

Step 9: CloudFront distribution setup

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
	comment = "taskbucket3095"
}
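Note: because public access to the bucket is blocked, CloudFront's origin access identity (OAI) needs its own permission to read the objects. The original code does not include a bucket policy; a minimal sketch (an addition, not from the article) granting the OAI read access could look like this:

resource "aws_s3_bucket_policy" "allow_oai_read" {
  bucket = aws_s3_bucket.taskbucket3095.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowCloudFrontOAIRead"
      Effect    = "Allow"
      Principal = { AWS = aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.taskbucket3095.arn}/*"
    }]
  })
}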


resource "aws_cloudfront_distribution" "s3distribution" {


  origin {
    domain_name = "${aws_s3_bucket.taskbucket3095.bucket_regional_domain_name}"
    origin_id   = "${local.s3_origin_id}"
    s3_origin_config {
      origin_access_identity = "${aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path}"
    }
}
  enabled             = true
  is_ipv6_enabled     = true
  comment             = "accessforTask1"
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"


    forwarded_values {
      query_string = false
	cookies {
        	forward = "none"
 	    }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }


// Cache behavior with precedence 0
    ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = "${local.s3_origin_id}"


    forwarded_values {
      query_string = false
      headers      = ["Origin"]


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }


  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
   allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }


  price_class = "PriceClass_200"


  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN"]
    }
  }


  tags = {
    Name = "taskdistribution"
    Environment = "production"
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }
  retain_on_delete = true

  depends_on = [
    aws_s3_bucket.taskbucket3095
  ]
}

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

A CloudFront distribution tells CloudFront where you want the content to be delivered from and provides the details about how to track and manage content delivery.
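The problem statement also asks us to update the code in /var/www/html with the CloudFront URL. The article does not show this step explicitly; one possible sketch (assuming, purely for illustration, that the cloned page is index.html and contains a placeholder string IMAGE_URL_PLACEHOLDER) is another null_resource that rewrites the page once the distribution exists:

resource "null_resource" "update_image_url" {
  # run only after the page has been cloned and the distribution has a domain name
  depends_on = [null_resource.mount_efs_volume, aws_cloudfront_distribution.s3distribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = "${tls_private_key.t1key.private_key_pem}"
    host        = "${aws_instance.task2instance.public_ip}"
  }

  provisioner "remote-exec" {
    inline = [
      # IMAGE_URL_PLACEHOLDER and index.html are hypothetical names used for illustration;
      # cloud.jpg is the image uploaded to the bucket earlier
      "sudo sed -i 's|IMAGE_URL_PLACEHOLDER|https://${aws_cloudfront_distribution.s3distribution.domain_name}/cloud.jpg|g' /var/www/html/index.html"
    ]
  }
}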


Now run the commands below to create the web server and launch the AWS infrastructure in one go:

> terraform init
> terraform apply --auto-approve


Now let's check the website.


To see your website, browse to the instance's IPv4 public IP followed by the filename (the filename is the HTML file you uploaded to GitHub).
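If you would rather not look up these values in the AWS console, you can optionally add output blocks (not part of the original code) so Terraform prints them after apply:

output "instance_public_ip" {
  # public IP of the web server instance
  value = aws_instance.task2instance.public_ip
}

output "cloudfront_domain_name" {
  # domain name of the CloudFront distribution serving the images
  value = aws_cloudfront_distribution.s3distribution.domain_name
}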


Run the below command to destroy the entire infrastructure.

> terraform destroy --auto-approve

Finally, it's done!

Thanks for reading!!


Here is link of my complete code in Github:

My_git_hub_Repo





