Launching an application in EC2 instance using EFS as centralized storage and Terraform as an automation tool


Task Details

1. Create a security group that allows traffic on port 80.

2. Launch an EC2 instance.

3. For this EC2 instance, use an existing or newly created key pair and the security group created in step 1.

4. Create a file system using the EFS service, attach it to your VPC, and mount it on /var/www/html.

5. The developer has uploaded the code to a GitHub repository; the repository also contains some images.

6. Copy the GitHub repository code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repository into the S3 bucket, and make them publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

In this task we use EFS for storing the code rather than EBS because:

  • With EBS, if we run several servers for our client and want to update the code, we have to update every EBS volume separately. With EFS we make the change once and it is reflected everywhere, since the storage is centralized.
  • An EBS volume can only be attached to an instance in the same availability zone, whereas EFS is a regional service, so it can be mounted on any instance in the same region.

To access our AWS account

// Account Access
provider "aws" {
  region  = "ap-south-1"
  profile = "muskan"
}
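
The configuration assumes the AWS and TLS providers are already installed. If you want to pin provider versions explicitly, a minimal sketch looks like this (the version constraints below are assumptions, not taken from the original setup):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "~> 3.0"
    }
  }
}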

Creating a Security Group

resource "aws_security_group" "terrasg1" {

name = "terrasg1"

description = "allow ssh and http traffic"

ingress {

from_port = 22

to_port = 22

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]

}


ingress {

from_port = 80

to_port = 80

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]

}

ingress {

from_port = 2049

to_port = 2049

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]

}




egress {

from_port = 0

to_port = 0

protocol = "-1"

cidr_blocks = ["0.0.0.0/0"]

}

}

resource "aws_security_group" "terrasg2" {




name = "terrasg2"

description = "allow nfs"




ingress {

from_port = 2049

to_port = 2049

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]

}


}
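
Opening NFS (port 2049) to 0.0.0.0/0 works for this demo, but a tighter alternative is to allow it only from the web server's security group. A minimal sketch of that variant, reusing the terrasg1 group defined above:

resource "aws_security_group" "terrasg2_strict" {
  name        = "terrasg2-strict"
  description = "allow nfs only from the web tier"

  ingress {
    from_port = 2049
    to_port   = 2049
    protocol  = "tcp"
    // Only instances in terrasg1 (the web tier) can reach the EFS mount target
    security_groups = [aws_security_group.terrasg1.id]
  }
}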

Creating Key

resource "tls_private_key" "terrakey-new" {

algorithm = "RSA"

rsa_bits = 4096

}




resource "aws_key_pair" "generated_key" {

key_name = "terrakey-new"




//Using unique id of above



public_key = "${tls_private_key.terrakey-new.public_key_openssh}"

}

output "out2" {

value=tls_private_key.terrakey-new.private_key_pem

}
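
Since the provisioners below read the private key from a local .pem file, one option is to have Terraform write the generated key to disk itself instead of downloading it manually. A sketch, assuming the hashicorp/local provider is available (the file path is an assumption):

resource "local_file" "private_key_pem" {
  content = tls_private_key.terrakey-new.private_key_pem

  // Written next to the .tf files; adjust the path as needed
  filename        = "terrakey-new.pem"
  file_permission = "0400"
}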

Launching ec2 instance

resource "aws_instance" "terraos-new1" {

ami = "ami-5b673c34"

instance_type = "t2.micro"

security_groups = ["${aws_security_group.terrasg1.name}"]

key_name = "terrakey-new"

provisioner "remote-exec" {

connection {

type = "ssh"

user = "ec2-user"

private_key = "${file("C:/Users/muska/Downloads/terrakey-new.pem")}"

host = aws_instance.terraos-new1.public_ip

}

inline =[

"sudo yum install httpd -y",

"sudo systemctl start httpd",

"sudo systemctl enable httpd"

]

}

tags = {

Name = "terraos1"

}

}

resource "null_resource" "nullr"{

depends_on = [ aws_instance.terraos-new1,aws_efs_mount_target.alpha,]


provisioner "remote-exec" {

connection {

type = "ssh"

user = "ec2-user"

private_key = "${file("C:/Users/muska/Downloads/terrakey-new.pem")}"

host = aws_instance.terraos-new1.public_ip

}

inline=[


"sudo yum install -y nfs-utils",

"sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport ${aws_efs_mount_target.alpha.dns_name}:/ /var/www/html",

"sudo cd /var/www/html",

"sudo yum install -y git",

"sudo git clone https://github.com/muskan399/first.git /var/www/html",

"sudo setenforce 0",

"sudo systemctl start httpd"

]

}

}
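
As an alternative to pointing the connection at a .pem file under Downloads, the connection block could use the key generated earlier in the same configuration. A sketch of just the connection block under that assumption:

connection {
  type = "ssh"
  user = "ec2-user"
  // Use the in-memory key generated by tls_private_key instead of a local file
  private_key = tls_private_key.terrakey-new.private_key_pem
  host        = aws_instance.terraos-new1.public_ip
}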

output "out1" {

value=aws_instance.terraos-new1.public_ip

}

Creating File system using EFS

resource "aws_efs_file_system" "foo" {

creation_token = "my-product"

}

resource "aws_efs_mount_target" "alpha" {

file_system_id = "${aws_efs_file_system.foo.id}"

subnet_id = "${aws_instance.terraos-new1.subnet_id}"

security_groups = ["${aws_security_group.terrasg2.id}"]

}

Mounting the file system on top of /var/www/html

The actual mounting is done by the remote-exec provisioner in the null_resource shown above, which installs nfs-utils and mounts the mount target's DNS name on /var/www/html.
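
To make the mount easier to verify (or to mount the file system manually while debugging), the mount target's DNS name can also be exposed as an output. A minimal sketch:

output "efs_dns_name" {
  value = aws_efs_mount_target.alpha.dns_name
}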

Creating S3 bucket

resource "aws_s3_bucket" "b1" {

bucket = "terra-new"




tags = {

Name = "terra-new"

}

region ="ap-south-1"




}




locals {

s3_origin_id = "myS3Origin"

}

resource "aws_s3_bucket_object" "object" {

bucket = "terra-new"

key = "blog.jpg"

source = "C:/Users/muska/Desktop/blogpng.png"

acl = "public-read"

content_type ="image/jpg"




}




resource "aws_s3_bucket_public_access_block" "example" {

bucket = "${aws_s3_bucket.b1.id}"




block_public_acls = false

block_public_policy = false

}
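
Because the task only needs the images to be publicly readable, an alternative to per-object ACLs is a bucket policy granting read access to everything in the bucket. A sketch of that variant (the policy below is an assumption, not part of the original setup):

resource "aws_s3_bucket_policy" "public_read" {
  bucket = aws_s3_bucket.b1.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "PublicReadGetObject"
        Effect    = "Allow"
        Principal = "*"
        Action    = "s3:GetObject"
        Resource  = "${aws_s3_bucket.b1.arn}/*"
      }
    ]
  })
}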

Creating CloudFront distribution

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "${aws_s3_bucket.b1.bucket_regional_domain_name}"
    origin_id   = "${local.s3_origin_id}"
  }

  enabled             = true
  is_ipv6_enabled     = true

  default_cache_behavior {
   allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
}


viewer_protocol_policy = "redirect-to-https"

    
}

restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

   viewer_certificate {
    cloudfront_default_certificate = true
  }
}
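
Since the CloudFront domain name has to go into the webpage, it helps to print it after apply. A minimal sketch:

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}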

Before running the Terraform code, we first need to initialize the directory in which the configuration files are stored using this command:

terraform init

To apply the configuration, we run:

terraform apply

Now we need to update the code of our webpage

<!DOCTYPE html>
<html lang="en" dir="ltr">
  <head>
    <meta charset="utf-8">
    <link rel="stylesheet" href="blog1c.css">
    <title></title>
  </head>
  <body>
    <h1>Welcome to Free Blog!</h1>
    <p id="a1">Thanks so much for visiting this site!</p>
    <img src="https://terra94251.s3.ap-south-1.amazonaws.com/blog-image" alt="Missing">
  </body>
</html>

Here, in place of the S3 URL in the image source, use the CloudFront URL followed by the image name (CloudFront_domain/image_name).
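
If you prefer not to edit the HTML by hand, one option is to let Terraform render the page with the CloudFront domain substituted in. A sketch, assuming a hypothetical index.html.tpl template kept alongside the configuration (the template name and the cdn_domain variable are assumptions):

// Hypothetical: index.html.tpl contains <img src="https://${cdn_domain}/blog.jpg">
resource "local_file" "index_html" {
  content = templatefile("index.html.tpl", {
    cdn_domain = aws_cloudfront_distribution.s3_distribution.domain_name
  })
  filename = "index.html"
}

The rendered file would still have to be copied into /var/www/html, for example with another remote-exec or file provisioner.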

Now, using the instance's public IP followed by /name_of_webpage.html, we can access our webpage. The code is served from EFS and the static content from CloudFront, which internally fetches it from S3.

We have output the instance's private key and public IP, so we can view our webpage and also SSH into the instance.


For more information, refer to the GitHub repository below.

Thank you Everyone!!

