Automated Infrastructure setup of AWS EC2 and EFS by using Terraform

Objective:- Launch an EC2 instance on AWS and mount EFS as storage to the instance using Terraform.

Task description

1. Create a security group that allows port 80.

2. Launch an EC2 instance.

3. For this EC2 instance, use the existing/provided key and the security group created in step 1.

4. Launch a volume using the EFS service and attach it to your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and make them publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html/.

AWS’s Storage Services:

Amazon Web Services offers three types of storage, each with its own use cases.

i) Block Storage: Elastic Block Store (EBS)

ii) Object Storage: Simple Storage Service (S3)

iii) File Storage: Elastic File System (EFS)

In this task we use EFS instead of EBS. An EBS volume lives in a single Availability Zone, so it can only be attached to an instance running in that same zone. EFS, on the other hand, is a shared network file system: once a mount target exists in the VPC, instances across Availability Zones can mount it at the same time, which makes it a better fit for shared web content.

Let's begin

Step-1 Initialize Terraform

Terraform is plugin-based, so we have to install the plugin for our cloud provider. The provider block below tells Terraform which provider, region and profile to use; running the command "terraform init" then downloads the required plugin.

provider "aws" { 
   region = "ap-south-1"   
   profile = "raja"
 
}
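
Optionally, you can also pin the provider version that "terraform init" downloads, so every run installs a known release. A minimal sketch, assuming Terraform 0.13 or later (the version constraint shown is only an example):

# Optional: pin the AWS provider version installed by "terraform init"
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"   # example constraint, adjust to your setup
    }
  }
}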

Step-2 Create a key pair

We create a key pair to log in to the instance.

# Create key-pair
resource "tls_private_key" "test" {
  algorithm = "RSA"
}

# Store the private key locally so we can SSH into the instance
resource "local_file" "web" {
  content         = tls_private_key.test.private_key_pem
  filename        = "mykey.pem"
  file_permission = "0400"
}

# Create new aws key_pair from the generated public key
resource "aws_key_pair" "test_key" {
  key_name   = "mykey"
  public_key = tls_private_key.test.public_key_openssh
}


Once this code runs, Terraform generates the key pair and saves the private key in the working directory with the assigned name and a ".pem" extension.

Step-3 Create a security group

We create a security group that will be attached to our instance in AWS.

resource "aws_security_group" "wordgroup" {
   name = "my security gp"
  ingress {
    protocol  = "tcp"
    from_port = 22
    to_port   = 22
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    protocol  = "tcp"
    from_port = 80
    to_port   = 80
    cidr_blocks = ["0.0.0.0/0"]
  }
 ingress {
    protocol  = "tcp"
    from_port = 2049
    to_port   = 2049
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
 }
tags = {
    Name = "allow_tcp and nfs"
  }
}


A security group is essential for an instance: it acts as a virtual firewall that controls all incoming and outgoing traffic. Here we allow the HTTP (80), SSH (22) and NFS (2049) ports so that we can use those services.
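
The three ingress blocks above differ only in the port number, so they could optionally be generated with a dynamic block. A minimal sketch of that refactor, which would replace the resource above rather than be added alongside it (behaviour is identical, this is purely a style choice and assumes Terraform 0.12+):

# Optional refactor: generate the ingress rules (22, 80, 2049) from a list
resource "aws_security_group" "wordgroup" {
  name = "my security gp"

  dynamic "ingress" {
    for_each = [22, 80, 2049]   # ssh, http, nfs
    content {
      protocol    = "tcp"
      from_port   = ingress.value
      to_port     = ingress.value
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_tcp and nfs"
  }
}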

Step-4 Launch EC2 instance

# Launch instance
resource "aws_instance" "myin" {
  ami             = "ami-0732b62d310b80e97"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.test_key.key_name
  security_groups = [aws_security_group.wordgroup.name]

  tags = {
    Name = "EFSOs"
  }

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.test.private_key_pem
    host        = self.public_ip   # "self" avoids a cycle: the resource cannot reference itself by name
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd"
    ]
  }
}


  • After launching the instance, we log in over SSH (via the remote-exec provisioner) and install the required software: httpd, php and git.
  • Because security_groups references aws_security_group.wordgroup, Terraform implicitly creates the security group before the instance, so the ports are already open when we log in (an optional output for the instance IP is sketched below).
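
The output sketched below is my own addition, not part of the original write-up; it simply prints the instance's public IP after "terraform apply", which is handy for a quick manual check:

# Optional: print the instance's public IP after "terraform apply"
output "instance_public_ip" {
  value = aws_instance.myin.public_ip
}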

Step-5 Create EFS

# Create efs
resource "aws_efs_file_system" "myefs" {
  creation_token = "my-efs"

  tags = {
    Name = "Task2-efs"
  }

  depends_on = [aws_security_group.wordgroup, aws_instance.myin]
}

# Create a mount target for the EFS file system in the instance's subnet
resource "aws_efs_mount_target" "alpha" {
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = aws_instance.myin.subnet_id
  security_groups = [aws_security_group.wordgroup.id]

  depends_on = [aws_efs_file_system.myefs]
}

# Mount EFS volume in EC2 instance and copy code from GitHub
resource "null_resource" "local2" {
  depends_on = [
    aws_efs_mount_target.alpha,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.test.private_key_pem
    host        = aws_instance.myin.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y amazon-efs-utils",
      "sudo mount -t efs ${aws_efs_file_system.myefs.id}:/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      # fetch the raw file; the /blob/ page URL would return the GitHub HTML page, not the code
      "sudo curl -L https://raw.githubusercontent.com/Ds123-wq/cloudtask-2/master/index.html -o /var/www/html/index.html",
    ]
  }
}

In this step we create the EFS file system. Before going further, note one small thing: by default Amazon Linux instances do not ship with the amazon-efs-utils helper needed to mount EFS, so we log in to the instance and install it first. The EFS mount target must be created in the same VPC (here, the same subnet) as the instance. After that we mount the EFS file system onto the instance's /var/www/html/ directory and copy the website code into it.
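
Note that a mount done with a plain mount command does not survive a reboot. If persistence matters, one option is to also append an entry to /etc/fstab; a minimal sketch, where the extra null_resource and its name are my own addition:

# Optional: make the EFS mount survive reboots by adding it to /etc/fstab
resource "null_resource" "persist_mount" {
  depends_on = [null_resource.local2]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.test.private_key_pem
    host        = aws_instance.myin.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo '${aws_efs_file_system.myefs.id}:/ /var/www/html efs defaults,_netdev 0 0' | sudo tee -a /etc/fstab",
    ]
  }
}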

Step-6 Create S3 bucket with public access

# Create s3 bucket
resource "aws_s3_bucket" "bucket" {
  depends_on = [null_resource.local2]
  bucket     = "mybuckets312"
  acl        = "public-read"
}


# Ensure public access to the S3 bucket is not blocked
# (all four block_* settings of this resource default to false, so nothing is blocked here)
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.bucket.id
}


# Upload the image from the github repo to the S3 bucket
resource "aws_s3_bucket_object" "object" {
  bucket       = aws_s3_bucket.bucket.id
  key          = "apache-web-server.png"
  source       = "C:/Users/Dell/Desktop/terra/task-2/apache-web-server.png"
  content_type = "image/png"   # the object is a .png, so use the matching MIME type
  acl          = "public-read"

  depends_on = [
    null_resource.local2,
    aws_s3_bucket.bucket,
  ]
}

  • I used the aws_s3_bucket resource to create the S3 bucket.
  • I uploaded the image from the local working directory to S3 using the aws_s3_bucket_object resource.
  • Make sure the bucket name is globally unique; with acl = "public-read" the uploaded image becomes publicly readable (an optional output for its URL is sketched below).
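
The optional output below is my own addition; it prints the object's public S3 URL so the upload can be checked quickly in a browser:

# Optional: print the public S3 URL of the uploaded image
output "image_s3_url" {
  value = "https://${aws_s3_bucket.bucket.bucket_regional_domain_name}/${aws_s3_bucket_object.object.key}"
}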

Step-7 Create Cloudfront

Now I create a CloudFront distribution to decrease latency by caching the content at edge locations close to the users.

# Create CloudFront  for S3 bucket
resource "aws_cloudfront_origin_access_identity" "oai" {
    comment = "cloudfront creation"
}

locals {
  s3_origin_id = "S3-${aws_s3_bucket.bucket.bucket}"
}

# Create CloudFront distribution with S3 bucket as origin
resource "aws_cloudfront_distribution" "s3_distribution" {

  origin {
    domain_name = aws_s3_bucket.bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }

  enabled             = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 9600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "blacklist"
      locations        = ["US", "CA", "GB", "DE"]
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

  • To create CloudFront I used the aws_cloudfront_distribution resource, which needs an origin with a domain_name and an origin_id for the CDN (Content Delivery Network).
  • We use the S3 bucket as the origin for CloudFront.
  • restrictions is used to restrict access to the CloudFront distribution by geography. In geo_restriction there are two options:

a. whitelist (only the listed countries can access the site)

b. blacklist (the listed countries cannot access the site)

We can pick whichever option fits the requirement; the code above blacklists a few countries, and a whitelist variant is sketched below.
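
For reference, a whitelist variant of the geo_restriction block (to be placed inside the aws_cloudfront_distribution resource in place of the block above) could look like this; the country codes are only an example:

# Example: allow access only from the listed countries
restrictions {
  geo_restriction {
    restriction_type = "whitelist"
    locations        = ["IN", "US"]   # example country codes
  }
}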

Step-8 Log in to the instance and update the code with the CloudFront URL

resource "null_resource" "loca1" {
connection {
        type    = "ssh"
        user    = "ec2-user"
        host    = aws_instance.myin.public_ip
        port    = 22
        private_key =tls_private_key.test.private_key_pem
    }


provisioner "remote-exec" {
        inline  = [
            "sudo su << EOF",
             "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.object.key}' width = '300' height = '200'>\" >> /var/www/html/index.html",
            "EOF"
        ]
   }

depends_on = [
  aws_cloudfront_distribution.s3_distribution,
  ]


provisioner "local-exec" {
  command = "chrome ${aws_instance.myin.public_ip}"


  }

}
  • Here I used a null_resource to append the CloudFront URL to the website, because the img tag on the page needs to point at the CloudFront domain.
  • Now, after the Terraform code runs successfully, we can open the website in Chrome (or use the output sketched below).
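
Instead of relying on the local-exec browser launch, you can also expose the CloudFront domain as an output (my own addition) and open it manually:

# Optional: print the CloudFront domain name so the site can be opened manually
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}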

Thanks for reading....


