AWS, Terraform, Full Deployment with CloudFront

Since I have already given a brief description of the basic terminology in the previous task, I won't take up much of your time here. Here's the link to the previous task if anybody wants to look it over.


Task Description:

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use an existing or provided key and the security group created in step 1.

4. Launch one volume using the EFS service, attach it in your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public-readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

So let's get started directly. First of all, let's create a security group which allows port 22 for SSH, port 80 for HTTP, and port 2049 for NFS.

resource "aws_security_group" "efs_http_securitygroup" {
  name        = "efs_http_securitygroup"
  vpc_id      = "vpc-1e958876"


  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "allow_tls"
  }

}

And here I'm launching an instance that uses the security group we just created, along with a pre-created key pair for logging into the instance.

resource "aws_instance" "myec2" {
  ami             = "ami-07a8c73a650069cf3"
  instance_type   = "t2.micro"
  security_groups  = [ "efs_http_securitygroup" ]
  key_name        = "aws_terra_key"


  tags = {
    Name = "aws_terra_ec2"
  }
  depends_on = [ aws_security_group.efs_http_securitygroup ]
}

Now let Terraform log in to our EC2 instance over SSH in order to install some of the prerequisites.

resource "null_resource" "nullexec1"{ 
  connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("C:/Users/nisha/Desktop/terraform/task2/aws_terra_key.pem")
      host        = aws_instance.myec2.public_ip
    }


  provisioner "remote-exec" {
    inline = [
    "sudo yum install httpd php -y",
    "sudo systemctl restart httpd",
    "sudo systemctl enable httpd",
    "sudo yum install git -y",
    "sudo yum install amazon-efs-utils -y",
    ]
  }
  depends_on = [ aws_instance.myec2 ]
}

Here we are additionally installing amazon-efs-utils because we want to mount the EFS file system that we are about to create.

So let's create an EFS file system and mount targets in all the subnets, using the security group created above so that the instance is allowed to reach the file system over NFS.

#Creating the EFS file system
resource "aws_efs_file_system" "ec2" {
  creation_token = "storageefs"

  tags = {
    Name = "efs"
  }
  depends_on = [ aws_security_group.efs_http_securitygroup ]
}


resource "aws_efs_mount_target" "efsmount" {
  file_system_id  = "${aws_efs_file_system.ec2.id}"
  subnet_id       = "subnet-e86902a4"
  security_groups = [ aws_security_group.efs_http_securitygroup.id ]
}  


resource "aws_efs_mount_target" "efsmount2" {
  file_system_id  = "${aws_efs_file_system.ec2.id}"
  subnet_id       = "subnet-51fec439"
  security_groups = [ aws_security_group.efs_http_securitygroup.id ]
}


resource "aws_efs_mount_target" "efsmount3" {
  file_system_id  = "${aws_efs_file_system.ec2.id}"
  subnet_id       = "subnet-f2ee5389"
  security_groups = [ aws_security_group.efs_http_securitygroup.id ]
}

Then we mount our EFS file system onto the EC2 instance running the httpd server.

resource "null_resource" "nullexec2"{
  provisioner "remote-exec"{
    inline = [
    "efs_id=${aws_efs_file_system.ec2.id}",
    "sudo mount -t efs -o tls $efs_id:/ /var/www/html/",
    "sudo rm -rf /var/www/html/* ",
    "sudo git clone https://github.com/SSJNM/php_code.git /var/www/html",
    ]


    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("C:/Users/nisha/Desktop/terraform/task2/aws_terra_key.pem")
      host        = aws_instance.myec2.public_ip
    }
  }
  depends_on = [ 
                 aws_efs_file_system.ec2,
                 aws_efs_mount_target.efsmount3,
                 aws_efs_mount_target.efsmount2,
                 aws_efs_mount_target.efsmount,
                 null_resource.nullexec1, 
               ]  
}

Following the sequence, we now create an S3 bucket, open it up to public access, and give our canonical user full control so it can update the policies.

# Creating the s3 bucket


data "aws_canonical_user_id" "current_user" {}


resource "aws_s3_bucket" "mybucket" {
  bucket = "ssjnm"
  tags = {
    Name        = "ssjnm_bucket"
    Environment = "Dev"
  }
  grant {
    id          = "${data.aws_canonical_user_id.current_user.id}"
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }


  grant {
    type        = "Group"
    permissions = ["READ", "WRITE"]
    uri         = "https://acs.amazonaws.com/groups/s3/LogDelivery"
  }
  force_destroy = true
}
#Public access control for S3
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = "${aws_s3_bucket.mybucket.id}"


  block_public_acls   = false
  block_public_policy = false
  
}

Then it's time to create the CloudFront distribution. Here I also want CloudFront to be granted access, through the S3 bucket policy, to the objects that will be put into the bucket, so that it can transfer those objects to the edge locations whenever needed.

So we will be creating an origin access identity for that.

#Making CloudFront Origin access Identity
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  depends_on = [ aws_s3_bucket.mybucket ]
}


#Updating IAM policies in bucket
data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.mybucket.arn}/*"]


    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
    }
  }


  statement {
    actions   = ["s3:ListBucket"]
    resources = ["${aws_s3_bucket.mybucket.arn}"]


    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
    }
  }
  depends_on = [ aws_cloudfront_origin_access_identity.origin_access_identity ]
}


#Updating Bucket Policies
resource "aws_s3_bucket_policy" "example" {
  bucket = "${aws_s3_bucket.mybucket.id}"
  policy = "${data.aws_iam_policy_document.s3_policy.json}"
  depends_on = [ aws_cloudfront_origin_access_identity.origin_access_identity ]
}

Finally, creating the CloudFront distribution, since all my prerequisites are done.

#Creating CloudFront

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.mybucket.bucket_domain_name
    origin_id   = aws_s3_bucket.mybucket.id
    s3_origin_config {
      origin_access_identity = "${aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path}"
      }
  }
  enabled             = true
//  default_root_object = "image1.jpg"
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST",   "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = aws_s3_bucket.mybucket.id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
  }


  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }
  depends_on = [ aws_s3_bucket_policy.example ]
}

And then let's clone the code provided by the developer onto our local system.

resource "null_resource" "github"{
  provisioner "local-exec"{
    command = "git clone https://github.com/SSJNM/php_code.git C:/Users/nisha/Desktop/terraform/task2/php_code/"
  }
}

So here the first part is over, and now let's try it out.



As always, to apply any changes to our deployment we use the terraform apply command, and since I have already tested this code, I will be running it with auto-approve:

terraform apply --auto-approve
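
Note that if this is the very first run in the folder, terraform init has to be executed once beforehand so that Terraform downloads the AWS provider plugin:

terraform init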

Once we have started the deployment, Terraform prints each resource in the terminal as it gets created.


Here we can see that my EFS storage is automatically mounted onto my httpd instance, and the httpd software is installed and started as well, so the page is already being served.


Now I'm going to check whether the Git repository has been cloned, and I can see clearly that the code has been copied from GitHub into /var/www/html.


Since everything is configured now, let's update the code with the CloudFront URL. Here I'm changing my directory to the folder which contains the image.tf file.


Actually, this is where I have updated my code compared to task 1, and it is also the most interesting part.

From the project.tf file we get the values we will need by declaring output blocks:

output "efs_id" {
  value = "${aws_efs_file_system.ec2.id}"
}

output "domain_name" {
 value = aws_cloudfront_distribution.s3_distribution.domain_name
}


output "IPAddress"{
  value = aws_instance.myec2.public_ip
}
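
Once terraform apply finishes, these values can be read back at any time with the terraform output command, for example:

terraform output efs_id
terraform output domain_name
terraform output IPAddress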


Now we will use the image.tf file to upload as many images as we need, according to the code uploaded by the developer.


Here we have told our developer that wherever images are needed, they should be referenced through placeholders of the form imagename1, imagename2 and so on, with cloudfrontip1 as the placeholder for the CloudFront domain.
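
Just as an illustration (the actual file and image names in the php_code repo may differ), a placeholder line in the developer's page could look like this before the substitution:

<img src="http://cloudfrontip1/imagename1">

and, after image.tf has run its sed replacements, like this (the domain and image name shown here are hypothetical):

<img src="http://d1234abcd.cloudfront.net/photo1.jpg">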

image.tf

provider "aws"{
  region  = "ap-south-1"
  profile = "default"
}


#Now lets add the GitHub photos


variable "image_name" {}
variable "code" {}


variable "host_ip" {}  
variable "cloudfront_ip" {}


resource "aws_s3_bucket_object" "mybucket" {
  bucket = "ssjnm"
  key    = "${var.image_name}"
  acl    = "public-read"
  source = "C:/Users/nisha/Desktop/terraform/task2/php_code/${var.image_name}"
}
resource "null_resource" "nullexec4" {
  provisioner "remote-exec"{
    inline = [
    "image=${var.image_name}",
    "cloudfrontip=${var.cloudfront_ip}",
    "sudo sed -i \"s/imagename1/$image/\" /var/www/html/${var.code}",
    "sudo sed -i \"s/cloudfrontip1/$cloudfrontip/\" /var/www/html/${var.code}",
    ]


    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("C:/Users/nisha/Desktop/terraform/task2/aws_terra_key.pem")
      host        = "${var.host_ip}"
    }
  }
  depends_on = [ aws_s3_bucket_object.mybucket ]
}


resource "null_resource" "ip"{
  provisioner "local-exec"{
    command = "microsoftedge ${var.host_ip}/${var.code}"
  }
  depends_on = [ null_resource.nullexec4 ]
}

This code will go to the EC2 instance and replace the placeholders imagename1 and cloudfrontip1 in the code with the values we pass in when applying image.tf.

This makes our setup flexible: according to the image.tf file, we have to supply the name of the code file, the CloudFront domain, the image name and lastly the host IP, and then image.tf updates the code inside our running instance so that we get the desired output. An example invocation is shown below.
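
A minimal invocation sketch (the code file and image names here are hypothetical; the host IP and CloudFront domain come from the outputs of the first part):

terraform apply -var="image_name=photo1.jpg" -var="code=index.php" -var="host_ip=<instance-public-ip>" -var="cloudfront_ip=<cloudfront-domain-name>" --auto-approve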


This window opens automatically, and the image we are receiving here is served from CloudFront and its edge locations.
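
As an optional check (not part of the original flow), the response headers confirm the object is being served by CloudFront, since they carry Via and X-Cache entries added at the edge:

curl -I https://<cloudfront-domain-name>/<image-name>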


And then everything can be destroyed whenever we want; a great setup it is :)
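
Whenever needed, both the project folder and the image folder can be cleaned up with the same command:

terraform destroy --auto-approve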


You can get all the code here.

