Automate EFS with CloudFront using Terraform

Project Description:

1. Create a security group that allows inbound traffic on port 80.

2. Create a key pair.

3. Launch an EC2 instance using the key pair from Step 2 and the security group from Step 1, and install the required packages.

4. Make a GitHub repo containing one or more images.

5. Create a file system using the EFS service, attach it to your VPC, and mount it at /var/www/html.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, deploy the images from the GitHub repo into it, and make them publicly readable.

8. Create a CloudFront distribution backed by the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

TERRAFORM

Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It enables users to define and provision datacenter infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON.

AWS EFS

Amazon Elastic File System (Amazon EFS) is a cloud storage service provided by Amazon Web Services (AWS), designed to provide scalable, elastic, concurrent (with some restrictions), and encrypted file storage for use with both AWS cloud services and on-premises resources.

Implementation:
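
All the snippets below assume an AWS provider is already configured; a minimal sketch (the region and profile are assumptions, adjust them to your setup):

provider "aws" {
  region  = "ap-south-1"
  profile = "default"
}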

First, create a security group. Besides HTTP on port 80, it also opens SSH (22) for the provisioners and NFS (2049) for EFS:

resource "aws_security_group" "task-2-sg" {
  depends_on = [ aws_key_pair.task-2-key, ]
  name        = "task-2-sg"
  description = "Allow SSH AND HTTP and NFS inbound traffic"
  vpc_id      = "vpc-758b931d"


  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [ "0.0.0.0/0" ]
  }


  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [ "0.0.0.0/0" ]
  }
 ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = [ "0.0.0.0/0" ]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "task-2-sg"
  }
}

Second, generate a key pair:

resource "tls_private_key" "task-2-pri-key" { 
  algorithm   = "RSA"
  rsa_bits = "2048"
}


resource "aws_key_pair" "task-2-key" {
  depends_on = [ tls_private_key.task-2-pri-key, ]
  key_name   = "task-2-key"
  public_key = tls_private_key.task-2-pri-key.public_key_openssh
}
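
Optionally, you can also write the generated private key to a local file so you can SSH into the instance manually; a small sketch using the hashicorp/local provider (the filename is an assumption):

resource "local_file" "task-2-key-file" {
  content         = tls_private_key.task-2-pri-key.private_key_pem
  filename        = "task-2-key.pem"
  file_permission = "0400" # SSH clients refuse keys with open permissions
}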

Third, launch an EC2 instance.

This instance uses the key pair created in Step 2 and the security group created in Step 1, and installs the required packages (httpd, php, git) over SSH:

resource "aws_instance" "task-2-os1" {
   depends_on =  [ aws_key_pair.task-2-key,
              aws_security_group.task-2-sg, ] 
   ami                 = "ami-0447a12f28fddb066"
   instance_type = "t2.micro"
   key_name       =  "task-2-key"
   security_groups = [ "task-2-sg" ]
     connection {
     type     = "ssh"
     user     = "ec2-user"
     private_key = tls_private_key.task-2-pri-key.private_key_pem
     host     = aws_instance.task-2-os1.public_ip
   }
   provisioner "remote-exec" {
    inline = [
       "sudo yum update -y",
      "sudo yum install httpd  php git -y ",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }
  tags = {
      Name =  "task-2-os1"
           }
}
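
It is handy to expose the instance's public IP as an output for debugging the later SSH provisioners; a small optional addition:

output "instance_ip" {
  value = aws_instance.task-2-os1.public_ip
}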

Fourth, make a GitHub repo containing one or more images.

Fifth, create an EFS file system in your VPC; it will be mounted at /var/www/html in the next step:

resource "aws_efs_file_system" "allow_nfs" {
 depends_on =  [ aws_security_group.task-2-sg,
                aws_instance.task-2-os1,  ] 
  creation_token = "allow_nfs"


  tags = {
    Name = "allow_nfs"
  }
}


resource "aws_efs_mount_target" "alpha" {
 depends_on =  [ aws_efs_file_system.allow_nfs,
                         ] 
  file_system_id = aws_efs_file_system.allow_nfs.id
  subnet_id      = aws_instance.task-2-os1.subnet_id
  security_groups = ["${aws_security_group.task-2-sg.id}"]
}

Sixth, mount the file system on the instance and copy the GitHub repo code into /var/www/html:

resource "null_resource" "null-remote-1"  {
 depends_on = [ 
               aws_efs_mount_target.alpha,
                  ]
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.task-2-pri-key.private_key_pem
    host     = aws_instance.task-2-os1.public_ip
  }
  provisioner "remote-exec" {
      inline = [
        "sudo echo ${aws_efs_file_system.allow_nfs.dns_name}:/var/www/html efs defaults,_netdev 0 0 >> sudo /etc/fstab",
        "sudo mount  ${aws_efs_file_system.allow_nfs.dns_name}:/  /var/www/html",
        "sudo curl https://https://github.com/surasolanki724/aws-tera-repo/blob/master/index.html > index.html",                                  "sudo     cp index.html  /var/www/html/",
      ]
  }
}
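
To confirm that EFS actually got mounted, an optional diagnostic resource (a sketch; safe to omit):

resource "null_resource" "verify-mount" {
  depends_on = [null_resource.null-remote-1]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.task-2-pri-key.private_key_pem
    host        = aws_instance.task-2-os1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "df -hT /var/www/html", # should list the EFS DNS name with filesystem type nfs4
    ]
  }
}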

Seventh, create an S3 bucket, deploy the images from the GitHub repo into it, and make them publicly readable:

resource "aws_s3_bucket" "terra-bucket" {
depends_on = [
    null_resource.null-remote-1,    
  ]     
  bucket = "terra-bucket"
  force_destroy = true
  acl    = "public-read"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Id": "MYBUCKETPOLICY",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::terra-bucket/*"
    }
  ]
}
POLICY
}
resource "aws_s3_bucket_object" "task-2-object" {
  depends_on = [ aws_s3_bucket.terra-bucket,
                null_resource.null-remote-1,
               
 ]
     bucket = aws_s3_bucket.terra-bucket.bucket
  key    = "one"
  source = "E:/tera/myaws/task-2-EFS/IMG_20200121_090551-01.jpeg"
  etag = "E:/tera/myaws/task-2-EFS/IMG_20200121_090551-01.jpeg"
  acl = "public-read"
  content_type = "image/jpeg"
}


locals {
  s3_origin_id = aws_s3_bucket.terra-bucket.id
}

Cloudfront :

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

Eighth, create a CloudFront distribution backed by the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html:

resource "aws_cloudfront_origin_access_identity" "oai" {
   comment = "CloudFront S3 sync"
}
// Creating CloudFront Distribution for S3
resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [
    aws_key_pair.task-2-key,
    aws_instance.task-2-os1
  ] 
  origin {
    domain_name = aws_s3_bucket.terra-bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }
  enabled             = true
  is_ipv6_enabled     = true
  comment             = "CloudFront S3 sync"
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
# Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }
    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }
# Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }
  price_class = "PriceClass_200"
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
  tags = {
    Environment = "production"
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
output "out3" {
        value = aws_cloudfront_distribution.s3_distribution.domain_name
}



resource "null_resource" "null-remote2" {
 depends_on = [ aws_cloudfront_distribution.s3_distribution, ]
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.task-2-pri-key.private_key_pem
    host     = aws_instance.task-2-os1.public_ip
   }
   provisioner "remote-exec" {
      inline = [
      "sudo su << EOF",
      "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.task-2-object.key }'>\" >> /var/www/html/index.html",
       "EOF"
   ]
 }
}
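
Finally, a convenience output for the finished page (a sketch; the path assumes the index.html written above):

output "website_url" {
  value = "http://${aws_instance.task-2-os1.public_ip}/index.html"
}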


Screenshots of the process:

$ terraform apply 


FINAL OUTPUT:

The content of the web page comes from the GitHub repo (written by a developer), and the image is delivered by CloudFront from the S3 bucket.


Thank you!

Hope you liked it!

If you have any queries, feel free to DM me.
