Launching an Instance and Deploying a Website with EFS, S3, and CloudFront

Here are the details of what to do in Task-2:

Perform Task-1 using the EFS service instead of EBS on AWS:

Create/launch Application using Terraform


  1. Create a key pair and a security group that allows port 80.
  2. Launch an EC2 instance. In this EC2 instance use the key and security group which we created in step 1.
  3. Launch one volume using the EFS service and attach it to your VPC, then mount that volume into /var/www/html.
  4. A developer has uploaded the code into a GitHub repo, and the repo also has some images. Copy the GitHub repo code into /var/www/html.
  5. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.
  6. Create a CloudFront distribution using the S3 bucket (which contains the images).
  7. Use the CloudFront URL to update the code in /var/www/html.


Let's start writing our code for the above steps, with explanations:

Step-1

I declared the cloud provider and gave our account details so that Terraform can access our AWS account. We also provide the region where we want to work.

provider "aws" {
 region = "ap-south-1"
 profile = "default"
}
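Task step 1 also asks for a key pair. The instance in Step-3 references tls_private_key.key and the key name mytask2key, but their creation isn't shown in this article; here is a minimal sketch of how they could be declared (the resource names are assumptions, chosen to match the later references):

/*==================================creating the key pair (sketch)==================================*/

# Sketch, not the article's exact code: generate an RSA key locally and
# register its public half with AWS under the key name used in Step-3.
resource "tls_private_key" "key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "keypair" {
  key_name   = "mytask2key"
  public_key = tls_private_key.key.public_key_openssh
}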

Step-2

Now we need to create a security group. A security group is like a virtual firewall for our instance, controlling incoming and outgoing traffic. Without a security group, a DoS attack or a similar intrusion could happen, so we create one according to our requirements.

I created a security group that allows port 80 (for the web server), port 2049 (for the NFS server), and port 22 (for SSH) so that I can log in to the instance.

/*==================================creating security group with ports 22, 80, 443, and 2049==================================================*/

resource "aws_security_group" "task2sg" {
 
  name        = "allow_tls and port 80"
  description = "Allow TLS and port 80 inbound traffic"
 
  ingress {
    description = "SSH connection"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    description = "TLS from VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    description = "allow port 80"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "allow_tls and port 80"
  }
}



Output of the above code:



Step-3

Launch the EC2 instance from Terraform using the private key and the security group we created in Step-2. Terraform then connects to the instance via SSH, installs PHP, Git, and httpd (for web services), starts the web service, mounts the EFS volume, and uploads the code and images from GitHub with the commands below.

/*==========================================launching EC2 instance===========================================================================*/

resource "aws_instance" "web" {
  ami           = "ami-052c08d70def0ac62"
  instance_type = "t2.micro"
  key_name = "mytask2key"
  security_groups = ["allow_tls and port 80"]




  tags = {
    Name = "myinstance"
  }


connection {
    type         = "ssh"
    user         = "ec2-user"
    private_key  = tls_private_key.key.private_key_pem 
    host         = aws_instance.web.public_ip
  }
provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd  php git -y",
      "sudo systemctl start httpd",
      "sudo mount -t nfs4 ${aws_efs_mount_target.target.ip_address}:/ /var/www/html",
      "sudo rm -f /var/www/html/*",
      "sudo git clone https://github.com/rakeshrdec/newrepo.git  /var/www/html",
    ]
  }




}



Output of the above code:



Step-4

What is EFS?

There are three main storage services in AWS:

  1. EBS
  2. S3
  3. EFS

Elastic File System is a kind of file storage and is a region-level service. EFS is something like EBS storage in the AWS cloud, but we can't attach an EBS volume to more than one instance, while EFS can be mounted on many instances at once. EFS is a simple, scalable, fully managed, elastic NFS file system for use with AWS cloud services and on-premises resources.

Creating the EFS volume

/*==========================================creating and mounting EFS================================================================================*/

resource "aws_efs_file_system" "efs" {


  tags = {
    Name = "My_efs_vol"
  }
} 




resource "aws_efs_mount_target" "target" {
  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = aws_subnet.main.id
}
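The mount target above references aws_subnet.main, which is not declared anywhere in this article. If you don't have your own VPC and subnet resources, a subnet of the default VPC can be looked up instead; a minimal sketch, assuming region ap-south-1 and a default subnet in that AZ (the mount target would then use data.aws_subnet.default_az.id):

# Sketch: look up the default subnet in one availability zone of the
# default VPC, as a stand-in for the undeclared aws_subnet.main.
data "aws_subnet" "default_az" {
  availability_zone = "ap-south-1a"
  default_for_az    = true
}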








Output of the above code:




Step-5

An S3 bucket is a kind of object storage, like Google Drive, so we can't partition it. We can put our data inside an S3 bucket, but we can't edit a file in place after uploading it; we use S3 according to our requirements.

Here we create the S3 bucket and dynamically upload an image into it from the GitHub repo. The image is named 1.jpg, which I uploaded to GitHub.

/*===============================================creating a new s3 bucket===============================================================*/

resource "aws_s3_bucket" "buckt" {


  bucket = "rakeshrdec"
  acl    = "public-read-write"


  tags = {
    Name        = "My_terra_bucket"
  }




provisioner "local-exec" {
    when = destroy
    command = "echo Y|rmdir /s git_image"
  }


}
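The upload below reads the image from a local git_image directory, but the clone step itself isn't shown in this article. A minimal sketch of how it could be done, assuming the same repo as Step-3 (the resource name is my own):

# Sketch: clone the repo into git_image so 1.jpg exists locally before
# the bucket object below uploads it (fails if git_image already exists).
resource "null_resource" "clone_repo" {
  provisioner "local-exec" {
    command = "git clone https://github.com/rakeshrdec/newrepo.git git_image"
  }
}

With this in place, the bucket object below could also declare depends_on = [null_resource.clone_repo] so the clone finishes before the upload starts.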






/*==============================================uploading image in s3 bucket==============================================================*/

resource "aws_s3_bucket_object" "bucket_object" {
  key    = "bucket.jpg"
  bucket = aws_s3_bucket.buckt.id
  source = "git_image/1.jpg"
  acl = "public-read-write"
}




locals {
  s3_origin_id = aws_s3_bucket.buckt.id
}






Note: when you try to destroy the S3 bucket with "terraform destroy", Terraform refuses to delete a bucket that still contains objects. That is why the bucket resource above includes this keyword:

  • force_destroy = true


Step-6

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as HTML, CSS, JavaScript, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

Here we create a CloudFront distribution that serves my image from the S3 bucket through the nearest edge locations.


/*============================================cloudfront distribution of s3 bucket image======================================================*/

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.buckt.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }


  enabled             = true
  is_ipv6_enabled     = true
  comment             = "this image"
  default_root_object = "bucket.jpg" # must match the S3 object key uploaded above


  


  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }


  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false
      headers      = ["Origin"]


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }




  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"


  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN"]
    }
  }


  tags = {
    Environment = "production"
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }
}


Output of the above code:

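As a small convenience (an addition, not part of the original code), an output can print the distribution's domain name after terraform apply, which makes it easy to verify the URL that Step-7 injects into the page:

# Sketch: expose the CloudFront domain name after `terraform apply`.
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}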


Step-7

Now we have to add the image to our previous code: we use the CloudFront URL to update the page in /var/www/html.

/*=========================================================using a null resource to add the image===================================================================*/

resource "null_resource" "null3" {
depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]


 connection {
           type     = "ssh"
           user     = "ec2-user"
           private_key = tls_private_key.key.private_key_pem
           host     = aws_instance.web.public_ip
                   }
      
provisioner "remote-exec" {
           inline = [
    "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/bucket.jpg' width='400' lenght='500' >\"  | sudo tee -a /var/www/html/index.html",
 
    "sudo systemctl start httpd"
                    ]
                           }
}
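Likewise (again an addition, not from the original code), an output for the instance's public IP makes it easy to open the finished site in a browser at http://<instance_public_ip>/:

# Sketch: expose the web server's public IP so the site can be opened
# directly after apply.
output "instance_public_ip" {
  value = aws_instance.web.public_ip
}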



Final output of the complete code execution:



For the complete code, here is my Git URL:

https://github.com/rakeshrdec/task2.git

Thank you for reading this article; I hope you enjoyed it.

Special thanks to Vimal Daga sir, LinuxWorld, and my friends for supporting me in completing this project.

.........RAKESH KUMAR MISHRA
