Infrastructure as Code using Terraform on AWS with EFS

What is AWS EFS?

EFS is an NFS file system service offered by AWS. An Amazon EFS file system is an excellent managed network file system that can be shared across different Amazon EC2 instances and works much like a NAS device. It offers durable, highly available storage that can be used by thousands of servers at the same time. AWS EFS is a fully managed service that scales automatically: the file system grows and shrinks as you add or remove files.

Why EFS over EBS?

While both EBS and EFS offer great features, these two storage solutions are built for completely different uses. EBS volumes are limited to a single instance and, more importantly, they can only be accessed by one instance at a time. With EFS, you can have hundreds or thousands of instances accessing the file system simultaneously. This makes AWS EFS a great fit for any use case that requires decently performing, centralized shared storage.

So, to develop this IaC we need to complete the following tasks:

1. Create a security group that allows port 80 (HTTP).

2. Launch an EC2 instance.

3. For this EC2 instance, use an existing or newly created key pair and the security group created in step 1.

4. Create a file system using the EFS service, attach it to your VPC, and mount it on /var/www/html.

5. Upload the code to a GitHub repo; the repo also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and make them publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Step 1: Configure your AWS account and create a named profile for Terraform to perform the necessary tasks in AWS.

provider "aws" {
  region = "ap-south-1"
  profile = "kamya"
}
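
For this provider block to work, the named profile must already exist on the workstation (for example, one created earlier with the AWS CLI). It can also help to pin the provider version; a minimal sketch, assuming Terraform 0.13 or later and the AWS provider from the public registry:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.70"
    }
  }
}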

Step 2: Now we need to make a key pair for logging into the EC2 instance.

resource "tls_private_key" "awskey" {
  algorithm = "RSA"
  rsa_bits   = "4096"
}
resource "aws_key_pair" "key2" {
  key_name = "kamk"
  public_key = tls_private_key.awskey.public_key_openssh
}
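
Optionally, the generated private key can also be written to a local .pem file so the instance can be reached manually over SSH later; a small sketch, where the resource name and the filename kamk.pem are just illustrative:

resource "local_file" "key_file" {
  # Save the generated private key locally with read-only permissions
  content         = tls_private_key.awskey.private_key_pem
  filename        = "kamk.pem"
  file_permission = "0400"
}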

Step 3: Now we need to create a security group that allows HTTP (port 80), SSH (port 22) and NFS (port 2049) inbound traffic.

resource "aws_security_group" "security" {
  name        = "sec group"
  description = "Allow HTTP, SSH and EFS inbound traffic"
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
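
Opening NFS to 0.0.0.0/0 keeps the example simple, but in a real deployment the NFS rule would normally be narrowed. One option is to allow NFS only between members of this same security group using the ingress block's self flag; a hedged sketch of a drop-in replacement for the 2049 ingress block above:

  ingress {
    # Allow NFS only from other resources attached to this security group
    from_port = 2049
    to_port   = 2049
    protocol  = "tcp"
    self      = true
  }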

Step 4: Next, we need to create an EC2 instance and specify its details like the security group, key pair, instance type and AMI ID, and through the remote-exec provisioner we install httpd, git and the EFS utilities on the instance.

resource "aws_instance" "instance" {
  ami                  = "ami-0447a12f28fddb066"
  instance_type = "t2.micro" 
  key_name       = "kamk"
  security_groups = [aws_security_group.security.name]
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.awskey.private_key_pem
    host     =   "${aws_instance.instance.public_ip}"
  }
  provisioner "remote-exec"{
    inline = [
      "sudo yum install git -y",
      "sudo yum install httpd -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
      "sudo systemctl enable httpd",
      "sudo yum install -y amazon-efs-utils"
    ]    
  }      
  tags        ={
    Name = "kamya"
  }
}
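
It is convenient to print the instance's public IP after terraform apply so we can open the site without going to the console; an optional addition:

output "instance_public_ip" {
  value = aws_instance.instance.public_ip
}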

Step 5: Now we need to create an EFS file system and attach it to the above instance by providing the instance's subnet ID and security group to a mount target.

resource "aws_efs_file_system" "efs" {
  creation_token = "aws_efs"


  tags = {
    Name = "efs_storage"
  }
}
resource "aws_efs_mount_target" "efs_mount" {
  file_system_id     = "${aws_efs_file_system.efs.id}"
  subnet_id             = "${aws_instance.instance.subnet_id}"
  security_groups   = ["${aws_security_group.security.id}"]
}
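
If needed, the file system's DNS name can also be exported for reference; the mount command used later in this article only needs the file system ID, so this is purely informational. An optional sketch:

output "efs_dns_name" {
  value = aws_efs_file_system.efs.dns_name
}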

We can also verify this from the AWS web console.

Step 6: Now we will create an S3 bucket to store the images for our website.

resource "aws_s3_bucket" "task2s3kamya" {
  bucket = "task2s3kamya"
  acl = "public-read"
}
  locals {
    s3_origin_id = "myS3task2"
  }

resource "aws_s3_bucket_object" "object" {
  depends_on = [aws_s3_bucket.task2s3kamya]
  bucket = "task2s3kamya"
  key    = "AWS-EFS.jpg"
  source = "C:/Users/Admin/Desktop/AWS-EFS.jpg"
  content_type= "image/jpg"
  acl    ="public-read"
}
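
If the repo holds several images, they can all be uploaded with a single resource using for_each and fileset; a sketch, assuming a hypothetical images/ folder next to the Terraform configuration:

resource "aws_s3_bucket_object" "images" {
  # Upload every .jpg found in the local images/ folder
  for_each     = fileset("${path.module}/images", "*.jpg")
  bucket       = aws_s3_bucket.task2s3kamya.bucket
  key          = each.value
  source       = "${path.module}/images/${each.value}"
  content_type = "image/jpeg"
  acl          = "public-read"
}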

Step 7: Next, we need to create a CloudFront distribution and connect it with S3.

resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [aws_s3_bucket_object.object]
  origin {
    domain_name = "${aws_s3_bucket.task2s3kamya.bucket_regional_domain_name}"
    origin_id   = "${local.s3_origin_id}"
}
  
  enabled             = true
  default_root_object = "AWS-EFS.jpg"   


 default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
 
}
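
The distribution's domain name is what the web page will reference, so exporting it makes the value easy to look up; an optional addition:

output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}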

Step 8: Next we need to mount the EFS file system on the Apache web server's document root, clone our code from GitHub into that folder, and finally append an image tag that points at the CloudFront URL to the code in /var/www/html.

resource "null_resource" "null1" {


depends_on = [
    aws_efs_mount_target.efs_mount,
    aws_instance.instance,
    aws_cloudfront_distribution.s3_distribution, 
]
connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key =  tls_private_key.awskey.private_key_pem
    host     =   "${aws_instance.instance.public_ip}"
  }
provisioner "remote-exec"{
    inline = [ 
      "sudo mount -t efs '${aws_efs_file_system.efs.id}':/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/kamya-24/kamya.git /var/www/html/",
      "sudo su <<EOF" , "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.object.key}'>\"  >> /var/www/html/index.html" , "EOF", 
      ]   
  }
}
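
The mount command above does not survive a reboot. amazon-efs-utils supports mounting through /etc/fstab with the efs file system type, so a persistent variant could add an fstab entry in a follow-up provisioner; a hedged sketch (the resource name efs_fstab is illustrative):

resource "null_resource" "efs_fstab" {
  depends_on = [null_resource.null1]
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.awskey.private_key_pem
    host        = aws_instance.instance.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      # Append an fstab entry so the EFS mount is restored after a reboot
      "sudo sh -c \"echo '${aws_efs_file_system.efs.id}:/ /var/www/html efs defaults,_netdev 0 0' >> /etc/fstab\""
    ]
  }
}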

Step 9: Finally, we open Chrome with our public IP to see the website. The code below launches Chrome with the public IP so we can see the website with one click (the start command assumes a Windows workstation).

resource "null_resource" "null2" {


depends_on =[
    null_resource.null1,
]
provisioner "local-exec"{
    command = "start chrome ${aws_instance.instance.public_ip}"
}
}
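
On a non-Windows workstation the start chrome command is not available; a more portable alternative is simply to output the site URL and open it by hand. An optional sketch:

output "website_url" {
  value = "http://${aws_instance.instance.public_ip}"
}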

Running terraform init followed by terraform apply -auto-approve launches the entire setup, and the website is then served at the instance's public IP.


At last, we can destroy the entire setup with the terraform destroy -auto-approve command.
