Deploying Infrastructure (Website) on AWS and Integrating It with EFS (Storage) Using Terraform


Welcome, everyone, to my article based on Task 2 of the Hybrid Multi Cloud Computing training.

TASK DESCRIPTION:

Write infrastructure as code using Terraform that automatically deploys a webserver:

  1. Create a security group that allows port 80.
  2. Launch an EC2 instance, using the existing/provided key and the security group created in step 1.
  3. Launch one volume using the EFS service, attach it to your VPC, and mount it into /var/www/html.
  4. The developer has uploaded the code to a GitHub repo, which also contains some images; copy the repo code into /var/www/html.
  5. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change their permission to public readable.
  6. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

In this task we automate the deployment of a web application through infrastructure as code written in Terraform. The update in Task 2 concerns the storage we provide to our infrastructure: here we use EFS (Elastic File System) instead of EBS volumes.

Prerequisites:

  1. Terraform must be installed.
  2. The AWS CLI must be installed and configured.

WHY TERRAFORM?

Terraform provides full lifecycle management of infrastructure resources across providers such as AWS: it creates, updates, and deletes resources declaratively from a single configuration.

Unlike imperative CLI tools, Terraform waits for a resource to become ready before creating the resources that depend on it. This is useful when you want to guarantee state following the command's completion. As a concrete example, Terraform waits until the EC2 instance in this task is provisioned before running the commands that configure the webserver. No manual processes necessary!
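This ordering comes from references between resources: when one resource uses another's attribute, Terraform infers the dependency automatically. A minimal sketch (the resource names here are illustrative, not from the task code):

resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "demo" {
  # Referencing aws_vpc.demo.id creates an implicit dependency,
  # so Terraform always builds the VPC before this subnet.
  vpc_id     = aws_vpc.demo.id
  cidr_block = "10.0.1.0/24"
}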

Let's Begin....

Step 1: Configure AWS via CLI


Here you have to provide your AWS Access Key ID and Secret Access Key.
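If you would rather not configure credentials in every session, the provider block can point at a named CLI profile instead. A minimal sketch (the profile name "myprofile" is an assumption):

provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile" # hypothetical profile created with: aws configure --profile myprofile
}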

Step 2: Create VPC and Subnet...

provider "aws" {
  region  = "ap-south-1"
}
resource "aws_vpc" "main" {
  cidr_block       = "192.168.0.0/16"
  instance_tenancy = "default"


  tags = {
    Name = "myvpc1"
  }
}
resource "aws_subnet" "main1" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "192.168.0.0/24"
  map_public_ip_on_launch = true
  availability_zone = "ap-south-1a"




  tags = {
    Name = "subnet1"
  }
}


Step 3: Create Internet Gateway...

resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.main.id}"


  tags = {
    Name = "mygw1"
  }
}


Step 4: Create Route Table...

resource "aws_route_table" "r" {
  vpc_id = "${aws_vpc.main.id}"


  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }


  tags = {
    Name = "routetable"
  }
}
resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.main1.id
  route_table_id = aws_route_table.r.id
}

Step 5: Create Security Group...

resource "aws_security_group" "sg1" {
  name        = "securitygroup1"
  description = "Allow NFS"
  vpc_id      = "${aws_vpc.main.id}"


  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "NFSgroup1"
  
}
}


Step 6: Create Elastic File System (EFS) for storage...

resource "aws_efs_file_system" "myefs" {
  creation_token = "myefs"
  performance_mode = "generalPurpose"


  tags = {
    Name = "myefs1"
  }
}


resource "aws_efs_mount_target" "myefs-mount" {
  file_system_id = aws_efs_file_system.myefs.id
  subnet_id = aws_subnet.main1.id
  security_groups = [ aws_security_group.sg1.id ]
}

Step 7: With the EFS mount target in place, we launch our instance and deploy the web application...

resource "aws_instance" "webserver" {
  depends_on = [ aws_efs_mount_target.myefs-mount ]
  ami = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name = "mykey11"
  subnet_id = aws_subnet.main1.id
  vpc_security_group_ids = [ aws_security_group.sg1.id ]
  
  tags = {
    Name = "WebServer"
  }
}
resource "null_resource" "nullremote1" {
  depends_on = [
    aws_instance.webserver
  ]
  connection {
    type = "ssh"
    user= "ec2-user"
    private_key = file("mykey11.pem")
    host = aws_instance.webserver.public_ip
  }


  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git amazon-efs-utils nfs-utils -y",
      "sudo setenforce 0",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
      "sudo mount -t efs ${aws_efs_file_system.myefs.id}:/ /var/www/html",
      "sudo echo '${aws_efs_file_system.myefs.id}:/ /var/www/html efs defaults,_netdev 0 0' >> /etc/fstab",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/vishaldhole173/AWS_EFS_TerraformIntegration /var/www/html/"
    ]
  
}


}


Step 8: Create an S3 bucket, upload an image from the GitHub repo into it, and give it public access...

 resource "aws_s3_bucket" "b" {
  bucket = "vishaldhole173bucket"
  acl    = "public-read"
 tags = {
  Name = "mybucket12345"
}


}
resource "aws_s3_bucket_object" "object" {
  bucket = aws_s3_bucket.b.id
  key    = "image.png"
}


locals{
  s3_origin_id = "aws_s3_bucket.b.id"
  depends_on = [aws_s3_bucket.b]
}

Step 9: To serve the bucket object (the image), we create a CloudFront distribution in front of the S3 bucket, which works as a content delivery network. Using the URL provided by CloudFront, we can update our code.

resource "aws_cloudfront_distribution" "cloudfront1" {
    enabled             = true
    is_ipv6_enabled     = true
    wait_for_deployment = false
    origin {
        domain_name = "${aws_s3_bucket.b.bucket_regional_domain_name}"
        origin_id   = local.s3_origin_id
    s3_origin_config {
       origin_access_identity = "${aws_cloudfront_origin_access_identity.identity.cloudfront_access_identity_path}" 
        
}
}
    default_cache_behavior {
        allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
        cached_methods = ["GET", "HEAD"]
        target_origin_id = local.s3_origin_id
      forwarded_values {
            query_string = false
        
            cookies {
               forward = "none"
            }
        }
        
        viewer_protocol_policy = "redirect-to-https"
        min_ttl                =  0
        default_ttl            =  3600
        max_ttl                =  86400
    }
    restrictions {
        geo_restriction {
            restriction_type = "none"
        }
    }
viewer_certificate {
        cloudfront_default_certificate = true
    }
}


Finally, the whole infrastructure is ready. We can now access the web application using the public IP of the webserver instance. You can also automate this part using an echo command in a provisioner, as sketched below.
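A minimal sketch of that automation, assuming the cloned repo serves an index.html and that the image key matches the one uploaded to S3 ("image.png"):

# Hypothetical helper resource: once the distribution exists, append an
# <img> tag pointing at CloudFront to the deployed page.
resource "null_resource" "update_site" {
  depends_on = [aws_cloudfront_distribution.cloudfront1]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("mykey11.pem")
    host        = aws_instance.webserver.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo '<img src=\"https://${aws_cloudfront_distribution.cloudfront1.domain_name}/image.png\">' | sudo tee -a /var/www/html/index.html"
    ]
  }
}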


Now update the code in /var/www/html by replacing the image references with their CloudFront URLs.


Thus our website is successfully deployed on the AWS cloud using Terraform. With a single command, "terraform apply", we can set up the whole environment, and with "terraform destroy" we can tear it all down. That is the power of Terraform. But before either, we run "terraform init" to install the required back-end plugins.
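Optionally, output blocks can print the addresses at the end of terraform apply, so you do not have to look them up in the AWS console. A small convenience sketch, assuming the resource names used above (not part of the original code):

output "webserver_public_ip" {
  # Browse http://<this IP> to reach the deployed site.
  value = aws_instance.webserver.public_ip
}

output "cloudfront_domain" {
  # Base URL for the images served through CloudFront.
  value = aws_cloudfront_distribution.cloudfront1.domain_name
}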

terraform init

terraform plan

terraform apply

I would like to thank Mr. Vimal Daga sir for giving such challenging tasks, which enhance my skills in cloud computing.

Thank you all for reading my article!

To see the complete Terraform code, check the GitHub link - Task-2










