Automating web deployment on AWS using Terraform(Task2)

Hello Connections! Welcome to this article. In this article I am going to show you how to deploy a web application on top of AWS using Terraform with EFS. This is my second task of the Hybrid Multi-Cloud training at LinuxWorld Information Pvt. Ltd.

Task Description:- Enhancing Task 1 with the help of Elastic File System (EFS)

Perform Task 1 using the EFS service instead of EBS on AWS, as follows:

Create/launch Application using Terraform

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance use the existing/provided key and the security group which we created in step 1.

4. Launch one volume using the EFS service, attach it in your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

1.) Creating a Security Group Using Terraform :-

Before we proceed further, an AWS account profile should be configured in our system; only then can we perform this task.
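The provider block below assumes a named AWS CLI profile. A minimal sketch of what that profile looks like on disk (the access-key values here are placeholders; the profile name `yogi` matches the configuration):

```ini
# ~/.aws/credentials  (placeholder values -- use your own access keys)
[yogi]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Running `aws configure --profile yogi` prompts for these values and writes this file for you.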

provider "aws" {
  region  = "ap-south-1"
  profile = "yogi"
}


In AWS, firewalls are known as security groups, which we can apply to any instance. By default, every security group denies all inbound traffic and allows all outgoing traffic. As we know, the httpd server runs on port 80, and we also want to log in to the VM to perform other tasks, so I am allowing ports 80 and 22 to be accessed from any IP, and also allowing the ICMP protocol so that we can ping our instance.

resource "aws_security_group" "webserver" {
  name        = "webserver"
  description = "Allow HTTP and SSH inbound traffic"

  ingress {
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = -1
    to_port          = -1
    protocol         = "icmp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Creating Key pairs:-

Creating a key pair from the web UI is not a big deal, but as we create it we must also save the key, otherwise we cannot log in to the instance. So I am creating the key pair using Terraform and saving the private key on my local system, so that in future I can log in and perform tasks.

variable "ssh_key_name" {
  default = "efskeytf"
}

resource "tls_private_key" "key-pair" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "local_file" "private-key" {
  content         = tls_private_key.key-pair.private_key_pem
  filename        = "${var.ssh_key_name}.pem"
  file_permission = "0400"
}

resource "aws_key_pair" "deployer" {
  key_name   = var.ssh_key_name
  public_key = tls_private_key.key-pair.public_key_openssh
}
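Outside Terraform, the same key material could be produced with ssh-keygen. This is only a sketch of what the tls_private_key and local_file resources above do (the file name matches the efskeytf default):

```shell
# Generate a 4096-bit RSA key pair with no passphrase
# (the manual equivalent of the tls_private_key resource)
ssh-keygen -t rsa -b 4096 -f efskeytf -N "" -q

# Lock down the private key, like file_permission = "0400" above
chmod 400 efskeytf

# efskeytf is the private key, efskeytf.pub the public half
ls efskeytf efskeytf.pub
```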

Creating an EC2 Instance :-

Here I am using the Amazon Linux AMI to launch my EC2 instance and perform the further tasks. Until our key is created we cannot launch the instance, so I am writing a depends_on rule: only after the key is created will the instance be launched, with the previously created security group and key pair attached.

After creating the instance, I install Git, PHP, and the Apache web server so that we can access our site.

resource "aws_instance" "web" {
  depends_on = [
    aws_key_pair.deployer,
  ]

  ami             = "ami-0732b62d310b80e97"
  instance_type   = "t2.micro"
  key_name        = var.ssh_key_name
  security_groups = [aws_security_group.webserver.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("${var.ssh_key_name}.pem")
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd php git",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd"
    ]
  }

  tags = {
    Name = "myserver"
  }
}

                                                                      

Now I am creating the Elastic File System :- AWS EFS provides a highly available file system which has no initial size limit; it grows as demand comes. As we know, an EBS volume cannot be mounted on many OSs at once, so we need something shareable between different instances: as one user changes something in that particular folder, the change appears on the fly across all the instances, and every user sees the newly updated data.

resource "aws_efs_file_system" "myefs" {
  creation_token = "my-product"

  tags = {
    Name = "tfefs"
  }
}

resource "aws_efs_mount_target" "alpha" {
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = "subnet-f90e3391"
  security_groups = ["sg-039c277a1ba67bd3c"]
}

resource "aws_efs_mount_target" "alpha2" {
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = "subnet-c8016384"
  security_groups = ["sg-039c277a1ba67bd3c"]
}

resource "aws_efs_mount_target" "alpha3" {
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = "subnet-3715a04c"
  security_groups = ["sg-039c277a1ba67bd3c"]
}

Now we have created our EFS. We can mount it onto the /var/www/html folder, and as we put data in that folder it will be stored on the EFS, so even if your OS is somehow terminated, you will not lose your important data.

Now we will mount this EFS onto the /var/www/html folder. This updated aws_instance.web resource replaces the earlier definition (Terraform does not allow two resources with the same address), adding the NFS mount and the code clone:


resource "aws_instance" "web" {
  depends_on = [
    aws_efs_mount_target.alpha3,
  ]

  ami             = "ami-0732b62d310b80e97"
  instance_type   = "t2.micro"
  key_name        = var.ssh_key_name
  security_groups = [aws_security_group.webserver.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("${var.ssh_key_name}.pem")
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd php git nfs-utils",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
      "sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport ${aws_efs_file_system.myefs.id}.efs.ap-south-1.amazonaws.com:/ /var/www/html",
      "sudo git clone https://github.com/yogi456/hybrid-cloud-task1.git /var/www/html/"
    ]
  }

  tags = {
    Name = "myserver"
  }
}
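The mount command above builds the EFS endpoint from the file-system ID and the region. A tiny sketch of that DNS-name pattern (the ID here is made up for illustration; Terraform interpolates the real one):

```shell
# EFS mount targets resolve at <file-system-id>.efs.<region>.amazonaws.com
EFS_ID="fs-0123abcd"    # hypothetical ID for illustration
REGION="ap-south-1"
echo "${EFS_ID}.efs.${REGION}.amazonaws.com"
# → fs-0123abcd.efs.ap-south-1.amazonaws.com
```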



Copying the GitHub Code to /var/www/html :-



For copying the code we first have to log in to the instance; only then can we clone the code into the /var/www/html folder. So I have done both mounting and cloning in a single go in the snippet above.

Creating an S3 Bucket :- Simple Storage Service (S3) is an object store which is easy to manage and highly available. I have uploaded my picture to GitHub; the local-exec provisioner clones it into my local system, and then Terraform uploads it to the S3 bucket.

resource "aws_s3_bucket" "b" {
  bucket = "bucketfortask222"
  acl    = "private"

  tags = {
    Name        = "mynewbuckett"
    Environment = "Dev"
  }

  provisioner "local-exec" {
    command = "git clone https://github.com/yogi456/imagefortask1.git image-web"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "rm -rf image-web"
  }
}

resource "aws_s3_bucket_object" "object" {
  bucket = aws_s3_bucket.b.bucket
  key    = "yogesh.jpeg"
  source = "image-web/yogesh.jpeg"
  acl    = "public-read"
}

locals {
  s3_origin_id = "myS3Origin"
}



                        

Creating a CloudFront Distribution Using Terraform :-



CloudFront is a managed service used as a content delivery network (CDN). It mainly works on the principle of putting the content in the data center nearest to the client. It gives low latency, meaning the site can be accessed very fast. It can also restrict direct S3 bucket access, so our data stays secure.

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "${aws_s3_bucket.b.bucket_regional_domain_name}"
    origin_id   = "${local.s3_origin_id}"


   
  }


  enabled             = true
  is_ipv6_enabled     = true
  comment             = "my picture"
  default_root_object = "yogesh.jpeg"


  logging_config {
    include_cookies = false
    bucket          = "yogilookbook.s3.amazonaws.com"
    prefix          = "myprefix"
  }




  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }


 


  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }


  price_class = "PriceClass_200"


  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "CA", "GB", "IN"]
    }
  }


  tags = {
    Environment = "production"
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

Now we will update the existing code of our site and add the link of the image which we have uploaded to the S3 bucket. Once everything is in place, we will launch our website.

resource "null_resource" "nullremote4" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("${var.ssh_key_name}.pem")
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.object.key}' width='300' height='380'>\" | sudo tee -a /var/www/html/index.html"
    ]
  }

  provisioner "local-exec" {
    command = "firefox ${aws_instance.web.public_ip}"
  }
}
  


  
  

Here is my site:


To run the Terraform code, we use these commands:

terraform init      //initialize the working directory and download the required plugins

terraform validate  //check whether the configuration is valid

terraform apply     //build the infrastructure described by the code

Conclusion :- Now our website is highly available and fault tolerant. If our instance somehow gets deleted, our data stays safe because it lives on the EFS, and our images are also safe because they are in the S3 bucket.

Code link :- Hybrid-cloud-task-2

Thank you so much, Mr. Vimal Daga Sir, for your guidance; because of you I am capable of doing these tasks very easily.
