LAUNCHING A WEB-SERVER ON AWS USING TERRAFORM WITH COMPLETE AUTOMATION

A cloud provider's GUI portal makes getting started easy: a few clicks and the job is done. But modern workflows demand automation, which a GUI, however polished and dynamic, cannot deliver. If we have built an infrastructure on the cloud and need to reproduce it many times, clicking through the GUI wastes time and invites human error.

This is where Infrastructure as Code (IaC) comes into play. Developers write code to create the infrastructure and when this code is run, the complete infrastructure can be created or destroyed within a few seconds.

Terraform uses Infrastructure as Code to provision and manage any cloud, infrastructure, or service.

TASK 1: Create/launch a web application using Terraform

  1. Create a key pair and a security group that allows inbound traffic on port 80.
  2. Launch an EC2 instance.
  3. In this EC2 instance, use the key pair and security group created in step 1.
  4. Launch an EBS volume and mount it on /var/www/html.
  5. The developer has uploaded the code to a GitHub repository, which also contains some images.
  6. Copy the GitHub repository code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repository into it, and make them publicly readable.
  8. Create a CloudFront distribution with the S3 bucket (which contains the images) as its origin, and update the code in /var/www/html to use the CloudFront URL.

CODE:

NOTE: Terraform is run here on a Windows host. The AWS CLI and Git should be installed on the machine where Terraform runs.

Step-1: Point the AWS provider at a named CLI profile so Terraform can access your AWS account without your access and secret keys appearing in the code.

provider "aws" {
  region  = "ap-south-1"
  profile = "ashimach"
}
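Before running anything, it also helps to pin the Terraform version the configuration was written against, so future releases don't change its behaviour; a minimal sketch (the constraint shown is only an example, not a requirement of the task):

```hcl
# Sketch: require a known-good Terraform version for this configuration.
# ">= 0.12" is an example constraint; adjust to whatever you tested with.
terraform {
  required_version = ">= 0.12"
}
```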

Step-2: Create a key pair which you will use for the instance.

variable "key_name" {
  default = "deploy-key" # any key-pair name; without a default, Terraform prompts for a value
}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated_key" {
  key_name   = var.key_name
  public_key = tls_private_key.example.public_key_openssh
}
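The generated private key lives only in the Terraform state. If you also want it on disk for manual SSH access, one option is the local_file resource; a sketch (the filename is an assumption, not part of the original task):

```hcl
# Sketch: write the generated private key to disk for manual SSH use.
# "mykey.pem" is an arbitrary example filename.
resource "local_file" "private_key" {
  content         = tls_private_key.example.private_key_pem
  filename        = "mykey.pem"
  file_permission = "0600" # keep the key readable only by its owner
}
```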

Step-3: Create a security group that allows ingress traffic on ports 22 and 80.

resource "aws_security_group" "sec_g" {
  name   = "sec_g"
  vpc_id = "vpc-45f1ec2d" # ID of the VPC the instance will run in

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "sec_g"
  }
}
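Hard-coding the VPC ID ties the code to one account. If the instance runs in the account's default VPC, you can look the ID up instead of pasting it; a sketch:

```hcl
# Sketch: look up the account's default VPC instead of hard-coding its ID.
data "aws_vpc" "default" {
  default = true
}

# Then, in the security group above: vpc_id = data.aws_vpc.default.id
```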

Step-4: Create the EC2 instance that will host the web server. We reference the key pair and security group created above dynamically, so nothing needs to be hard-coded again. Terraform then connects to the instance over SSH and installs the httpd server, PHP, and Git.

resource "aws_instance" "ashweb" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.generated_key.key_name
  security_groups = [aws_security_group.sec_g.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    host        = self.public_ip # use self here; referencing aws_instance.ashweb inside its own block is a cycle
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd"
    ]
  }

  tags = {
    Name = "ashweb"
  }
}

Step-5: Create two null resources with local-exec provisioners: one saves the instance's public IP to a file, the other opens the site in Chrome. (The start chrome command is Windows-specific; substitute your own browser command on Linux or macOS.)

resource "null_resource" "local1" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.ashweb.public_ip} > publicip.txt"
  }
}


resource "null_resource" "local2" {
  depends_on = [
    null_resource.remote1,
    aws_cloudfront_distribution.s3_distribution
  ]
  provisioner "local-exec" {
    command = "start chrome ${aws_instance.ashweb.public_ip}"
  }
}

Step-6: Create an EBS (Elastic Block Store) volume in the same availability zone as the EC2 instance and attach it to the instance. Then connect over SSH, format and mount the disk, and copy the HTML files from the GitHub repository.

output "outaz" {
  value = aws_instance.ashweb.availability_zone
}

output "myos_ip" {
  value = aws_instance.ashweb.public_ip
}

resource "aws_ebs_volume" "ebs1" {
  availability_zone = aws_instance.ashweb.availability_zone
  size              = 1

  tags = {
    Name = "ebs1"
  }
}

resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.ebs1.id
  instance_id  = aws_instance.ashweb.id
  force_detach = true
}


resource "null_resource" "remote1" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    host        = aws_instance.ashweb.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo mkdir -p /home/ec2-user/code",
      "sudo git clone https://github.com/ashimachopra20/lworld.git /home/ec2-user/code/",
      # ${...} interpolates the CloudFront domain into the sed substitution; this is an
      # implicit dependency, so the distribution must not declare depends_on back to this
      # resource, or Terraform reports a cycle.
      "sudo sed -i 's/url/${aws_cloudfront_distribution.s3_distribution.domain_name}/g' /home/ec2-user/code/index.html",
      "sudo cp /home/ec2-user/code/* /var/www/html/"
    ]
  }
}
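Note that mkfs.ext4 recreates the filesystem every time this provisioner runs, wiping anything already on the volume. One hedged way to make the step re-runnable is to format only when the device carries no filesystem yet; a sketch of just the affected commands (the rest of the inline list stays as above):

```hcl
# Sketch: format only a fresh volume, so re-running does not destroy data.
provisioner "remote-exec" {
  inline = [
    # blkid exits non-zero when the device has no filesystem signature,
    # so mkfs runs only the first time.
    "sudo blkid /dev/xvdh || sudo mkfs.ext4 /dev/xvdh",
    "sudo mount /dev/xvdh /var/www/html",
  ]
}
```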

Step-7: Create an S3 bucket and add an object (image) to it.

resource "aws_s3_bucket" "buckett" {
  bucket        = "task1bucketty"
  force_destroy = true
  acl           = "public-read"

  tags = {
    Name        = "buckett"
    Environment = "Dev"
  }
}
output "myost" {
  value = aws_s3_bucket.buckett
}
resource "aws_s3_bucket_object" "bucketobj" {
  bucket = aws_s3_bucket.buckett.bucket # reference the bucket resource so Terraform creates it first
  key    = "myaws.jpg"
  source = "myaws.jpg"
  acl    = "public-read" # the task only needs the image publicly readable
}
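If the repository holds several images, uploading them one resource at a time gets tedious. A sketch using for_each over a local folder (requires Terraform 0.12.6 or later; the images/ directory name is an assumption, not part of the original task):

```hcl
# Sketch: upload every .jpg in a local images/ folder to the bucket.
resource "aws_s3_bucket_object" "site_images" {
  for_each = fileset("images", "*.jpg")

  bucket = aws_s3_bucket.buckett.bucket
  key    = each.value
  source = "images/${each.value}"
  acl    = "public-read"
}
```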


locals {
  s3_origin_id = "s3Origin"
}

Step-8: Create the CloudFront distribution with the S3 bucket as its origin.

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.buckett.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }


  enabled             = true
  is_ipv6_enabled     = true
  comment             = "The image of AWS"
  default_root_object = "myaws.jpg"


  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }


  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false
      headers      = ["Origin"]


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }


  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }


  price_class = "PriceClass_200"


  restrictions {
    geo_restriction {
      restriction_type = "blacklist"
      locations        = ["US", "CA"]
    }
  }


  tags = {
    Environment = "production"
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }
  provisioner "local-exec" {
    # The instance serves plain HTTP on port 80, so open http:// rather than https://.
    command = "start chrome http://${aws_instance.ashweb.public_ip}/index.html"
  }
}
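It is handy to print the distribution's domain name after apply, so you can also test the CDN URL directly; a small sketch:

```hcl
# Sketch: expose the CloudFront domain name in the terraform apply output.
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
```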

You can open the webpage and test it. You can change the GitHub repositories and file names as per your convenience.

OUTPUT



All suggestions are welcome to make the article and code better.

Find the complete code here:

