Website Deployment over the AWS Cloud, Automated Using Terraform

Cloud computing is the reason that everybody around the world can access the contents of a website without facing delays. We can automate such a deployment by writing code in Terraform, an infrastructure-as-code tool that works on top of almost all the major clouds, such as AWS and Azure. Describing infrastructure as Terraform code has several advantages: fewer chances of manual mistakes, and the whole infrastructure can be rebuilt very quickly. In this article I have created the infrastructure for deploying a website using Terraform. It includes:

  • Creating an AWS instance using the EC2 service from the AWS cloud.
  • Installing all the required software inside the instance.
  • Creating, attaching, formatting and mounting an additional EBS volume so that the data remains persistent.
  • Pulling the code pushed by the developer to GitHub and cloning it into the required folder inside our EC2 instance.
  • Creating an S3 bucket for the storage of all the static data such as images, videos and documents.
  • Distributing that data to all the edge locations using another AWS service, CloudFront, so that no customer across the globe faces latency.
  • Displaying the final webpage in the browser automatically.

Let's start building the code for the same.

#provider


provider "aws" {
  profile = "khushi"
  region  = "ap-south-1"
}

Provider specifies the cloud provider that we are going to use. This matters because Terraform downloads the plugins for that particular provider, and these plugins are what make Terraform intelligent.

Profile specifies which locally configured account to log in with; Terraform picks up that account's credentials from your local system.


We also need to specify the region in which we want to launch our entire infrastructure. Here I have specified ap-south-1, which is the region ID for the Mumbai region.
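The named profile is resolved from the AWS credentials configured on the local machine (typically the shared credentials file written by aws configure). As a hedged sketch, the file's location can also be pointed to explicitly; the path below is an assumption for a Windows setup like the one used here:

```hcl
# sketch: same provider block, with the credentials file path made explicit
provider "aws" {
  profile                 = "khushi"
  region                  = "ap-south-1"
  shared_credentials_file = "C:/Users/HP/.aws/credentials"
}
```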

#securitygroup


resource "aws_security_group" "allow_traffic" {
  name        = "allow_traffic"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-02bea36a"


  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    description = "ping"
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "allow_traffic"
  }
}

This code is used to create the security group.

VPC ID can be taken from the AWS management console.

We need to set up the ingress rules, i.e. the traffic coming in to our website, and open the ports accordingly. I have opened 3 ports:

  • SSH (22) so that we can connect to the EC2 instance remotely.
  • HTTP (80) so that clients can reach our website.
  • ICMP so that we can check connectivity using the ping command.

Egress configures the outbound traffic, and here it is open to all ports and protocols.

#ec2 instance launch


resource "aws_instance" "webserver" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  vpc_security_group_ids = [ aws_security_group.allow_traffic.id ]
  key_name = "mykey"
 
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("C:/Users/HP/Downloads/mykey.pem")
    host     = self.public_ip
  }


  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd"
    ]
  }
  tags = {
    Name = "webserver"
  }
}

This launches the EC2 instance on which we will run our website. Amazon provides many OS images, known as AMIs (Amazon Machine Images), and the unique AMI ID can be taken from the management console.

Instance type: we specify the instance type according to our requirements, such as number of CPUs, RAM, etc.

The key used here is a pre-created key pair stored on our local system. We then establish a connection so that we can SSH into the instance and run the commands that install all the required software.

Provisioners are used in two ways: remote (running commands on the remote system) and local (on our own base system). Here we use remote-exec to install httpd, git and php.

(Note that these commands are specific to the OS we used, i.e. Amazon Linux. If you would like to run them on some other OS, adjust the commands accordingly.)
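The article uses a key pair created beforehand in the console. As a sketch (not part of the original setup), the key pair itself could also be managed from Terraform, assuming you already have a public key file on disk:

```hcl
# sketch: register an existing local public key with AWS (the .pub path is an
# assumption); the instance can then use key_name = aws_key_pair.mykey.key_name
resource "aws_key_pair" "mykey" {
  key_name   = "mykey"
  public_key = file("C:/Users/HP/Downloads/mykey.pub")
}
```

This keeps the key's lifecycle in the same code as the rest of the infrastructure.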

# create volume
resource "aws_ebs_volume" "web_vol" {
  availability_zone = aws_instance.webserver.availability_zone
  size              = 1
  tags = {
    Name = "web_vol"
  }
}

This creates the EBS volume that keeps the data inside our webserver persistent. Since the volume's availability zone must be the same as the instance's, we retrieve the availability zone directly from the instance resource. size specifies the size of the volume, i.e. 1 GiB in this case.

# attach volume


resource "aws_volume_attachment" "web_vol" {
  depends_on = [
    aws_ebs_volume.web_vol,
  ]
  device_name  = "/dev/xvdf"
  volume_id    = aws_ebs_volume.web_vol.id
  instance_id  = aws_instance.webserver.id
  force_detach = true

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/HP/Downloads/mykey.pem")
    host        = aws_instance.webserver.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdf",
      "sudo mount /dev/xvdf /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/khushi20218/cloud1.git /var/www/html/"
    ]
  }
}

This attaches the volume we created to the instance. volume_id and instance_id specify the unique IDs of the volume and the instance.

force_detach: if your volume is mounted to a folder in the instance, you cannot detach it normally; this option lets you detach the volume forcefully even while it is mounted.

After this we again establish a connection to the instance and use a provisioner to run the commands for formatting and mounting. The git clone command clones the GitHub code pushed by the developer into the /var/www/html folder (the default document root of the webserver).
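One caveat with the block above: mkfs.ext4 re-formats the volume (and rm -rf wipes it) every time the provisioner runs, which works against the persistence goal on later runs. A hedged sketch of a safer variant, formatting only when blkid finds no existing filesystem on the device:

```hcl
# sketch: skip mkfs if the device already carries a filesystem, then just mount
provisioner "remote-exec" {
  inline = [
    "sudo blkid /dev/xvdf || sudo mkfs.ext4 /dev/xvdf",
    "sudo mount /dev/xvdf /var/www/html",
  ]
}
```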

# s3 bucket


resource "aws_s3_bucket" "s3bucket" {
  bucket = "123mywebbucket"
  acl    = "public-read"
  region = "ap-south-1"


  tags = {
    Name = "123mywebbucket"
  }
}

The S3 bucket gives us durable, centrally managed storage for the website's static data; in the next steps CloudFront will distribute this data to edge locations so that it can be accessed from anywhere in the world with low latency.

# adding object to s3


resource "aws_s3_bucket_object" "image-upload" {
  depends_on = [
    aws_s3_bucket.s3bucket,
  ]
  bucket = aws_s3_bucket.s3bucket.bucket
  key    = "download.jpg"
  source = "C:/Users/HP/Desktop/cloud/download.jpg"
  acl    = "public-read"
}

Now we need to upload the object (i.e. the static data) to the S3 bucket we just created. key is the name the object will have in the bucket and source is the local path of the file to be uploaded.
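Here a single object is uploaded. If the site had many static files, a sketch using fileset() with for_each (available from Terraform 0.12.6; the folder path is an assumption) could upload the whole folder:

```hcl
# sketch: upload every .jpg found in the local folder; each object's key
# mirrors its file name
resource "aws_s3_bucket_object" "assets" {
  for_each = fileset("C:/Users/HP/Desktop/cloud", "*.jpg")
  bucket   = aws_s3_bucket.s3bucket.bucket
  key      = each.value
  source   = "C:/Users/HP/Desktop/cloud/${each.value}"
  acl      = "public-read"
}
```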

# cloud front




variable "oid" {
  type    = string
  default = "S3-"
}


locals {
  s3_origin_id = "${var.oid}${aws_s3_bucket.s3bucket.id}"
}


resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "${aws_s3_bucket.s3bucket.bucket_regional_domain_name}"
    origin_id   = "${local.s3_origin_id}"
  }


  enabled             = true
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }




  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }




  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/HP/Downloads/mykey.pem")
    host        = aws_instance.webserver.public_ip
  }


  provisioner "remote-exec" {
    inline = [
      "sudo su <<END",
      "sudo echo \"<img src='https://${self.domain_name}/${aws_s3_bucket_object.image-upload.key}' height='200' width='200' >\" >> /var/www/html/index.php",
      "END",
    ]
  }


}

CloudFront is AWS's content delivery network: it caches our data in small data centres (edge locations) spread across the world, so that every client is served from a nearby location with low latency.

We need to keep the following points in mind while configuring CloudFront:

  • Specify the origin's domain name and origin ID.
  • Set the default_cache_behavior, which is a required block.
  • Set the viewer_protocol_policy along with the minimum, default and maximum TTLs.
  • Set any geo restrictions if required (whitelist / blacklist).
  • Set cloudfront_default_certificate = true inside viewer_certificate.

Now, we need to put the domain name provided by CloudFront into the code provided by the developer, so that the client loads the static content through CloudFront.

To append to an already existing file under /var/www/html we have to be the root user, and by default we are logged in as ec2-user. Logging in directly as root is possible from the GUI or CLI, but not directly from Terraform code. Switching user on the fly with sudo su - root spawns a child shell; typed on the local system this works seamlessly, but run from a remote provisioner such as Terraform's it fails to get the child shell. So the working solution is to use sudo to run a shell and feed it the commands through a heredoc.

The echo command in the provisioner appends the image tag, pointing at the CloudFront URL of the uploaded object, to the end of /var/www/html/index.php, and therefore the client sees it on the final webpage.

resource "null_resource" "website" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
    aws_volume_attachment.web_vol
  ]
  provisioner "local-exec" {
    command = "start chrome http://${aws_instance.webserver.public_ip}/"
  }
}

This is an additional convenience: as soon as we run the code, Chrome automatically opens the public URL of the website. (httpd serves plain HTTP on port 80, so the URL uses http, and the start chrome command is Windows-specific.) This is one quick way of checking that our site works. Now, let's see how Terraform builds this code.
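As a small addition (a sketch, not part of the original code), output values could print the useful endpoints at the end of the run instead of relying on the browser step:

```hcl
# sketch: print the webserver IP and the CloudFront domain after apply
output "webserver_public_ip" {
  value = aws_instance.webserver.public_ip
}

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
```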


Terraform reads all the files with the .tf extension in the current folder and builds the infrastructure, creating the resources you asked for one by one (run terraform init once to download the provider plugins, then terraform apply). Note that it requires a stable internet connection, so ensure that you have one!


Now, finally, the webpage can be accessed using the public IP of the webserver.


That's all! Do leave your valuable feedback. For any queries or corrections, feel free to contact me.




