Launching Webserver on AWS cloud using Terraform

Amazon Web Services:

Amazon Web Services, or AWS, is a cloud computing platform from Amazon that provides customers with a wide array of cloud services. Among the cloud options offered by Amazon AWS are Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon Virtual Private Cloud (Amazon VPC), Amazon SimpleDB and Amazon WorkSpaces.

Amazon first debuted Amazon Web Services in 2006 as a way to enable the use of online services by client-side applications or other websites via HTTP, REST, or SOAP protocols. Amazon bills customers for AWS based on their usage of the various services.

In 2012, Amazon launched the AWS Marketplace to accommodate and grow the emerging ecosystem of AWS offerings from third-party providers that have built their own solutions on top of the Amazon Web Services platform. The AWS Marketplace is an online store for AWS customers to find, compare, and begin using AWS software and technical services. If you want more information about AWS, please visit the official AWS site.

Terraform:

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. If you want more information about Terraform, please visit the official Terraform site.

Aim:

In this task, I have integrated Terraform and AWS to create an infrastructure for launching an application on the cloud using the EC2 service provided by AWS, with Terraform automating the whole process. The reason for doing so is that creating this infrastructure manually would be time-consuming, which is not an option in this agile world; using Terraform, we can automate the whole process and create the infrastructure for our application faster.

Problem Statement:

  1. Create a key pair and a security group that allows port 80.
  2. Launch an EC2 instance.
  3. In the EC2 instance, use the key and security group created in step 1.
  4. Launch one EBS volume and mount it onto /var/www/html.
  5. Developers have uploaded the code into a GitHub repo, which also has some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Requirements:

A. AWS:

Create an AWS account and, after logging in, create an IAM user.

  1. Go to the AWS service dashboard, search for 'IAM', and click it.
  2. Click 'Users' and then click 'Add user'.
  3. Now follow the given screenshot in sequence.
[screenshots of the IAM 'Add user' wizard]

Now, click 'Create user'; the IAM account will then be ready to use.

Download the AWS CLI tool and use it to generate a named profile, which we will reference when running the Terraform code. To install the AWS CLI, please follow this link.
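As a sketch, the named profile used by the provider block later in this article (the profile name `task1` matches that block) can be created with the AWS CLI, using the access keys of the IAM user created above:

```shell
# Creates a named profile; you will be prompted for the access key ID,
# secret access key, default region (e.g. ap-south-1), and output format.
aws configure --profile task1
```

The credentials are stored locally (typically under ~/.aws/), so they never need to appear in the Terraform code itself.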

B. Terraform:

  1. Download the Terraform software and, after downloading, add the Terraform path to your environment variables. To install Terraform, follow this link. After installing Terraform, please follow step 2.
  2. Create a folder in which our Terraform code will be saved, and use the terraform init command to download the required plugins for the providers used.
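The setup above, sketched as shell commands (the folder name `task1` is my own assumption):

```shell
mkdir task1 && cd task1   # folder that will hold the .tf files
terraform init            # downloads the AWS provider plugin into .terraform/
```

`terraform init` must be re-run whenever a new provider is added to the configuration.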

C. GitHub:

Create the GitHub repo that will hold our website code. If you don't want to write the code yourself, you can use my web repo.

FlowChart of Problem Statement:

[flowchart of the problem statement]

Some important Terraform commands:

  1. terraform init     # initializes the working directory and downloads provider plugins
  2. terraform apply    # creates the entire infrastructure described in the code
  3. terraform destroy  # destroys the entire environment

Here I am attaching screenshots of the above commands so that you can get an idea of their output.

The output of the terraform init command:

[screenshot of terraform init output]

The output of the terraform apply command:

[screenshot of terraform apply output]

Explanation of Code

Step 1: Create an AWS provider.

provider "aws" {
  profile = "task1"
  region  = "ap-south-1"
}

Step 2: Create a Security Group

resource "aws_security_group" "allow_http" {
  name        = "allow_http"
  description = "Allow http inbound traffic"
  vpc_id      = "vpc-b722c7dc"

  ingress {
    description = "http from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "ssh from VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_http"
  }
}
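The problem statement asks us to create the key, but the instance below uses a pre-created key pair named eks. As a hedged alternative, the key pair can also be generated by Terraform itself (the resource and key names here are my own assumptions):

```hcl
# Generate an RSA key locally and register its public half with AWS.
resource "tls_private_key" "task1_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "task1_keypair" {
  key_name   = "task1-key"
  public_key = tls_private_key.task1_key.public_key_openssh
}
```

The instance could then set key_name = aws_key_pair.task1_keypair.key_name, and the connection blocks could use tls_private_key.task1_key.private_key_pem instead of reading a .pem file from disk.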

Step 3: Launch an EC2 instance using the pre-created Key-pair and Security Group.

resource "aws_instance" "my_task1_os" {
  ami = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name = "eks"
  security_groups = ["allow_http"]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("/home/sachinkumarkashyap/Downloads/hmc/Terraform/eks.pem")
    host        = aws_instance.my_task1_os.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "os1"
  }
}
output "az_id" {
    value = aws_instance.my_task1_os.availability_zone
}
output "publicip" {
  value = aws_instance.my_task1_os.public_ip
}



Output:

[screenshot of the terraform output values]

Step 4: Launch one EBS volume, attach it to the instance, and mount the volume onto /var/www/html/.

Step 5: As soon as the developer updates the code on GitHub, clone the GitHub repo, which holds our website source code along with some static images (static content), into /var/www/html.

resource "aws_ebs_volume" "task1_ebs" {
  availability_zone = "ap-south-1a"
  size              = 1

  tags = { 
    Name = "task1_ebs"
  }
}
resource "aws_volume_attachment" "attachvol" {
  device_name = "/dev/sdh"
  volume_id   = "${aws_ebs_volume.task1_ebs.id}"
  instance_id = "${aws_instance.my_task1_os.id}"
  force_detach = true
}

resource "null_resource" "localsystem2" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.my_task1_os.public_ip} > publicip.txt"
  }
}

resource "null_resource" "remotesystem1"  {

depends_on = [
    aws_volume_attachment.attachvol,
  ]


  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("/home/sachinkumarkashyap/Downloads/hmc/Terraform/eks.pem")
    host = aws_instance.my_task1_os.public_ip
  }

provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4  /dev/xvdh",
      "sudo mount  /dev/xvdh  /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/hackcoderr/Mini-Project.git /var/www/html/"
    ]
  }
}

Step 6: Create an S3 bucket. All the static content of our web server is stored over here. Copy images from your system to S3 and change the permission to public readable.

resource "null_resource" "localsystem3" {
  depends_on = [
    aws_ebs_snapshot.snap1,
  ]

  provisioner "local-exec" {
    # 'start chrome <ip>' opens the site in Chrome on Windows;
    # on Linux, use something like 'xdg-open http://<ip>' instead.
    command = "start chrome ${aws_instance.my_task1_os.public_ip}"
  }
}
resource "aws_s3_bucket" "task1_s3" {
  # S3 bucket names must be globally unique and cannot contain underscores.
  bucket = "my-task1-s3"
  acl    = "private"

  tags = {
    Name        = "my-task1-s3"
    Environment = "Dev"
  }
}
resource "aws_s3_bucket_public_access_block" "publicaccess" {
  bucket = "${aws_s3_bucket.task1_s3.id}"

  block_public_acls   = true
  block_public_policy = true
}
locals {
s3_origin_id = "myS3Origin"
}
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "oai_for_task1"
}
data "aws_iam_policy_document" "oaipolicy" {
  statement {
    actions = ["s3:GetObject"]

    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.oai.iam_arn}"]
    }

    # s3:GetObject applies to objects, so the policy must cover the
    # objects inside the bucket ("arn/*"), not just the bucket itself.
    resources = ["${aws_s3_bucket.task1_s3.arn}/*"]
  }
}

resource "aws_s3_bucket_policy" "bucketpolicy" {
  bucket = "${aws_s3_bucket.task1_s3.id}"
  policy = "${data.aws_iam_policy_document.oaipolicy.json}"
}

Step 7: Create a distribution through CloudFront using the S3 bucket, generate the CloudFront URL, and dynamically update this URL in the website code in the "index.html" file.

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "${aws_s3_bucket.task1_s3.bucket_regional_domain_name}"
    origin_id   = "${local.s3_origin_id}"

    s3_origin_config {
      origin_access_identity = "${aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path}"
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "index.html"

  logging_config {
    include_cookies = false
    bucket          = "mylogs.s3.amazonaws.com"
    prefix          = "myprefix"
  }

  aliases = ["mysite.example.com", "yoursite.example.com"]

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "CA", "GB", "DE"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
resource "aws_ebs_snapshot" "snap1" {
  volume_id = "${aws_ebs_volume.task1_ebs.id}"

  tags = {
    Name = "job1snap"
  }
}
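Step 7 mentions updating the CloudFront URL inside index.html, which the code above does not show. A minimal sketch, assuming index.html references its images through a placeholder string that we rewrite over SSH (the placeholder name `IMAGE_URL` and the sed pattern are my own assumptions):

```hcl
# Hypothetical: rewrite a placeholder image URL in index.html with the
# CloudFront domain once the distribution exists.
resource "null_resource" "update_url" {
  depends_on = [aws_cloudfront_distribution.s3_distribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("/home/sachinkumarkashyap/Downloads/hmc/Terraform/eks.pem")
    host        = aws_instance.my_task1_os.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # Assumes index.html contains the literal placeholder IMAGE_URL.
      "sudo sed -i 's|IMAGE_URL|https://${aws_cloudfront_distribution.s3_distribution.domain_name}|g' /var/www/html/index.html",
    ]
  }
}
```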

In the end, Terraform will automatically launch our website, with the latest content, in the browser.

Now our code will be displayed in the browser using the URL.

[screenshot of the website in the browser]

Now you can destroy the entire environment using the terraform destroy command.

[screenshot of terraform destroy output]

This is my GitHub link. If you face any difficulty in the above steps, you can visit this link and take help from the code.

Thanks for reading!







