Website Deployment over AWS Cloud Automated using Terraform.

Cloud Computing : Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet.

AWS : AWS (Amazon Web Services) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS) and packaged software as a service (SaaS) offerings.

Terraform : Terraform is an open-source infrastructure as code software tool created by HashiCorp. It enables users to define and provision a datacenter infrastructure using a high-level configuration language known as Hashicorp Configuration Language, or optionally JSON.

We use Terraform to build the entire infrastructure needed to deploy the website on AWS Cloud from a single code file, without using the AWS Console, and to provision all the resources the site requires.

STEPS involved in this project:

1. Create the key pair and a security group that allows inbound traffic on port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it on /var/www/html.

5. The developer has uploaded the code to a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permissions to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Prerequisites :

  1. An AWS account.
  2. Terraform should be installed on the base operating system.

Project :

STEP 1(A) : Creating the key pair in AWS Console

STEP 1(B) : Creating new GIT repository

GIT URL : https://github.com/Venkatsainathreddy/website.git

STEP 1(C) : Open a command prompt in the Base OS

We are going to write the Terraform code in a file named "project.tf".

Starting with the Code,

STEP 1(D) : Specifying the Cloud Provider

provider "aws" {
  region     = "ap-south-1"
  access_key = "mention your aws access key"
  secret_key = "mention your aws secret key"
}
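
Hardcoding the access and secret keys works, but a safer variant is to point the provider at a named CLI profile instead. A minimal sketch, assuming a local profile called "myprofile" has already been set up with aws configure:

provider "aws" {
  region  = "ap-south-1"
  # Credentials are read from the local AWS CLI configuration
  # (~/.aws/credentials), so no keys appear in the .tf file
  profile = "myprofile"
}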
    

STEP 1(E) : Creating Security Group

A security group acts as a virtual firewall for your instance, controlling incoming and outgoing traffic. You can add rules to each security group that allow traffic to or from its associated instances, and you can modify the rules for a security group at any time.

resource "aws_security_group" "allow_traffic" {
  name        = "allow_http"
  description = "Allow TLS inbound traffic"
  




  ingress {
    description = "HTTP from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }




  ingress {
     description = "SSH from VPC"
     from_port   = 22
     to_port     = 22
     protocol    = "tcp"
     cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
     description = "ping"
     from_port   = -1
     to_port     = -1
     protocol    = "icmp"
     cidr_blocks = ["0.0.0.0/0"]
  }


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
tags = {
    Name = "allow_http"
  }
}

STEP 2: Launching EC2 Instance

An EC2 instance is a virtual server in Amazon's Elastic Compute Cloud (EC2) for running applications on the Amazon Web Services (AWS) infrastructure. AWS is a comprehensive, evolving cloud computing platform; EC2 is a service that allows business subscribers to run application programs in the computing environment.

resource "aws_instance" "web" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "websitekey"
  security_groups = ["allow_http"]




 connection {
        type     = "ssh"
        user     = "ec2-user"
        private_key = file("C:/Users/Sainath Reddy/Downloads/websitekey.pem")
        host     = aws_instance.web.public_ip
     }


 provisioner "remote-exec" {
        inline = [
          "sudo yum install httpd git -y",
          "sudo systemctl restart httpd",
          "sudo systemctl enable httpd",
    ]
  }
    tags = {
      Name = "mywebos"
    }
}

STEP 3 : The key pair and security group created in step 1 are referenced in the instance code above.
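
To make the instance easy to reach after terraform apply, an output can expose its public IP. This is a small optional addition, not part of the original code:

output "instance_public_ip" {
  value = aws_instance.web.public_ip
}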

STEP 4(A) : Launching EBS Volume

An Amazon EBS volume is a durable, block-level storage device that you can attach to one instance or to multiple instances at the same time. You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application.

resource "aws_ebs_volume" "myebs_vol" {
    availability_zone = aws_instance.web.availability_zone
    size              = 1
    type = "gp2"
    tags = {
        Name = "myVol"
    }
}

STEP 4(B) : Attaching the EBS Volume to the Instance

resource "aws_volume_attachment" "ebs_attach" {
    device_name = "/dev/sdh"
    volume_id   = aws_ebs_volume.myebs_vol.id
    instance_id = aws_instance.web.id
  force_detach = true
 
}

STEP 4(C) : Formatting and mounting the attached EBS Volume and cloning the Git repository

resource "null_resource" "nullremote"  {




    depends_on = [
       aws_volume_attachment.ebs_attach,
   ]




    connection {
        type     = "ssh"
        user     = "ec2-user"
        private_key = file("C:/Users/Sainath Reddy/Downloads/websitekey.pem")
        port    = 22
        host     = aws_instance.web.public_ip
   }




  provisioner "remote-exec" {
      inline = [
          "sudo mkfs.ext4  /dev/xvdh",
          "sudo mount  /dev/xvdh  /var/www/html",
          "sudo rm -rf /var/www/html/*",
          "sudo git clone https://github.com/Venkatsainathreddy/website.git /var/www/html/"
     ]
   }
 
}

The website code that was pushed to GitHub in step 1(B) is now cloned onto the EBS volume mounted at /var/www/html.

STEP 5 and STEP 6 are already covered by the remote-exec commands above: the developer's GitHub code, including its images, is cloned directly into /var/www/html.
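
If you want to confirm the volume really is mounted before moving on, a throwaway null_resource like the sketch below (not part of the original code) can SSH in and print the mount information during apply:

resource "null_resource" "verify_mount" {
  depends_on = [
    null_resource.nullremote,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Sainath Reddy/Downloads/websitekey.pem")
    host        = aws_instance.web.public_ip
  }

  # Print the filesystem mounted on the web root so it shows up in the
  # terraform apply output
  provisioner "remote-exec" {
    inline = [
      "df -h /var/www/html",
    ]
  }
}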

STEP 7(A) : Creating S3 Bucket

An Amazon S3 bucket is a public cloud storage resource available in Amazon Web Services' (AWS) Simple Storage Service (S3), an object storage offering. Amazon S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata.

resource "aws_s3_bucket" "my_pic_bucket15934567" {
     bucket  = "sainathimages"
     acl     = "public-read"
   
     versioning {
          enabled = true
      }
     tags = {
        Name = "my-new-bucket"
        Environment = "Dev"
    }
}
locals {
    s3_origin_id = "my_pic_bucket15934567Origin"
}

STEP 7(B) : Uploading the image to the S3 Bucket

resource "aws_s3_bucket_object" "s3obj" {
depends_on = [
  aws_s3_bucket.my_pic_bucket15934567,
]
  bucket       = "sainathimages"
  key          = "download.jpg"
  source       = "C:/Users/Sainath Reddy/Desktop/download.jpg"
  acl          = "public-read"
  content_type = "image or jpeg"
}

STEP 8 : Creating CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.my_pic_bucket15934567.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
}
enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "website.html"
logging_config {
    include_cookies = false
    bucket          = "sainathimages.s3.amazonaws.com"
    prefix          = "myprefix"
  }
default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
forwarded_values {
      query_string = false
cookies {
        forward = "none"
      }
    }
viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id
forwarded_values {
      query_string = false
      headers      = ["Origin"]
cookies {
        forward = "none"
      }
    }
min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }
ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
forwarded_values {
      query_string = false
cookies {
        forward = "none"
      }
    }
min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }
price_class = "PriceClass_200"
restrictions {
    geo_restriction {
      restriction_type = "none"
      
    }
  }
tags = {
    Environment = "production"
  }
viewer_certificate {
    cloudfront_default_certificate = true
  }
}
output "cloudfront_ip_addr" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
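
Step 8 also calls for updating the code in /var/www/html with the CloudFront URL. One way to automate that is a further null_resource that rewrites the image reference over SSH. This is a rough sketch, not part of the original code, assuming the repo's website.html references the image as a plain "download.jpg" path:

resource "null_resource" "update_image_url" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
    null_resource.nullremote,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Sainath Reddy/Downloads/websitekey.pem")
    host        = aws_instance.web.public_ip
  }

  # Rewrite the image reference in the cloned page so it is served from
  # CloudFront (assumes website.html references "download.jpg" directly)
  provisioner "remote-exec" {
    inline = [
      "sudo sed -i 's|download.jpg|https://${aws_cloudfront_distribution.s3_distribution.domain_name}/download.jpg|g' /var/www/html/website.html",
    ]
  }
}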

STEP 9 : Displaying the output on the local system (Chrome)

resource "null_resource" "IP_opening_on_crome"  {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
    aws_volume_attachment.ebs_attach
  ]
  provisioner "local-exec" {
    command = "start chrome https://${aws_instance.web.public_ip}/"
   }
}

STEP 10: Run the code

terraform init (downloads the AWS provider plugin)

terraform apply (to run the code)

With a single command, the entire website is deployed on AWS Cloud.
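
For reference, the complete command sequence from the project folder looks like this; terraform destroy tears the whole setup down again when you are finished:

terraform init
terraform apply -auto-approve
terraform destroy -auto-approve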

The final output of the above infrastructure, as seen in the Chrome browser:

The complete Terraform code is available on GitHub.

URL : https://github.com/Venkatsainathreddy/TerraformProject1.git


