Automate Infrastructure setup by using AWS cloud and Terraform

Agenda of the Project

We will create and launch a web application using Terraform:

  1. Create a key pair and a security group that allows port 80.
  2. Launch an EC2 instance.
  3. In this EC2 instance, use the key and security group created in step 1.
  4. Launch one EBS volume and mount it on /var/www/html.
  5. A developer has uploaded the code to a GitHub repo; the repo also contains some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and make them publicly readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Prerequisites:-

  • An account on AWS ( https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/ )
  • Terraform ( https://www.terraform.io/downloads.html )
  • AWS CLI v2 ( https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html )
  • A configured AWS user (run "aws configure --profile raja" with the AWS CLI so Terraform can use the profile referenced below)

The full code is uploaded to GitHub: https://github.com/Ds123-wq/cloud_task-1.git

Step-1   

First, we tell Terraform which provider to use:

provider "aws" { 
   region = "ap-south-1"   
   profile = "raja"
 }
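
Optionally, you can pin the Terraform and provider versions so the configuration keeps behaving the same as new releases ship. A minimal sketch (the version constraints below are assumptions; match them to whatever you actually run):

terraform {
  required_version = ">= 0.12"

  required_providers {
    # Constraint is an assumed example; pick the version you tested with
    aws = "~> 2.66"
  }
}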

Step-2

We create a security group that controls access to the instance we will launch in AWS.

resource "aws_security_group" "mygroup" {

  name        = "mslizard111"

  description = "Allow ssh and http"

  vpc_id      = "vpc-a8766ac0"


 
  ingress {

    description = "SSH"

    from_port   = 22

    to_port     = 22

    protocol    = "tcp"

    cidr_blocks = ["0.0.0.0/0"]

  }

  ingress {

    description = "HTTP"

    from_port   = 80

    to_port     = 80

    protocol    = "tcp"

    cidr_blocks = ["0.0.0.0/0"]

   }

 

 egress {

    from_port   = 0

    to_port     = 0

    protocol    = "-1"

    cidr_blocks = ["0.0.0.0/0"]

  }

 

  tags = {

    Name = "securitygp"

  }

}
  • Here we create the security group "securitygp" by using the 'aws_security_group' resource.
  • In this security group we allow inbound traffic on port no. 80 (for HTTP) and port no. 22 (for SSH).
  • Port no. 80 is used to expose our webserver to the outer world.
  • Port no. 22 is used to log in to the instance over SSH.
  • The VPC ID is hardcoded here; the sketch after this list shows how to look it up dynamically.
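
Instead of hardcoding the VPC ID, a data source can look up the default VPC at plan time. A minimal sketch (assumes your account has a default VPC in this region):

# Look up the default VPC instead of hardcoding "vpc-a8766ac0"
data "aws_vpc" "default" {
  default = true
}

# The security group can then use:  vpc_id = data.aws_vpc.default.id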

Step-3

We launch an instance in EC2.

resource "aws_instance" "web" {
  ami             = "ami-0385d44018fb771b7"
  instance_type   = "t2.micro"
  key_name        = "cloudkey"
  security_groups = ["mslizard111"]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Dell/Downloads/cloudkey.pem")
    host        = self.public_ip   # self refers to this instance
  }

  # Install the webserver stack right after boot
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd"
    ]
  }

  tags = {
    Name = "Task-1"
  }

  depends_on = [
    aws_security_group.mygroup,
  ]
}

  • Here we launch the instance "Task-1" by using the "aws_instance" resource.
  • To launch the instance we require:-

ami (the ID of the Amazon Machine Image to boot from).

instance_type (the hardware configuration, i.e. CPU and RAM).

key_name (the key pair used to log in; the sketch after this list shows how to create it with Terraform).

security_groups (used to control incoming and outgoing traffic).

  • After launching the instance we log in over SSH and install the required software: httpd, php, and git.
  • We use depends_on on "aws_security_group" because the security group must exist before the instance can be launched and reached over SSH.
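
The configuration above assumes a key pair named "cloudkey" was created beforehand in the AWS console. As a hedged alternative, Terraform can generate it too; a minimal sketch using the tls and local providers (the .pem path mirrors the one used in the connection blocks):

# Generate an RSA key pair and register the public half with AWS
resource "tls_private_key" "cloudkey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "cloudkey" {
  key_name   = "cloudkey"
  public_key = tls_private_key.cloudkey.public_key_openssh
}

# Save the private key locally so the SSH connection blocks can read it
resource "local_file" "cloudkey_pem" {
  content         = tls_private_key.cloudkey.private_key_pem
  filename        = "C:/Users/Dell/Downloads/cloudkey.pem"
  file_permission = "0400"
}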

Step-4

Now we create an EBS volume in AWS to attach to the instance, so our data will remain persistent.

resource "aws_ebs_volume" "ebs1" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1


  tags = {
    Name = "Volume_persistent"
  }
}
    
  • So our volume of size 1 GiB is created in the same availability zone as the instance.
  • The name of our volume is "Volume_persistent".

Step-5

Now we attach the volume to the instance



resource "aws_volume_attachment"  "ebs_att" {
    device_name = "/dev/sdh"
    volume_id   = aws_ebs_volume.ebs1.id
    instance_id = aws_instance.web.id
    force_detach = true
 
  depends_on = [
     aws_ebs_volume.ebs1,

  ]
}

resource "null_resource" "nullresource1" {
    depends_on = [
         aws_volume_attachment.ebs_att,
   ]


  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("C:/Users/Dell/Downloads/cloudkey.pem")
    host     = aws_instance.web.public_ip
  }


  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4  /dev/xvdh",
      "sudo mount  /dev/xvdh  /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Ds123-wq/terrafor_launch_ec2.git  /var/www/html/"
    ]
  }
}



  • Here we created an EBS volume by using the 'aws_ebs_volume' resource.
  • Then we attached that EBS volume to the EC2 instance.
  • To store data on the volume we must format it, so after attaching we log in and create an ext4 filesystem on it. (A volume attached as /dev/sdh appears as /dev/xvdh inside an Amazon Linux instance.)
  • Then we mount it on "/var/www/html", because we want the data of this folder to be persistent; the sketch after this list makes the mount survive reboots.
  • Finally we clone the GitHub code into /var/www/html/.
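
One caveat: a mount made with the mount command lasts only until the next reboot. A hedged sketch of a follow-up null_resource (the resource name is my own) that adds an /etc/fstab entry so the volume remounts automatically:

resource "null_resource" "fstab_entry" {
  depends_on = [
    null_resource.nullresource1,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Dell/Downloads/cloudkey.pem")
    host        = aws_instance.web.public_ip
  }

  # nofail keeps the instance booting even if the volume is detached
  provisioner "remote-exec" {
    inline = [
      "echo '/dev/xvdh /var/www/html ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab"
    ]
  }
}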

Step-6

Now we create an S3 bucket and store the images in it.

resource "aws_s3_bucket" "s3-bucket" {
  bucket = "my-123bucket"
  force_destroy = true
  acl    = "public-read"
  
depends_on = [
 aws_volume_attachment.ebs_att,
]
}


resource "null_resource" "nulllocal23"{
provisioner "local-exec" {
       
        command     = "git clone https://github.com/Ds123-wq/terrafor_launch_ec2.git Images"
     
    }
provisioner "local-exec" {
        when        =   destroy
        command     =   "rmdir /s /q Images"
    }
 depends_on = [
  aws_s3_bucket.s3-bucket
  ]
]
}
resource "aws_s3_bucket_object" "image-upload" {
    bucket  = aws_s3_bucket.s3-bucket.bucket
    content_type = "image/png"
    key     = "apache-web-server.png"
    source  = "Images/apache-web-server.png"
    acl     = "public-read"


depends_on = [
   null_resource.nulllocal23,
 ]
}


locals {
  s3_origin_id = "S3-${aws_s3_bucket.s3-bucket.bucket}"
}

  • Here I used a null_resource to clone the git repo locally.
  • I used the aws_s3_bucket resource to create the S3 bucket.
  • I uploaded the image from the cloned repo to S3 using the aws_s3_bucket_object resource; the sketch after this list uploads every image at once.
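
The resource above uploads one hardcoded image. A hedged variant for newer Terraform (0.12.8+ for fileset; the *.png glob is an assumption about the repo contents) uploads every image in the cloned folder:

resource "aws_s3_bucket_object" "images" {
  # One bucket object per PNG found in the locally cloned repo
  for_each = fileset("Images", "*.png")

  bucket       = aws_s3_bucket.s3-bucket.bucket
  key          = each.value
  source       = "Images/${each.value}"
  content_type = "image/png"
  acl          = "public-read"

  depends_on = [
    null_resource.nulllocal23,
  ]
}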

Step-7

Now I create a CloudFront distribution, so that latency decreases for viewers far from the origin.

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "my-OAI"


depends_on = [
  aws_s3_bucket_object.image-upload,
 ]
}






resource "aws_cloudfront_distribution" "s3_distribution" {


  origin {
    domain_name = aws_s3_bucket.s3-bucket.bucket_regional_domain_name
    origin_id  = local.s3_origin_id
   
  custom_origin_config {


         http_port = 80
         https_port = 80
         origin_protocol_policy = "match-viewer"
         origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"] 
        }
    
  }


   enabled             = true


   default_cache_behavior {
        allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
        cached_methods   = ["GET", "HEAD"]
        target_origin_id = local.s3_origin_id


        forwarded_values {
            query_string = false
            cookies {
                forward = "none"
            }
    }


        viewer_protocol_policy = "allow-all"
        min_ttl                = 0
        default_ttl            = 3600
        max_ttl                = 86400
    }


   


restrictions {
        geo_restriction {
        restriction_type = "none"
        }
    }


viewer_certificate {
        cloudfront_default_certificate = true
    }


connection {
        type    = "ssh"
        user    = "ec2-user"
        host    = aws_instance.web.public_ip
        port    = 22
        private_key = file("C:/Users/DELL/Downloads/cloudkey.pem")
    }


provisioner "remote-exec" {
        inline  = [
            "sudo su << EOF",
             "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.image-upload.key}' width = '300' height = '200'>\" >> /var/www/html/index.html",
            "EOF"
        ]


   }


depends_on = [
   aws_s3_bucket_object.image-upload,
  ]


provisioner "local-exec" {
  command = "chrome ${aws_instance.web.public_ip}"


}
}

  • For creating the CloudFront distribution I used the aws_cloudfront_distribution resource, which needs the origin's domain_name and an origin_id for the CDN (Content Delivery Network).
  • Here I appended the CloudFront URL to /var/www/html/index.html through a remote-exec provisioner, because the website's img tag has to point at the CloudFront URL instead of S3.
  • Now we can see our website in Chrome after the Terraform code runs successfully; the sketch below adds output values that print the relevant addresses.
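
Optionally, output blocks can print the instance IP and the CloudFront domain after terraform apply, so you don't have to look them up in the AWS console. A minimal sketch (the output names are my own):

output "website_ip" {
  value = aws_instance.web.public_ip
}

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}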
