CREATING AN INFRASTRUCTURE OF AWS USING TERRAFORM
Completed the first task of the HybridMultiCloud Training under the guidance of Mr. Vimal Daga Sir.

Task Description :-

  • Launch an EC2 instance. Use the key and security group created in step 1 for this instance.
  • Create an EBS volume of size 1 GB and attach it to the instance created above.
  • Now mount this volume to the /var/www/html/ folder so that the data is stored permanently.
  • The developer has uploaded the code to a GitHub repository, which also has some images in it.
  • Copy the GitHub repo code into the /var/www/html/ folder.
  • Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change the permission to public readable.
  • Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html/.

STEP 1 : Creating Terraform Code

  1. First we set the provider to AWS and configure the region and the profile name.

provider "aws" {
  region  = "ap-south-1"
  profile = "maharanasunil"
}
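
This assumes the named profile already exists on the local machine; if it does not, it can be created with the AWS CLI using the access keys of an IAM user:

aws configure --profile maharanasunil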

2. Created a security group named "MySecurity" (tagged "allow_http") that allows inbound traffic on port 22 (SSH) and port 80 (HTTP).

resource "aws_security_group" "http" {
  name        = "MySecurity"


  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "allow_http"
  }
}
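
The instance in the next step uses the key pair "awstask1", which was created beforehand in the AWS console and downloaded as awstask1.pem. As a side note, Terraform could also generate this key pair itself; a minimal sketch, assuming the tls and local providers are installed (the resource names mykey, generated_key and private_key_pem are our own, not part of the original setup):

# Hypothetical alternative: generate the key pair in Terraform instead of the console.
resource "tls_private_key" "mykey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Register the public half with AWS under the name the instance expects.
resource "aws_key_pair" "generated_key" {
  key_name   = "awstask1"
  public_key = tls_private_key.mykey.public_key_openssh
}

# Save the private half locally so it can be used for the SSH connection.
resource "local_file" "private_key_pem" {
  filename = "awstask1.pem"
  content  = tls_private_key.mykey.private_key_pem
}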

3. Launched an EC2 instance "maharanasunil" with the security group "MySecurity" that we created. We then connected to it over SSH to install the required software and start the 'httpd' service.

resource "aws_instance" "myins" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "awstask1"
  security_groups = [ "MySecurity" ]


  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/dell/Downloads/awstask1.pem")
    host        = aws_instance.task1.public_ip
  }


  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }
  tags = {
    Name = "maharanasunil"
  }
}

4. Created an EBS volume "EbsTask1" of size 1 GB in the same availability zone where our instance is running.

resource "aws_ebs_volume" "ebsvol" {
  availability_zone = aws_instance.myin.availability_zone
  size              = 1
  tags = {
    Name = "EbsTask1"
  }
}


5. Attached the EBS volume to our instance "maharanasunil".

resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.ebsvol.id
  instance_id = aws_instance.myins.id
}

6. Then we format the volume and mount it to the folder '/var/www/html/'. A first null resource saves the instance's public IP to a local file; a second one connects to the instance, formats and mounts the attached volume, and clones the GitHub repo into /var/www/html. Note that a volume attached as /dev/sdf appears inside the instance as /dev/xvdf.

resource "null_resource" "null" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.myins.public_ip} > publicip.txt"
  }
}

resource "null_resource" "nullremote1" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/dell/Downloads/awstask1.pem")
    host        = aws_instance.myins.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdf",
      "sudo mount /dev/xvdf /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/maharanasunil/HybridCloudTask1.git /var/www/html"
    ]
  }
}
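
Before moving on, it is worth confirming over SSH that the volume really is mounted (a quick manual check, not part of the Terraform code):

lsblk                  # xvdf should list /var/www/html as its mount point
df -h /var/www/html    # should show the 1 GB ext4 filesystem on the web root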


7. Created an S3 bucket and made it publicly readable. The bucket name has to be globally unique; we named it 'task1sunil'.

resource "aws_s3_bucket" "mys3" {
  bucket = "task1sunil"
  acl    = "public-read"


  tags = {
    Name = "bucket1"
  }


  versioning {
    enabled = true
  }


}
locals {
  s3_origin_id = "mys3Origin"
}

8. Uploaded the image as an object in this S3 bucket from our base system. The object is named 'kese hein aap log.jpg'.

resource "aws_s3_bucket_object" "s3obj" {
depends_on = [
  aws_s3_bucket.mys3,
]
  bucket       = "task1sunil"
  key          = "kese hein aap log.jpg "
  source       = "C:/Users/dell/Downloads/kese hein aap log.jpg "
  acl          = "public-read"
  content_type = "image or jpg"
}

9. Created a CloudFront distribution with the S3 bucket as origin to provide a CDN (Content Delivery Network).

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.mys3.bucket_regional_domain_name
    origin_id   = local.s3_origin_id


    
  }


  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "task1.html"


  logging_config {
    include_cookies = false
    bucket          = "task1sunil.s3.amazonaws.com"
    prefix          = "myprefix"
  }




  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }


  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false
      headers      = ["Origin"]


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }


  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }


  price_class = "PriceClass_200"


  restrictions {
    geo_restriction {
      restriction_type = "none"
      
    }
  }


  tags = {
    Environment = "production"
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
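
The task also asks us to use the CloudFront URL to update the code in /var/www/html/. That step is not shown above; one possible way to wire it up is sketched below (the output name cloudfront_url, the resource name update_page, and the append-an-img-tag approach are our own choices, not part of the original code):

# Expose the distribution's domain name after apply.
output "cloudfront_url" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}

# Hypothetical: append an image tag pointing at CloudFront to the deployed page.
resource "null_resource" "update_page" {
  depends_on = [aws_cloudfront_distribution.s3_distribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/dell/Downloads/awstask1.pem")
    host        = aws_instance.myins.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo '<img src=\"https://${aws_cloudfront_distribution.s3_distribution.domain_name}/kese%20hein%20aap%20log.jpg\">' | sudo tee -a /var/www/html/task1.html"
    ]
  }
}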

10. To display the public IP of our instance, we used the output keyword. This IP is used to open the website in the browser.

output "myip" {
	value = aws_instance.maharanasunil.public_ip
}


STEP 2 : Running the Terraform code

After creating the Terraform file containing our code, run the following commands in the Windows command prompt (a sample session is shown below the list).

  1. terraform init (downloads the necessary provider plugins from the internet)
  2. terraform validate (checks the code for syntax and keyword errors)
  3. terraform apply -auto-approve (runs the Terraform code without asking for the "yes" confirmation)
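
A typical session looks like this:

terraform init
terraform validate
terraform apply -auto-approve

When the infrastructure is no longer needed, a single command tears everything down again:

terraform destroy -auto-approve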

SUCCESSFULLY CREATED AN INFRASTRUCTURE ON AWS USING TERRAFORM.

Thank You for Reading this!

Here is my GitHub URL: https://github.com/maharanasunil/HybridCloudTask1

