CREATING AN INFRASTRUCTURE OF AWS USING TERRAFORM

Task Description :-

  • Create a key pair and a security group that allows port 80.
  • Launch an EC2 instance using the key and security group created in step 1.
  • Create an EBS volume of size 1 GB and attach it to the instance.
  • Mount this volume on the /var/www/html/ folder so the data persists.
  • The developer has uploaded the code to a GitHub repository, which also contains some images.
  • Copy the GitHub repo code into the /var/www/html/ folder.
  • Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and make them publicly readable.
  • Create a CloudFront distribution backed by the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html/.


Step 1- Writing the Terraform code

1. We use the AWS provider and set the region and the named credentials profile.
provider "aws" {
  region     = "ap-south-1"
  profile    = "yashika"
}
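
The profile above refers to credentials configured locally (for example with aws configure --profile yashika). If you also want to pin the provider version, a terraform block like the following can sit next to the provider on Terraform 0.13 or later (the version constraint here is illustrative, matching the 2.x-era syntax this article uses):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.0"
    }
  }
}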

2. We declare a variable so the user supplies the key name at apply time (for example terraform apply -var="mykey=mykey"). This value is passed to the instance's key_name argument.

variable "mykey" {


}
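
The task's first step also asks us to create the key itself. This article references an existing key pair, but the key could equally be generated inside Terraform with the tls provider (a minimal sketch; the resource names are ours, and with this approach the connection blocks later could use tls_private_key.generated.private_key_pem instead of the local .pem file):

resource "tls_private_key" "generated" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated" {
  # key_name comes from the variable declared above
  key_name   = var.mykey
  public_key = tls_private_key.generated.public_key_openssh
}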

3. We create a security group named "allow_tcp" with inbound rules that allow ports 22 and 80.

Resource used - "aws_security_group"

resource "aws_security_group" "tcp" {
	name      = "allow_tcp"


	ingress { 
		from_port    = 80
		to_port      = 80 
		protocol     = "tcp"
		cidr_blocks = ["0.0.0.0/0"]
	}
	ingress { 
		from_port    = 22
		to_port      = 22
		protocol     = "tcp"
		cidr_blocks = ["0.0.0.0/0"]
	}	


	 egress {
    		from_port       = 0
    		to_port         = 0
    		protocol        = "-1"
    		cidr_blocks     = ["0.0.0.0/0"]
  	}
	
	tags = {
		Name = "allow_tcp"
	}
}

4. We launch an EC2 instance "myos2" with the security group "allow_tcp" that we created, connect to it over SSH, install the required software, and start the httpd service.

Resource used - "aws_instance"

resource "aws_instance" "myins" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = var.mykey
  security_groups = ["allow_tcp"]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Ragesh Tambi/Downloads/mykey.pem")
    host        = aws_instance.myins.public_ip
  }

  # Install the web server, PHP and git, then start httpd on boot
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "myos2"
  }
}
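
Note that security_groups matches the group by name, which only works in the default VPC. In a custom VPC you would swap it for a reference by ID inside the same resource, which also lets Terraform infer the creation order (an illustrative fragment, other arguments unchanged):

resource "aws_instance" "myins" {
  # ... ami, instance_type, key_name, connection, provisioner and tags as above
  # Referencing the group by ID creates an implicit dependency on it
  vpc_security_group_ids = [aws_security_group.tcp.id]
}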


5. We create an EBS volume "myebs" of size 1 GB in the same availability zone where our instance is running.

Resource used - "aws_ebs_volume"

resource "aws_ebs_volume" "ebs" {
  availability_zone = aws_instance.myins.availability_zone
  size              = 1


  tags = {
    Name = "myebs"
  }
}

6. We then attach the EBS volume to our instance "myos2".

Resource used - "aws_volume_attachment"

resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.ebs.id
  instance_id = aws_instance.myins.id
  force_detach = true
}

7. We then format the volume and mount it at /var/www/html/. A volume attached as /dev/sdh typically appears inside the instance as /dev/xvdh, which is why the commands below refer to /dev/xvdh.

resource "null_resource" "nulllocal1"{
	provisioner "local-exec" {
		command = "echo ${aws_instance.myins.public_ip} > publicip.txt"
	}
}


resource "null_resource" "nulllocal3"{
	
	depends_on = [
		aws_volume_attachment.ebs_att,
	]


	connection {
    		type        = "ssh"
    		user        = "ec2-user"
   		private_key = file("C:/Users/Ragesh Tambi/Downloads/mykey.pem")
    		host        = aws_instance.myins.public_ip
  	}


	provisioner "remote-exec" {
    		inline = [
      			"sudo mkfs.ext4 /dev/xvdh",
			"sudo mount /dev/xvdh /var/www/html",
			"sudo rm -rf /var/www/html/*",
			"sudo git clone https://github.com/Yashika-Khandelwal/cloud_task.git /var/www/html"
    		]
 	}
}
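
One caveat: a mount made this way does not survive a reboot. If persistence matters, an extra provisioner appending an fstab entry would fix that (a sketch using the same connection settings; the resource name fstab_entry is ours):

resource "null_resource" "fstab_entry" {
  depends_on = [null_resource.nulllocal3]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Ragesh Tambi/Downloads/mykey.pem")
    host        = aws_instance.myins.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # nofail keeps the instance bootable even if the volume is detached
      "echo '/dev/xvdh /var/www/html ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab",
    ]
  }
}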

8. We create an S3 bucket and make it publicly readable. Bucket names must be globally unique; we named ours 's3-bucket-task1'.

Resource used - "aws_s3_bucket"

resource "aws_s3_bucket" "s3_bucket" {
	bucket = "s3-bucket-task1"
	acl    = "public-read"


	tags   = {
		Name = "task1"
	}
	versioning{
		enabled = true
	}
}
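
The inline acl and versioning arguments above work with the AWS provider 2.x/3.x this article was written against; in provider 4.x and later they were split into standalone resources, roughly like this (a sketch, untested against this exact configuration):

resource "aws_s3_bucket" "s3_bucket" {
  bucket = "s3-bucket-task1"
}

resource "aws_s3_bucket_acl" "s3_bucket_acl" {
  bucket = aws_s3_bucket.s3_bucket.id
  acl    = "public-read"
}

resource "aws_s3_bucket_versioning" "s3_bucket_versioning" {
  bucket = aws_s3_bucket.s3_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}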

9. We upload an image from our local system as an object in this S3 bucket; the object is named 'dog.jpg'.

Resource used - "aws_s3_bucket_object"

# Origin ID reused later by the CloudFront distribution
locals {
  s3_origin_id = "mys3Origin"
}

resource "aws_s3_bucket_object" "upload" {
  depends_on = [
    aws_s3_bucket.s3_bucket,
  ]

  bucket       = "s3-bucket-task1"
  key          = "dog.jpg"
  source       = "C:/Users/Ragesh Tambi/Desktop/dog.jpg"
  acl          = "public-read"
  content_type = "image/jpeg"
}
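
To check the upload without opening the console, the object's public URL can be exported as an output (the output name image_url is our own):

output "image_url" {
  value = "https://${aws_s3_bucket.s3_bucket.bucket_regional_domain_name}/${aws_s3_bucket_object.upload.key}"
}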


10. We create a CloudFront distribution with the S3 bucket as its origin to provide a CDN (Content Delivery Network).

Resource used - "aws_cloudfront _distribution"

  resource "aws_cloudfront_distribution" "s3_distribution" {

origin {
    domain_name = aws_s3_bucket.s3_bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id


  }


  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "index.html"


  logging_config {
    include_cookies = false
    bucket          = "s3-bucket-task1.s3.amazonaws.com"
    prefix          = "myprefix"
  }




  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }


  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false
      headers      = ["Origin"]


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }


  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }


  price_class = "PriceClass_200"


  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }


  tags = {
    Environment = "production"
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }

}
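
Step 8 of the task asks us to use the CloudFront URL in the website code. Exporting the distribution's domain name as an output makes it easy to grab after apply (the output name cloudfront_url is ours):

output "cloudfront_url" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}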

11. To see the public IP of our instance we use an output block; this IP can be used to open the website.

output "myosip" {
	value = aws_instance.myins.public_ip
}

Step 2- Run 'terraform init' once to download the provider plugins, then 'terraform apply' to build the whole setup.


LOOK OF THE WEBSITE:-



MY GITHUB REPOSITORY LINK:- https://github.com/Yashika-Khandelwal/cloud_task

