Task 1 - Creating An Infrastructure In AWS Using Terraform.


Task Description:

1. Create a key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it on /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.


Let me explain how I built this whole setup using Terraform.

Step 1 - First I tell Terraform that I want to manage AWS services. For that I declare the "aws" provider, the region to work in, and the named profile whose credentials should be used to log in to the account.

provider "aws" {
  region  = "ap-south-1"
  profile = "Shivam"
}

Step 2 - Creating a key pair for launching the EC2 instance and logging in to it remotely.

For this we use the resource "aws_key_pair".

resource "aws_key_pair" "cloudtask1" {
  key_name   = "task1-key"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41"
}
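Instead of pasting a pre-generated public key, the key pair can also be generated by Terraform itself. This is a sketch of that alternative, assuming the tls and local providers are available; the resource names here are my own:

```hcl
# Generate an RSA key pair inside Terraform
resource "tls_private_key" "task1" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Register the public half with AWS as a key pair
resource "aws_key_pair" "task1_generated" {
  key_name   = "task1-key"
  public_key = tls_private_key.task1.public_key_openssh
}

# Save the private half locally so it can be used for SSH later
resource "local_file" "task1_pem" {
  content         = tls_private_key.task1.private_key_pem
  filename        = "task1-key.pem"
  file_permission = "0400"
}
```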

Step 3 - Next I wrote the code to create a security group (firewall) that allows port 80 (for HTTP) and port 22 (for SSH).

resource "aws_security_group" "task1-securitygroup" {
  name        = "task1-securitygroup"
  description = "Allow SSH AND HTTP"
  vpc_id      = "vpc-c0e5r9aa"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "task1-securitygroup"
  }
}

Step 4 - Creating a bucket that will hold an image from the GitHub repository, and making this bucket public.


resource "aws_s3_bucket" "cloudtask1" {
  bucket = "task1-shivam"
  acl    = "public-read"

  tags = {
    Name        = "task1shivam"
    Environment = "Dev"
  }

  versioning {
    enabled = true
  }
}
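The copy of the image from the repo into the bucket can itself be done from Terraform, for example with an aws_s3_bucket_object resource. This is only a sketch: the object key and the local source path are hypothetical and depend on where the repo was cloned on the local machine:

```hcl
# Upload one image from the locally cloned repo into the bucket
# (key and source path are illustrative, not from the original setup)
resource "aws_s3_bucket_object" "task1_image" {
  bucket = aws_s3_bucket.cloudtask1.bucket
  key    = "image.jpg"
  source = "cloudtask1/image.jpg"
  acl    = "public-read"
}
```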

Step 5 - Then I create one CloudFront distribution whose origin is the S3 bucket holding the image. The distribution provides a unique URL through which clients can access the image in the S3 bucket from anywhere in the world.

resource "aws_cloudfront_distribution" "imgcloudfront" {
  origin {
    domain_name = "task1-shivam.s3.amazonaws.com"
    origin_id   = "S3-task1-shivam"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-task1-shivam"

    # Do not forward query strings or cookies to the origin
    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Restricts who is able to access this content
  restrictions {
    geo_restriction {
      # type of restriction: blacklist, whitelist or none
      restriction_type = "none"
    }
  }

  # SSL certificate for the service
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

I used the URL that CloudFront provides as the image source in the webpage in /var/www/html, so clients see the image stored in the S3 bucket through CloudFront.
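To avoid looking the URL up in the AWS console, the distribution's domain name can also be exposed as a Terraform output (a small addition of my own, not part of the original code):

```hcl
# Print the CloudFront domain after apply, for pasting into the <img> tag
output "cloudfront_url" {
  value = aws_cloudfront_distribution.imgcloudfront.domain_name
}
```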


Step 6 - Creating one instance using the key named task1-key and the security group that we created above.

resource "aws_instance" "web" {
  ami             = "ami-0447a12f28fddb088"
  instance_type   = "t2.micro"
  key_name        = "task1-key"
  security_groups = ["task1-securitygroup"]


Step 7 - Instead of us logging in to the instance manually and installing all the requirements for a webserver, Terraform will set all of that up on our behalf inside the instance we launched above.

For this, Terraform has to run commands on the remote operating system, so it provides a provisioner called remote-exec. To connect to the instance it needs the user, the instance's IP, and the private key (the key we created above).

This provisioner installs the httpd webserver and git on the instance and enables the httpd service permanently.

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/cse_s/Downloads/task1-key.pem")
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "lwos1"
  }
}

Step 8 - Creating one additional EBS volume that I will later attach to the instance. An EBS volume lives in a single availability zone and can only be attached to an instance in that same zone, so we must create it in the instance's availability zone. That's why we set availability_zone to aws_instance.web.availability_zone.


resource "aws_ebs_volume" "esb1" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1

  tags = {
    Name = "lwebs"
  }
}

Step 9 - Attaching this EBS volume to the instance. For this we need the volume ID and the instance ID, which we reference from the resources created above.

resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.esb1.id
  instance_id  = aws_instance.web.id
  force_detach = true
}

Step 10 - Here I retrieve and print the public IP of the instance.

I want to print the public IP on my local system and save it into a file for future reference, so I use the local-exec provisioner.

output "myos_ip" {
  value = aws_instance.web.public_ip
}

resource "null_resource" "nulllocal2" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.web.public_ip} > publicip.txt"
  }
}

Step 11 - Instead of us going to the instance manually to format the EBS volume and mount it at the location we want, Terraform will do all these things. Note that the volume we attached as /dev/sdh shows up inside the instance as /dev/xvdh, which is why the commands below use that device name.

For this we use the remote-exec provisioner again. A provisioner always works inside a resource; since no AWS resource is involved here, we use a null_resource.

depends_on means the jobs listed inside it must finish first; only then is this resource built. We do this to maintain the sequence, because the volume must be attached before we can format and mount it.

Terraform will format the volume and mount it on the /var/www/html folder. If there are files in this folder, we remove them, because git clone only works in an empty folder; after that it downloads the HTML webpages from the Git repository into this folder.

resource "null_resource" "nullremote3" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/cse_s/Downloads/task1-key.pem")
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/cse-Shivam/cloudtask1.git /var/www/html/",
    ]
  }
}
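One caveat: a mount made this way is lost when the instance reboots. To make it persistent, an entry along these lines (assuming the device still appears as /dev/xvdh) could additionally be appended to /etc/fstab on the instance:

```
/dev/xvdh  /var/www/html  ext4  defaults,nofail  0  2
```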
	

	

Step 12 - As soon as all the above jobs run successfully, Terraform uses the public IP of the instance to automatically open the webpage in the browser on our local system.

resource "null_resource" "nulllocal1" {
  depends_on = [
    null_resource.nullremote3,
  ]

  provisioner "local-exec" {
    command = "chrome ${aws_instance.web.public_ip}"
  }
}