Creating infrastructure in AWS using Terraform

Amazon Web Services (AWS) is a cloud service provider: it offers clients the services they need, on demand.

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently, whether the cloud is public or private. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.
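This plan-and-apply workflow maps onto a handful of CLI commands. A minimal sketch, assuming Terraform is installed and you run them in the directory containing the .tf files shown below:

```shell
# Download the provider plugins (here, the AWS provider) declared in the configuration
terraform init

# Show the incremental execution plan without changing anything
terraform plan

# Build the described infrastructure (add -auto-approve to skip the confirmation prompt)
terraform apply

# Tear everything down when the task is finished
terraform destroy
```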

Have you ever thought that, without going to the AWS console (either the web UI or the CLI), we could create whatever infrastructure we want in AWS?

In this project we launch a complete web application on AWS using Terraform.

Let me explain the task in detail:

1. Create a key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it at /var/www/html.

5. The developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and make them public-readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.


Let me explain how I made this whole setup using Terraform.

Step 1 - First, I tell Terraform that I want to manage AWS services. For this we write the "aws" provider block, specifying the region we want to work in and the profile to use for logging in to the account.

provider "aws" {
	  region   = "ap-south-1"
	  profile  = "anurag"
	}
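The profile "anurag" above is a named credential profile stored on the local machine, not something defined in this file. Assuming the AWS CLI is installed, such a profile can be created like this (it prompts for your access key ID, secret access key, region, and output format):

```shell
# Store access keys for the named profile that the provider block refers to
aws configure --profile anurag
```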
      



Step 2 - Creating a key pair for launching the EC2 instance and for remote login to it.

For this we use the resource "aws_key_pair":

resource "aws_key_pair" "task1" { 
key_name = "task1-key" 
public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41"
} 
"

Step 3 - After this, I wrote code to create a security group (firewall) that allows port 80 (for HTTP) and port 22 (for SSH).

resource "aws_security_group" "task1-securitygroup" {
	  name        = "task1-securitygroup"
	  description = "Allow SSH AND HTTP"
	  vpc_id      = "vpc-c0e5r9aa"
	

	

	  ingress {
	    description = "SSH"
	    from_port   = 22
	    to_port     = 22
	    protocol    = "tcp"
	    cidr_blocks = [ "0.0.0.0/0" ]
	  }
	

	  ingress {
	    description = "HTTP"
	    from_port   = 80
	    to_port     = 80
	    protocol    = "tcp"
	    cidr_blocks = [ "0.0.0.0/0" ]
	  }
	

	  egress {
	    from_port   = 0
	    to_port     = 0
	    protocol    = "-1"
	    cidr_blocks = ["0.0.0.0/0"]
	  }
	

	  tags = {
	    Name = "task1-securitygroup"
	  }
	}

Step 4 - Creating a bucket to hold the image from the GitHub repository, and making this bucket public.

Github repo link - https://github.com/anurag08-git/hybridtask1


resource "aws_s3_bucket" "task1anurag" {
	    bucket = "task1-anurag"
	    acl    = "public-read"
	

	    tags = {
		Name    = "task1anurag"
		Environment = "Dev"
	    }
	    versioning {
		enabled = true
	    }
	}
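Step 7 of the task also calls for copying the image from the repo into this bucket. With the same pre-4.0 AWS provider syntax used above, that upload could be sketched with an aws_s3_bucket_object resource; the key name and local source path here are assumptions, not taken from the original setup:

```hcl
resource "aws_s3_bucket_object" "task1image" {
  bucket = aws_s3_bucket.task1anurag.bucket
  key    = "image.jpg"               # object name in the bucket (assumed)
  source = "hybridtask1/image.jpg"   # path inside the cloned repo (assumed)
  acl    = "public-read"             # public-readable, as the task requires
}
```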

Step 5 - Then I created a CloudFront distribution whose origin is the S3 bucket holding the image. The distribution provides a unique URL through which clients can access the image in the S3 bucket from anywhere in the world.

resource "aws_cloudfront_distribution" "imgcloudfront" {
	    origin {
	        domain_name = "task1anurag.s3.amazonaws.com"
	        origin_id = "S3-task1anurag" 
	

	

	        custom_origin_config {
	            http_port = 80
	            https_port = 443
	            origin_protocol_policy = "match-viewer"
	            origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"] 
	        }
	    }
	       
	    enabled = true
	

	

	    default_cache_behavior {
	        allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
	        cached_methods = ["GET", "HEAD"]
	        target_origin_id = "S3-task1anurag"
	

	

	        # Forward all query strings, cookies and headers
	        forwarded_values {
	            query_string = false
	        
	            cookies {
	               forward = "none"
	            }
	        }
	        viewer_protocol_policy = "allow-all"
	        min_ttl = 0
	        default_ttl = 3600
	        max_ttl = 86400
	    }
	    # Restricts who is able to access this content
	    restrictions {
	        geo_restriction {
	            # type of restriction, blacklist, whitelist or none
	            restriction_type = "none"
	        }
	    }
	

	

	    # SSL certificate for the service.
	    viewer_certificate {
	        cloudfront_default_certificate = true
	    }
	}

I then used the CloudFront URL in the web page's image source, so that clients see the image stored in the S3 bucket through CloudFront.


Web page link - https://github.com/anurag08-git/hybridtask1/blob/master/index.html


Step 6 - Creating an instance using the key created above (task1-key) and the security group created above.

resource "aws_instance" "web" {
	  ami           = "ami-0447a12f28fddb066"
	  instance_type = "t2.micro"
	  key_name = "task1-key"
	  security_groups = [ "task1-securitygroup" ]


Step 7 - Now, instead of logging in to the instance manually and installing all the requirements for a web server, Terraform will set everything up on our behalf inside the instance we launched above. (The connection, provisioner, and tags blocks below sit inside the same aws_instance resource, which is why its closing brace comes after them.)

For this, Terraform has to run commands on the remote operating system, so we use the provisioner known as remote-exec. To connect to the instance we need the user, the instance's IP, and the private key (from the pair we created above).

This provisioner installs the httpd web server and git on the instance and enables the httpd service permanently.

connection {
	    type     = "ssh"
	    user     = "ec2-user"
	    private_key = file("C:/Users/ANURAG MITTAL/Downloads/task1-key.pem")
	    host     = aws_instance.web.public_ip
	  }
	

	  provisioner "remote-exec" {
	    inline = [
	      "sudo yum install httpd   git -y",
	      "sudo systemctl restart httpd",
	      "sudo systemctl enable httpd",
	    ]
	  }
	

	  tags = {
	    Name = "lwos1"
	  }
	

	}

Step 8 - Creating one additional EBS volume that I will later attach to the instance. When creating an EBS volume we have to specify where it should live; EBS volumes are tied to an availability zone, so the volume must be created in the same availability zone as the instance. That's why we set availability_zone = aws_instance.web.availability_zone.


resource "aws_ebs_volume" "esb1" {
	  availability_zone = aws_instance.web.availability_zone
	  size              = 1
	  tags = {
	    Name = "lwebs"
	  }
	}

Step 9 - Attaching this EBS volume to the instance. For this we require the volume ID and the instance ID; we retrieve them as attribute references from the resources above and pass them into volume_id and instance_id.

resource "aws_volume_attachment" "ebs_att" {
	  device_name = "/dev/sdh"
	  volume_id   = "${aws_ebs_volume.esb1.id}"
	  instance_id = "${aws_instance.web.id}"
	  force_detach = true
	}

Step 10 - Here I retrieve and print the public IP of the instance.

I want to print the public IP on my local system and save it into a file for future reference, so I use the Terraform provisioner local-exec.

output "myos_ip" {
	  value = aws_instance.web.public_ip
	}
	

	

	resource "null_resource" "nulllocal2"  {
		provisioner "local-exec" {
		    command = "echo  ${aws_instance.web.public_ip} > publicip.txt"
	  	}
	}

Step 11 - Now, instead of going to the instance manually to partition, format, and mount the EBS volume where we want it, Terraform will do all these things.

For this we again use the remote-exec provisioner. A provisioner always works inside a resource; since no AWS resource is needed here, we use a null_resource.

depends_on means the resources listed inside it must be created first; only after that is this resource built. We use it to maintain the sequence.

Terraform formats the volume and mounts it at /var/www/html. Any existing files in that folder are removed (git clone only works into an empty folder), and then the HTML web pages are cloned from the Git repository into it.

resource "null_resource" "nullremote3"  {
	

	depends_on = [
	    aws_volume_attachment.ebs_att,
	  ]
	

	

	  connection {
	    type     = "ssh"
	    user     = "ec2-user"
	    private_key = file("C:/Users/ANURAG MITTAL/Downloads/task1-key.pem")
	    host     = aws_instance.web.public_ip
	  }
	

	provisioner "remote-exec" {
	    inline = [
	      "sudo mkfs.ext4  /dev/xvdh",
	      "sudo mount  /dev/xvdh  /var/www/html",
	      "sudo rm -rf /var/www/html/*",
	      "sudo git clone https://github.com/anurag08-git/hybridtask1.git /var/www/html/"
	    ]
	  }
	}
	

	

Step 12 - As soon as all the above jobs run successfully, Terraform uses the public IP of the instance to automatically open the web page on our local system (this assumes a "chrome" command is available on the PATH).

resource "null_resource" "nulllocal1"  {
	

	

	depends_on = [
	    null_resource.nullremote3,
	  ]
	

		provisioner "local-exec" {
		    command = "chrome  ${aws_instance.web.public_ip}"
	  	}
	}


Thank you for reading this article.
