This is an article about how we can use Terraform to create cloud infrastructure.

What is Terraform, and how is it useful in creating cloud infrastructure?

-->Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

-->In the AWS web UI we have to do everything manually, from creating an instance to connecting to it. Even in the CLI, building a cloud infrastructure means long commands run step by step. Terraform instead uses HCL code: we build the whole infrastructure by executing the code once, and we can destroy it again with simple commands like terraform apply and terraform destroy. By using Terraform we can automate the cloud infrastructure, so it is very useful for creating one.
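
Under the hood the workflow is just a handful of commands, sketched below (run from the directory that holds your .tf files):

terraform init      # download the required provider plug-ins
terraform plan      # preview what will be created or changed
terraform apply     # build the infrastructure
terraform destroy   # tear everything down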

Task Description :

1. Create a key pair and a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it on /var/www/html.

5. The developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Optional:

--> Create a snapshot of the EBS volume

Pre-requisites:

-->Terraform should be downloaded and installed, and its directory added to your PATH variable

Download Terraform from here: https://www.terraform.io/

-->Create a directory in your local system where you want to do the project

-->Create an AWS profile: give your AWS access key ID, AWS secret access key, and a default region name, and select json as the default output format

aws configure --profile profilename
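
This command stores the profile in the shared AWS credentials and config files. A sketch of the result with placeholder values (profilename, the keys, and the region are assumptions to replace with your own):

~/.aws/credentials
[profilename]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

~/.aws/config
[profile profilename]
region = ap-south-1
output = json

Terraform picks this profile up through the profile argument of the provider block we write in the next step.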


Process:

-->Create a .tf file to write the Terraform code, save and close the file, and then install the required plug-ins for the AWS provider by running the terraform init command

notepad task_code.tf
terraform init

-->Inside the task_code.tf file, provide the following information:

1: First, mention the provider name, and within the provider block give the name of the profile you have created and the region

provider "aws" {
	profile ="Asish"
	region ="ap-south-1"
}

2: Then, create a private key and key pair using Terraform

resource "tls_private_key" "key_task" {
  algorithm = "RSA"
}
module "key_pair" {
  source = "terraform-aws-modules/key-pair/aws"
  key_name   = "key_task"
  public_key = tls_private_key.key_task.public_key_openssh
}

and store the private key using an output for further use:

output "task_pem_key"{
	value=tls_private_key.key_task.private_key_pem
}
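
If you also want the private key saved on disk for manual SSH, a minimal sketch using the hashicorp/local provider can write it to a .pem file (the file name key_task.pem is an assumption):

resource "local_file" "task_pem_file" {
  # write the generated private key to a local .pem file (hypothetical file name)
  content         = tls_private_key.key_task.private_key_pem
  filename        = "key_task.pem"
  file_permission = "0400"
}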

3: Create a security group which allows both the SSH and HTTP protocols, on port numbers 22 and 80 respectively (HTTPS on port 443 is opened as well). To add a protocol to the security group, create an ingress block and give the description, protocol name, and port range there.

resource "aws_security_group" "task1_securitygrp" {
  name        = "task1_securitygrp"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-df9489b7"


   ingress {
    description = "TLS from VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
     cidr_blocks = ["0.0.0.0/0"]
  }




 ingress{
    description = "SSH"
     from_port =22
     to_port=22
      protocol ="tcp"
       cidr_blocks = ["0.0.0.0/0"]
}




ingress{
    description = "HTTP"
     from_port=80
      to_port=80
       protocol = "tcp"
       cidr_blocks = ["0.0.0.0/0"]
    }


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "task1_securitygrp"
  }
}

4: Create an instance resource and mention all the necessary things like the security group, instance type, AMI ID, etc.

After that, connect to the instance using the private key we created earlier and the public IP of the instance as the host, and then install httpd and git and start and enable the httpd server

resource "aws_instance" "task1os" {
  	ami  = "ami-0447a12f28fddb066"
 	 instance_type = "t2.micro"
 	 key_name = "key_task"
  	security_groups= ["task1_securitygrp"]


  connection {
    type     = "ssh"
    user     = "ec2-user"
   private_key=tls_private_key.key_task.private_key_pem
    host     = aws_instance.task1os.public_ip
  }


  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd   git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }


  tags = {
    Name = "task1os"
  }


}

5: Create an EBS volume in the availability zone of the instance and then attach it to the instance we have created by using the volume ID and instance ID.

resource "aws_ebs_volume" "task1_volume" {
  availability_zone = aws_instance.task1os.availability_zone
  size              = 1
  tags = {
    Name = "task1_volume"
  }
}




resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdh"
  volume_id   = "${aws_ebs_volume.task1_volume.id}"
  instance_id = "${aws_instance.task1os.id}"
  force_detach = true
}

6: Store the instance's public IP in a file, so that we can use it later to access our HTML page

output "myos_ip" {
  value = aws_instance.task1os.public_ip
}




resource "null_resource" "nulllocal2"  {
	provisioner "local-exec" {
	    command = "echo  ${aws_instance.task1os.public_ip} > publicip.txt"
  	}
}

7: Then format the volume, mount it on the /var/www/html folder, and clone the content we want using the git clone command

resource "null_resource" "nullremote3"  {


depends_on = [
    aws_volume_attachment.ebs_att,
  ]




  connection {
    type     = "ssh"
    user     = "ec2-user"
   private_key =tls_private_key.key_task.private_key_pem
    host     = aws_instance.task1os.public_ip
  }


provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4  /dev/xvdh",
      "sudo mount  /dev/xvdh  /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Pheonix-reaper/Task1_cloud.git /var/www/html/"
    ]
  }
}

8: Write Terraform code to create an S3 bucket, use acl = "public-read" to give public access to the bucket, and then create a public access block for the bucket that sets block_public_acls and block_public_policy to false

resource "aws_s3_bucket" "task1_bucket" {
  bucket = "task1-bucket-asish-007-s3bucket"
  acl="public-read"
 force_destroy=true


tags = {
    Name = "My bucket"
  }
}

resource "aws_s3_bucket_public_access_block" "aws_public_access" {
  bucket = "${aws_s3_bucket.task1_bucket.id}"


 block_public_acls   = false
  block_public_policy = false
}
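
Task step 7 also asks us to copy the images from the GitHub repo into the bucket. A hedged sketch of one way to do that from the same .tf file is an aws_s3_bucket_object resource; the key and local path image.png are assumptions and should point to an image from the repo that is present on your machine:

resource "aws_s3_bucket_object" "task1_image" {
  # upload one local image (hypothetical name) and make it publicly readable
  bucket = aws_s3_bucket.task1_bucket.id
  key    = "image.png"
  source = "image.png"
  acl    = "public-read"
}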

9: Write the code to create a CloudFront distribution for the bucket we have created.

resource "aws_cloudfront_distribution" "imgcloudfront" {
    origin {
        # point the origin at the bucket created above instead of a hard-coded, mismatched domain name
        domain_name = aws_s3_bucket.task1_bucket.bucket_domain_name
        origin_id   = "S3-task1-bucket-asish-007-s3bucket"

        custom_origin_config {
            http_port              = 80
            https_port             = 443
            origin_protocol_policy = "match-viewer"
            origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
        }
    }
       
    enabled = true




    default_cache_behavior {
        allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
        cached_methods = ["GET", "HEAD"]
        target_origin_id = "S3-task1-bucket-asish-007-s3bucket"



     forwarded_values {
            query_string = false
        
            cookies {
               forward = "none"
            }
        }
        viewer_protocol_policy = "allow-all"
        min_ttl = 0
        default_ttl = 3600
        max_ttl = 86400
    }
    # Restricts who is able to access this content
    restrictions {
        geo_restriction {
            
            restriction_type = "none"
        }
    }




    # SSL certificate for the service.
    viewer_certificate {
        cloudfront_default_certificate = true
    }
}
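
The last part of task 8 is to use the CloudFront URL inside the code in /var/www/html. A hedged sketch of one way to do it is another null_resource that appends an image tag pointing at the distribution to the page over SSH; the page name index.html and the object key image.png are assumptions:

resource "null_resource" "update_page_with_cdn" {
  depends_on = [
    aws_cloudfront_distribution.imgcloudfront,
    null_resource.nullremote3,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.key_task.private_key_pem
    host        = aws_instance.task1os.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # append an image served through CloudFront to the page (hypothetical file and object names)
      "echo '<img src=https://${aws_cloudfront_distribution.imgcloudfront.domain_name}/image.png>' | sudo tee -a /var/www/html/index.html",
    ]
  }
}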

10: Write the code to create a snapshot of our EBS volume.

resource "aws_ebs_snapshot" "task1_snapshot" {
  volume_id = "${aws_ebs_volume.task1_volume.id}"


  tags = {
    Name = "ebs_snap"
  }
}

11: In this way, we have written Terraform code for all the instructions of the task. Now save the .tf file and apply the Terraform code; Terraform lets us create our entire infrastructure with:

terraform apply -auto-approve

and destroy the created infrastructure just as easily with:

terraform destroy -auto-approve

After applying the Terraform code, our infrastructure is created.


12: Deploy the GitHub repository by using the AWS CodePipeline service.


13: We have successfully deployed our GitHub repository using CodePipeline, and we have seen how the infrastructure is formed by using Terraform.


14: Now, use the public IP that we got from the Terraform output to access and view our HTML page.

If the above-mentioned steps are followed, one can easily create a cloud infrastructure using Terraform.


GitHub link for the task: https://github.com/Pheonix-reaper/Task1_cloud
