Using Terraform to Automate Basic AWS Jobs

Recently I started learning about cloud technologies and how to automate them, so I wrote some simple Terraform code to automate basic AWS jobs like creating key pairs, EC2 instances, etc.

Prerequisites

Amazon Web Services

Amazon Web Services (AWS) is a platform that offers flexible, reliable, scalable, easy-to-use, and cost-effective cloud computing solutions.

Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Terraform configurations are written in HCL (HashiCorp Configuration Language), created by HashiCorp, the company behind Terraform.

Let's Begin

We'll start by creating a new directory. In that directory, create a new file with the extension ".tf".

First things first: when you go to the AWS page, what do you do? Log in. Yes, log in.

provider "aws" {
  region                  = "ap-south-1"
  shared_credentials_file = ".aws/credentials"	
  profile                 = "default"
}

The code above has several parts; let's look at some of them.

Provider

Whatever platform we're writing a Terraform script for, we have to specify its provider. To see the list of all providers, you can go to Terraform's site.

shared_credentials_file

This specifies the location of the AWS credentials stored on your computer (populated if you're using the AWS CLI; otherwise, you can hardcode your credentials too, as sketched below).
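For completeness, the hardcoded variant would look something like this (the key values below are placeholders, not real credentials; don't commit real keys to version control):

provider "aws" {
  region     = "ap-south-1"
  access_key = "AKIAXXXXXXXXXXXXXXXX"                       # placeholder
  secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder
}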

Now that this is done, we'll do a little project:

  • create a key pair
  • create a security group
  • create an EC2 instance
  • install various tools on the instance (git, httpd)
  • create an EBS volume and attach it to the EC2 instance
  • create an S3 bucket
  • add an image to the S3 bucket
  • create a CDN for the S3 bucket (CloudFront)
  • get the global link of the image we added to the S3 bucket
  • create a simple HTML file and add it to a git repo
  • clone it to our EC2 instance and run it

Create a key pair

resource "tls_private_key" "pri_key" { 
  algorithm   = "RSA"
  rsa_bits = 2048
}


resource "aws_key_pair" "TeraKey" {
  key_name   = "TeraKey"
  public_key = tls_private_key.pri_key.public_key_openssh
}


Aha, a new keyword!

Resource

We know that AWS provides multiple services like S3, EC2, EKS, etc. In Terraform, these are modeled as resources. When you write anything for a service, you use the resource keyword, followed by the resource type for that service, and then a unique identifier so that Terraform can maintain the state of that particular resource.

Here, we are creating a new key pair, which requires a private key and a public key. For the private key we use the Terraform resource tls_private_key, providing details like the algorithm and key size; in the next block we register the corresponding public key with AWS as a key pair named TeraKey.
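As an optional extra (my own addition, not required for the rest of the project), you can save the generated private key to disk with the local provider, so you can also SSH in manually. A minimal sketch; "TeraKey.pem" is just a filename I picked:

resource "local_file" "key_file" {
  content         = tls_private_key.pri_key.private_key_pem
  filename        = "TeraKey.pem"  # hypothetical filename
  file_permission = "0400"         # SSH refuses keys that are world-readable
}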


Create a security group

resource "aws_security_group" "SecGroupTera" {
  name        = "SecGroupTera"
  description = "Allow SSH AND HTTP inbound traffic"


  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [ "0.0.0.0/0" ]
  }


  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [ "0.0.0.0/0" ]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "SecGroupTera"
  }

}


Here I have created a security group with both inbound and outbound rules: I want to allow SSH and HTTP traffic in, so those go in ingress, and I want no restriction on data leaving the machine, so that goes in egress (protocol = "-1" in egress means any protocol can reach external resources).
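If you later want to tighten the SSH rule to just your own IP instead of 0.0.0.0/0, a variable works nicely; a minimal sketch (the CIDR below is a placeholder):

variable "admin_cidr" {
  description = "CIDR allowed to SSH in"
  default     = "203.0.113.7/32"  # placeholder; put your own IP here
}

# then, in the SSH ingress block above:
#   cidr_blocks = [var.admin_cidr]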


Create an EC2 instance and install various services on your instance

resource "aws_instance" "TeraOS1" {
  	
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name      = "TeraKey"
  security_groups = ["secGroupTera"]
  tags = {
    Name = "TeraOS1"
  }


 connection{
 	type = "ssh"
        user = "ec2-user"
        private_key = tls_private_key.pri_key.private_key_pem
        host = aws_instance.TeraOS1.public_ip     
 }
 provisioner "remote-exec"{
   
       inline = [ 
                   "sudo yum install httpd git -y" ,
                   "sudo systemctl restart httpd",
                   "sudo systemctl enable httpd"
                 ]
 }
}



In order to create an EC2 instance, we'll have to specify things like

  • ami (like an ISO / disk image for the OS)
  • instance type
  • the security group that will control the machine's inbound and outbound traffic
  • the key pair we created above

You can provide more details like subnets and VPCs (a sketch follows), but I have tried to keep it as simple as possible.
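For example, pinning the instance to a specific subnet would look roughly like this (the subnet ID is hypothetical; note that inside a VPC you'd use vpc_security_group_ids, which takes group IDs, instead of security_groups, which takes names):

resource "aws_instance" "TeraOS2" {
  ami                    = "ami-0447a12f28fddb066"
  instance_type          = "t2.micro"
  subnet_id              = "subnet-0123456789abcdef0"  # hypothetical ID
  vpc_security_group_ids = [aws_security_group.SecGroupTera.id]
  key_name               = aws_key_pair.TeraKey.key_name
}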

Now we want to install some services on our instance. For that, we'll first have to log in to the instance and then run the commands for installing those services.

With the help of SSH we'll connect to the machine, and then to run those commands we'll use a provisioner.

Provisioner

It is used to run commands on a local or remote system, basic OS commands like ls, pwd, yum, etc. For a remote system, we'll have to provide access for Terraform to execute those commands on that system.

Every provisioner needs a resource to run in.
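As a tiny illustration, here is a local-exec provisioner (it runs on the machine executing Terraform) hanging off a null_resource, a resource type we'll meet properly a little later:

resource "null_resource" "hello" {
  provisioner "local-exec" {
    command = "echo hello from terraform"
  }
}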



Create an EBS volume and attach it to the EC2 instance

resource "aws_ebs_volume" "TeraOS1volume" {

  availability_zone = aws_instance.TeraOS1.availability_zone
  size = 1
  tags = {
      Name = "TeraOS1volume"
     }
}


resource "aws_volume_attachment" "ebs_att" {

  device_name = "/dev/sdk"
  volume_id   = aws_ebs_volume.TeraOS1volume.id
  instance_id = aws_instance.TeraOS1.id
  force_detach = true
}

Remember that in order to attach a volume to an EC2 instance they have to be in the same availability zone.

We get the value of the availability zone dynamically from the EC2 instance, so we can be sure they are in the same zone.
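If you want to see which zone Terraform picked, a small output block does the trick (my own addition; not required for the attachment to work):

output "instance_az" {
  value = aws_instance.TeraOS1.availability_zone
}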



The volume we have created has to be formatted (given a filesystem) and mounted before we can use it.

resource "null_resource" "null_rsrc"  {
  depends_on = [
      aws_volume_attachment.ebs_att,
  ]
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.pri_key.private_key_pem
    host = aws_instance.TeraOS1.public_ip
  }
  provisioner "remote-exec" {
      inline = [
        "sudo mkfs.ext4  /dev/xvdk",
        "sudo mount  /dev/xvdk  /var/www/html" ,
      ]
  }
}


Here again we are using a provisioner to run the Linux commands for formatting and mounting, and a connection block for connecting to the remote system. But what is null_resource?

Null_resource

We know that a provisioner requires a resource to run in, and the provisioner should fit the resource: if you add a provisioner to a resource, it should be related to it. But what if the provisioner is not related to any of the resources? That is when we use null_resource. A null_resource with no references has no dependencies of its own, so Terraform is free to run it at any point in the graph; but it might be the case that we need it to run only after some other resource is created. That's why we use depends_on: we list the resources that must exist before the null_resource starts executing.
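As an aside, null_resource also accepts a triggers map: change any value in it and its provisioners run again on the next apply. A minimal sketch:

resource "null_resource" "rerun_on_instance_change" {
  # Re-run the provisioner whenever the instance is replaced.
  triggers = {
    instance_id = aws_instance.TeraOS1.id
  }

  provisioner "local-exec" {
    command = "echo instance changed"
  }
}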

Create an S3 bucket

resource "aws_s3_bucket" "TerraformS3Bucket" {
  bucket = "terrabucketyuvi"
  acl    = "public-read"
}
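A quick note: this acl argument worked on the AWS provider version I used; on newer provider versions (v4 and above) the ACL moved to its own resource, roughly like this:

resource "aws_s3_bucket_acl" "bucket_acl" {
  bucket = aws_s3_bucket.TerraformS3Bucket.id
  acl    = "public-read"
}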



Add an image to the S3 bucket


resource "aws_s3_bucket_object" "pic" {

  depends_on = [ aws_s3_bucket.TerraformS3Bucket,]
  bucket = aws_s3_bucket.TerraformS3Bucket.id
  key    = "landscape.jpg"
  source = "C:/Users/yuvi/Downloads/landscape.jpg"
  acl    = "public-read"
  content_type = "image/jpg"
}

Adding an image to S3 requires:

  • the bucket id in which we'll store the object (image)
  • the name of the object (key)
  • the source (local path of the file)
  • the acl (whether it should be public or private)


Create a CDN for the S3 bucket (CloudFront)

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "access identity"
}


resource "aws_cloudfront_distribution" "s3_cloudFront" {
  origin {
    domain_name =  aws_s3_bucket.TerraformS3Bucket.bucket_regional_domain_name
    origin_id   = aws_s3_bucket.TerraformS3Bucket.id


    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }


  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "landscape.jpg"


  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = aws_s3_bucket.TerraformS3Bucket.id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }


  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = aws_s3_bucket.TerraformS3Bucket.id


    forwarded_values {
      query_string = false
 


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }


  
  price_class = "PriceClass_200"


  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "IN","CA", "GB", "DE"]
    }
  }


  tags = {
    Environment = "production"
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }

}


We can use the CloudFront service here in Terraform. This code might seem big, but if you know how to create a CDN using the AWS web UI, then you just have to copy the sample code from the Terraform site, fill in the necessary bits, and boom, your distribution is online.



Get the global link of the image we added to the S3 bucket

resource "null_resource" "StoringImgAddr" {

depends_on       = [aws_cloudfront_distribution.s3_cloudFront]
 provisioner "local-exec" {
    command = "echo https://${aws_cloudfront_distribution.s3_cloudFront.domain_name}/${aws_s3_bucket_object.pic.key} >> imgAddr.txt"
  }
}


Again, to run local system commands we use the local-exec provisioner, and as a resource we use null_resource. Here we simply build the universal (CloudFront) link for our image and append it to a text file so we can use it later.
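An alternative (or complement) to writing a file is exposing the URL as a Terraform output, so it gets printed at the end of every apply; a small sketch:

output "image_url" {
  value = "https://${aws_cloudfront_distribution.s3_cloudFront.domain_name}/${aws_s3_bucket_object.pic.key}"
}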


Create a simple HTML file and add it to a git repo

This step is fairly self-explanatory: create a simple index.html that embeds the CloudFront image link, and push it to a git repo.


Cloning it to our EC2 instance and running it

resource "null_resource" "addingGitRepo" {

depends_on       = [aws_cloudfront_distribution.s3_cloudFront , null_resource.StoringImgAddr]
connection{
 	type = "ssh"
        user = "ec2-user"
        private_key = tls_private_key.pri_key.private_key_pem
        host = aws_instance.TeraOS1.public_ip     
 }
 provisioner "remote-exec"{
   
       inline = [ 
                   "sudo rm -rf /var/www/html/*",
                   "sudo git clone https://github.com/yuvarajsm14/TerraformAutomate.git /var/www/html/" ,
                 ]
 }
}

Here I am cloning my repo into my web server's document root using the remote-exec provisioner, connecting to the remote system via the connection block. Now I'll just take my EC2 instance's IP address, go to my web browser, and open http://{ip address}/index.html (plain HTTP, since we only opened port 80 and didn't set up TLS).
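To save yourself a trip to the console for the IP, you can output it as well:

output "instance_public_ip" {
  value = aws_instance.TeraOS1.public_ip
}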


We have successfully created an EC2 instance, attached an EBS volume, created an S3 bucket and added an image to it, created a CloudFront distribution for that bucket, and then built a simple website using that image and hosted it on EC2.

It may all seem quite ordinary if we look at it. But the important thing is that we did it via code: if we want one more instance or one more S3 bucket, we can get it from this code. No need to go to the web UI and do it from there; just run the command terraform apply and you'll get what you want.

