Integrating AWS with Terraform and GitHub to Launch an Application (Fully Automated)

Most of us have already heard of AWS and GitHub. But Terraform might be a new term for many.

What is Terraform?

Terraform is an open source tool created by HashiCorp and written in the Go programming language. The Go code compiles down into a single binary (or rather, one binary for each of the supported operating systems) called, not surprisingly, terraform.

We create Terraform configurations, which are text files that specify the infrastructure we wish to create. These configurations are the “code” in “infrastructure as code”.

Terraform is a standardized tool to manage the cloud. Plugins (called providers) are what make Terraform intelligent. Whatever Terraform creates for you, it knows everything about: which service was created, which service depends on another, and, even if you write the code across different files with the .tf extension, the sequence in which it should run. Terraform stores all of this in a single state file (terraform.tfstate) and keeps track of everything from that file.
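To make that concrete, here is a minimal sketch of how Terraform infers ordering (the resource names here are purely illustrative, not part of this project): because the Elastic IP below references the instance's ID, Terraform knows it must create the instance first, and it records both resources in terraform.tfstate.

// Illustrative sketch: Terraform infers the dependency by itself

resource "aws_instance" "example" {
    ami           = "ami-0447a12f28fddb066"   # any valid AMI for your region
    instance_type = "t2.micro"
}

resource "aws_eip" "example_ip" {
    instance = "${aws_instance.example.id}"   # this reference makes the EIP depend on the instance
}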

Task Description - Create/Launch Application using Terraform:

  1. Create a key pair and a security group that allows port 80 for the webserver.
  2. Launch an EC2 instance.
  3. In this EC2 instance, use the key and security group created in step 1.
  4. Launch one EBS volume and mount it onto /var/www/html.
  5. The developer has uploaded the code into a GitHub repo, which also contains some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change the permission to public readable.
  8. CloudFront: copy static data (images, videos, PDFs, etc.) from S3 to edge locations through the content delivery network.
  9. Create a snapshot of the EBS volume.

Before getting started, the system has to be armed with:

  1. Git
  2. AWS-CLI
  3. Terraform


Step 1


We need to configure the credentials file that Terraform will use to access my AWS account (this can be done with aws configure --profile ankushsr).


The profile name that I have given is "ankushsr", and I have provided it to the Terraform AWS provider so that it can log in to my account.

// Setting up the cloud provider

provider "aws" {
    region = "ap-south-1"
    profile = "ankushsr"
    }
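Depending on your Terraform version, you can also pin the provider so that future upgrades don't surprise you. On Terraform 0.13+ this looks roughly like the sketch below (the version constraint is just an example):

// Optional: pin the AWS provider (sketch, Terraform 0.13+)

terraform {
    required_providers {
        aws = {
            source  = "hashicorp/aws"
            version = "~> 2.0"
        }
    }
}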


Step 2


We'll now have to create our own key and security group.

// Creating a keypair
 
resource "tls_private_key" "tf_key" {
    algorithm = "RSA"
    rsa_bits = 4096
    }
    
resource "aws_key_pair" "newkey" {
    key_name = "tfkey"
    public_key = "${tls_private_key.tf_key.public_key_openssh}"
}

resource "local_file" "key_file" {
    content = "${tls_private_key.tf_key.private_key_pem}"
    filename = "tfkey.pem"
    }
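A small optional hardening, assuming a reasonably recent version of the local provider: local_file accepts a file_permission argument, so the key file is created read-only for the owner and SSH won't reject it as too open.

// Variant of the resource above with restrictive file permissions (sketch)

resource "local_file" "key_file" {
    content         = "${tls_private_key.tf_key.private_key_pem}"
    filename        = "tfkey.pem"
    file_permission = "0400"   # SSH refuses private keys that are world-readable
}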




Now that we have created the key pair and stored it in a file, we need to create a security group. It allows port 80 so that we can reach our webpage over HTTP, and also port 22 so that we can connect to the instance over SSH whenever we need to.

// Creating a security group which would allow port 80 and 22

resource "aws_security_group" "allow_port80" {
  name        = "allow_80port"
  description = "Allow inbound traffic"
  vpc_id      = "vpc-41796629"

  ingress {
    description = "SSH Config"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP config"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
 
 
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "my_secgrp_allow_port80"
  }
}
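Note that the vpc_id above is hard-coded to my account. If you would rather not hard-code it, a data source can look up the default VPC (a sketch, assuming the default VPC is the one you want):

// Optional: look up the default VPC instead of hard-coding its ID (sketch)

data "aws_vpc" "default" {
    default = true
}

// then, inside the security group, use:
// vpc_id = "${data.aws_vpc.default.id}"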



Step 3


Now, it's time to launch our own EC2 Instance by using the key pair and security group we created.

Here, we are going to configure the instance that will act as the OS for our webserver. I have used Amazon Linux 2 because it comes with many things pre-configured (systemd and systemctl, for example). After launching the instance, we attach the security group and key pair we created before. We also install httpd, php and git for later use, and run the enable command on the httpd service so that the webserver starts by default whenever the instance boots.

//Launching a new EC2 instance with our already created security group and key pair 

resource "aws_instance" "my_web_instance" {
    ami = "ami-0447a12f28fddb066"
    instance_type = "t2.micro"
    availability_zone = "ap-south-1a"
    key_name = "${aws_key_pair.newkey.key_name}"
    security_groups = ["${aws_security_group.allow_port80.name}"]
    
    connection{
        type = "ssh"
        port = 22
        user = "ec2-user"
        private_key = "${tls_private_key.tf_key.private_key_pem}"
        host = "${aws_instance.my_web_instance.public_ip}"
    }

    provisioner "remote-exec"{
    inline = [
    "sudo yum install httpd php git -y",
    "sudo systemctl restart httpd",
    "sudo systemctl enable httpd",
    ]
    }
    
    tags = {
        Name="my_web"
    }
    }     
    


Hurray! Now we have a brand new Amazon Linux 2 instance up and running!
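A small optional addition: an output block prints the instance's public IP after terraform apply, so we don't have to dig through the AWS console to test the webserver.

// Optional: print the public IP after apply (sketch)

output "instance_public_ip" {
    value = "${aws_instance.my_web_instance.public_ip}"
}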


Step 4


In AWS, EBS (Elastic Block Store) is a block storage service. That is exactly what we are about to launch in this step.

// Launching a EBS and mounting it

resource "aws_ebs_volume" "my_tf_vol" {
    availability_zone = "${aws_instance.my_web_instance.availability_zone}"
    size = 1
    
    tags = {
    name = "tf_vol"
    }
}

resource "aws_volume_attachment" "my_tf_vol_attach" {
    device_name = "/dev/sdf"
    volume_id = "${aws_ebs_volume.my_tf_vol.id}"
    instance_id = "${aws_instance.my_web_instance.id}"
    force_detach = true
    }

Now that the device is attached, we are going to format the EBS volume, mount it, and clone our Git repository inside it.

// Mounting The EBS to the EC2 and cloning the repo inside the instance
    
resource "null_resource" "ebs_mount"{

    depends_on = [
        aws_volume_attachment.my_tf_vol_attach,aws_instance.my_web_instance
        ]
    connection {
        type = "ssh"
        port = 22
        user = "ec2-user"
        private_key = "${tls_private_key.tf_key.private_key_pem}"
        host = "${aws_instance.my_web_instance.public_ip}"
    }
    provisioner "remote-exec"{
    
    inline = [
                "sudo mkfs.ext4 /dev/sdf",
                "sudo mount /dev/sdf /var/www/html",
                "sudo rm -rf /var/www/html/*",
                "sudo git clone https://github.com/Ankushsr20/Cloud-Task1.git /var/www/html/"
                ]
}
}
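One caution about the block above: mkfs.ext4 re-formats the volume every time the provisioner runs, wiping whatever was on it. A hedged variant (again, the device may appear as /dev/xvdf on some AMIs) formats only when the device has no filesystem yet:

// Variant of the remote-exec above: format only when the device is blank (sketch)

    provisioner "remote-exec" {
        inline = [
            "sudo blkid /dev/sdf || sudo mkfs.ext4 /dev/sdf",   # blkid fails if no filesystem exists yet
            "sudo mount /dev/sdf /var/www/html"
        ]
    }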



Step 5


In this step, we are going to create a public S3 bucket and upload an image into it, which will be used by our webserver.

// Creating a S3 bucket and adding a image inside the bucket

resource "aws_s3_bucket" "tfbucket" {
    bucket = "tfawsbucket"
    acl = "public-read"
    }
    
resource "aws_s3_bucket_object" "tfbucketobject" {
    bucket = "${aws_s3_bucket.tfbucket.bucket}"
    key = "image.jpg"
    source = "C:/Users/KIIT/Desktop/Terraform/task1/image.jpg"
    acl = "public-read"
    }


Thus, a new public bucket has been created and the image.jpg file has been uploaded into it. This image will be used in the webpage.
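One thing to watch out for: S3 bucket names are globally unique across all AWS accounts, so "tfawsbucket" may already be taken when you run this. A sketch of one workaround, using the provider's bucket_prefix argument to let AWS generate a unique name:

// Variant of the bucket above with a generated unique name (sketch)

resource "aws_s3_bucket" "tfbucket" {
    bucket_prefix = "tfawsbucket-"   # AWS appends a random suffix to guarantee uniqueness
    acl           = "public-read"
}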


Step 6


Now, we need a CloudFront distribution: a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally, with low latency and high transfer speeds, all within a developer-friendly environment. It serves content from edge locations, which makes the delivery of our webpage faster.

//Creating a cloudfront distribution

locals {
    s3_origin_id = aws_s3_bucket.tfbucket.id
}

resource "aws_cloudfront_distribution" "cloudfronttf" {
    depends_on = [
        aws_s3_bucket_object.tfbucketobject,
    ]

    origin {
        domain_name = "${aws_s3_bucket.tfbucket.bucket_regional_domain_name}"
        origin_id   = "${local.s3_origin_id}"
    }

    enabled         = true
    is_ipv6_enabled = true
    comment         = "Cloud Front S3 distribution"

    default_cache_behavior {
        allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
        cached_methods   = ["GET", "HEAD"]
        target_origin_id = local.s3_origin_id

        forwarded_values {
            query_string = false
            cookies {
                forward = "none"
            }
        }

        viewer_protocol_policy = "allow-all"
        min_ttl                = 0
        default_ttl            = 3600
        max_ttl                = 86400
    }

    restrictions {
        geo_restriction {
            restriction_type = "whitelist"
            locations        = ["IN"]
        }
    }

    tags = {
        Name        = "my_webserver1"
        Environment = "production_main"
    }

    viewer_certificate {
        cloudfront_default_certificate = true
    }

    retain_on_delete = true
}

CloudFront created!!
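As with the instance IP, an output block can surface the distribution's domain name, which is exactly the value the next step injects into the webpage:

// Optional: print the CloudFront domain name after apply (sketch)

output "cloudfront_domain" {
    value = "${aws_cloudfront_distribution.cloudfronttf.domain_name}"
}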


Step 7


Changing the webpage code to use the S3 image, for faster and optimized delivery: here we are going to edit the code of our webpage and replace its image with the one from the S3 bucket, served through CloudFront for faster delivery and lower latency of our webpage. Interesting, right?!

// Changing the S3 image inside the webpage code


        resource "null_resource"  "null" {
        depends_on = [
        aws_instance.my_web_instance,aws_cloudfront_distribution.cloudfronttf
        ]
    connection {
        type = "ssh"
        port = 22
        user = "ec2-user"
        private_key = "${tls_private_key.tf_key.private_key_pem}"
        host = "${aws_instance.my_web_instance.public_ip}"
    }
    provisioner "remote-exec"{
    
    inline = [
                "sudo su << EOF",
                "echo '<img src='https://${aws_cloudfront_distribution.cloudfronttf.domain_name}/tfaws.jpg' height = '400px' width='400px'' >> /var/www/html/webpage.html ",
                "EOF"
    ]
    
    }
  
  }


Step 8


Final step: Creating EBS Snapshots

// Creating EBS snapshot

resource "aws_ebs_snapshot" "tf_snapshot" {
    volume_id = "${aws_ebs_volume.my_tf_vol.id}"
    
    tags = {
    Name = "My_TF_SNAPSHOT"
    }
}
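One note on ordering: as written, Terraform may take this snapshot before the provisioners have cloned the repo onto the volume, since nothing ties the two together. If you want the snapshot to capture the website code, a depends_on forces the ordering (a sketch):

// Variant of the snapshot above that waits for the repo clone (sketch)

resource "aws_ebs_snapshot" "tf_snapshot" {
    volume_id  = "${aws_ebs_volume.my_tf_vol.id}"
    depends_on = [null_resource.ebs_mount]   # snapshot only after the code is on the volume

    tags = {
        Name = "My_TF_SNAPSHOT"
    }
}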


Now, finally, our code is complete!! It might look quite lengthy, but that's how powerful Terraform is!

To initialize the plugins, use the command:
> terraform init
To check the code:
> terraform validate
To run and create everything:
> terraform apply -auto-approve
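Optionally, to preview what Terraform will create before actually applying:
> terraform plan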

Now, we can view the webpage that we coded. I made it quite minimal, nothing fancy :")



Okay, so... success! Our webpage is up and healthy!

I'm not really sad to break it to all of you that, just as a single command was enough to create such a powerful setup, a single command is enough to delete the whole setup: terraform destroy.


Okay, so the whole setup is destroyed. Nothing to worry about; a single command will bring back the same setup again!

Hope you all liked it! I will try to move into deeper integrations in the near future.

The following is the link to my github repository:

https://github.com/Ankushsr20/Cloud-Task1

For any doubts, feel free to contact me.


Thank you for reading :")