Hybrid Computing Task 1

Why Cloud?

Many companies have a hard time maintaining their own data centers, and it's impractical for new startups to spend a huge amount on infrastructure up front. A data center means buying hardware with plenty of RAM, CPU & other necessary components, then hiring experts to set up and maintain the whole system. Security, electricity, etc. add to the expenditure.

To make things easy, many companies rely on cloud computing. Here, they just have to think about their work & not worry about unnecessary expenditure. Most cloud providers work on a pay-as-you-go model, which means startups don't need a huge upfront investment to set up their business.

Now that the cloud has you excited, let's answer the next question: how do I use the cloud?

Almost all major cloud providers offer a GUI console for a more user-friendly experience for new users. But most companies prefer the CLI because it allows more customization, and once you get used to it, the work is faster.

The solution

The solution lies in using a single tool that works across all the clouds. One such tool is Terraform. Terraform code looks similar for every cloud, and Terraform also keeps a record of everything it has created.

The Question

In this project, I have launched a web server on AWS using Terraform code.

Step 1: First of all, configure your AWS profile on your local system using the command prompt. Fill in your details & press Enter.

aws configure --profile naitik2
              AWS Access Key ID [****************NQTY]:
              AWS Secret Access Key [****************b/hJ]:
              Default region name [ap-south-1]:
              Default output format [None]:

Step 2: Use Terraform to launch an EC2 instance. Here, I used a Red Hat 8 AMI. I also installed and configured the Apache web server (httpd) using the remote-exec provisioner. I have used a pre-created key pair and security group; you may do the same, or build new ones (a sketch for the security group follows). Make sure the security group allows SSH on port 22 and HTTP on port 80.
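If you'd rather create the security group with Terraform instead of reusing an existing one, a minimal sketch could look like this (the resource name web_sg is my own and not part of the original setup; it assumes the default VPC, as the original does):

resource "aws_security_group" "web_sg" {
  name        = "web_sg"
  description = "Allow SSH and HTTP"

  # SSH access on port 22
  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access on port 80
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

The instance's security_groups argument would then reference aws_security_group.web_sg.name instead of "launch-wizard-1".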

The Terraform code is given below:

provider  "aws" {
        region   = "ap-south-1"
        profile  = "naitik2"
      }

      resource "aws_instance" "test_ins" {
        ami             =  "ami-052c08d70def0ac62"
        instance_type   =  "t2.micro"
        key_name        =  "newk11"
        security_groups =  [ "launch-wizard-1" ]

       connection {
          type     = "ssh"
          user     = "ec2-user"
          private_key = file("C:/Users/AAAA/Downloads/newk11.pem")
          host     = aws_instance.test_ins.public_ip
        }

        provisioner "remote-exec" {
          inline = [
            "sudo yum install httpd  php git -y",
            "sudo systemctl restart httpd",
            "sudo systemctl enable httpd",
            "sudo setenforce 0"
          ]
        }

        tags = {
          Name = "my_os"
        }
      }

Step 3: Create an EBS volume. Here, I have created a 1 GiB volume. One question that comes up here is that we don't know in advance which availability zone our instance will be launched in, but the EBS volume must be created in the same availability zone, otherwise it's not possible to attach it. To fix this, I read the instance's availability_zone attribute and used it here.

resource "aws_ebs_volume" "my_vol" {
        availability_zone  =  aws_instance.test_ins.availability_zone
        size               =  1

        tags = {
          Name = "my_ebs"
        }
      }

Step 4: Now attach your created EBS volume to your instance.

resource "aws_volume_attachment"  "ebs_att" {
        device_name  = "/dev/sdd"
        volume_id    = "${aws_ebs_volume.my_vol.id}"
        instance_id  = "${aws_instance.test_ins.id}"
        force_detach =  true
      }

I have also retrieved the public IP of my instance and stored it in a file locally as it may be used later.

resource "null_resource" "ip_store"  {
        provisioner "local-exec" {
            command = "echo  ${aws_instance.test_ins.public_ip} > public_ip.txt"
          }
      }
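Alternatively, a Terraform output can expose the public IP at the end of every apply without shelling out to echo; a small sketch (my addition, not part of the original code):

output "instance_public_ip" {
  # Printed by terraform apply and available via `terraform output`
  value = aws_instance.test_ins.public_ip
}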

Step 5: Now, we need to format the EBS volume and mount it on /var/www/html so that its contents are served by the Apache web server. Note that the volume attached as /dev/sdd shows up inside the instance as /dev/xvdd, which is why that device name is used below. I have also cloned the website code from GitHub into the same location.

resource "null_resource" "mount"  {

    depends_on = [
        aws_volume_attachment.ebs_att,
      ]


      connection {
        type     = "ssh"
        user     = "ec2-user"
        private_key = file("C:/Users/AAAA/Downloads/newk11.pem")
        host     = aws_instance.test_ins.public_ip
      }

    provisioner "remote-exec" {
        inline = [
          "sudo mkfs.ext4  /dev/xvdd",
          "sudo mount  /dev/xvdd  /var/www/html",
          "sudo rm -rf /var/www/html/*",
          "sudo git clone https://github.com/naitik2pnd23/Cloud_task1.git /var/www/html/"
        ]
      }
    }

I have also cloned all the code & images from GitHub to my local system so that I can automate the upload of the images to S3 later.

    resource "null_resource" "git_copy"  {
      provisioner "local-exec" {
        command = "git clone https://github.com/naitik2pnd23/Cloud_task1.git C:/Users/AAAA/Pictures/" 
        }
    }

Step 6: Now, we create an S3 bucket on AWS. The code snippet for doing the same is as follows -

resource "aws_s3_bucket" "sp_bucket" {
        bucket = "naitik23"
        acl    = "private"

        tags = {
          Name        = "naitik2314"
        }
      }
       locals {
          s3_origin_id = "myS3Origin"
        }

Step 7: Now that the S3 bucket has been created, we will upload the images that we cloned from GitHub to our local system in the step above. Here, I have uploaded just one picture; you can upload more if you wish.

resource "aws_s3_bucket_object" "object" {
          bucket = "${aws_s3_bucket.sp_bucket.id}"
          key    = "test_pic"
          source = "C:/Users/naitik/Pictures/pic1.jpg"
          acl    = "public-read"
        }
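Since this file is produced by the local git clone in the earlier step, Terraform doesn't automatically know about that ordering. A variant of the same resource with an explicit depends_on (a sketch only, keeping the original paths) makes the ordering explicit:

resource "aws_s3_bucket_object" "object" {
  # Wait for the local git clone so the source file exists before uploading
  depends_on = [null_resource.git_copy]

  bucket = aws_s3_bucket.sp_bucket.id
  key    = "test_pic"
  source = "C:/Users/naitik/Pictures/pic1.jpg"
  acl    = "public-read"
}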

Step 8: Now, we create a CloudFront distribution and link it to our S3 bucket. CloudFront ensures fast delivery of content around the world by leveraging AWS's edge locations.

resource "aws_cloudfront_distribution" "my_front" {
         origin {
               domain_name = "${aws_s3_bucket.sp_bucket.bucket_regional_domain_name}"
               origin_id   = "${local.s3_origin_id}"

       custom_origin_config {

               http_port = 80
               https_port = 80
               origin_protocol_policy = "match-viewer"
               origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"] 
              }
            }
               enabled = true

       default_cache_behavior {

               allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
               cached_methods   = ["GET", "HEAD"]
               target_origin_id = "${local.s3_origin_id}"

       forwarded_values {

             query_string = false

       cookies {
                forward = "none"
               }
          }

                viewer_protocol_policy = "allow-all"
                min_ttl                = 0
                default_ttl            = 3600
                max_ttl                = 86400

      }
        restrictions {
               geo_restriction {
                 restriction_type = "none"
                }
           }
       viewer_certificate {
             cloudfront_default_certificate = true
             }
      }
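To make the next (still manual) step easier, an output like this (my addition, not part of the original code) prints the distribution's domain name after apply, which is the URL prefix to use for the images:

output "cloudfront_domain" {
  # Domain name of the CloudFront distribution, e.g. dxxxxxxxx.cloudfront.net
  value = aws_cloudfront_distribution.my_front.domain_name
}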

Now, we go to /var/www/html & replace the image links with the CloudFront link. As of now, only this part is manual in my project. I'm trying my best to automate it & will update here as soon as I succeed. Any help in this regard is very welcome.
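One possible way to automate it (a sketch only; it assumes the home page is index.html and that it references pic1.jpg, neither of which is confirmed here) is another remote-exec provisioner that rewrites the image URL with sed once CloudFront is up:

resource "null_resource" "update_links" {
  # Run only after the site is mounted and the CloudFront distribution exists
  depends_on = [
    null_resource.mount,
    aws_cloudfront_distribution.my_front,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/AAAA/Downloads/newk11.pem")
    host        = aws_instance.test_ins.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # Point the image at the object served through CloudFront (object key: test_pic)
      "sudo sed -i 's|pic1.jpg|https://${aws_cloudfront_distribution.my_front.domain_name}/test_pic|g' /var/www/html/index.html"
    ]
  }
}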

Step 9: Now, we write a Terraform snippet to automatically retrieve the public IP of our instance and open it in Chrome. This lands us on the home page of the website served from /var/www/html.

resource "null_resource" "local_exec"  {


        depends_on = [
            null_resource.mount,
          ]

          provisioner "local-exec" {
              command = "start chrome  ${aws_instance.test_ins.public_ip}"
                 }
        }

Finally, run terraform init and then terraform apply; once it completes, you'll see your home page open up.

Any suggestions are always welcome.
