Hybrid Multi Cloud Task-1

Launching a webserver on AWS using Terraform code.

Steps required to launch the app using Terraform:

1. Create the key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. For this EC2 instance, use the key pair and security group created in step 1.

4. Launch one EBS volume and mount it on /var/www/html.

5. The developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public-readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
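Before any of these resources can be declared, Terraform needs an AWS provider configuration. This block is not shown in the article, so here is a minimal sketch; the region and profile name are assumptions and should match your own AWS CLI setup:

```hcl
# Minimal AWS provider configuration (assumed values — substitute
# your own region and named CLI profile).
provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}
```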

The process required to launch the app, in detail:

Step 1: Create an account on AWS (Amazon Web Services). In the EC2 console, under Network & Security → Key Pairs, create a key and download the private key file for login purposes.
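Here the key pair was created manually in the console, but it can also be generated by Terraform itself. A possible sketch (the resource names are illustrative, not from the article):

```hcl
# Generate an RSA key locally and register its public half with AWS.
resource "tls_private_key" "task_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "task2" {
  key_name   = "task2"
  public_key = tls_private_key.task_key.public_key_openssh
}

# Save the private key to disk so it can be used for SSH later.
resource "local_file" "private_key" {
  content  = tls_private_key.task_key.private_key_pem
  filename = "task2.pem"
}
```

This requires the `tls` and `local` providers in addition to `aws`.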


Step 2: Create a security group allowing the SSH and HTTP protocols.

resource "aws_security_group" "allow_my_http" {
  name        = "launch-wizard-7"
  description = "Allow my HTTP SSH inbound traffic"
  vpc_id      = "vpc-dbe1fcb3"

  # Allow HTTP from anywhere so the webserver is publicly reachable
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow SSH so we can connect and configure the instance
  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "httpsecurity"
  }
}


Step 3: Launch an EC2 instance. The Terraform code for launching the instance is as follows.

resource "aws_instance" "os" {
  ami             = "ami-07db4adf15d7719d1"
  instance_type   = "t2.micro"
  key_name        = "task2"
  security_groups = [ "launch-wizard-7" ]
}

Step 4: Now launch an EBS volume with a size of 1 GiB and attach it to the EC2 instance.

resource "aws_ebs_volume" "ebs_vol" {
  # Create the volume in the same AZ as the instance — EBS is AZ-bound
  availability_zone = aws_instance.os.availability_zone
  size              = 1

  tags = {
    Name = "myfirstos"
  }
}

resource "aws_volume_attachment" "Ebs_Att" {
  device_name  = "/dev/sdf"
  volume_id    = aws_ebs_volume.ebs_vol.id
  instance_id  = aws_instance.os.id
  force_detach = true
}


output "myos_ip" {
  value = aws_instance.os.public_ip
}

Step 5: Mount that volume on /var/www/html. This requires an SSH connection to the EC2 instance, and a "remote-exec" provisioner to run the commands on it. The code is as follows.

resource "null_resource" "nullremote1" {
  # Run only after the EBS volume has been attached
  depends_on = [
    aws_volume_attachment.Ebs_Att,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Admin/Downloads/task2.pem")
    host        = aws_instance.os.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdf",
      "sudo mount /dev/xvdf /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/dighetushar654/Cloud_Task1.git /var/www/html/"
    ]
  }
}

→ ‘sudo mkfs.ext4’ formats the volume and ‘sudo mount’ mounts it on /var/www/html; note that a volume attached as /dev/sdf shows up as /dev/xvdf inside Xen-based instance types such as t2.micro. ‘sudo git clone’ copies the GitHub repo code into /var/www/html.

Step 6: Create a new repository on the GitHub account and upload ‘index.html’ to it. It is used to serve my webpage.


Step 7: Create a bucket in the S3 service on AWS, deploy the images from the GitHub repo into the bucket, and modify the permissions to public-readable.

resource "aws_s3_bucket" "job171" {
  bucket = "job171"
  acl    = "public-read"

  tags = {
    Name = "job171"
  }

  versioning {
    enabled = true
  }
}


resource "aws_s3_bucket_object" "s3object" {
  bucket = aws_s3_bucket.job171.id
  key    = "download.png"
  source = "C:/Users/Admin/Pictures/download.png"
  acl    = "public-read"   # make the image publicly readable, as step 7 requires
}

Step 8: Create a CloudFront distribution using the S3 bucket, and use the CloudFront URL to update the code. The code is as follows.

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "This is origin access identity"
}

resource "aws_cloudfront_distribution" "imgcf" {
  origin {
    domain_name = "job171.s3.amazonaws.com"
    origin_id   = "S3-job171"

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }

  enabled         = true
  is_ipv6_enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-job171"

    # Do not forward query strings or cookies to the origin
    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 10
    max_ttl                = 30
  }

  # Restricts who is able to access this content
  restrictions {
    geo_restriction {
      # Type of restriction: blacklist, whitelist, or none
      restriction_type = "none"
    }
  }

  # SSL certificate for the service
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
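The article does not show the final update step ("use the CloudFront URL to update the code") as Terraform code. One possible sketch, using another remote-exec provisioner to append an image tag pointing at the distribution's domain name; the resource name, target file path, and image key are assumptions:

```hcl
resource "null_resource" "update_webpage" {
  # Wait until the CloudFront distribution exists
  depends_on = [aws_cloudfront_distribution.imgcf]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Admin/Downloads/task2.pem")
    host        = aws_instance.os.public_ip
  }

  # Append an <img> tag that loads download.png through CloudFront
  provisioner "remote-exec" {
    inline = [
      "echo \"<img src='https://${aws_cloudfront_distribution.imgcf.domain_name}/download.png'>\" | sudo tee -a /var/www/html/index.html"
    ]
  }
}
```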

→ The created distribution can be verified in the CloudFront console on AWS.


→ The command “terraform apply --auto-approve” applies our ‘.tf’ code without prompting for confirmation.
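For completeness, the working directory has to be initialized first so that the provider plugins are downloaded; a typical command sequence (not shown in the article) looks like this:

```shell
terraform init                    # download provider plugins for this configuration
terraform validate                # check the configuration for syntax errors
terraform apply --auto-approve    # create all resources without prompting
terraform destroy --auto-approve  # tear everything down when finished
```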



→ The webpage will then be displayed using the instance's public IP address (http://35.154.206.235/). Note that it is served over plain HTTP on port 80, not HTTPS.

For complete code, go through my Github URL https://github.com/dighetushar654/Cloud_Task1.
