TASK-2 HybridMultiCloud

Launch an EC2 Instance with EFS Using Terraform

Prerequisites: AWS account, AWS CLI, Terraform

In this article, I'm going to launch an EC2 instance with an EFS file system mounted to it.

So what is EFS?

In simple words, it's a file storage system.

But we need the elaborate words, so here we go:

"Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS has a simple web services interface that allows you to create and configure file systems quickly and easily. The service manages all the file storage infrastructure for you, meaning that you can avoid the complexity of deploying, patching, and maintaining complex file system configurations."

So it's a network file system that serves NFS traffic on port 2049. Remember that port number, because we'll need it when we open the security group.

The Process Begins:
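Before any of the resources below can be created, Terraform needs an AWS provider configuration. The article doesn't show it, so here is a minimal sketch, assuming your credentials live in an AWS CLI profile named "default" and using the Mumbai region that the instance is launched in later:

// Configuring the AWS provider (assumed setup; adjust the profile and region to yours)
provider "aws" {
  region  = "ap-south-1" # Mumbai, matches the availability zone used below
  profile = "default"    # AWS CLI profile holding your credentials
}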

STEP 1: We need to generate a key pair for our instance and save the private key on our local system so that we can use it for SSH.

// Creating the key pair and saving the private key to disk
resource "tls_private_key" "mykey" {
  algorithm = "RSA"
}

resource "aws_key_pair" "key1" {
  key_name   = "key3"
  public_key = tls_private_key.mykey.public_key_openssh
}

resource "local_file" "key_pair_save" {
  content  = tls_private_key.mykey.private_key_pem
  filename = "key.pem"
}
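After terraform apply, the private key is written to key.pem next to the configuration. If you ever want to SSH into the instance manually (using the public IP from the later steps), restrict the key's permissions first; a quick sketch for Linux/macOS:

// Using the saved key for manual SSH
chmod 400 key.pem                   # ssh refuses keys that are world-readable
ssh -i key.pem ec2-user@<public-ip>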

STEP 2: Now that we have created a key, we need a security group for the instance that allows port 2049, as I mentioned earlier, and we also need to open the ports for HTTP and SSH.

// Creating the security group and allowing HTTP, SSH and NFS (EFS)
resource "aws_security_group" "sec-grp" {
  depends_on = [
    tls_private_key.mykey,
    local_file.key_pair_save,
    aws_key_pair.key1
  ]

  name        = "Allowing SSH and HTTP"
  description = "Allow ssh & http connections"

  ingress {
    description = "Allowing connection for SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allowing connection for HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allowing connection for NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "Web-Server"
  }
}
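To double-check that the rules landed as expected, the security group can be inspected with the AWS CLI from the prerequisites; an optional check, assuming a configured default profile:

// Inspecting the security group from the AWS CLI
aws ec2 describe-security-groups --filters "Name=group-name,Values=Allowing SSH and HTTP"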

STEP 3: We need to create the EFS file system.

// Launching the EFS file system
resource "aws_efs_file_system" "myefs" {
  creation_token   = "my-product"
  performance_mode = "generalPurpose"
  encrypted        = true

  tags = {
    Name = "MyEFS"
  }
}
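Once applied, you can confirm that the file system has reached the available state from the AWS CLI; a small optional check, not part of the original flow:

// Verifying the file system from the AWS CLI
aws efs describe-file-systems --query "FileSystems[].{Id:FileSystemId,State:LifeCycleState}"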

STEP 4: Since the EFS file system is created, we need a mount target so that EC2 instances can mount it. A mount target lives in exactly one subnet (and hence one availability zone), so we give it the file system ID of the EFS, the security group we created earlier, and the subnet ID of the instance, which guarantees both end up in the same zone.

// Creating the EFS mount target
resource "aws_efs_mount_target" "alpha" {
  depends_on = [
    aws_efs_file_system.myefs,
    aws_security_group.sec-grp
  ]

  file_system_id  = aws_efs_file_system.myefs.id
  security_groups = [aws_security_group.sec-grp.id]
  subnet_id       = aws_instance.web1.subnet_id
}

STEP 5: Now we are going to launch the EC2 instance with all the required parameters, such as the key and the security group. I'm using Amazon Linux 2 with instance type t2.micro in an availability zone of the Mumbai region.

// Launching the instance
resource "aws_instance" "web1" {
  depends_on = [
    tls_private_key.mykey,
    aws_key_pair.key1,
    local_file.key_pair_save,
    aws_security_group.sec-grp,
  ]

  ami               = "ami-0732b62d310b80e97" # Amazon Linux 2 in ap-south-1
  instance_type     = "t2.micro"
  key_name          = aws_key_pair.key1.key_name # same value as the literal "key3"
  availability_zone = "ap-south-1a"
  security_groups   = [aws_security_group.sec-grp.name]

  tags = {
    Name = "Web-Server"
  }
}
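It's handy to have Terraform print the instance's public IP once the apply finishes; a small optional output, not in the original article:

// Optional: print the public IP after apply
output "web1_public_ip" {
  value = aws_instance.web1.public_ip
}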

STEP 6: Here I have created a null resource for installing the software, configuring the web server, and mounting the EFS file system onto the instance at /var/www/html. I have used the sleep command because the EFS mount target needs some time to become available, so the provisioner waits 90 seconds before mounting.

// Configuring the web server and mounting EFS
resource "null_resource" "null1" {
  depends_on = [
    aws_efs_mount_target.alpha,
    aws_instance.web1
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mykey.private_key_pem
    host        = aws_instance.web1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git amazon-efs-utils -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
      "sleep 90",
      "sudo mount -t efs -o tls ${aws_efs_file_system.myefs.id}:/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/ashutosh5786/for-ec2.git /var/www/html"
    ]
  }
}
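To confirm the mount worked, you can SSH into the instance (ssh -i key.pem ec2-user@<public-ip>) and look for the NFS filesystem; a quick check:

// Checking the EFS mount from inside the instance
df -hT | grep nfs4         # the EFS mount shows up as an nfs4 filesystem
mount | grep /var/www/html # shows the mount options actually in effect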

STEP 7: Now I'm going to clone the GitHub repository to my local system so that I can upload an image from it to the S3 bucket.

// Downloading the images from GitHub to a local directory
resource "null_resource" "null2" {
  provisioner "local-exec" {
    command = "git clone https://github.com/ashutosh5786/for-ec2.git ./image"
  }
}

STEP 8: Now we need to create the S3 bucket with a public-read ACL (access control list).

// Creating the S3 bucket
resource "aws_s3_bucket" "my-s3" {
  bucket = "ashutosh-bucket-s3-for-task2"
  acl    = "public-read"

  tags = {
    Name = "My bucket"
  }
}
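A note if you follow along today: on version 4 and later of the Terraform AWS provider, the acl argument on aws_s3_bucket is deprecated and split into its own resource. The equivalent on a newer provider would be roughly:

// On AWS provider v4+, the ACL is a separate resource
resource "aws_s3_bucket_acl" "my-s3-acl" {
  bucket = aws_s3_bucket.my-s3.id
  acl    = "public-read"
}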

STEP 9: Uploading the file to the S3 bucket and making it public.

// Uploading the file to the bucket
resource "aws_s3_bucket_object" "object" {
  depends_on = [
    null_resource.null2,
    aws_s3_bucket.my-s3
  ]

  bucket = aws_s3_bucket.my-s3.bucket
  key    = "img.png"
  source = "./image/12.png"
  acl    = "public-read"
}

STEP 10: As we no longer need the GitHub repository on the local system, we are going to delete it.

// Deleting the image from the local directory
resource "null_resource" "null3" {
  depends_on = [
    aws_s3_bucket_object.object
  ]

  provisioner "local-exec" {
    # RMDIR is the Windows shell command; on Linux/macOS use "rm -rf image"
    command = "RMDIR /Q/S image"
  }
}

STEP 11: Now we are at the last stage of the infrastructure, and for that we need to create the CloudFront distribution with its origin set to the S3 bucket that we created.

// Creating the CloudFront distribution
locals {
  s3_origin_id = "myS3Origin"
}

resource "aws_cloudfront_distribution" "distribution" {
  depends_on = [
    aws_s3_bucket.my-s3
  ]

  origin {
    domain_name = aws_s3_bucket.my-s3.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
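To grab the distribution's domain name without digging through the console, an optional output (not in the original article) can be added:

// Optional: print the CloudFront domain name after apply
output "cdn_domain" {
  value = aws_cloudfront_distribution.distribution.domain_name
}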


STEP 12: Since we have created the CloudFront distribution for the S3 bucket, we no longer want the old image URL in the web page; we need to replace it with the new URL that we get through CloudFront.

Here I use the Linux sed command for the replacement: sed -i 's/old/new/g' file substitutes every occurrence of old with new, editing the file in place.

// Updating the URL in the HTML file
resource "null_resource" "null4" {
  depends_on = [
    aws_cloudfront_distribution.distribution
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mykey.private_key_pem
    host        = aws_instance.web1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "cd /var/www/html",
      "sudo sed -i 's/12.png/https:\\/\\/${aws_cloudfront_distribution.distribution.domain_name}\\/img.png/g' index.html"
    ]
  }
}

STEP 13: Now, as the last step, we can open the URL where our web server is running; for that we use the public IP of the instance.

// Opening the URL in the web browser
resource "null_resource" "null5" {
  depends_on = [
    null_resource.null4
  ]

  provisioner "local-exec" {
    # assumes "chrome" is on the PATH; use "start chrome" on Windows or "xdg-open" on Linux
    command = "chrome ${aws_instance.web1.public_ip}"
  }
}
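With all of the above in one configuration, the whole stack comes up and goes away with the standard Terraform workflow:

// Building and tearing down the whole stack
terraform init    # install the aws, tls, local and null providers
terraform apply   # create key, security group, EFS, EC2, S3 and CloudFront
terraform destroy # remove everything when you are done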

Here I attach the screenshots of the above work.


THAT'S ALL FOR THIS TASK

For the full code, check out my GitHub repository.

