Hybrid Multi Cloud Task 2: Creating AWS infrastructure using EFS by Terraform

Amazon Elastic File System (EFS):

Amazon Elastic File System is a cloud storage service provided by Amazon Web Services, designed to provide scalable, elastic, concurrent (with some restrictions), and encrypted file storage for use with both AWS cloud services and on-premises resources.

Amazon EFS is a regional service that stores data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.

Task 2: We have to create/launch an application using Terraform

1. Create the key and a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one volume (EFS) and mount it onto /var/www/html.

5. The developer has uploaded the code to a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Prerequisites for this task:

  1. Set up Terraform on your OS.
  2. Have some knowledge of basic Linux commands.
  3. Have an account on AWS.
  4. Know a few basic Terraform commands, shown below:

 terraform apply   -> builds or changes the infrastructure

 terraform destroy -> destroys infrastructure created by Terraform
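
Before the first apply, the working directory also has to be initialized so Terraform can download the AWS provider plugin. A typical end-to-end workflow looks like this:

 terraform init     # one-time: download providers for this directory
 terraform apply    # build or change the infrastructure
 terraform destroy  # tear everything down when finished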

Software required:

  • AWS CLI
  • Terraform CLI

Let's do the task:

First, create the named profile that Terraform will use to authenticate, with the aws configure command; run this in cmd:

$ aws configure --profile yash
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: ap-south-1
Default output format [None]: text
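
You can confirm the profile works before touching Terraform (this calls AWS STS with the new credentials):

$ aws sts get-caller-identity --profile yash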

The Terraform code begins here. This block selects the AWS provider and the profile used to log in to AWS:

provider "aws" {
  profile = "yash"
  region = "ap-south-1"
}

This code creates the key pair so that the newly created instance can be connected to through SSH:

#create key
resource "aws_key_pair" "mykey" {
  key_name   = "mykey"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCml88NDD4RiOOs4YBQ9grzvYG/eZB55IapQuIkYJkAGbGoQ7m3sPGl+KKFVnOIciJkyeAJ/YaUoF7bljadWFTWYtc5fG4fVNScmzCoCfntQGijX3PO227zc5t1cXkTeyCIu+/66jD1GSoTxr58IHFv6iZ5JCpvzozonjowS6HWjkpNpP1A+ziXk/NNPF/0kwTJJ6lEvo0LQuop44yIRyYEQPnvEpk6zPL76BQSrP8LnCjYTxuF3PJxyS2QPcxpakkFe5fF8sm3JDR0Dqi4PaMXvd07JRfB5oiEni4dLPIqjLEeaZbNjEf+NZ1ipRnFtReK1tLr [email protected]"
}
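
Hard-coding the public key works, but as an alternative sketch (assuming OpenSSH's ssh-keygen is available and the key sits next to the mykey.pem file used later), the key pair can be generated locally and the public half read from disk:

# One-time, outside Terraform:
#   ssh-keygen -t rsa -f C:/Users/Preadiator/Desktop/Terraform/mykey -N ""
# Then reference the generated public key instead of pasting it inline:
resource "aws_key_pair" "mykey" {
  key_name   = "mykey"
  public_key = file("C:/Users/Preadiator/Desktop/Terraform/mykey.pub")
}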

This code creates the EFS file system:

# EFS file system creation
resource "aws_efs_file_system" "myefs" {
  creation_token = "myefs"

  tags = {
    Name = "myefs_storage"
  }
}

This code creates a mount target for the EFS file system in the instance's subnet; the actual mount onto /var/www/html happens later through a provisioner:

# mount target for EFS
resource "aws_efs_mount_target" "myefs_storage" {
  depends_on = [
    aws_efs_file_system.myefs,
    aws_security_group.allow_efs,
    aws_instance.myinstance,
  ]
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = aws_instance.myinstance.subnet_id
  security_groups = [aws_security_group.allow_efs.id]
}
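
After an apply, the mount target can be checked from the CLI (fs-xxxxxxxx stands for the real file system ID reported by Terraform):

$ aws efs describe-mount-targets --file-system-id fs-xxxxxxxx --profile yash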

This code creates the security group for EFS, opening the NFS port (2049) so the file system can serve storage to the instance:

# Creating Security Group for EFS server
resource "aws_security_group" "allow_efs" {
  name = "allow_efs"

  # NFS traffic for EFS
  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Inbound rules filter traffic passing from the network to the local computer based on the filtering conditions specified in the rule. Conversely, outbound rules filter traffic passing from the local computer to the network based on the filtering conditions specified in the rule.

This security group is created by Terraform for the instance itself, so that its inbound and outbound rules (HTTP, SSH, and NFS) can be managed:

#creating security group

resource "aws_security_group" "allow_ports" {
  name        = "allow_ports"
  description = "Allow inbound traffic"

  # HTTP for the web server
  ingress {
    description = "tcp from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # SSH for remote access
  ingress {
    description = "ssh from VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # NFS for the EFS mount
  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_ports"
  }
}

This code launches the instance and installs the web server on it:

# creating instance

resource "aws_instance" "myinstance" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.mykey.key_name
  security_groups = [aws_security_group.allow_ports.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Preadiator/Desktop/Terraform/mykey.pem")
    host        = self.public_ip # use self, not aws_instance.myinstance, to avoid a self-reference cycle
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "Webserver"
  }
}

This code creates the S3 bucket, where the static data (the images) will be stored for fast access:

#create bucket
resource "aws_s3_bucket" "ns29bucket" {
  bucket = "ns29bucket"
  acl    = "private"
  region = "ap-south-1"

  tags = {
    Name = "ns29bucket"
  }
}

locals {
  s3_origin_id = "myS3_bucket_Origin"
}

This block relaxes the bucket's public access settings so that the bucket policy and public object ACLs can take effect:

#change permission

resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.ns29bucket.id

  block_public_acls   = false
  block_public_policy = false
}

Amazon CloudFront:

Amazon CloudFront is a content delivery network offered by Amazon Web Services. Content delivery networks provide a globally-distributed network of proxy servers which cache content, such as web videos or other bulky media, more locally to consumers, thus improving access speed for downloading the content.

This code creates the CloudFront distribution:

#creating CloudFront
resource "aws_cloudfront_distribution" "cloud_dist" {
  origin {
    domain_name = aws_s3_bucket.ns29bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }

  enabled             = true
  default_root_object = "index.html"


  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = true

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  depends_on = [aws_s3_bucket_policy.mypolicy]

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}


output "cloudfront_domain_name" {
       value = aws_cloudfront_distribution.cloud_dist.domain_name
}
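
Once the apply finishes, the same value can be read back at any time with:

$ terraform output cloudfront_domain_name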

Here the CloudFront origin access identity (OAI) is created. It joins CloudFront to the bucket, so that the bucket's data can be fetched by CloudFront:

#Making CloudFront Origin access Identity
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "NO comment"
  depends_on = [ aws_s3_bucket.ns29bucket ]
}


Here the S3 bucket policy is updated so that the bucket stays private and can be accessed through CloudFront only:

#Updating IAM policies in bucket
data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.ns29bucket.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }

  statement {
    actions   = ["s3:ListBucket"]
    resources = [aws_s3_bucket.ns29bucket.arn]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }

  depends_on = [aws_cloudfront_origin_access_identity.origin_access_identity]
}


#Updating Bucket Policies
resource "aws_s3_bucket_policy" "mypolicy" {
  bucket     = aws_s3_bucket.ns29bucket.id
  policy     = data.aws_iam_policy_document.s3_policy.json
  depends_on = [aws_cloudfront_origin_access_identity.origin_access_identity]
}

This section runs commands inside the instance to mount the EFS file system onto /var/www/html, pull the code from the GitHub repo, and write the CloudFront URL into the document root:

resource "null_resource" "nullremote3"  {

depends_on = [
    aws_efs_mount_target.myefs_storage,aws_cloudfront_distribution.cloud_dist,
  ]


 connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("C:/Users/Preadiator/Desktop/Terraform/mykey.pem")
    host     = aws_instance.myinstance.public_ip
  }

provisioner "remote-exec" {
    inline = [
      "sudo yum install amazon-efs-utils nfs-utils -y",
      "sudo chmod -R ugo+rw /etc/fstab",
      "sudo echo '${aws_efs_file_system.myefs.id}:/ /var/www/html efs tls,_netdev 0 0' >> /etc/fstab",
      "sudo mount -a -t efs,nfs4 defaults",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/NikhilSuryawanshi/webtest.git /var/www/html/",
      "sudo su << EOF",
      "echo 'https://${aws_cloudfront_distribution.cloud_dist.domain_name}/${aws_s3_bucket_object.ns29bucket.key}' > /var/www/html/url.txt",
      "EOF",
      
    ]
  }
}
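
To verify the result by hand, you can SSH into the instance (assuming mykey.pem and the instance's public IP) and inspect the mount and the generated file:

$ ssh -i mykey.pem ec2-user@<instance-public-ip>
$ df -hT /var/www/html       # should show the EFS/NFS file system
$ cat /var/www/html/url.txt  # the CloudFront image URL written above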

Here the image is uploaded to the bucket so that CloudFront can serve it:

# upload image
resource "aws_s3_bucket_object" "ns29bucket" {
  depends_on = [
    aws_s3_bucket.ns29bucket,
  ]
  bucket       = "ns29bucket"
  key          = "image.jpg"
  source       = "C:/Users/Preadiator/Pictures/image.jpg"
  etag         = filemd5("C:/Users/Preadiator/Pictures/image.jpg")
  acl          = "public-read"
  content_type = "image/jpeg"
}

resource "null_resource" "nulllocal1"  {


depends_on = [
    null_resource.nullremote3,
  ]


provisioner "local-exec" {
        command = "firefox  ${aws_instance.myinstance.public_ip}"
      }
}

Create the whole infrastructure with the command:

$ terraform apply -auto-approve

After creating, it will look like this:

[Screenshots: terraform apply output]

After Terraform builds the whole infrastructure, this page comes up:

[Screenshot: deployed web page]

This is the EC2 instance created by the Terraform code.

[Screenshot: EC2 instance in the AWS console]

Here the IP of the instance is 13.234.110.74.

[Screenshot: instance details]

The upper two security groups were created by the Terraform code.

[Screenshots: security groups]

The Elastic File System is used for making the data persistent.

[Screenshot: EFS file system]

The S3 bucket is used for making static data persistent, such as photos, videos, PDFs, etc.

[Screenshot: S3 bucket]

Destroy the whole infrastructure with the command:

$ terraform destroy -auto-approve

After destroying, it will look like this:

[Screenshots: terraform destroy output]