Launching Elastic File System (EFS) for WebApp using Terraform

Cloud computing with AWS

Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers globally. Millions of customers — including the fastest-growing startups, largest enterprises, and leading government agencies — are using AWS to lower costs, become more agile, and innovate faster.

What is AWS used for?

Amazon Web Services is a secure cloud services platform offering compute power, database storage, content delivery, and other functionality that helps businesses scale and grow. A typical use case is running web and application servers in the cloud to host dynamic websites.

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire data center. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

Infrastructure as Code

Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your data center to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

In simple terms, Terraform gives us the power of Infrastructure as Code: your whole setup, such as web servers and web apps, can be created by just simple descriptive code, as the short example below illustrates.
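For instance, a few declarative lines are enough to describe a complete resource. The snippet below is a minimal illustration only (the bucket name is a made-up placeholder, not part of this project's code):

resource "aws_s3_bucket" "example" {
  # a globally unique bucket name of your own goes here
  bucket = "my-example-bucket-12345"
  acl    = "private"
}

Terraform reads this description, compares it with what actually exists, and creates, updates, or destroys resources until the two match.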

What is AWS EC2?

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.

What is AWS EFS?

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.
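In practice, an EFS file system is consumed over NFSv4 from the instance side. A typical mount command looks like the sketch below (the file system ID and region are placeholders; later in this article Terraform fills in the real DNS name):

sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.ap-south-1.amazonaws.com:/ /var/www/html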

What is AWS S3?

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world.

What is AWS CloudFront?

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.

Problem/Task: Perform Task 1 using the EFS service instead of EBS on AWS, i.e., create/launch a web application using Terraform:

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the existing/provided key and the security group created in step 1.

4. Launch one volume using the EFS service, attach it to your VPC, then mount that volume into /var/www/html.

5. The developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Before getting started with Terraform, we have to be ready with the following things:

  1. Create your AWS account
  2. Create an IAM user
  3. Terraform installed
  4. Set up path for Terraform in system environment variables
  5. Install AWS CLI

Let's Get Started

Provider

First, configure the AWS CLI with your credentials. It takes your access key, secret access key, region name, and output format (the default is JSON).

To configure yourself, use this command:

aws configure
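The Terraform code itself also needs a provider block so it knows which cloud, region, and credentials profile to use. A minimal sketch (the region and profile names here are assumptions; substitute your own):

provider "aws" {
  region  = "ap-south-1"
  profile = "default"
}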
Create key pair for your Instance

Create a key pair to log in to your instance.

resource "tls_private_key" "amazon_linux_key_private" {

  algorithm   = "RSA"

  rsa_bits = 2048

}

resource "aws_key_pair" "amazon_linux_key" {

depends_on = [

    tls_private_key.amazon_linux_key_private,

  ]

  key_name   = "amazon_linux_os_key"

  public_key = tls_private_key.amazon_linux_key_private.public_key_openssh

}
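If you also want the private key on disk so you can SSH in manually, a local_file resource can write it out. This is an optional sketch, not part of the original setup:

resource "local_file" "amazon_linux_key_file" {
  # save the generated private key with owner-read-only permissions
  content         = tls_private_key.amazon_linux_key_private.private_key_pem
  filename        = "amazon_linux_os_key.pem"
  file_permission = "0400"
}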
Create security group

Create your security group which allows port 22 (SSH) and port 80 (HTTP). Because the same group is reused for the EFS mount target later, it must also allow inbound NFS traffic on port 2049.

resource "aws_security_group" "allow_http_and_ssh" {
  name        = "allow_http_and_ssh"
  vpc_id      = "vpc-eaced182"
  description = "Allow all http and ssh"


  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_http_and_ssh"
  }
}
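The VPC ID above is hardcoded to my own environment. If you work in the default VPC, a data source can look the ID up instead; a sketch of that alternative:

data "aws_vpc" "default" {
  default = true
}

# then reference it as: vpc_id = data.aws_vpc.default.id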
Create/Launch AWS Instance

Create an instance/OS to do further things, using the same key pair and security group created above.

resource "aws_instance" "amazon_linux_os" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.medium"
 
  key_name = "amazon_linux_os_key"

  security_groups = [ "${aws_security_group.allow_http_and_ssh.name}" ]
 
  tags = {
    Name = "amazon_linux_os"
  }
}

Connect to Instance / OS

To install the httpd, php, git and nfs-utils packages, we have to log in to our instance.

resource "null_resource" "connection_after_instance_launch"  {

depends_on = [
    aws_instance.amazon_linux_os, aws_efs_file_system.efs_amazon_linux_os,
  ]

  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.amazon_linux_key_private.private_key_pem
    host     = aws_instance.amazon_linux_os.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git nfs-utils -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }
}
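As an aside, Amazon Linux also provides the amazon-efs-utils package, whose efs mount helper is an alternative to plain nfs-utils; installing it (optional) is just:

sudo yum install -y amazon-efs-utils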
Create EFS volume

Create a new AWS EFS file system to store your data permanently.

resource "aws_efs_file_system" "efs_amazon_linux_os" {

  depends_on = [
    aws_instance.amazon_linux_os,
  ]

  creation_token = "efs_amazon_linux_os"

  tags = {
    Name = "efs_for_amazon_linux_os_web_server"
  }
}
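The file system's DNS name is what we will mount over NFS in a later step; exposing it as an output makes it easy to verify (an optional sketch):

output "efs_dns_name" {
  value = aws_efs_file_system.efs_amazon_linux_os.dns_name
}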

Attach EFS volume to Instance

Attach this new EFS file system to the EC2 instance/OS by creating a mount target in the instance's subnet.

resource "aws_efs_mount_target" "efs_amazon_linux_os_attach" {

depends_on = [
    aws_instance.amazon_linux_os, aws_efs_file_system.efs_amazon_linux_os, null_resource.connection_after_instance_launch,
  ]

  file_system_id = aws_efs_file_system.efs_amazon_linux_os.id
  subnet_id      = aws_instance.amazon_linux_os.subnet_id
  security_groups = [aws_security_group.allow_http_and_ssh.id]
}

Create S3 bucket

Create an S3 bucket and give it public-read permission.

resource "aws_s3_bucket" "amazon_linux_os_bucket" {

depends_on = [
    aws_efs_mount_target.efs_amazon_linux_os_attach,
  ]

  bucket = "amazon-linux-os-bucket"
  acl    = "public-read"
  force_destroy = true
  tags = {
    Name = "amazon_linux_os_s3_bucket"
  }
}

   locals {
    s3_origin_id = "myorigin"
   
}

Put an Object in s3 bucket

Put some objects/images in the S3 bucket created above so they can be shown on your web page.

resource "aws_s3_bucket_object" "amazon_linux_os_bucket_object" {

depends_on = [
    aws_s3_bucket.amazon_linux_os_bucket,
  ]

  bucket = aws_s3_bucket.amazon_linux_os_bucket.id
  key    = "awsefs.jpg"
  source = "D:/hybrid_2/awsefs.jpg"
  etag   = "D:/hybrid_2/awsefs.jpg"
  force_destroy = true
  acl    = "public-read"
 
}



resource "aws_s3_bucket_public_access_block" "make_item_public" {
  bucket = aws_s3_bucket.amazon_linux_os_bucket.id

  block_public_acls   = false
  block_public_policy = false
}
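Because the CloudFront distribution below uses an origin access identity (OAI), you could optionally drop the public-read ACL and instead grant the OAI read access with a bucket policy. A sketch of that approach (not part of the original setup):

resource "aws_s3_bucket_policy" "allow_oai_read" {
  bucket = aws_s3_bucket.amazon_linux_os_bucket.id

  # allow only the CloudFront OAI to read objects from the bucket
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowCloudFrontOAIRead"
      Effect    = "Allow"
      Principal = { AWS = aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.amazon_linux_os_bucket.arn}/*"
    }]
  })
}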
Create CloudFront distribution

Create a CloudFront distribution for the S3 bucket.

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "origin access identity"
}



resource "aws_cloudfront_distribution" "amazon_linux_os_cloudfront" {
 
depends_on = [
    aws_s3_bucket_object.amazon_linux_os_bucket_object,
  ]

  origin {
    domain_name = aws_s3_bucket.amazon_linux_os_bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
 
    s3_origin_config {
        origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }    

    enabled             = true
    is_ipv6_enabled     = true
      comment             = "my cloudfront s3 distribution"
      default_root_object = "index.php"


  default_cache_behavior {

    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]

    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }


   viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

 
   restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }


  viewer_certificate {
    cloudfront_default_certificate = true  
  }
}
Connect to Instance / OS

Connect/log in to the instance again, this time to mount the new EFS file system (EFS is NFS-based, so unlike an EBS volume it needs no formatting), clone the GitHub code into /var/www/html, and write the CloudFront distribution's domain name into that directory. Finally, restart the httpd service and enable it. Do this step last because it depends on many of the previous resources.

resource "null_resource" "connection"  {

 depends_on = [
    aws_s3_bucket_object.amazon_linux_os_bucket_object,aws_cloudfront_origin_access_identity.origin_access_identity,
        aws_cloudfront_distribution.amazon_linux_os_cloudfront,
  ]

  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.amazon_linux_key_private.private_key_pem
    host     = aws_instance.amazon_linux_os.public_ip
  }

provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4  /dev/xvdh",
      "sudo mount  /dev/xvdh  /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/uditagarwal1305/cloudfront /var/www/html/",
      "sudo su << EOF",
            "echo \"${aws_cloudfront_distribution.amazon_linux_os_cloudfront.domain_name}\" >> /var/www/html/myimg.txt",
            "EOF",
      "sudo systemctl stop httpd",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd"
    ]
  }
}

Launch Web browser

Launch a web browser to see the output of the code.

resource "null_resource" "chrome_output"  {

depends_on = [
    aws_cloudfront_distribution.amazon_linux_os_cloudfront,null_resource.connection,
  ]
    
    provisioner "local-exec" {
        command = "start chrome  ${aws_instance.amazon_linux_os.public_ip}"
    }
}
See IP address and availability zone in the terminal

To see the IP address and availability zone of your EC2 instance/OS, use the output keyword; after terraform apply, Terraform prints whatever attribute values you request.

output "amazon_linux_os_ip_address" {
    value = aws_instance.amazon_linux_os.public_ip
}

output "amazon_linux_os_availability_zone" {
    value = aws_instance.amazon_linux_os.availability_zone
}
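Similarly, an output for the CloudFront domain name makes it easy to test the distribution directly in a browser (an optional sketch):

output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.amazon_linux_os_cloudfront.domain_name
}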

Important Instructions

The first time you run the Terraform code, use "terraform init" to install the provider plugins. This process only needs to be done once per working directory.

terraform init

To check the syntax of the code, use "terraform validate". If you get an error, fix the syntax and try again until it succeeds.

terraform validate
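
To preview what Terraform will create, change, or destroy before actually applying it, use

terraform plan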

To apply Terraform code, use

terraform apply --auto-approve

To destroy all the infrastructure made by your Terraform code, use

terraform destroy --auto-approve

