Launching an Application using EC2 attached to EFS and VPC in AWS

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Amazon EFS is well suited to support a broad spectrum of use cases from home directories to business-critical applications. Customers can use EFS to lift-and-shift existing enterprise applications to the AWS Cloud. Other use cases include: big data analytics, web serving and content management, application development and testing, media and entertainment workflows, database backups, and container storage.

Amazon EFS is a regional service that stores data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.

Terraform Code:

First, we declare the provider, the profile, and the region where Terraform will build the infrastructure.

provider "aws" {
 region = "ap-south-1"
 profile = "Diyansh"
       
}

We will create a VPC with a range of IP addresses that will be provided to the instances, routers, switches, DHCP servers, and so on. This VPC will contain all the subnets, security groups, route tables, and the internet gateway. Don't forget to enable DNS hostnames so that instances are automatically assigned DNS names.

resource "aws_vpc" "terra_vpc" {
  cidr_block       = "192.168.0.0/16"
  instance_tenancy = "default"
  enable_dns_hostnames = true
  tags = {
    Name = "terra_vpc"
  }
}

Create a subnet in the above VPC, in availability zone ap-south-1a, to launch the instances. Don't forget to enable automatic public IP assignment on the subnet so that clients can reach the site.

resource "aws_subnet" "terra_subnet" {
 depends_on = [
    aws_vpc.terra_vpc,
   ]
  vpc_id     = aws_vpc.terra_vpc.id
  cidr_block = "192.168.5.0/24"
  availability_zone = "ap-south-1a" 
  //availability_zone_id = "aps1-az1"
  map_public_ip_on_launch = true
  tags = {
    Name = "terra_subnet"
  }
}

We will create a security group with inbound rules that allow port 22 for SSH, port 80 so that clients can reach the website over HTTP, and port 2049 for NFS so that the EC2 instance can mount the EFS file system.

resource "aws_security_group" "task-2-sg" {
   depends_on = [
    aws_vpc.terra_vpc,
           aws_subnet.terra_subnet,
   ]
  name        = "task-2-sg"
  description = "Allow TLS Inbound traffic"
  vpc_id      = aws_vpc.terra_vpc.id
    
    egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "NFS from EFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
    ingress {
    description = "TLS from SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    }
  ingress {
    description = "TLS from HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  } 
tags = {
    Name = "task-2-sg"
  }
}

Create an internet gateway so that our public subnet can reach the outside world and clients can access the website.

resource "aws_internet_gateway" "task-2-igw" {
 vpc_id = aws_vpc.terra_vpc.id
 tags = {
        Name = "My task-2 VPC Internet Gateway"
     }
}

We will create a route table with a default route pointing to the internet gateway and associate it with our subnet, so that traffic from the subnet can reach the outside world.

resource "aws_route_table" "route-table-igw" {
   depends_on = [
    aws_internet_gateway.task-2-igw,
   ]
  vpc_id = aws_vpc.terra_vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.task-2-igw.id
  }
  tags = {
    Name = "route-table"
  }
}
resource "aws_route_table_association" "subnet-association" {
     depends_on = [
    aws_route_table.route-table-igw,
   ]
  subnet_id      = aws_subnet.terra_subnet.id
  route_table_id = aws_route_table.route-table-igw.id
}

We will create an EFS file system in AWS and attach it, through a mount target, to the VPC, security group, and subnet where our application is running.

resource "aws_efs_file_system" "efs_task" {
   
   depends_on = [
    aws_route_table_association.subnet-association,
   ]
   creation_token = "efs_task"
   performance_mode = "generalPurpose"
   throughput_mode = "bursting"
   encrypted = "true"
 tags = {
     Name = "efs_task"
   }
 }

resource "aws_efs_mount_target" "efs_mount" {
   
   depends_on = [
    aws_efs_file_system.efs_task,
   ]
   file_system_id  = aws_efs_file_system.efs_task.id
   subnet_id = aws_subnet.terra_subnet.id
   security_groups = [aws_security_group.task-2-sg.id]
 }

We will launch an instance in the subnet with the security group created above and connect to it over SSH to install the required software and start the services needed to set up the website.

resource "aws_instance" "vmout" {
depends_on = [
    aws_efs_mount_target.efs_mount,
  ]
  ami           = "ami-0185e010d074994be"
  instance_type = "t2.micro"
  vpc_security_group_ids = [aws_security_group.task-2-sg.id ]
  subnet_id = aws_subnet.terra_subnet.id
   key_name = "puttykey1234"
  
    connection {
    type = "ssh"
    user = "ec2-user"
    private_key = file("D:/key1234.pem")
    host = aws_instance.vmout.public_ip
  }
 
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd git php amazon-efs-utils nfs-utils",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
      "sudo chmod ugo+rw /etc/fstab",
      "sudo echo '${aws_efs_file_system.efs_task.id}:/ /var/www/html efs tls,_netdev' >> /etc/fstab",
      "sudo mount -a -t efs,nfs4 defaults",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Divyansh-saxena/image-with-phpcode.git   /var/www/html/"
    ]
  }  

tags = {
         Name = "EC2"
   } 
}

Once the EFS file system is created, we mount it on the /var/www/html/ folder and then copy our code from GitHub into the same folder, so the application data remains persistent even if the instance goes down.


Create an S3 bucket, make it publicly readable, and use an aws_s3_bucket_object resource to store the image in the bucket. A bucket policy is also attached; it simply denies S3 actions on the bucket objects coming from the single source IP 8.8.8.8 and does not affect normal public reads.

resource "aws_s3_bucket" "divyansh1222bucket"  {
  
   depends_on = [
           aws_security_group.task-2-sg,
   ]
  bucket = "divyanshbu22cketaws"
  acl = "public-read"
  force_destroy = true
 provisioner "local-exec" {
     command = "git clone  https://github.com/Divyansh-saxena/image-with-phpcode.git  D:/Terraform/TASK/upload "
   }
 
}

resource "aws_s3_bucket_policy" "b67" {
depends_on = [
    aws_s3_bucket.divyansh1222bucket,
  ]
  bucket = "${aws_s3_bucket.divyansh1222bucket.id}"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Id": "MYBUCKETPOLICY",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::divyanshbu22cketaws/*",
      "Condition": {
       "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
      }
    }
]
}
POLICY
}
resource "aws_s3_bucket_object" "image_upload" {
    
depends_on = [
    aws_s3_bucket.divyansh1222bucket,
  ]
    bucket = aws_s3_bucket.divyansh1222bucket.bucket
    key = "wall.jpg"
    source = "D:/Terraform/TASK/upload/wall.jpg"
    acl = "public-read" 
    content_type = "image or jpeg"
}

Finally, create a CloudFront distribution with the S3 bucket as the origin to reduce latency by serving the content through a CDN (Content Delivery Network). A remote-exec provisioner then appends the CloudFront URL of the image to index.php under /var/www/html/ on the EC2 instance. The distribution refers to local.s3_origin_id and an origin access identity; see the sketch below for those definitions.
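
The article does not list the locals block or the aws_cloudfront_origin_access_identity resource that the distribution refers to. A minimal sketch of those two definitions, assuming the names implied by the references (the origin id string itself is an arbitrary identifier), could be:

locals {
  s3_origin_id = "S3-divyanshbu22cketaws"
}

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "Access identity for the divyanshbu22cketaws bucket origin"
}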

resource "aws_cloudfront_distribution" "s3_distribution" {
depends_on = [
    aws_s3_bucket.divyansh1222bucket,
  ]
    origin {
        domain_name = aws_s3_bucket.divyansh1222bucket.bucket_regional_domain_name
        origin_id = local.s3_origin_id
    s3_origin_config {
        origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
      } 
    }
    
    enabled = true
    is_ipv6_enabled = true
    default_root_object = "index.php"

    custom_error_response {
        error_caching_min_ttl = 3000
        error_code = 404
        response_code = 200
        response_page_path = "/ibm-red-hat-leadspace.png"
    }

    default_cache_behavior {
        allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
        cached_methods = ["GET", "HEAD"]
        target_origin_id = local.s3_origin_id

    forwarded_values {
        query_string = false
    cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all" 
        min_ttl = 0
        default_ttl = 3600
        max_ttl = 86400
    }
    
 restrictions {
        geo_restriction {
            restriction_type = "none"
        }
    } 
 
 viewer_certificate {
        cloudfront_default_certificate = true
      }

    tags = {
        Name = "Web-CF-Distribution"
      }
connection {
        type = "ssh"
        user = "ec2-user"
        private_key = file("D:/key1234.pem") 
        host = aws_instance.vmout.public_ip
     }

    provisioner "remote-exec" {
        inline  = [
            "sudo chmod ugo+rw /var/www/html/",
            "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.image_upload.key}'>\" >> /var/www/html/index.php",
          ]
      }


}

Now, using the public IP or public DNS name of the instance, we can access the site:
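
To avoid looking the address up in the AWS console, the instance address and the CloudFront domain name can also be exposed as Terraform outputs. A minimal sketch (the output names here are my own):

output "instance_public_ip" {
  value = aws_instance.vmout.public_ip
}

output "instance_public_dns" {
  value = aws_instance.vmout.public_dns
}

output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}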


To run the code, first use the terraform init command and then terraform apply --auto-approve; to delete all of the services, use terraform destroy --auto-approve.




Thank You for Reading !!!