Launching a Website on AWS with EFS Using Terraform
What is AWS?
Amazon Web Services (AWS) is a platform that offers flexible, reliable, scalable, easy-to-use, and cost-effective cloud computing solutions.
AWS is a comprehensive, easy-to-use computing platform offered by Amazon. The platform combines infrastructure as a service (IaaS), platform as a service (PaaS), and packaged software as a service (SaaS) offerings.
Amazon Elastic File System (Amazon EFS)
EFS is the best choice for running any application that has a high workload, requires scalable storage, and must produce output quickly. It scales automatically, even to meet the most abrupt workload spikes. After the period of high-volume storage demand has passed, EFS will automatically scale back down. EFS can be mounted to different AWS services and accessed from all your virtual machines. Use it for running shared volumes, or for big data analysis. You’ll always pay for the storage you actually use, rather than provisioning storage in advance that’s ultimately wasted.
Amazon EFS Use Cases
- Lift-and-shift application support: EFS is elastic, available, and scalable, and enables you to move enterprise applications easily and quickly without needing to re-architect them.
- Analytics for big data: It can run big data applications, which demand significant node throughput, low-latency file access, and read-after-write consistency.
- Content management system and web server support: EFS is a robust throughput file system capable of enabling content management systems and web serving applications, such as archives, websites, or blogs.
- Application development and testing: EFS provides the shared file system needed to share code and files across multiple compute resources, which facilitates auto-scaling workloads.
What is Terraform?
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.
Task Description:
Use the EFS service instead of EBS on AWS while creating/launching an application with Terraform:
1. Create a security group that allows traffic on port 80.
2. Launch an EC2 instance.
3. For this EC2 instance, use an existing or provided key pair and the security group created in step 1.
4. Create a volume using the EFS service, attach it to your VPC, and mount it at /var/www/html.
5. The developer has uploaded the code to a GitHub repo, which also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and make them publicly readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and update the code in /var/www/html with the CloudFront URL.
Prerequisites:
- An AWS account.
- The AWS CLI installed on your system.
- Terraform installed on your system.
- An AWS IAM user configured.
Let us start with the solution.
Step 1: Declaring Provider
First, we have to specify the provider we want to work with; in my case it is AWS, and I have specified the region (ap-south-1), the geographic area whose data centres (Availability Zones) we will use.
provider "aws" { region = "ap-south-1"
}
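It can also help to pin the AWS provider version so future provider releases don't change behaviour under you. A minimal sketch for Terraform 0.13 or newer; the version constraint shown is an assumption, so adjust it to your environment:
terraform {
  required_providers {
    aws = {
      # Illustrative constraint; pick whatever matches your setup.
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}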
Step 2: Creating an Amazon VPC
Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you've defined.
resource "aws_vpc" "shyamvpc"{ cidr_block = "192.168.0.0/16" instance_tenancy = "default" tags = { Name = "shyamvpc" } }
Step 3: Creating a subnet
A subnet is a part of the network: a range of IP addresses inside the VPC. Each subnet must reside entirely within one Availability Zone and cannot span zones.
resource "aws_subnet" "firstsubnet" { vpc_id = "${aws_vpc.shyamvpc.id}" cidr_block = "192.168.0.0/24" map_public_ip_on_launch = "true" availability_zone = "ap-south-1a" tags = { Name = "firstsubnet" } }
Step 4: Creating Security Group
A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. When you launch an instance, you can specify one or more security groups. If you don't specify a security group, Amazon EC2 uses the default security group. You can add rules to each security group that allow traffic to or from its associated instances.
resource "aws_security_group" "shyamsg" { name = "shyamsg" vpc_id = "${aws_vpc.shyamvpc.id}" ingress { from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = [ "0.0.0.0/0"] } ingress { from_port = 2049 to_port = 2049 protocol = "tcp" cidr_blocks = [ "0.0.0.0/0"] } ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = [ "0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = "shyamsg" }
}
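Since the three ingress rules differ only by port, they could also be generated with a dynamic block. A hedged sketch, assuming a variable ingress_ports and the resource name shyamsg_alt, both of which I'm introducing for illustration:
variable "ingress_ports" {
  type    = list(number)
  default = [22, 80, 2049] # SSH, HTTP, NFS
}

resource "aws_security_group" "shyamsg_alt" {
  name   = "shyamsg-alt"
  vpc_id = aws_vpc.shyamvpc.id

  # One ingress block is generated per port in the list.
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}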
Step 5: Creating and mounting the EFS storage
resource "aws_efs_file_system" "shyamefs" { creation_token = "shyamefs" tags = { Name = "shyamefs" } }
resource "aws_efs_mount_target" "shyamefsmount" { file_system_id = "${aws_efs_file_system.shyamefs.id}" subnet_id = "${aws_subnet.firstsubnet.id}" security_groups = [aws_security_group.shyamsg.id] }
Step 6: Internet gateway
An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.
An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.
An internet gateway supports IPv4 and IPv6 traffic. It does not cause availability risks or bandwidth constraints on your network traffic.
resource "aws_internet_gateway" "getwy"{ vpc_id = "${aws_vpc.shyamvpc.id}" tags = { Name = "getwy1" } }
Step 7: Creating and associating route table
A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed. To put it simply, a route table tells network packets which way they need to go to get to their destination.
resource "aws_route_table" "shyamrttb" { vpc_id = "${aws_vpc.shyamvpc.id}" route { cidr_block = "0.0.0.0/0" gateway_id = "${aws_internet_gateway.getwy.id}" } tags = { Name = "shyamrttb" } } resource "aws_route_table_association" "artas" { subnet_id = "${aws_subnet.firstsubnet.id}" route_table_id = "${aws_route_table.shyamrttb.id}" }
Step 8: Launching the instance
Here you need a key pair that already exists; if you don't have one, create it using the AWS console.
resource "aws_instance" "shyaminstance" { ami = "ami-052c08d70def0ac62" instance_type = "t2.micro" key_name = "EFS_task" subnet_id = "${aws_subnet.firstsubnet.id}" security_groups = ["${aws_security_group.shyamsg.id}"] connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/DELL/Downloads/EFS_task.pem") host = aws_instance.shyaminstance.public_ip } provisioner "remote-exec" { inline = [ "sudo yum install amazon-efs-utils -y", "sudo yum install httpd php git -y", "sudo systemctl restart httpd", "sudo systemctl enable httpd", "sudo setenforce 0", "sudo yum -y install nfs-utils" ] } tags = { Name = "shyaminstance" } }
Step 9: Cloning the GitHub repo
We mount the EFS file system at /var/www/html and then clone the GitHub repo into it, using the remote-exec provisioner to run the required commands on the remote instance.
resource "null_resource" "mount" { depends_on = [aws_efs_mount_target.shyamefsmount] connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/DELL/Downloads/EFS_task.pem") host = aws_instance.shyaminstance.public_ip } provisioner "remote-exec" { inline = [ "sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport ${aws_efs_file_system.shyamefs.id}.efs.ap-south-1.amazonaws.com:/ /var/www/html", "sudo rm -rf /var/www/html/*", "sudo git clone https://github.com/shyamwin/CloudTaskwithEFS.git /var/www/html/", "sudo sed -i 's/url /${aws_cloudfront_distribution.myfront.domain_name}/g' /var/www/html/index.html" ] } }
Step 10: Creating a local copy
resource "null_resource" "git_copy" { provisioner "local-exec" { command = "git clone https://github.com/shyamwin/CloudTaskwithEFS.git C:/Users/DELL/Desktop/Test1" } }
Step 11: Getting the IP address of the instance
resource "null_resource" "writing_ip" { provisioner "local-exec" { command = "echo ${aws_instance.shyaminstance.public_ip} > public_ip.txt" } }
Step 12: Creating an S3 bucket
resource "aws_s3_bucket" "shyams3bucket" { bucket = "shyamstorage" acl = "private" tags = { Name = "shyamstorage" } } locals { s3_origin_id = "S3storage"
}
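Note that in version 4 and later of the AWS provider, the acl argument moved out of aws_s3_bucket into its own resource. A sketch of the equivalent, only needed if you are on a newer provider than this article used:
resource "aws_s3_bucket_acl" "bucket_acl" {
  bucket = aws_s3_bucket.shyams3bucket.id
  acl    = "private"
}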
Step 13: Uploading static data to the S3 bucket
resource "aws_s3_bucket_object" "object" { bucket = "${aws_s3_bucket.shyams3bucket.id}" key = "EFS_task" source = "C:/Users/DELL/Desktop/TASK2_CLOUD/sir.jpeg" acl = "public-read" }
Step 14: Setting up the CloudFront distribution
resource "aws_cloudfront_distribution" "myfront" { origin { domain_name = "${aws_s3_bucket.shyams3bucket.bucket_regional_domain_name}" origin_id = "${local.s3_origin_id}" custom_origin_config { http_port = 80 https_port = 80 origin_protocol_policy = "match-viewer" origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"] } } enabled = true default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] target_origin_id = "${local.s3_origin_id}" forwarded_values { query_string = false cookies { forward = "none" } } viewer_protocol_policy = "allow-all" min_ttl = 0 default_ttl = 3600 max_ttl = 86400 } restrictions { geo_restriction { restriction_type = "none" } } viewer_certificate { cloudfront_default_certificate = true } }
Step 15: Launching Chrome to see the website
One thing here is manual: we have to enter the CloudFront URL in the index.html file by going into the instance.
resource "null_resource" "local-exec" { depends_on = [ null_resource.mount, ] provisioner "local-exec" { command = "start chrome ${aws_instance.shyaminstance.public_ip}" } }
To run this code, first run terraform init so that Terraform downloads the required plugins.
Then apply the code using terraform apply; when the CLI asks for confirmation, answer yes, or pass the -auto-approve flag to let it build without prompting.
Let us see some screenshots of this practical.
Successfully done.
Any suggestions are always welcome.
Thanks for reading.