Webserver Deployment on AWS EC2 using EFS, CloudFront, and Terraform
Atharva Patil
SWE@Cadence | Spring Boot | Java | DSA | K8s | OpenSource | Microservices
Why do we need the cloud?
Cloud computing lets us access applications and data from any location worldwide, on any device with an internet connection. It also brings cost savings: businesses get scalable computing resources on demand, saving them the cost of acquiring and maintaining their own hardware.
Prerequisites:
To deploy the webserver, we first need the following setup:
- An AWS account with administrative access
- A key pair for EC2 connectivity
Software:
- An SSH client installed on your OS
- AWS CLI
- Terraform
Why do we need EFS?
EFS is the best choice for running any application that has a high workload, requires scalable storage, and must produce output quickly. It scales automatically, even to meet the most abrupt workload spikes. After the period of high-volume storage demand has passed, EFS will automatically scale back down.
Amazon EBS Use Cases
For contrast, here is where Amazon EBS, AWS's block storage that attaches to a single instance, is typically used:
- Testing and development: You can scale, archive, duplicate or provision your testing, development, or production environments.
- NoSQL databases: EBS offers NoSQL databases the low-latency performance and dependability they need for peak performance.
- Relational databases: EBS scales to meet your changing storage needs. This makes it a great choice for deploying databases, including PostgreSQL, MySQL, Oracle, or Microsoft SQL Server.
- Business consistency: Copy EBS Snapshots and Amazon Machine Images (AMIs) to run applications in different AWS regions. This reduces data loss and speeds recovery time by backing up log files and data regularly, across geographies.
- Enterprise-wide applications: It can meet a variety of enterprise computing needs through powerful block storage that can support your most important applications, such as Microsoft Exchange, Oracle, or Microsoft SharePoint.
Why do we need Terraform?
There are many different cloud platforms, and each one has its own CLI and commands. We need a single way of working that applies to every cloud platform, and that is why we need Terraform.
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage popular existing service providers as well as custom in-house solutions. One important reason people choose Terraform is to manage their infrastructure as code.
Now let's start
Configuring the AWS CLI
After installing the AWS CLI, run the aws configure command.
aws configure
Now you have to provide the AWS Access Key ID, AWS Secret Access Key, default region name, and output format.
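The prompts look like this (the key values shown are placeholders, not real credentials):

AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Default region name [None]: ap-south-1
Default output format [None]: json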
Now let's write the Terraform code.
Create a file with a .tf extension (for example, main.tf) in a separate folder.
Step 1: First we declare the provider and the region:
provider "aws"{ region = "ap-south-1" }
Step 2: Writing Code for VPC Creation:
resource "aws_vpc" "myvpc"{ cidr_block = "192.168.0.0/16" instance_tenancy = "default" tags = { Name = "myvpc" } }
Step 3: Creating a Subnet
A subnetwork, or subnet, is a logical subdivision of an IP network. The practice of dividing a network into two or more networks is called subnetting.
The main purpose of subnetting is to help relieve network congestion. Congestion used to be a bigger problem than it is today because it was more common for networks to use hubs than switches. When nodes on a network are connected through a hub, the entire network acts as a single collision domain.
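As a quick worked example: our VPC's 192.168.0.0/16 block holds 65,536 addresses, while a /24 carved out of it (like the one below) holds 256. If you prefer to compute subnet ranges instead of hard-coding them, Terraform's built-in cidrsubnet() function can do the carving; this is just a sketch and not part of the original configuration:

# cidrsubnet(prefix, newbits, netnum): adding 8 bits to a /16 yields a /24,
# and netnum selects which /24 within the /16.
locals {
  subnet_a_cidr = cidrsubnet("192.168.0.0/16", 8, 0) # "192.168.0.0/24"
  subnet_b_cidr = cidrsubnet("192.168.0.0/16", 8, 1) # "192.168.1.0/24"
}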
resource "aws_subnet" "subnet1" { vpc_id = "${aws_vpc.myvpc.id}" cidr_block = "192.168.0.0/24" map_public_ip_on_launch = "true" availability_zone = "ap-south-1a" tags = { Name = "subnet1" } }
Step 4: Writing code for Security Group Creation
A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. When you launch an instance, you can specify one or more security groups.
We give access to our instance on the defined ports over TCP: 80 (HTTP), 2049 (NFS, used by EFS), and 22 (SSH).
resource "aws_security_group" "secgrp" { name = "secgrp" vpc_id = "${aws_vpc.myvpc.id}" ingress { from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = [ "0.0.0.0/0"] } ingress { from_port = 2049 to_port = 2049 protocol = "tcp" cidr_blocks = [ "0.0.0.0/0"] } ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = [ "0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = "secgrp" } }
Step 5: EFS Creation
Here, the creation token provided is "EFS".
resource "aws_efs_file_system" "EFS" { creation_token = "EFS" tags = { Name = "EFS" } }
Step 6: Creating the EFS Mount Target
resource "aws_efs_mount_target" "EFSmount" { file_system_id = "${aws_efs_file_system.EFS.id}" subnet_id = "${aws_subnet.subnet1.id}" security_groups = [aws_security_group.secgrp.id] }
Step 7: Creating the Internet Gateway
An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. An internet gateway supports IPv4 and IPv6 traffic.
resource "aws_internet_gateway" "Gateway"{ vpc_id = "${aws_vpc.myvpc.id}" tags = { Name = "Gateway" } }
Step 8: Creating the Route Table
In computer networking a routing table, or routing information base (RIB), is a data table stored in a router or a network host that lists the routes to particular network destinations, and in some cases, metrics (distances) associated with those routes. The routing table contains information about the topology of the network immediately around it.
resource "aws_route_table" "route_table" { vpc_id = "${aws_vpc.myvpc.id}" route { cidr_block = "0.0.0.0/0" gateway_id = "${aws_internet_gateway.Gateway.id}" } tags = { Name = "route_table" } }
Step 9: Creating the Route Table Association
resource "aws_route_table_association" "route_table_association" { subnet_id = "${aws_subnet.subnet1.id}" route_table_id = "${aws_route_table.route_table.id}" }
Step 10: Creating the Instance and Installing httpd on It
Here we provide the AMI ID, the instance type, and the key pair name.
We also write a connection block for SSH, so that Terraform can connect to the instance from our computer's CLI; in the remote-exec provisioner we install httpd and set SELinux to permissive mode (setenforce 0) for an easier setup.
resource "aws_instance" "instance" { ami = "ami-052c08d70def0ac62" instance_type = "t2.micro" key_name = "MyKey" subnet_id = "${aws_subnet.subnet1.id}" security_groups = ["${aws_security_group.secgrp.id}"] connection { type = "ssh" user = "ec2-user" private_key = file("D:/terra/Mycredentials/MyKey.pem") host = aws_instance.instance.public_ip } provisioner "remote-exec" { inline = [ "sudo yum install amazon-efs-utils -y", "sudo yum install httpd php git -y", "sudo systemctl restart httpd", "sudo systemctl enable httpd", "sudo setenforce 0", "sudo yum -y install nfs-utils" ] } tags = { Name = "instance" } }
Step 11: Mounting EFS on the Instance
Now we mount EFS on the folder in EC2 where our whole site lives, so that if the EC2 instance crashes, all our code remains safe on EFS.
We also clone the GitHub repo into the /var/www/html folder.
resource "null_resource" "mount" { depends_on = [aws_efs_mount_target.EFSmount] connection { type = "ssh" user = "ec2-user" private_key = file("D:/terra/Mycredentials/MyKey.pem") host = aws_instance.instance.public_ip } provisioner "remote-exec" { inline = [ "sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport ${aws_efs_file_system.EFS.id}.efs.ap-south-1.amazonaws.com:/ /var/www/html", "sudo rm -rf /var/www/html/*", "sudo git clone https://github.com/Atharva321/AWS_EFS.git /var/www/html/", "sudo sed -i 's/url/${aws_cloudfront_distribution.cloudfront.domain_name}/g' /var/www/html/index.html" ] } }
Step 12: Making a Local Copy of the Git Repo
resource "null_resource" "git_copy" { provisioner "local-exec" { command = "git clone https://github.com/Atharva321/AWS_EFS.git D:/Testing/" } }
Step 13: Writing the EC2 Instance IP to a Text File
Writing the EC2 IP to a text file makes it easy to copy the IP and access our instance through SSH.
resource "null_resource" "writing_ip" { provisioner "local-exec" { command = "echo ${aws_instance.instance.public_ip} > public_ip_ec2.txt" } }
Step 14: Creating the S3 Bucket
resource "aws_s3_bucket" "awsjcdefsbucket1" { bucket = "astoragep" acl = "private" tags = { Name = "apstorage1" } } locals { s3_origin_id = "apS3origin" } resource "aws_s3_bucket_object" "object" { bucket = "${aws_s3_bucket.awsjcdefsbucket1.id}" key = "MyKey" source = "D:/images/image.jpg" acl = "public-read" }
Step 15: Creating the CloudFront Distribution
CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your viewers a fast, safe, and reliable experience when they visit your website.
A simple approach for storing and delivering static content is to use an Amazon S3 bucket. Using S3 together with CloudFront has a number of advantages, including the option to use Origin Access Identity (OAI) to easily restrict access to your S3 content.
resource "aws_cloudfront_distribution" "cloudfront" { origin { domain_name = "${aws_s3_bucket.awsjcdefsbucket1.bucket_regional_domain_name}" origin_id = "${local.s3_origin_id}" custom_origin_config { http_port = 80 https_port = 80 origin_protocol_policy = "match-viewer" origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"] } } enabled = true default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] target_origin_id = "${local.s3_origin_id}" forwarded_values { query_string = false cookies { forward = "none" } } viewer_protocol_policy = "allow-all" min_ttl = 0 default_ttl = 3600 max_ttl = 86400 } restrictions { geo_restriction { restriction_type = "none" } } viewer_certificate { cloudfront_default_certificate = true } }
Step 16: Launching Chrome and Opening the Webserver's Address Automatically
The start chrome command (on Windows) opens Chrome at the instance's public IP automatically.
resource "null_resource" "local-exec" { depends_on = [ null_resource.mount, ] provisioner "local-exec" { command = "start chrome ${aws_instance.instance.public_ip}" } }
All our Terraform code is now set.
----------------------------------------------------------------------------------------------------------
Now go to the folder where you stored your Terraform file
and run the terraform init command.
The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.
Now we run the terraform plan command to check everything we have written.
The terraform plan command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
Once the code looks the way we want, we run the terraform apply command with the -auto-approve option; with this option it does not ask for approval, it approves and applies the configuration automatically.
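Putting it together, the full sequence, run from the folder containing the .tf file, is:

terraform init
terraform plan
terraform apply -auto-approve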
Now the EC2 instance and all the other services are created in our AWS account.
Let's check them in the AWS console:
- VPC
- Subnet
- EFS
- Instance
- Gateway
- Route table
- S3 Bucket
- CloudFront
As we wrote code to create a text file containing the webserver's IP, that file (public_ip_ec2.txt) is created.
Now we have to provide our image URL to our HTML code.
The command to access the EC2 instance is:
$ ssh -i <key> ec2-user@<ip_of_ec2>
To run it, open a command prompt in the folder where the key is stored.
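For example, with the key from this deployment (the IP below is a documentation placeholder; use the one from public_ip_ec2.txt):

ssh -i MyKey.pem ec2-user@203.0.113.10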
The image link is the CloudFront distribution's domain name with the object key appended after it.
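For example, with the object key MyKey from Step 14 (the domain below is a hypothetical placeholder; use your distribution's actual domain name):

http://d1234abcdefgh.cloudfront.net/MyKey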
Now when we visit our webpage again, we see the image there.
And finally, the webserver is deployed!
Now the complete infrastructure is deployed.
Destroying Infrastructure
With the terraform destroy command, the whole architecture is destroyed.
The terraform destroy command terminates resources defined in your Terraform configuration. This command is the reverse of terraform apply in that it terminates all the resources specified by the configuration. It does not destroy resources running elsewhere that are not described in the current configuration.
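The -auto-approve flag works here as well, skipping the confirmation prompt:

terraform destroy -auto-approve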
The complete architecture is destroyed successfully.
So, by using Terraform we can deploy or destroy the complete architecture with a single command.
Thank You Very Much for Reading
Thank you very much to Vimal Daga Sir, who always motivates and encourages me to take on these kinds of projects.
I am also thankful to LinuxWorld Informatics Pvt Ltd and the whole team.