Automated webserver launch with Terraform on AWS (EC2, EFS, S3, CloudFront)
Divya Raj Lavti
Experienced IT Project Manager | Expert in Agile & Scrum, Risk Management, IT Infrastructure | Cloud Migration Specialist
In this task we make one change from the first task: there we used an EBS volume to make our storage persistent, whereas here we will use an EFS volume for the same purpose.
About EFS
Elastic File System (EFS) is an AWS service that provides a Network File System (NFS) server over the cloud. It is simple, scalable, and fully managed by AWS. It can be used by resources in the AWS cloud and even by on-premises resources. The same EFS file system can be attached to multiple instances, provided they are in the same VPC and a mount target exists in the subnet where each instance runs. EFS provides nearly unlimited storage. An EBS volume, by contrast, can be attached to only one instance at a time, and that instance must be in the same availability zone as the volume.
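To illustrate the contrast, here is a minimal sketch (the resource names, subnet, and security group below are assumptions for illustration, not part of this task's code): one EFS mount target serves every instance in its subnet, while an EBS attachment names exactly one instance.

```hcl
# Hypothetical sketch: one EFS file system shared by many instances.
resource "aws_efs_file_system" "shared" {
  creation_token = "shared-example"
}

# One mount target per subnet; any instance in that subnet can mount it.
resource "aws_efs_mount_target" "shared_mt" {
  file_system_id  = aws_efs_file_system.shared.id
  subnet_id       = aws_subnet.app.id           # assumed subnet
  security_groups = [aws_security_group.nfs.id] # assumed SG opening port 2049
}

# By contrast, an EBS attachment binds the volume to exactly one instance:
# resource "aws_volume_attachment" "one_to_one" {
#   device_name = "/dev/sdh"
#   volume_id   = aws_ebs_volume.data.id
#   instance_id = aws_instance.only_one.id      # a single instance
# }
```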
Task Description:
- Create a key pair and a security group allowing ingress on the HTTP, SSH, and NFS ports and egress on all ports.
- Create a security group for EFS allowing ingress only on the NFS port.
- Create an EC2 instance that uses the key and security group created in the first step.
- Create an EFS file system, mount it on the subnet where the instance exists, then mount the volume on the /var/www/html directory of the instance.
- Create an S3 bucket, upload the image downloaded from GitHub, and make it public.
- Create a CloudFront distribution for the image in the S3 bucket.
- The developer has pushed the webpage code to GitHub. Download the code and put it in the /var/www/html directory of the instance.
- Update the code to embed the image using the link provided by the CloudFront distribution.
Software required:
- AWS CLI
- Terraform CLI
Prerequisites:
- Basic knowledge of AWS and the AWS CLI is required.
- Basic knowledge of Terraform is also required.
Solution :
- Declare the provider you are going to use, along with your login profile.
```hcl
# Declaring the provider
provider "aws" {
  region  = "ap-south-1"
  # Your AWS CLI profile here.
  profile = "task2"
}
```
- Create a key pair and security group allowing ingress for http, SSH and NFS port and egress for all ports.
```hcl
# Creating a key pair
resource "tls_private_key" "task2_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Saving the private key to a file on the local system.
resource "local_file" "private_key" {
  depends_on = [
    tls_private_key.task2_key,
  ]
  content  = tls_private_key.task2_key.private_key_pem
  filename = "Task2-key.pem"
  # Restrictive permissions, as SSH refuses world-readable private keys.
  file_permission = "0400"
}

resource "aws_key_pair" "webserver_key" {
  key_name   = "mynewkey"
  public_key = tls_private_key.task2_key.public_key_openssh
}
```
- Create a security group to attach to the instance, with the required inbound and outbound rules.
```hcl
# Creating a security group for the instance
resource "aws_security_group" "allow_traffic" {
  name = "allowed_traffic"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```
2. Create a security group for EFS allowing ingress only on the NFS port.
```hcl
# Creating a security group for the NFS server
resource "aws_security_group" "allow_nfs" {
  name = "NFS_security"

  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```
3. Create the EFS file system and a mount target in the subnet where the instance exists, then mount the volume on the instance's /var/www/html directory.
```hcl
# EFS file system creation
resource "aws_efs_file_system" "my_file_system" {
  # Unique file system token here.
  creation_token = "myuniquefilesystem"
  tags = {
    Name = "task2product"
  }
}

# Mount target for EFS
resource "aws_efs_mount_target" "gamma" {
  depends_on = [
    aws_efs_file_system.my_file_system,
    aws_security_group.allow_nfs,
    aws_instance.ins1,
  ]
  file_system_id  = aws_efs_file_system.my_file_system.id
  subnet_id       = aws_instance.ins1.subnet_id
  security_groups = [aws_security_group.allow_nfs.id]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.task2_key.private_key_pem
    host        = aws_instance.ins1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install amazon-efs-utils nfs-utils -y",
      "sudo chmod ugo+rw /etc/fstab",
      "sudo echo '${aws_efs_file_system.my_file_system.id}:/ /var/www/html efs tls,_netdev 0 0' >> /etc/fstab",
      "sudo mount -a",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/drlraj2805/hctask2.git /var/www/html",
    ]
  }
}
```
4. Create the EC2 instance, using the key and security group created earlier, and set up the environment for hosting the webserver.
```hcl
# Creation of the instance is done in this block.
resource "aws_instance" "ins1" {
  depends_on = [
    aws_key_pair.webserver_key,
    aws_security_group.allow_traffic,
  ]
  ami             = "ami-005956c5f0f757d37"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.webserver_key.key_name
  security_groups = [aws_security_group.allow_traffic.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.task2_key.private_key_pem
    # Use self to refer to this instance's own attributes.
    host = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git php -y",
      "sudo service httpd start",
    ]
  }

  tags = {
    Name = "myOS"
  }
}
```
5. Create an S3 bucket, upload the image downloaded from GitHub, and make it public.
```hcl
# Variable declaration for the bucket name
variable "Unique_Bucket_Name" {
  type = string
  # default = "my-bucket-9521"
}

# AWS S3 bucket creation
resource "aws_s3_bucket" "my_bucket" {
  bucket = var.Unique_Bucket_Name
  acl    = "public-read"

  # Remove the locally cloned repository on destroy (Windows shell).
  provisioner "local-exec" {
    when    = destroy
    command = "echo y | rmdir /s hctask2"
  }
}

# Upload the image file to S3 from the GitHub repository cloned to the local system
resource "aws_s3_bucket_object" "object1" {
  depends_on = [
    null_resource.null,
    aws_s3_bucket.my_bucket,
  ]
  bucket = aws_s3_bucket.my_bucket.bucket
  key    = "bucket_image.jpg"
  # Provide the path here according to your system's file layout.
  source = "C:/Users/A/Desktop/hybrid/task2/pic.jpg"
  acl    = "public-read"
}
```
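Note that the `depends_on` of the bucket object references a `null_resource.null` that the article does not show; presumably it clones the GitHub repository to the local system so the image file exists at the `source` path before the upload runs. A minimal sketch, assuming that is its job (the clone destination must match your `source` path):

```hcl
# Assumed helper: clone the repo locally so the image exists for the S3 upload.
resource "null_resource" "null" {
  provisioner "local-exec" {
    command = "git clone https://github.com/drlraj2805/hctask2.git"
  }
}
```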
6. Create a CloudFront distribution for the image in the S3 bucket.
```hcl
# CloudFront distribution creation
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.my_bucket.bucket_regional_domain_name
    origin_id   = aws_s3_bucket.my_bucket.bucket
  }
  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = aws_s3_bucket.my_bucket.bucket

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

output "cloudfront" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
```
7. The developer has pushed the webpage code to GitHub (this was cloned into /var/www/html in step 3). Now update the code to embed the image via its CloudFront URL.
```hcl
# Append the CloudFront image URL to the webserver file
resource "null_resource" "nulll" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
    aws_efs_mount_target.gamma,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.task2_key.private_key_pem
    host        = aws_instance.ins1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su << EOF",
      "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.object1.key}'>\" >> /var/www/html/code.html",
      "EOF",
    ]
  }
}
```
The webserver code is now modified as required. You can make further changes at any time by logging in to the instance over SSH and editing the webpage code in the /var/www/html directory.
Note: Save the code in a file with a ".tf" extension.
Now, to execute this Terraform script, run the following commands:
- `terraform init` : initialises the working directory and downloads the required provider plugins
- `terraform validate` : validates the written script
- `terraform apply` : builds the infrastructure described in the .tf file

After the run completes successfully, the output shows the CloudFront domain name from which you can browse the website.

Once all your work is done, the whole setup can be destroyed with a single command:
- `terraform destroy`
GitHub link: https://github.com/drlraj2805/hctask2.git