Task 2: Create a Web Application and Use EFS as Persistent Storage
Simply put, I am updating my Task 1 with EFS and creating everything from the beginning.
Hello readers, here I'm going to explain how to launch an application on AWS, completely automating everything using Terraform. So let's continue...
As we all know, in AWS we can do everything through the graphical interface, i.e. the web portal. But the challenge is that whenever we create the infrastructure for a particular task, right from the beginning (say, from creating a key pair to launching and configuring the instances), then after the successful execution of the task we need to terminate everything we have created. That is very tedious for everyone, isn't it?
Have you ever wished that your complete infrastructure could be launched with a single click, and also terminated or destroyed with the same click? Is that possible?
Yes, and the solution for this is Terraform. What is Terraform and how is it going to help us? Continue reading...
What is Terraform?
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.
The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.
Now let's get into the task..
Task Description..
- Create a key pair and a security group which allows port 80.
- Launch an EC2 instance.
- In this EC2 instance, use the key and the security group created in step 1.
- Launch one volume using the EFS service and attach it in your VPC, then mount that volume onto /var/www/html.
- The developer has uploaded the code into a GitHub repo, and the repo also has some images.
- Copy the GitHub repo code into /var/www/html.
- Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.
- Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
Let's begin the Task..
Firstly, we create a profile to log in to AWS..
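If you have not configured this profile yet, it can be created with the AWS CLI (assuming the CLI is installed and you have an IAM user's access key and secret key at hand; the command prompts for both, plus a default region and output format):
Run : aws configure --profile task2profile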
Now, from here onwards, we write Terraform code for everything.
1. Log in to AWS..
provider "aws"{ profile ="task2profile" region ="ap-south-1" }
2. Create a Key-Pair..
resource "tls_private_key" "T-key" { algorithm = "RSA" } resource "aws_key_pair" "Task2-key"{ key_name = "task2-key" public_key = tls_private_key.T-key.public_key_openssh }
Save the key locally, so we can verify and use it later..
resource "local_file" "keylocally" { content = tls_private_key.T-key.private_key_pem filename = "task2-key.pem" depends_on = [ tls_private_key.T-key ] }
3. Create a security group..
resource "aws_security_group" "my-task2-sg" { name = "my-task2-sg" description = "Allow SSH and HTTP" ingress { description = "Allow SSH" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "Allow HTTP" from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "Allow NFS" from_port = 2049 to_port = 2049 protocol = "tcp" cidr_blocks = [ "0.0.0.0/0" ] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = "my-task2-sg" } }
What do ingress and egress mean here?
Egress in the world of networking implies traffic that exits an entity or a network boundary, while Ingress is traffic that enters the boundary of a network.
4. Now we create an EC2 instance and, logging in remotely over SSH, install the required software for our application.
resource "aws_instance" "webserver" { ami = "ami-0447a12f28fddb066" instance_type = "t2.micro" key_name = aws_key_pair.Task2-key.key_name security_groups = ["my-task2-sg"] connection { type = "ssh" user = "ec2-user" private_key = tls_private_key.T-key.private_key_pem host = aws_instance.webserver.public_ip } provisioner "remote-exec" { inline = [ "sudo yum install httpd php git amazon-efs-utils nfs-utils -y", "sudo systemctl restart httpd", "sudo systemctl enable httpd", ] } tags = { Name = "webserver" } }
Here, as we are launching a webserver, we installed the httpd and php packages.
Next, since we have to download code from GitHub, git is required.
Finally, to use EFS we have to install the EFS client packages, such as amazon-efs-utils and nfs-utils.
5. Create an EFS (Elastic File System) for our application.
resource "aws_efs_file_system" "task2-efs" { creation_token = "task2-efs" tags = { Name = "task2-efs" } }
6. Create a mount target, so that the EFS can actually be mounted by clients.
resource "aws_efs_mount_target" "mount-target" { file_system_id = aws_efs_file_system.task2-efs.id subnet_id = aws_instance.webserver.subnet_id security_groups = ["${aws_security_group.my-task2-sg.id}"] depends_on = [ aws_efs_file_system.task2-efs] }
What is this mount target?
A mount target provides an IP address for an NFSv4 endpoint at which you can mount an Amazon EFS file system. You mount your file system using its Domain Name Service (DNS) name, which resolves to the IP address of the EFS mount target in the same Availability Zone as your EC2 instance.
Here, we create only one mount target, in the subnet where our EC2 instance is running.
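If you later ran instances in several Availability Zones, you would need one mount target per subnet. A sketch, assuming a hypothetical variable subnet_ids listing one subnet per AZ (for_each on resources needs Terraform 0.12.6+):

variable "subnet_ids" {
  type        = list(string)
  description = "Hypothetical: one subnet per Availability Zone"
}

resource "aws_efs_mount_target" "per-az" {
  for_each        = toset(var.subnet_ids)
  file_system_id  = aws_efs_file_system.task2-efs.id
  subnet_id       = each.value
  security_groups = [aws_security_group.my-task2-sg.id]
}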
7. Log in to the instance via SSH, mount the EFS, and download the code from GitHub into the respective folder.
resource "null_resource" "Run_cmds" { depends_on = [ aws_efs_mount_target.mount-target ] connection { type = "ssh" user = "ec2-user" private_key = tls_private_key.T-key.private_key_pem host = aws_instance.webserver.public_ip } provisioner "remote-exec" { inline = [ "sudo mount ${aws_efs_file_system.task2-efs.dns_name}:/ /var/www/html", "sudo echo ${aws_efs_file_system.task2-efs.dns_name}:/ /var/www/html efs defaults,_netdev 0 0 >> sudo /etc/fstab", "sudo git clone https://github.com/vamsi-01/task2-code.git /var/www/html/" ] } }
Here, we run three commands remotely:
- The first one mounts the EFS storage onto the /var/www/html folder.
- Secondly, we make this mount permanent by appending a matching entry to /etc/fstab, the file system table used to make any mount persist across reboots; a sample of the resulting entry is shown after this list.
- Finally, we download the code from GitHub.
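For reference, with a file system in ap-south-1 the appended /etc/fstab entry ends up looking roughly like this (the file system ID here is illustrative):

fs-0123abcd.efs.ap-south-1.amazonaws.com:/ /var/www/html nfs4 defaults,_netdev 0 0

After logging in, the mount can be verified with df -h /var/www/html.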
8. Create an S3 bucket and download the images from GitHub to the local host, so they can be used to create the bucket object.
resource "aws_s3_bucket" "My_task2_bucket" { bucket = "my-task2-image-bucket" acl = "public-read" tags = { Name = "My bucket" } provisioner "local-exec"{ command="git clone https://github.com/vamsi-01/Task-2-image.git Img_down" } provisioner "local-exec" { when = destroy command = "rd /S/Q Img_down" } }
Here, I downloaded the images from GitHub into a local folder named Img_down; whenever we destroy this environment, that folder is deleted as well. (Note that rd /S/Q is a Windows command; on Linux or macOS the destroy-time command would be rm -rf Img_down.)
9. Create an S3 bucket object inside the bucket created above.
Here, we upload the image we downloaded into the S3 bucket to create an object.
resource "aws_s3_bucket_object" "task2-bucket" { depends_on=[ aws_s3_bucket.My_task2_bucket ] key = "s3_image.jpg" bucket = aws_s3_bucket.My_task2_bucket.bucket acl = "public-read" source ="Img_down/s3_image.jpg" }
Now, using a locals block, we define a local value holding the origin_id for the S3 origin, to use further on.
locals {
  s3_origin_id = "S3-${aws_s3_bucket.My_task2_bucket.bucket}"
}
Confused by locals? Here it is:
What is a Terraform local?
The locals block defines one or more local values within a module. Comparing modules to functions in a traditional programming language: if input variables are analogous to function arguments and output values are analogous to function return values, then local values are comparable to a function's local temporary symbols.
Now, we use this s3_origin_id to create a CloudFront distribution.
10. Create a CloudFront distribution for the S3 object and push the URL it provides into the webserver code, using SSH.
resource "aws_cloudfront_distribution" "Cloudfront-S3" { depends_on=[ null_resource.Run_cmds ] enabled = true is_ipv6_enabled = true origin { domain_name = aws_s3_bucket.My_task2_bucket.bucket_domain_name origin_id = local.s3_origin_id } default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] target_origin_id = local.s3_origin_id forwarded_values { query_string = false cookies { forward = "none" } } viewer_protocol_policy = "allow-all" } restrictions { geo_restriction { restriction_type = "none" } } viewer_certificate { cloudfront_default_certificate = true } connection { type = "ssh" user = "ec2-user" private_key = tls_private_key.T-key.private_key_pem host = aws_instance.webserver.public_ip } provisioner "remote-exec" { inline = [ "sudo su << EOF", " echo \"<img src='https://${self.domain_name}/${aws_s3_bucket_object.task2-bucket.key}' width='1200' height='300'>\" >> /var/www/html/index.php", "EOF" ] } }
Now that we have done everything, it's time to access the website and check how it's working..
Print the IP and open the site automatically using local-exec.
output "pub_ip"{ value=aws_instance.webserver.public_ip } resource "null_resource" "Auto_access_website" { depends_on = [ aws_cloudfront_distribution.Cloudfront-S3 ] provisioner "local-exec" { command = "start firefox ${aws_instance.webserver.public_ip}" } }
That's it, we have done everything. Let's see the output of each step after running this entire code.
Terraform init : To initialize the plugins and backend required for running the code.
Run : terraform init
Terraform validate : To check for any errors and validate the code.
Run : terraform validate
Terraform apply : Let's create the entire architecture in a single go..
Run : terraform apply -auto-approve
Result...
Key pair and security group have been created..
EC2 instance for the webserver and the EFS file system have been created..
S3 bucket and its bucket object with the GitHub image have been created..
CloudFront distribution for the S3 bucket created above..
Code from GitHub and the URL from the CloudFront distribution have been updated in index.php.
Finally, my website launched.. automatically by local-exec...
Key pair and image from GitHub downloaded locally..
We are done with our job. Let's destroy the infrastructure in a single go...
Run : terraform destroy --auto-approve
That's all for the task... Thanks for reading.
Signing off... Hope you like it. Thank you, Vimar sir, for this task.
Git-Hub repo: