Infra Creation on top of AWS using TERRAFORM ...
We are going to launch one EC2 instance with an Apache web server running inside it, then clone the code from GitHub into the /var/www/html folder. We will mount EFS on /var/www/html to make our data persistent. Next we will create an S3 bucket and upload images to it (these images are also downloaded from GitHub), and finally we will create a CloudFront distribution on top of S3 to get low latency... and our task will be done.
- USE-CASE:
1. Create a security group that allows port 80.
2. Launch an EC2 instance.
3. For this EC2 instance, use an existing or provided key and the security group created in step 1.
4. Launch one volume using the EFS service, attach it to your VPC, and mount that volume on /var/www/html.
5. The developer has uploaded the code into a GitHub repo; the repo also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
- Here I am going to use a few more components; for example, to create the key pair I will be using the TLS provider with the RSA algorithm...
- First, select your provider and region and use the configured profile...
provider "aws" { region = "ap-south-1" profile = "ranjit" }
- Create a key using the TLS provider, save the private key to a local file, and then use the generated public key to create a key pair...
resource "tls_private_key" "generated_key" { algorithm = "RSA" rsa_bits = 2048 } resource "local_file" "the_key" { content = tls_private_key.generated_key.private_key_pem filename = "mykey.pem" file_permission = 0400 } resource "aws_key_pair" "key" { //key_name = "my_private_key" public_key = tls_private_key.generated_key.public_key_openssh }
- Then we will use the default VPC...
resource "aws_default_vpc" "default" { tags = { Name = "Default VPC" } }
- Then create one security group with inbound rules for SSH and HTTP traffic, plus NFS on port 2049 (the EFS mount target we create later reuses this security group, so NFS must be allowed for the mount to succeed)...
resource "aws_security_group" "security_groups_created" { //name = "allow_ssh_http" description = "Allow ssh inbound traffic" vpc_id = aws_default_vpc.default.id ingress { description = "http" from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "ssh" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = "allowing http ssh" } }
- Then we will launch one instance with the created security group, VPC, and key pair attached. We will SSH into the instance and run the commands needed to install the required software; the remote-exec provisioner handles the SSH session for us...
resource "aws_instance" "web" { ami = "ami-0447a12f28fddb066" instance_type = "t2.micro" key_name = aws_key_pair.key.key_name security_groups = [ aws_security_group.security_groups_created.name ] connection { type = "ssh" user = "ec2-user" private_key = tls_private_key.generated_key.private_key_pem host = aws_instance.web.public_ip } provisioner "remote-exec" { inline = [ "sudo yum install git httpd php -y", "sudo systemctl restart httpd", "sudo systemctl enable httpd" ] } tags = { Name = "ranjit AWS EC2 Instance" } }
- Then we will create the EFS file system and mount it on /var/www/html, and then clone the code from GitHub into /var/www/html. To do this we again SSH into the instance and run the respective commands...
resource "aws_efs_file_system" "elasticfilesystem" { depends_on = [ aws_security_group.security_groups_created , aws_instance.web ] creation_token = "createnfs" tags = { Name = "ElasticFileSystem" } } resource "aws_efs_mount_target" "mounted_target" { depends_on = [ aws_efs_file_system.elasticfilesystem ] file_system_id = aws_efs_file_system.elasticfilesystem.id subnet_id = aws_instance.web.subnet_id security_groups = ["${aws_security_group.security_groups_created.id}"] } resource "null_resource" "null1"{ depends_on=[ aws_efs_mount_target.mounted_target ] connection { type = "ssh" user = "ec2-user" private_key = tls_private_key.generated_key.private_key_pem host = aws_instance.web.public_ip } provisioner "remote-exec" { inline = [ "sudo mount ${aws_efs_file_system.elasticfilesystem.dns_name}:/ /var/www/html", "sudo git clone https://github.com/ranjitben10/cloudtask.git /var/www/html" ] } }
- Now create the S3 bucket and download the images from GitHub into a local folder; for this we will use the local-exec provisioner...
resource "aws_s3_bucket" "my-bucket" { bucket = "ui-ux-designs" acl = "public-read" provisioner "local-exec" { command = "git clone https://github.com/ranjitben10/Cloudimages.git cloud-images" } tags = { Name = "S3-Bucket" Environment = "Production" } versioning { enabled= true } }
- Now we will upload the images one by one from the local folder to our created S3 bucket (a consolidated alternative using for_each is sketched after this block)...
resource "aws_s3_bucket_object" "ui-ux-cloud-image1" { bucket = aws_s3_bucket.my-bucket.bucket key = "EventManagerSignUp.png" source = "cloud-images/EventManagerSignUp.png" acl = "public-read" } resource "aws_s3_bucket_object" "ui-ux-cloud-image2" { bucket = aws_s3_bucket.my-bucket.bucket key = "Training.png" source = "cloud-images/Training.png" acl = "public-read" } resource "aws_s3_bucket_object" "ui-ux-cloud-image3" { bucket = aws_s3_bucket.my-bucket.bucket key = "Manage Profile and can post talents.png" source = "cloud-images/Manage Profile and can post talents.png" acl = "public-read" } resource "aws_s3_bucket_object" "ui-ux-cloud-image4" { bucket = aws_s3_bucket.my-bucket.bucket key = "PostEventByEvent Manager.png" source = "cloud-images/PostEventByEvent Manager.png" acl = "public-read" } resource "aws_s3_bucket_object" "ui-ux-cloud-image5" { bucket = aws_s3_bucket.my-bucket.bucket key = "solution Architecture.png" source = "cloud-images/solution Architecture.png" acl = "public-read" } resource "aws_s3_bucket_object" "ui-ux-cloud-image6" { bucket = aws_s3_bucket.my-bucket.bucket key = "Register To An Event.png" source = "cloud-images/Register To An Event.png" acl = "public-read" }
- Now we will create a CloudFront distribution in front of S3 to reduce latency... In a nutshell, CloudFront is a Content Delivery Network (CDN)...
locals {
  s3_origin_id = "myS3Origin"
}

resource "aws_cloudfront_distribution" "s3_bucket_distribution" {
  origin {
    domain_name = aws_s3_bucket.my-bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    viewer_protocol_policy = "allow-all"
  }

  restrictions {
    geo_restriction {
      restriction_type = "blacklist"
      locations        = ["CA", "GB", "DE"]
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
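- Step 8 of the use-case asks for the CloudFront URL so it can be substituted into the code under /var/www/html. An output block of my own (not in the original) exposes it after apply:
output "cloudfront_domain_name" {
  # Use this domain in place of direct S3 URLs in the site code
  value = aws_cloudfront_distribution.s3_bucket_distribution.domain_name
}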
- Terraform does not execute resources in the order they are written; where the ordering matters, the "depends_on" argument lets us specify which component must be created before another. Here I am using a local-exec provisioner to automatically open Chrome on the local machine, pointed at the instance IP, once all the components have been created...
resource "null_resource" "final"{ depends_on = [ aws_instance.web,aws_s3_bucket.my-bucket,null_resource.null1,aws_s3_bucket_object.ui-ux-cloud-image1,aws_s3_bucket_object.ui-ux-cloud-image2,aws_s3_bucket_object.ui-ux-cloud-image3,aws_s3_bucket_object.ui-ux-cloud-image4,aws_s3_bucket_object.ui-ux-cloud-image5,aws_s3_bucket_object.ui-ux-cloud-image6,aws_cloudfront_distribution.s3_bucket_distribution ] provisioner "local-exec"{ command= "start chrome ${aws_instance.web.public_ip}" } }
Run "terraform init" once to download the required providers, then execute the whole file (containing all the code snippets) with the command "terraform apply --auto-approve"...
- All the components have been created successfully...
Finally, let's have a look at the RESULT...
- Destroy the whole infra by using the command "terraform destroy --auto-approve"...
- All Done... Tasks Solved...
THANK YOU...