Task 1 : Launch an Application Using Terraform
* For a company's products, relying on one single cloud is not ideal, so companies use a mesh of clouds, i.e. more than one cloud; this is where the term multicloud comes into play. To launch an app or website this way, we need a multicloud setup. But the biggest challenge is that every cloud platform has its own GUI, its own commands, its own syntax, and so on.
* In the multicloud world, we don't want to learn every cloud separately; we need one standardized tool that works with all of them, and that tool is Terraform. Terraform knows how to talk to cloud platforms like AWS, GCP, Alibaba, Azure, OpenStack, etc. The main requirement is that you know how to write Terraform code.
* Terraform :
- It is a standard tool (software) from HashiCorp.
- It has its own language, HCL (HashiCorp Configuration Language), which is declarative in nature and similar to JSON.
- AWS, OpenStack, Azure, GCP, Kubernetes, etc. are providers for Terraform.
* If we hand documents of manual steps to human beings, there is a 70-80% chance something will be missed. That is why it is highly recommended never to do anything manually, not even in the CLI. Always write code: the code itself serves as documentation, and it builds the complete infrastructure for us, i.e. Infrastructure as Code (IaC).
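Since HCL is declarative and close to JSON, the same resource can be written in either syntax; Terraform also accepts pure JSON in .tf.json files. A minimal sketch (the bucket name here is just an illustration, not part of this project):

```hcl
# HCL syntax (main.tf)
resource "aws_s3_bucket" "demo" {
  bucket = "my-demo-bucket"
}

# Equivalent JSON syntax (main.tf.json):
# {
#   "resource": {
#     "aws_s3_bucket": {
#       "demo": { "bucket": "my-demo-bucket" }
#     }
#   }
# }
```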
>> AWS Configure :
provider "aws" {
  profile = "mickey"
  region  = "ap-south-1"
}
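The "mickey" profile is expected to already exist locally (created with aws configure --profile mickey). On newer Terraform versions it is also recommended, though not required here, to declare the provider source explicitly:

```hcl
# Optional on newer Terraform versions: pin where the AWS provider comes from.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
```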
>> Creating the key and a Security Group that allows port 80
// Creating RSA key
variable "EC2_Key" {
  default = "keyname111"
}

resource "tls_private_key" "mynewkey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

// Creating AWS key pair
resource "aws_key_pair" "generated_key" {
  key_name   = var.EC2_Key
  public_key = tls_private_key.mynewkey.public_key_openssh
}

// Creating security group
resource "aws_security_group" "mysg" {
  depends_on = [
    aws_key_pair.generated_key,
  ]
  name        = "allow_http"
  description = "Allow http inbound traffic"

  ingress {
    description = "SSH Port"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "http from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "httpdsecurity"
  }
}
In my case I have created "keyname111" as the key pair.
In my case I have created "allow_http" as the security group, which allows port 80.
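One thing to keep in mind: the generated private key lives only in the Terraform state unless we export it. A small optional sketch using the "local" provider saves it to disk so we can SSH in manually (the filename here is just an illustration):

```hcl
// Optional: write the generated private key to a local .pem file.
// Assumes the hashicorp/local provider; filename is arbitrary.
resource "local_file" "private_key_pem" {
  content         = tls_private_key.mynewkey.private_key_pem
  filename        = "keyname111.pem"
  file_permission = "0400"
}
```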
>> Creating an EC2 Instance that uses the key and security group created in the steps above.
resource "aws_instance" "myterraformos1" {
  depends_on = [
    aws_security_group.mysg,
  ]
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = var.EC2_Key
  security_groups = [aws_security_group.mysg.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mynewkey.private_key_pem
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "OSterraform"
  }
}
>> Launch an EBS Volume and mount it on /var/www/html.
resource "aws_ebs_volume" "volterraform" {
  availability_zone = aws_instance.myterraformos1.availability_zone
  size              = 1
  tags = {
    Name = "volforterraform"
  }
}

resource "aws_volume_attachment" "attachvol" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.volterraform.id
  instance_id  = aws_instance.myterraformos1.id
  force_detach = true
}

resource "null_resource" "mountingvol" {
  depends_on = [
    aws_volume_attachment.attachvol,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mynewkey.private_key_pem
    host        = aws_instance.myterraformos1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/aaditya2801/terraformjob1.git /var/www/html/",
    ]
  }
}
In my case I have created "volforterraform" as an EBS Volume.
Here it is attached as /dev/sdh and mounted on /var/www/html.
>> The developer has uploaded the code to a GitHub repo, and the repo also contains some images.
Note - I have already cloned the GitHub code into /var/www/html in the steps above.
>> Creating an S3 bucket, deploying the images from the GitHub repo into it, and changing the permission to public readable.
resource "aws_s3_bucket" "s3bucketjob1" {
  bucket = "mynewbucketforjob1"
  acl    = "public-read"
}

// Putting objects in mynewbucketforjob1
resource "aws_s3_bucket_object" "s3_object" {
  bucket = aws_s3_bucket.s3bucketjob1.bucket
  key    = "snapcode.png"
  source = "C:/Users/Sharma/Desktop/snapcode.png"
  acl    = "public-read"
}
In my case I have created "mynewbucketforjob1" as an S3 bucket, set to public readable, and uploaded one image named "snapcode.png" into it.
>> Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
locals {
  s3_origin_id = aws_s3_bucket.s3bucketjob1.id
}

resource "aws_cloudfront_distribution" "CloudFrontAccess" {
  depends_on = [
    aws_s3_bucket_object.s3_object,
  ]

  origin {
    domain_name = aws_s3_bucket.s3bucketjob1.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled         = true
  is_ipv6_enabled = true
  comment         = "s3bucket-access"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "blacklist"
      locations       = ["CA"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  retain_on_delete = true
}

// Adding the CloudFront URL to our code
resource "null_resource" "addingurl" {
  depends_on = [
    aws_cloudfront_distribution.CloudFrontAccess,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mynewkey.private_key_pem
    host        = aws_instance.myterraformos1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo \"<img src='https://${aws_cloudfront_distribution.CloudFrontAccess.domain_name}/snapcode.png' width='300' height='330'>\" | sudo tee -a /var/www/html/index.html",
    ]
  }
}
In my case I have created one CloudFront distribution with my S3 bucket "mynewbucketforjob1" as the origin.
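To see the distribution's domain name without digging through the AWS console, an output block can be added; this is an optional sketch (the output name is arbitrary):

```hcl
// Optional: print the CloudFront domain name after apply.
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.CloudFrontAccess.domain_name
}
```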
>> Creating a Snapshot of the EBS Volume.
resource "aws_ebs_snapshot" "snap1" {
  depends_on = [
    null_resource.addingurl,
  ]
  volume_id = aws_ebs_volume.volterraform.id
  tags = {
    Name = "job1snap"
  }
}
In my case I have created "job1snap" as a snapshot of the EBS volume.
>> Launching a Website :
resource "null_resource" "deploywebapp" {
  depends_on = [
    aws_ebs_snapshot.snap1,
  ]
  provisioner "local-exec" {
    command = "start chrome ${aws_instance.myterraformos1.public_ip}/index.html"
  }
}
This is my website launched by Terraform. The whole stack comes up with a single "terraform apply" (after "terraform init" downloads the providers) and can be torn down with "terraform destroy".
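For convenience, the site's address can also be printed at the end of the apply; a small optional sketch (the output name is just an illustration):

```hcl
// Optional: print the instance's public IP so the site URL is easy to find.
output "website_ip" {
  value = aws_instance.myterraformos1.public_ip
}
```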