Task 1 : Launch Application Using Terraform

* For company products, relying on one single cloud is not a good idea, so companies use a mesh of clouds, i.e. more than one cloud; this is where the term multicloud comes into play. For launching an app or website we may need multicloud. But the biggest challenge is that every cloud platform has its own GUI, its own commands, its own syntax, etc.


* In the multicloud world, we don't want to learn every cloud separately; we need one standardized tool that works across all of them, and that tool is Terraform. It knows how to talk to cloud platforms like AWS, GCP, Alibaba, Azure, OpenStack, etc. The main requirement is that you know how to write Terraform code.


* Terraform :

- It is a standard tool (software).
- It has its own language, HCL (HashiCorp Configuration Language), which is declarative in nature and similar to JSON.
- AWS, OpenStack, Azure, GCP, Kubernetes, etc. are providers for Terraform.

* If we hand documents to human beings, there is a 70-80% chance we will miss something. That is why it is highly recommended never to do anything manually, not even in the CLI. Always write code; the code itself acts as documentation, and it builds the complete infrastructure for us, i.e. Infrastructure as Code (IaC).

>> AWS Configure :

provider "aws" {
  profile = "mickey"
  region  = "ap-south-1"
}
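As a hedged addition (assuming Terraform 0.13+; the exact version constraint below is only an example matching the resource syntax used in this article), pinning the providers this code relies on makes `terraform init` reproducible across machines:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.70" # example constraint; the inline S3 acl syntax used here predates provider v4
    }
    tls = {
      source = "hashicorp/tls"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
```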

>> Creating the key and a Security Group that allows port no. 80

// Creating RSA key


variable "EC2_Key" {
  default = "keyname111"
}

resource "tls_private_key" "mynewkey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}


// Creating AWS key-pair


resource "aws_key_pair" "generated_key" {
  key_name   = var.EC2_Key
  public_key = tls_private_key.mynewkey.public_key_openssh
}
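An optional sketch of my own (using the `local_file` resource from the `hashicorp/local` provider, which is not part of the original task): saving the generated private key to disk lets you SSH into the instance manually as well, not only through Terraform provisioners:

```hcl
resource "local_file" "private_key_pem" {
  content         = tls_private_key.mynewkey.private_key_pem
  filename        = "${path.module}/keyname111.pem" # assumed local path
  file_permission = "0400"                          # ssh refuses keys with loose permissions
}
```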


// Creating security group


resource "aws_security_group" "mysg" {

  depends_on = [
    aws_key_pair.generated_key,
  ]

  name        = "allow_http"
  description = "Allow http inbound traffic"
 
  ingress {
    description = "SSH Port"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "http from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  } 


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "httpdsecurity"
  }
}

  


In my case I have created "keyname111" as the key pair.


In my case I have created "allow_http" as a security group that allows port no. 80 (and port 22 for SSH).

>> Creating an EC2 instance using the key and security group created in the steps above.

resource "aws_instance" "myterraformos1" {

  depends_on = [
    aws_security_group.mysg,
  ]

  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = var.EC2_Key
  security_groups = [aws_security_group.mysg.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mynewkey.private_key_pem
    host        = aws_instance.myterraformos1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "OSterraform"
  }
}





>> Launch an EBS Volume and mount it on /var/www/html.

resource "aws_ebs_volume" "volterraform" {
  availability_zone = aws_instance.myterraformos1.availability_zone
  size              = 1
  tags = {
    Name = "volforterraform"
  }
}


resource "aws_volume_attachment" "attachvol" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.volterraform.id
  instance_id = aws_instance.myterraformos1.id
  force_detach = true
}


resource "null_resource" "mountingvol" {

  depends_on = [
    aws_volume_attachment.attachvol,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mynewkey.private_key_pem
    host        = aws_instance.myterraformos1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/aaditya2801/terraformjob1.git /var/www/html/"
    ]
  }
}



In my case I have created "volforterraform" as an EBS volume.


Here I have attached volforterraform as /dev/sdh (the Amazon Linux kernel exposes it as /dev/xvdh) and mounted it on /var/www/html.

>> The developer has uploaded the code into a GitHub repo, and the repo also contains some images.



Note - I have already cloned the GitHub code into /var/www/html in the steps above.

>> Creating an S3 bucket, deploying the images from the GitHub repo into it, and setting the permission to public-readable.

resource "aws_s3_bucket" "s3bucketjob1" {
  bucket = "mynewbucketforjob1"
  acl    = "public-read"
}


//Putting Objects in mynewbucketforjob1


resource "aws_s3_bucket_object" "s3_object" {
  bucket = aws_s3_bucket.s3bucketjob1.bucket
  key    = "snapcode.png"
  source = "C:/Users/Sharma/Desktop/snapcode.png"
  acl    = "public-read"
}
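Note: the inline `acl` argument and `aws_s3_bucket_object` match the AWS provider 2.x/3.x syntax used throughout this article; in provider v4+ they are deprecated in favour of separate resources. A rough sketch of the newer equivalent (an alternative to, not an addition alongside, the blocks above):

```hcl
resource "aws_s3_bucket" "s3bucketjob1" {
  bucket = "mynewbucketforjob1"
}

# ACL became its own resource in AWS provider v4+
resource "aws_s3_bucket_acl" "s3bucketjob1_acl" {
  bucket = aws_s3_bucket.s3bucketjob1.id
  acl    = "public-read"
}

# aws_s3_bucket_object was renamed to aws_s3_object
resource "aws_s3_object" "s3_object" {
  bucket = aws_s3_bucket.s3bucketjob1.bucket
  key    = "snapcode.png"
  source = "C:/Users/Sharma/Desktop/snapcode.png"
  acl    = "public-read"
}
```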



In my case I have created "mynewbucketforjob1" as an S3 bucket, set to public-readable, and uploaded one image named "snapcode.png" into it.

>> Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

locals {
  s3_origin_id = aws_s3_bucket.s3bucketjob1.id
}


resource "aws_cloudfront_distribution" "CloudFrontAccess" {

  depends_on = [
    aws_s3_bucket_object.s3_object,
  ]

  origin {
    domain_name = aws_s3_bucket.s3bucketjob1.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled         = true
  is_ipv6_enabled = true
  comment         = "s3bucket-access"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "blacklist"
      locations        = ["CA"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  retain_on_delete = true
}
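To grab the CloudFront URL without opening the AWS console, an `output` block (a small addition of my own, not in the original write-up) prints the domain name at the end of `terraform apply`:

```hcl
output "cloudfront_domain_name" {
  # Printed after apply; this is the value interpolated into the <img> tag below
  value = aws_cloudfront_distribution.CloudFrontAccess.domain_name
}
```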


// url added to our code


resource "null_resource" "addingurl" {

  depends_on = [
    aws_cloudfront_distribution.CloudFrontAccess,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mynewkey.private_key_pem
    host        = aws_instance.myterraformos1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # double quotes around the echo argument so the single quotes inside the tag survive
      "echo \"<img src='https://${aws_cloudfront_distribution.CloudFrontAccess.domain_name}/snapcode.png' width='300' height='330'>\" | sudo tee -a /var/www/html/index.html"
    ]
  }
}

In my case I have created one CloudFront distribution and provided my S3 bucket "mynewbucketforjob1" as the origin.

>> Creating a Snapshot of the EBS Volume.

resource "aws_ebs_snapshot" "snap1" {

  depends_on = [
    null_resource.addingurl,
  ]

  volume_id = aws_ebs_volume.volterraform.id

  tags = {
    Name = "job1snap"
  }
}



In my case I have created "job1snap" as a snapshot of the EBS volume.

>> Launching a Website :

resource "null_resource" "deploywebapp" {

  depends_on = [
    aws_ebs_snapshot.snap1,
  ]

  provisioner "local-exec" {
    # Windows-specific: opens the site in Chrome on the machine running terraform
    command = "start chrome ${aws_instance.myterraformos1.public_ip}/index.html"
  }
}
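Since `start chrome` only works on Windows, the instance's public IP can also be exposed as an `output` (an optional sketch of my own) so the site can be opened on any OS:

```hcl
output "instance_public_ip" {
  value = aws_instance.myterraformos1.public_ip
}
```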



This is my website launched by Terraform.

>>MY_CODE :

