Infrastructure as Code

Terraform is a multi-cloud Infrastructure as Code (IaC) tool from HashiCorp, written in Go and configured with the HashiCorp Configuration Language (HCL). It is an open-source command-line tool that can provision infrastructure on many different platforms and services such as IBM Cloud, AWS, GCP, Azure, OpenStack, VMware, and more. It uses a plugin-based model for providers and provisioners, which gives it the ability to support almost any service that exposes an API.
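
As a small illustration of that plugin model, the block below pins the AWS provider that the rest of this article uses. This is a minimal sketch assuming Terraform 0.13 or later; the version constraint is my own illustrative choice, not something the original code specifies:

terraform {
  required_providers {
    # "terraform init" reads this block and downloads the matching
    # AWS provider plugin; the version is an illustrative assumption.
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.0"
    }
  }
}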

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Task: Launch a Website on AWS with EFS using Terraform

1. Create a security group that allows port 80.

2. Launch an EC2 instance.

3. For this EC2 instance, use an existing or newly created key pair, and the security group created in step 1.

4. Launch one volume using the EFS service, attach it in your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and make them publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Basic Terraform commands:

- terraform init: initializes the working directory and downloads the required provider plugins.

- terraform plan: previews the changes Terraform will make.

- terraform validate: checks the configuration files for syntax errors.

- terraform show: displays the current state.

- terraform apply -auto-approve: builds the infrastructure without asking for confirmation.

- terraform destroy -auto-approve: tears everything down without asking for confirmation.

STEPS:

  1. Create a directory, named for example EFS_TERRAFORM.
  2. Inside it, create a Terraform file with the ".tf" extension, for example "efs_terra.tf".
  3. Open the file and write the Terraform code below; run the "terraform init" command in this folder first so that the required provider plugins are downloaded.
#CREATING AWS PROVIDER WITH REQUIRED PROFILE NAME

variable "enter_ur_profile_name" {
     type = string
  //   default = "Dev"
}

provider "aws" {                                 
  region = "ap-south-1"
  profile = var.enter_ur_profile_name
}
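
Because the variable has no default (the default line is commented out), Terraform will prompt for the profile name at apply time; you can also supply it non-interactively with terraform apply -var="enter_ur_profile_name=Dev". Either way, the named profile must already exist in your local AWS CLI configuration (set up with aws configure --profile Dev).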

4. Now we have to create a key pair via Terraform code.

#CREATING KEY PAIR


resource "tls_private_key" "keypairos" {
  algorithm   = "RSA"
  rsa_bits = 2048
}


resource "local_file" "keypairos2" {
    content     = tls_private_key.keypairos.private_key_pem
    filename = "keypair2011.pem"  
    file_permission = 0400	
}




resource "aws_key_pair" "deployer" {
  key_name   = "keypairdev"
  public_key = tls_private_key.keypairos.public_key_openssh
}
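
The generated private key is written to keypair2011.pem next to your Terraform files, so if you ever need to log in to the instance manually you can use, for example, ssh -i keypair2011.pem ec2-user@<instance-public-ip>.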

5. Adopting the default VPC so that later resources can reference it.

#SETTING DEFAULT VPC

resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }  

}

6. Creating a security group that allows port 80 for HTTP and port 22 for SSH connections.

#SETTING SECURITY GROUP


resource "aws_security_group" "security" {
  vpc_id = aws_default_vpc.default.id
  name        = "websecurity"
ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "websecuritygroup"
  }
}

7. Creating a Public Subnet.

# Creating a Public Subnet


resource "aws_subnet" "subnet_public1"{
     vpc_id = aws_vpc.default.id
     map_public_ip_on_launch = "true"
     availability_zone = "ap-south-1a"
     cidr_block = "10.0.0.0/24"




     tags = {
              Name = "Public VPC Subnet"
       }
}

8. Creating an EC2 instance and installing httpd and git on it.

#CREATING AN EC2 INSTANCE USING THE RESOURCES CREATED ABOVE


resource "aws_instance" "myoperatingsys" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name = aws_key_pair.deployer.key_name
  security_groups = [ "websecurity" ]
  
   connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.keypairos.private_key_pem
    host     = aws_instance.myoperatingsys.public_ip
  }




   provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y ",
      "sudo systemctl restart httpd",
       "sudo systemctl enable httpd",
        
    ]
  }


 tags = {
       Name = "MyOS1"
      }


}
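
One caveat worth noting: because no subnet_id is set, the instance can land in any availability zone of the default VPC, while the EFS mount target below is created only in ap-south-1a. EFS resolves its DNS name to the mount target in the instance's own AZ, so for a reliable mount the instance should be placed in the same AZ as the mount target (or a mount target created in every AZ you use).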

9. Creating an EFS file system, attaching it to the VPC, and then mounting it on our EC2 instance.

# Creating EFS File System

resource "aws_efs_file_system" "efsmyos" {
  creation_token = "firstefs"


  tags = {
    Name = "EFS_OS"
  }
}


resource "aws_efs_mount_target" "alpha" {
  file_system_id = aws_efs_file_system.efsmyos.id
  subnet_id      = aws_subnet.subnet_public1.id
}
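
One thing the mount target above leaves implicit: when no security_groups are given, AWS attaches the VPC's default security group to the mount target, which may not allow NFS traffic from the instance. A minimal sketch of opening the NFS port (2049) on our web security group is below; this rule is an addition of mine for illustration, not part of the original code:

resource "aws_security_group_rule" "nfs_ingress" {
  # Illustrative assumption: allow NFS (port 2049) so the instance
  # can reach the EFS mount target.
  type              = "ingress"
  from_port         = 2049
  to_port           = 2049
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.security.id
}

You would then pass security_groups = [aws_security_group.security.id] to the aws_efs_mount_target above so this group actually applies to the mount target.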




resource "null_resource"  "nullres" {
            provisioner "remote-exec" {
    inline = [
         "sudo yum -y install nfs-utils",
         "sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport ${aws_efs_file_system.efsmyos.id}:/   /var/www/html",
        "sudo su -c \"echo '${aws_efs_file_system.efsmyos.id}:/ /var/www/html nfs4 defaults,vers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0' >> /etc/fstab\""
     
    ]
 
 }
}

10. Creating an S3 bucket and cloning the GitHub repo locally with git clone so its images can be deployed to the bucket.

#CREATING AN S3 BUCKET 


resource "aws_s3_bucket" "s3_bucketos" {
	bucket = "os-bucket-dev0608"  
  	acl    = "public-read"


         connection {
         type     = "ssh"
         user     = "ec2-user"
          private_key = tls_private_key.keypairos.private_key_pem
          host     = aws_instance.myoperatingsys.public_ip
         }


    provisioner "local-exec" {
    command =  "sudo git clone https://github.com/DevendraJohari24/multicloud.git   terra",
    
  }
	
  	tags = {
   	Name        = "My-S3-bucket"
    	Environment = "Production"
  	}
	versioning {
	enabled= true
	}
 
}
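
Step 7 of the task also calls for copying the images into the bucket, which the snippet above stops short of. A minimal sketch using the provider's aws_s3_bucket_object resource follows; the file name image1.jpg and the terra/ path are assumptions about the cloned repo's contents:

resource "aws_s3_bucket_object" "image_upload" {
  # Assumed file name/path inside the cloned repo; adjust to match
  # the actual images in the repository.
  bucket       = aws_s3_bucket.s3_bucketos.bucket
  key          = "image1.jpg"
  source       = "terra/image1.jpg"
  acl          = "public-read"
  content_type = "image/jpeg"
}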

11. Creating a CloudFront distribution for this S3 bucket.

#CREATING A CLOUDFRONT DISTRIBUTION


locals {
  s3_origin_id = "myS3Origin"
}


resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.s3_bucketos.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
    
    custom_origin_config {
            		http_port = 80
            		https_port = 80
            		origin_protocol_policy = "match-viewer"
            	origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"] 
        	}
   
  }


  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
 


  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }


  tags = {
    Environment = "production"
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }
  depends_on = [
    aws_s3_bucket.s3_bucketos
  ]
}
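
To see the distribution's address after apply, a small output can be added (a straightforward sketch, using only the resource name defined above):

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}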

12. Place this CloudFront URL in our code. This provisioner snippet belongs inside a null_resource that reuses the same SSH connection shown earlier; note that on Amazon Linux the Apache service is httpd and its systemd environment file is /etc/sysconfig/httpd, not Debian's /etc/apache2/envvars.

provisioner "remote-exec" {
    inline = [
      "sudo bash -c 'echo url=${aws_cloudfront_distribution.s3_distribution.domain_name} >> /etc/sysconfig/httpd'",
      "sudo systemctl restart httpd"
    ]
  }

13. Save the public IP of the EC2 instance in publicip.txt and open it in the Chrome browser.

#OUTPUT IP SHOWN IN COMMAND PROMPT

output "myos_ip" {
    value = aws_instance.myoperatingsys.public_ip
}


#SAVE IP TO LOCAL FILE 
resource "null_resource" "nulllocal1" {
     provisioner "local-exec" {
             command = "echo ${aws_instance.myoperatingsys.public_ip} >publicip.txt"
          }
}


#OPEN THE SITE IN CHROME


resource "null_resource" "nulllocal0608" {
  # "start chrome" is a Windows shell command; on Linux, xdg-open
  # can be used instead.
  provisioner "local-exec" {
    command = "start chrome ${aws_instance.myoperatingsys.public_ip}"
  }
}

This is the whole code, with descriptions, for launching our infrastructure on AWS.

For the complete code you can also visit my GitHub account. The link is given below:

https://github.com/DevendraJohari24/AWShybridmulticloud/tree/master/Task2
