How to Create/Launch Application using Terraform
Abhilash Sharma
Android Developer @SISGAIN | Android App Development | Kotlin | Java
TASK 1 :
1. Create a key pair and a security group that allows port 80.
2. Launch an EC2 instance.
3. In this EC2 instance, use the key and security group created in step 1.
4. Launch one EBS volume and mount it on /var/www/html.
5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
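All of the resource blocks shown below can be driven with the standard Terraform workflow. As a sketch (the single-file layout and the file name task1.tf are assumptions, not part of the original write-up):

```shell
# Hypothetical layout: all resources below saved as task1.tf in the current directory
terraform init                    # download the AWS, TLS and null provider plugins
terraform validate                # check the configuration for syntax errors
terraform apply -auto-approve     # create key, security group, EC2, EBS, S3, CloudFront
terraform destroy -auto-approve   # tear everything down when finished
```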
TERRAFORM :
Terraform is an infrastructure-as-code tool. Infrastructure-as-code tools let us create infrastructure, such as databases and web servers, from written code that is then converted into the required resources. In our case, we're going to use it (among other things) to create an S3 bucket. An S3 bucket is an easy way to store files in AWS, and it can even act as a website. So why choose Terraform? To keep the justification simple, the main reason is that Terraform is platform agnostic. What do I mean by "platform agnostic"? I mean that you can use Terraform to provision many different types of cloud resources, from GCP to Stripe apps. That's great for your learning experience, as you only have one tool to use.
How to install Terraform on Red Hat Linux :
*) Download the Terraform zip file to the system
cmd-> wget https://releases.hashicorp.com/terraform/0.12.26/terraform_0.12.26_linux_amd64.zip
*) Now unzip the downloaded file
cmd-> unzip terraform_0.12.26_linux_amd64.zip
*) Make a directory and move the unzipped binary into that directory.
cmd-> mkdir downloads
cmd-> mv terraform downloads/
cmd-> ls -a
*) Now edit .profile
cmd-> vim ~/.profile
*) Enter insert mode by pressing "i" and add this line (note that ~ does not expand inside double quotes, so $HOME is used):
export PATH="$PATH:$HOME/downloads"
*) Now press ESC to leave insert mode, then type ":wq" and press Enter
cmd-> source ~/.profile
cmd-> terraform --version
Configure AWS Terraform provider :
provider "aws" {
  region  = "ap-south-1"
  profile = "abhilash"
}
You can create your profile using aws configure. [ $ aws configure --profile profile_name ]
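For reference, `aws configure --profile abhilash` stores the keys in ~/.aws/credentials under the profile name, in roughly this form (the key values here are placeholders, not real credentials):

```ini
[abhilash]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```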
Create Key :
resource "tls_private_key" "task1key" {
  algorithm = "RSA"
}
resource "aws_key_pair" "keypair" {
  key_name   = "task1key"
  public_key = tls_private_key.task1key.public_key_openssh
  depends_on = [ tls_private_key.task1key ]
}
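If you also want to SSH into the instance manually, the generated private key can be written to disk with a `local_file` resource. A minimal sketch (the file name mykey.pem is an assumption):

```hcl
resource "local_file" "key_file" {
  content         = tls_private_key.task1key.private_key_pem
  filename        = "mykey.pem" # hypothetical local file name
  file_permission = "0400"      # owner read-only, as ssh requires
}
```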
Create Security Group :
*) This security group will allow ports 80 and 22.
resource "aws_security_group" "task1_sec_group" {
  name        = "task1_sec_group"
  description = "Allows SSH and HTTP"
  vpc_id      = "vpc-9ff7eaf7"
  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [ "0.0.0.0/0" ]
  }
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [ "0.0.0.0/0" ]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [ "0.0.0.0/0" ]
  }
  tags = {
    Name = "task1_sec_group"
  }
}
Create AWS Instance :
resource "aws_instance" "myin1" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.keypair.key_name
  security_groups = [ "task1_sec_group" ]
  provisioner "remote-exec" {
    connection {
      agent       = false
      type        = "ssh"
      user        = "ec2-user"
      private_key = tls_private_key.task1key.private_key_pem
      host        = self.public_ip
    }
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }
  tags = {
    Name = "mytask1os"
  }
}
*) After launching the instance, the "httpd" web server, "php" and "git" are installed automatically, and the httpd service is started and enabled.
Print Availability Zone :
*) The availability zones of the EBS volume and the instance must match in order to attach the external volume to the instance.
output "az" {
  value = aws_instance.myin1.availability_zone
}
Print Public IP :
*) We need the public IP of the EC2 instance for remote login over SSH.
output "pubip" {
  value = aws_instance.myin1.public_ip
}
Create EBS Volume :
resource "aws_ebs_volume" "esb1" {
  availability_zone = aws_instance.myin1.availability_zone
  size              = 1
  tags = {
    Name = "myebs1"
  }
}
Attach EBS Volume to Instance :
resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdd"
  volume_id    = aws_ebs_volume.esb1.id
  instance_id  = aws_instance.myin1.id
  force_detach = true
}
Mounting EBS Volume in EC2 Instance :
resource "null_resource" "mounting" {
  depends_on = [ aws_volume_attachment.ebs_att ]
  connection {
    agent       = false
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.task1key.private_key_pem
    host        = aws_instance.myin1.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdd",
      "sudo mount /dev/xvdd /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/abhilash-sharm/launching-EC2-instance-using-terraform.git /var/www/html"
    ]
  }
}
*) The device attached as /dev/sdd appears as /dev/xvdd inside the instance. Once the volume is attached successfully, it is formatted and mounted, and the GitHub code is automatically cloned into the appropriate httpd directory.
Create S3 Bucket :
resource "aws_s3_bucket" "task1-bucket" {
  bucket = "abhilash1"
  acl    = "private"
  versioning {
    enabled = true
  }
  tags = {
    Name        = "task1-bucket"
    Environment = "Dev"
  }
}
Upload object/image to S3 Bucket :
*) To get the image from GitHub, I first saved it on the local system using Jenkins and then gave the source path of the cloned repository.
resource "aws_s3_bucket_object" "task1bucket_object" {
  key    = "myimage"
  bucket = aws_s3_bucket.task1-bucket.id
  source = "/home/akshit/Downloads/external-content.duckduckgo.com.jpeg"
  acl    = "public-read"
}
Create Cloudfront :
*) This code attaches the S3 bucket to CloudFront as an origin.
resource "aws_cloudfront_distribution" "task1_cloudfront" {
  origin {
    domain_name = "abhilash1.s3.amazonaws.com"
    origin_id   = "S3-abhilash1-id"
    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = [ "TLSv1", "TLSv1.1", "TLSv1.2" ]
    }
  }
  enabled = true
  default_cache_behavior {
    allowed_methods  = [ "DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT" ]
    cached_methods   = [ "GET", "HEAD" ]
    target_origin_id = "S3-abhilash1-id"
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl     = 0
    default_ttl = 3600
    max_ttl     = 86400
  }
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
*) The viewer policy allows both HTTP and HTTPS, so the content inside the bucket can be accessed through CloudFront on port 80 as well.
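To complete step 8 of the task, the image URL in the code under /var/www/html should point at the CloudFront domain rather than at S3 directly. An output block (a sketch) makes that domain easy to grab after apply:

```hcl
output "cloudfront_url" {
  # e.g. d1234abcd.cloudfront.net — use this in place of the S3 URL in index.html
  value = aws_cloudfront_distribution.task1_cloudfront.domain_name
}
```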
Launch Website in Browser :
resource "null_resource" "remote" {
  depends_on = [ null_resource.mounting ]
  provisioner "local-exec" {
    command = "google-chrome ${aws_instance.myin1.public_ip}"
  }
}
*) Once the volume is mounted and the code is cloned into /var/www/html, this code launches our "index.html" file in the Google Chrome browser.
Role of Jenkins :
Jenkins copies the GitHub code to our local system so the image can be uploaded to the S3 bucket, and it runs all the Terraform commands such as init and apply. Hence, after writing the Terraform code, running a single Jenkins job gives us end-to-end automation, and our instance is launched automatically with the following results :
1. The extra EBS volume is mounted on the folder /var/www/html.
2. The httpd server is started.
3. An S3 bucket is created holding the static image.
4. A CloudFront distribution is created using the S3 bucket domain name.
5. Finally, the "index.html" file opens automatically in Google Chrome using the public IP of our instance.
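As a sketch, the Jenkins job's "Execute shell" build step could look like this (the repo URL is the one cloned above; the workspace directory name is an assumption):

```shell
# Hypothetical Jenkins "Execute shell" build step
git clone https://github.com/abhilash-sharm/launching-EC2-instance-using-terraform.git repo
cd repo
terraform init                  # fetch providers inside the Jenkins workspace
terraform apply -auto-approve   # build the whole stack without manual confirmation
```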