TASK-2: Automation using Jenkins for Launching a Website on AWS using EFS storage, S3 bucket and CloudFront CDN with the help of Terraform


Hello everyone, in this article I demonstrate how to launch a website on an AWS EC2 instance with EFS storage, with the website's images stored in an S3 bucket and served through a CloudFront content delivery network for faster delivery. All of this infrastructure is created using Terraform. I also use Jenkins for automation: Jenkins clones the Terraform scripts from GitHub, applies them to launch the infrastructure, and then copies the CloudFront link into our webpage using jQuery.

About Task:-

Perform task-1 using the EFS service instead of EBS on AWS, as follows:

Create/launch the application using Terraform:

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use an existing or provided key and the security group which we created in step 1.

4. Launch one volume using the EFS service, attach it in your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Optional

1) Those who are familiar with Jenkins or are in the DevOps AL have to integrate Jenkins into this task wherever they feel it can be integrated.

Additional:-

1) With the help of Jenkins, clone the Terraform scripts.

2) Launch the Terraform scripts using Jenkins.

3) Generate the CloudFront URL and the EC2 domain as text files in the same directory as your Terraform scripts.

4) With the help of jQuery, replace the old image links with the new CloudFront links.

5) Use the Build Pipeline view for running the jobs.

Tools Used:-

1) AWS CLI

2) AWS account

3) Terraform

4) Jenkins

5) GitHub account

First, create an IAM user in AWS and then set up the AWS CLI configuration with the aws configure command.
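A minimal sketch of this setup, assuming you have already generated an access key for the IAM user (all values shown are placeholders):

aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: ap-south-1
# Default output format [None]: json

# verify that the credentials work
aws sts get-caller-identity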

Terraform Script:-

Creating AWS provider

#AWS provider
provider "aws" {
  region  = "ap-south-1"
  profile = "default"
}

Generating a TLS key pair and uploading the public key to AWS

#tls private key
resource "tls_private_key" "tls_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

#local file
resource "local_file" "key_name" {
  depends_on = [tls_private_key.tls_key]
  content    = tls_private_key.tls_key.private_key_pem
  filename   = "tls_key.pem"
}

#aws key pair
resource "aws_key_pair" "tls_key" {
  depends_on = [local_file.key_name]
  key_name   = "tls_key"
  public_key = tls_private_key.tls_key.public_key_openssh
}

#Creating Security Group

In AWS, a security group works as a firewall that controls inbound and outbound traffic rules. Here we allow SSH (22), HTTP (80), and NFS (2049, needed for EFS).

#security group
resource "aws_security_group" "ssh-http-1" {
  depends_on  = [aws_key_pair.tls_key]
  name        = "ssh-http"
  description = "allow ssh and http"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "sg2"
  }
}

#Creating AWS EC2 instance and installing required packages

#aws instance
resource "aws_instance" "aws-os-1" {
  depends_on        = [aws_security_group.ssh-http-1]
  ami               = "ami-0447a12f28fddb066"
  instance_type     = "t2.micro"
  availability_zone = "ap-south-1a"
  security_groups   = ["ssh-http"]
  key_name          = aws_key_pair.tls_key.key_name

  user_data = <<-EOF
    #!/bin/bash
    sudo yum install httpd -y
    sudo yum install git wget -y
    sudo systemctl start httpd
    sudo systemctl enable httpd
  EOF

  tags = {
    Name = "aws-os-1"
  }
}

#Creating EFS file system and mounting it

#aws efs file system
resource "aws_efs_file_system" "test-efs" {
  depends_on     = [aws_instance.aws-os-1]
  creation_token = "test-efs"

  tags = {
    Name = "test-efs"
  }
}

#aws efs mount target
resource "aws_efs_mount_target" "alpha" {
  depends_on      = [aws_efs_file_system.test-efs]
  file_system_id  = aws_efs_file_system.test-efs.id
  subnet_id       = aws_instance.aws-os-1.subnet_id
  security_groups = [aws_security_group.ssh-http-1.id]
}

Null resource for installing the EFS packages, mounting the volume, and cloning the website code on the EC2 instance

resource "null_resource" "mount-efs" {

depends_on = [aws_efs_mount_target.alpha]




connection {

type = "ssh"

user = "ec2-user"

private_key = tls_private_key.tls_key.private_key_pem

host = aws_instance.aws-os-1.public_ip

}




provisioner "remote-exec" {

inline = [

"yum install amazon-efs-utils nfs-utils -y",

"sudo mount -t efs ${aws_efs_file_system.test-efs.id}:/ /var/www/html",

"sudo echo '${aws_efs_file_system.test-efs.id}:/ /var/www/html efs defaults,_netdev 0 0' >> /etc/fstab",

"sudo git clone https://github.com/Divyansh747/Terraform_AWS-task-2.git /var/www/html",

"sudo chmod 777 /var/www/html/index.html"

]

}

}
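If the provisioner succeeds, the EFS volume should now back the web root. A quick check, run over SSH on the instance (using the tls_key.pem generated above), might be:

# confirm /var/www/html is an NFS/EFS mount and the cloned site is present
df -hT /var/www/html
ls /var/www/html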

#Creating an AWS S3 bucket and uploading data to it

#aws s3 bucket
resource "aws_s3_bucket" "aws-s3-test" {
  depends_on    = [null_resource.mount-efs]
  bucket        = "awstestbucket747"
  acl           = "public-read"
  force_destroy = true

  provisioner "local-exec" {
    command = "wget https://github.com/Divyansh747/Terraform_AWS-task-2/raw/master/image-1.png"
  }
}




#aws s3 bucket object
resource "aws_s3_bucket_object" "object" {
  depends_on = [aws_s3_bucket.aws-s3-test]
  bucket     = "awstestbucket747"
  key        = "image-1.png"
  source     = "image-1.png"
  # make the image itself publicly readable, as required by step 7 of the task
  acl        = "public-read"
}
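Assuming the bucket and key names above, a quick way to confirm the image is reachable over the public S3 endpoint is:

# should return HTTP 200 (or a redirect to the regional endpoint) once the object is public
curl -I https://awstestbucket747.s3.amazonaws.com/image-1.png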

#Creating an AWS CloudFront distribution for the S3 bucket

#aws cloudfront with s3

resource "aws_cloudfront_distribution" "aws-cloudfront-s3" {

depends_on = [aws_s3_bucket_object.object]

origin {

domain_name = "awstestbucket747.s3.amazonaws.com"

origin_id = "S3-awstestbucket747"




custom_origin_config {

http_port = 80

https_port = 443

origin_protocol_policy = "match-viewer"

origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]

}

}

enabled = true




default_cache_behavior {

allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]

cached_methods = ["GET", "HEAD"]

target_origin_id = "S3-awstestbucket747"




# Forward all query strings, cookies and headers

forwarded_values {

query_string = false




cookies {

forward = "none"

}

}







viewer_protocol_policy = "allow-all"

min_ttl = 0

default_ttl = 3600

max_ttl = 86400

}




# Restricts who is able to access this content

restrictions {

geo_restriction {

# type of restriction, blacklist, whitelist or none

restriction_type = "none"

}

}




# SSL certificate for the service.

viewer_certificate {

cloudfront_default_certificate = true

}




provisioner "local-exec" {

command = "echo ${self.domain_name}/${aws_s3_bucket_object.object.key} > cloudfront_link.txt"

}




provisioner "local-exec" {

command = "echo ${aws_instance.aws-os-1.public_ip} > ec2_link.txt"

}



}
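Before handing this over to Jenkins, it can be worth running the usual Terraform workflow once by hand from the directory holding these scripts; a minimal sketch:

terraform init                    # download the aws, tls, local and null providers
terraform validate                # catch syntax errors early
terraform apply --auto-approve    # build the whole infrastructure
terraform destroy --auto-approve  # tear everything down again when finished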

Step-1: Creating JOB-1

This job clones the Terraform GitHub repository, initialises the Terraform workspace, and applies it.


Add your scripts repository here.



Set Poll SCM to * * * * * so that the repository is polled every minute.


Add your SSH site here; I am using the SSH credentials of my own machine's root account.


Add a post-build step to execute a shell script over SSH, so that the script below runs automatically after the build.

This script first checks whether the workspace is already present. If it is, the old Terraform infrastructure is destroyed and a fresh copy of the repository is cloned; otherwise the repository is simply cloned. After cloning, terraform init and terraform apply are run.

script:-

if ls /root/ | grep HybridCLoud2
then
echo "Workspace already available"
# destroying old terraform
cd /root/HybridCLoud2/ && terraform destroy --auto-approve
cd /root/
rm -rf /root/HybridCLoud2
git clone https://github.com/Divyansh747/HybridCLoud2.git /root/HybridCLoud2
else
echo "cloning workspace repository"
git clone https://github.com/Divyansh747/HybridCLoud2.git /root/HybridCLoud2
fi

#initialise terraform workspace
cd /root/HybridCLoud2/ && terraform init

#running terraform code
cd /root/HybridCLoud2/ && terraform apply --auto-approve



Step-2: Create JOB-2

In this job we will run the testrun.sh script.

testrun.sh script

chmod 400 /root/HybridCLoud2/tls_key.pem

scp -i   /root/HybridCLoud2/tls_key.pem -o "StrictHostKeyChecking no" /root/HybridCLoud2/test.sh  ec2-user@$(cat /root/HybridCLoud2/ec2_link.txt):/home/ec2-user/

scp -i   /root/HybridCLoud2/tls_key.pem -o "StrictHostKeyChecking no" /root/HybridCLoud2/cloudfront_link.txt  ec2-user@$(cat /root/HybridCLoud2/ec2_link.txt):/home/ec2-user/

ssh -tt -i /root/HybridCLoud2/tls_key.pem -o "StrictHostKeyChecking no" ec2-user@$(cat /root/HybridCLoud2/ec2_link.txt) "chmod 777 test.sh; ./test.sh"


This script first restricts the permissions on our AWS key, then copies the test.sh script and cloudfront_link.txt to the AWS instance, and finally connects over SSH and executes the test script on the AWS server.

test.sh script

#!/bin/bash

echo "## adding script in index.html ##"

sudo echo "<script>\$(function(){\$(\"img[src='']\").attr(\"src\", 'https://$(cat cloudfront_link.txt)');});</script>">> /var/www/html/index.html

echo "## index.html successfully updated ##"
echo "## restarting httpd service ##"

sudo systemctl restart httpd

echo "## httpd successfully restarted  ##"
echo "## exiting script... ##"

This script runs on the AWS instance. It appends a small jQuery snippet to index.html that reads the CloudFront link from cloudfront_link.txt and places it inside the empty <img src=''> tag, then restarts the httpd service so that our page is served with the new CloudFront link pointing to the image.

Follow the same steps as in the screenshots below.


Set "Build after other projects are built" and select JOB-1 as the upstream job.


Execute a shell script using SSH:

bash /root/HybridCLoud2/testrun.sh
echo "## webpage link:- ##"
cat /root/HybridCLoud2/ec2_link.txt

To launch the website, copy the webpage link and paste it into the URL bar,

or

copy the link from the ec2_link.txt file available in your workspace directory.
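As a quick check from the Jenkins machine (assuming the ec2_link.txt written by the Terraform provisioner above), you can also fetch the page from the command line:

# fetch the page and confirm it now references the CloudFront domain
curl -s http://$(cat /root/HybridCLoud2/ec2_link.txt) | grep cloudfront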

Step-3: Create Build Pipeline view

First, download the Build Pipeline plugin and install it.


Click on the plus icon visible in the screenshot above.


Select Build Pipeline View and set the name as in the screenshot above.


Now select JOB-1 as the initial job.


All the jobs required by this task have now been created successfully.

GitHub repositories:-

https://github.com/Divyansh747/HybridCLoud2.git

https://github.com/Divyansh747/Terraform_AWS-task-2.git
