How to create complete infrastructure for a cloud-based project using AWS, Terraform and Jenkins?

Hello Everyone,

Technology is moving towards automation, so in this article we are going to see how to create the complete infrastructure for a cloud-based web application and automate its deployment using AWS, Terraform, and Jenkins; automating the deployment saves a lot of time and effort for everyone involved.

Problem Overview :

  1. We have to create infrastructure that brings together various AWS services. We have to use EC2, EBS, S3, key pairs, security groups, CloudFront (CDN), and more to run our application on the cloud.
  2. In this problem we have to use Terraform for managing the infrastructure and Jenkins for automation.

Steps to follow :

  1. First of all, we have to create a key pair for accessing the instances. We will also create a security group that exposes port 80 over the TCP protocol.
  2. Next, we create an EC2 instance that uses this key pair and security group.
  3. After this, we create an EBS volume and mount it on the EC2 instance at the folder where our code is located.
  4. The developer will upload the code to a GitHub repository. This repository will also contain some images.
  5. We create an S3 bucket to store all the static files and images of our application.
  6. After this setup, we create a couple of Jenkins jobs: one for deploying images to the S3 bucket, and one for moving code from GitHub to the document root of our application.
  7. We create a build pipeline to visualize these jobs graphically.
  8. Finally, we create a CloudFront distribution using the S3 bucket, which helps deliver the content of our application faster.

So here is my solution for this problem . . . .

Before getting started with the actual part of the solution, we have to do certain configurations. We are going to create an AWS profile on our local machine so that we can use the aws command there.

To create an IAM role and download the credentials, you can refer to this document . . .

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

I have created one role in my AWS account and also downloaded the credentials file to my machine.

aws configure --profile=<YOUR profile name>

We have to run the above command to select your profile or to create one. After running this command, we have to fill in some credentials, which are available in the credentials file we have already downloaded.

AWS Access Key ID [****************QZKO]:

AWS Secret Access Key [****************Zf2k]:

Default region name [ap-south-1]:

Default output format [None]:


Make sure you fill in the correct information. Also, we have to keep this credentials file private so that no one else can access it.

Now, as we have to create our whole infrastructure using Terraform, we will first set up our workspace.

First of all, install Terraform on your local machine.

https://learn.hashicorp.com/terraform/getting-started/install.html

How to run terraform script :

  • To run a Terraform script on our machine, we first have to change into the directory containing all the .tf files.
  • If we are running our scripts for the very first time, then we have to use this command
terraform init


  • Afterwards, just use this command to run the script
terraform apply

NOTE : Test your configuration after writing every script.

So let's get started . . .

provider "aws" {
    region = "ap-south-1"
    profile = "admin"

}

Using this code snippet, we configure Terraform to work with AWS.

As we have to create a web server, we will require an operating system, and since we are going to use AWS for this, we will require an instance. In order to launch the instance, there are a few things we have to create first. The first is a key pair, the second is a security group for our instance, and the final thing is a volume so that the issue of ephemeral storage is solved.

EC2 Security group :

A security group acts as a virtual firewall for your instance to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance.
resource "aws_security_group" "secg" {
  name        = "secg"
  description = "Httpd and ssh"
  vpc_id      = "vpc-38564a50"


  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "secg"
  }
}

Here we have allowed SSH because we have to copy everything to the instance over SSH. HTTP is there because our clients are going to access our website using this protocol.

EC2 Key Pair :

Amazon EC2 uses public key cryptography to encrypt and decrypt login information. Public key cryptography uses a public key to encrypt a piece of data, and then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair.
resource "tls_private_key" "keygen" {
  algorithm   = "RSA"
}


resource "aws_key_pair" "sshkey111" {
  key_name   = "sshkey111"
  public_key = "${tls_private_key.keygen.public_key_openssh}"
}

We have to create the key using a resource called tls_private_key. This code may fail due to unavailability of plugins, so before running

terraform apply

this command, make sure to run

terraform init

This will automatically install the required plugins for key generation.
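
If you also want a local copy of the generated private key (for example so that Jenkins can later use it for SCP), one option is the local_file resource from the "local" provider. This is only a minimal sketch and was not part of my original scripts; the file name here is an assumption chosen to match the key file used in the Jenkins job later.

resource "local_file" "sshkey_pem" {
  # Not from the original scripts: save the generated private key to disk.
  # The file name is assumed; adjust it to wherever you keep your keys.
  content         = "${tls_private_key.keygen.private_key_pem}"
  filename        = "sshkey1111.pem"
  file_permission = "0400"
}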

EBS Volume :

An Amazon EBS volume is a durable, block-level storage device that you can attach to one instance or to multiple instances at the same time. You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application.

I have created 1 GB volume here.
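
The Terraform code for this volume is not shown above, but based on the volume attachment below (which refers to aws_ebs_volume.storage.id), the 1 GB size, and the instance's availability zone, it would look roughly like this; the Name tag is my assumption.

resource "aws_ebs_volume" "storage" {
  # 1 GB volume in the same availability zone as the instance,
  # so that it can be attached to it.
  availability_zone = "ap-south-1b"
  size              = 1

  tags = {
    Name = "storage"   # assumed tag name
  }
}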

Now we are all set to launch our instance, but we still have an issue: how will we connect the volume to the instance? For this purpose we will require a volume attachment. But this resource requires the IDs of both the volume and the instance, so we will first create the instance.

EC2 Instance creation :

An instance is a virtual server in the AWS cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance.

We have to use this instance for hosting our web server, hence we will install and set up a few things to prepare our environment.

resource "aws_instance" "webserver" {
  ami            =      "ami-0447a12f28fddb066"
  availability_zone = "ap-south-1b"
  instance_type  =      "t2.micro"
  key_name       =      "sshkey1111"
  security_groups=      [ "secg" ]


  user_data=<<-EOF
    #! /bin/bash
    sudo yum install httpd -y
    sudo systemctl start httpd 
    sudo systemctl enable httpd
    EOF


  tags = {
    Name = "webserver"
  }


}

We have created the instance and attached the key and security group which we created earlier in this project. Here, user_data lets us write a script, and it will run all the commands in our instance after it launches successfully.

We also have to attach the volume to this instance. As we have both resources available now, we will create the resource for attaching the volume to the instance.

EBS Volume attachment :

We attach the EBS volume to the instance as follows.
resource "aws_volume_attachment" "storage-attachment"{
  device_name   = "/dev/xvdh"
  volume_id     = "${aws_ebs_volume.storage.id}"
  instance_id   = "${aws_instance.webserver.id}"
}

Here, we have used the device name "/dev/xvdh" because the volume will appear as the device xvdh inside our instance. The other arguments are quite self-explanatory.

After the attachment, just update the instance.tf file as -

resource "aws_instance" "webserver" {
	  ami            =      "ami-0447a12f28fddb066"
	  availability_zone = "ap-south-1b"
	  instance_type  =      "t2.micro"
	  key_name       =      "sshkey1111"
	  security_groups=      [ "secg" ]
	  iam_instance_profile = "${aws_iam_instance_profile.s3_profile.name}"
	

	  user_data=<<-EOF
	    #! /bin/bash
	    sudo yum install httpd -y
	    sudo systemctl start httpd 
	    sudo systemctl enable httpd
	    sudo mkfs -t ext4 /dev/xvdh
	    sudo mount /dev/xvdh /var/www/html
	    EOF
	

	  tags = {
	    Name = "webserver"
	  }
	

	}

Now we are all set to launch our web server as we have fulfilled all the requirements.

We can just place our code in the document root of the httpd server, i.e. /var/www/html, and we are good to go.

But what if we have to store some static objects like images, videos, or documents?

In this case we will require object storage. The object storage service in AWS is known as S3.

S3 Bucket:

Object storage built to store and retrieve any amount of data from anywhere. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

Let's say we are going to use some photographs on our website. Those photos will rarely change between updates; most of the time they will remain the same. In such cases we can store this data in an S3 bucket.

resource "aws_s3_bucket" "bk2bk" {
  bucket = "bk2bk"
  acl    = "private"
  region = "ap-south-1"


  tags = {
    Name = "bk2bk"
  }
}


We have intentionally kept the access to this bucket private.

S3 Public Access Block :

S3 Block Public Access provides controls across an entire AWS Account or at the individual S3 bucket level to ensure that objects never have public access, now and in the future.

We know that in order to access any content from an S3 bucket publicly, that content should be public, but I have kept the bucket private because it is not necessary to keep the entire bucket public. In another situation, we may have multiple buckets and want to control their access at any time from a single place. AWS provides the public access block to achieve this, and I have used it here.

resource "aws_s3_bucket_public_access_block" "bk2bk_public" {
    bucket = "bk2bk"
    block_public_acls = false
    block_public_policy = false
}

Do we have our server ready?

No. Though our environment is ready and can launch our web server now, any server is incomplete without the code or website deployed on it.

We have created some code. To manage our code, we are using GitHub. We have to copy our code from the repository and place it in the /var/www/html folder of our instance.

I am going to use Jenkins in order to perform these tasks.

What do we have to do in Jenkins?

First of all, Jenkins will watch the GitHub repo for code updates; as soon as an update is found, it will pull all the code. After this we have two tasks: anything in the code folder has to be copied to the EBS volume, and anything in the Images folder should go to the S3 bucket.

Job 1 : To copy all the code from GitHub to the EBS volume

Let's create a job first; we have created a job named EBS deployer. First of all, add source code management to the job.


We are going to copy the code to the EBS volume over the SSH protocol. SSH provides a utility called SCP, i.e. secure copy, which will help us do this task.

Secure Copy (SCP)

In Unix, you can use SCP (the scp command) to securely copy files and directories between remote hosts without starting an FTP session or logging into the remote systems explicitly.

Jenkins provides a plugin for this called the Hudson SCP publisher plugin, but we can also use the normal SSH plugins. I have used the SSH plugins named Publish over SSH and SSH Plugin.


Install these plugins and restart Jenkins.

After successful installation of both plugins, we have to configure SSH in Jenkins > Manage Jenkins > Configure System > Publish over SSH.


We can also add the path to the key file here instead of the actual key.

After this, we will move on to the configuration of our job.

sudo cp -rfv * /cloud_server

This command copies all the content of the GitHub repository from the Jenkins workspace to the /cloud_server folder on the local system.

sudo scp -r -i /root/.ssh/sshkey1111.pem /cloud_server/code/* 13.233.253.117:/var/www/html

and this command copies the content from the local machine to the remote AWS instance.

Note : I failed at this step multiple times and ran into many kinds of problems while performing it. Just for your reference, I am giving a few articles; do refer to them if necessary.

Link 1 Link Link Link 4

A successful build of this job completes most of our task. Now we just have to build a job to copy objects to the S3 bucket.

Job 2 : To copy objects from GitHub to the S3 bucket

Jenkins is a great tool; it has plugins for almost everything. It also provides a plugin to copy objects to an S3 bucket, named the S3 publisher plugin.


We will install this plugin and restart our Jenkins server.

After successful installation, we will configure this plugin in Jenkins > Manage Jenkins > Configure System > S3 publisher.


Here, we can use two options, viz. an access/secret key pair or an IAM role. Though I have provided keys here, we can also use the IAM role approach. I have also created the IAM resources; you can get my code in my GitHub repository. Links to all the required files:

IAM.tf s3_policy.json s3_role.json

You can also refer to this video : video_link
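
For reference, here is a rough sketch of what IAM.tf could contain, based on the aws_iam_instance_profile.s3_profile reference used in instance.tf above; the role and policy names are my assumptions, and the two JSON files are the trust policy and the S3 permissions policy from the repository.

resource "aws_iam_role" "s3_role" {
  # Assumed name; the trust policy (s3_role.json) should allow ec2.amazonaws.com to assume this role.
  name               = "s3_role"
  assume_role_policy = "${file("s3_role.json")}"
}


resource "aws_iam_role_policy" "s3_policy" {
  # Assumed name; s3_policy.json should grant the required S3 permissions.
  name   = "s3_policy"
  role   = "${aws_iam_role.s3_role.id}"
  policy = "${file("s3_policy.json")}"
}


resource "aws_iam_instance_profile" "s3_profile" {
  # This is the profile name actually referenced in instance.tf.
  name = "s3_profile"
  role = "${aws_iam_role.s3_role.name}"
}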

Let's move on to the actual configuration of this job. We first have to provide the details of the SCM system.


Now, we will set this job to trigger after the EBS deployer job has been built.


The last and most important task is to create a post-build action to copy data from the images folder to the S3 bucket.


Now we are ready to host our website with all the content inside it.


But we have to use CloudFront in order to reduce latency for the users, so we are going to create a CloudFront distribution (the Content Delivery Network of AWS).

CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.
resource "aws_cloudfront_distribution" "front" {
  origin {
    domain_name = "${aws_s3_bucket.bk2bk.bucket_regional_domain_name}"
    origin_id   = "S3-bk2bk"


    custom_origin_config {
      http_port = 80
      https_port = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols = ["TLSv1" , "TLSv1.1" , "TLSv1.2"]
    }
  }


  enabled = true 
  is_ipv6_enabled     = true
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-bk2bk"


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
  
  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US" , "IN"]
    }
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }


}

I have tested this resource; the image is there inside the bucket.


I can access my image using the CloudFront domain, which means I have successfully deployed CloudFront.
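
If you prefer Terraform to print this domain name after apply instead of copying it from the console, a small output block like the following works; the output name itself is just my choice.

output "cloudfront_domain" {
  # Prints the distribution's domain name after terraform apply.
  value = "${aws_cloudfront_distribution.front.domain_name}"
}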

That's all we have to configure in this task. Finally, apply the Terraform scripts and upload the code to GitHub. To check whether the Jenkins jobs are running properly, I have used the Build Pipeline plugin.

Build Pipeline :


As our code and objects are now deployed, let's test our website using the instance IP.


Yeah, It's working fine.

So finally, The task is completed . . .

Thank you for reading this article. Please press the like button if you found it helpful.
