CLOUD COMPUTING PART-1

AUTOMATING THE AWS INFRASTRUCTURE USING TERRAFORM (IaC)

Hello guys, welcome to my first article of cloud computing, part 1. In this article I am going to show you how we can automate the creation of an entire infrastructure on AWS (Amazon Web Services) without touching the web user interface: in a single go, our entire infrastructure is created and our web app is deployed.

Before we start, first let me explain some basic terms about the CLOUD.

What is CLOUD ?

As we know, if we want to run or execute any program we need an OPERATING SYSTEM (OS), and the operating system needs hardware to run. What type of hardware? Hardware like RAM (Random Access Memory) and CPU (Central Processing Unit); these two pieces of hardware working together are known as the COMPUTING UNIT. We also need storage for storing files, but for computation we only need the compute unit. Now, what is the cloud? The cloud, we can say, is a program that manages existing technologies and gives us various types of services, which we can access with the help of the internet. The physical resources behind this program are located all over the world in facilities known as DATA CENTERS. We often call the cloud "cloud computing" because, as I explained above about the computing unit, we hand our computational tasks and jobs to the cloud program, the program contacts its compute resources and hardware, and it gives us the output over the network. Instead of our local system doing the computation, we give the computational job to the cloud; that is why we call it CLOUD COMPUTING.


Okay, we understand what cloud and computing are, but tell me, why is it so famous and why are companies using it? The answer is: "the main benefit of cloud is COST."

Let me explain in the simplest way. Cloud computing companies like AWS, GCP, and Azure, which give us cloud services, manage their own infrastructure. For example, consider a new business startup that wants to deploy a website its clients can access. We know that running a website needs hardware, a maintenance team, a device/management team, engineers, and tons of other things, so it is a huge investment in infrastructure, and it takes a chunk of money to maintain, distribute, and update. By opting for cloud computing, a heck of a lot of money is saved. Further, a business receives more for less by outsourcing computing resources from a third party through cloud technology. This saves the expenses incurred on actual resource cost, maintenance, and in-house hosting.


Now, you may have heard about public and private clouds. So what are public and private clouds?

CLOUD COMPUTING MODELS - DEPLOYMENTS



Public cloud

A public cloud is a service that provides infrastructure and services to the public: you or your organisation, whether a small company or a big one. The service may be free or sold on an on-demand basis, allowing customers to pay only per usage; this concept is known as pay-as-you-go. Put simply, you are renting servers on the internet. These servers are owned by a company that has made its hardware available to the whole world: whoever can pay can come and rent infrastructure, and that is why we call it public. The best-known examples of public clouds are AWS (Amazon Web Services), GCP (Google Cloud Platform), Microsoft Azure, etc.


Benefits:-

  1. Cost Effective:- Since the public cloud shares the same resources with a large number of customers, it turns out inexpensive.
  2. Reliability:- The public cloud provides fault tolerance and high availability of resources.
  3. Flexibility:- The public cloud can smoothly integrate with a private cloud or other platforms.
  4. Location Independence:- Public cloud services are provided over the internet, so everyone can use them easily from anywhere, ensuring location independence.
  5. Costing style:- The public cloud is based on a pay-per-use model, and resources are accessible whenever the customer needs them.
  6. Highly Scalable:- Public cloud resources can be scaled up or down according to requirements.

Disadvantages

  1. Lack of Security:- Your data is hosted off-site and resources are shared publicly, so a higher level of security is not ensured.
  2. Less Customizable:- We can't go inside the public cloud mechanism or see internally which software or packages the provider uses, so it is less customizable than a private cloud.

Private cloud

From the name "private" you will get an idea of what this setup is. Basically, an organisation or company has its own local cloud infrastructure: local servers, software, maintenance engineers, cloud engineers, etc., all managed and maintained internally. Private cloud services are offered either over the internet or over a private internal network, and only to selected users instead of the general public. Private clouds have a higher level of security because they work on an internal network, so there is no chance that sensitive data is accessible to a third party.


Benefits:-

  1. Higher Level of Security and Privacy:- Private cloud operations are not available to the general public, and resources are shared from a distinct pool. Therefore, it ensures a higher level of security and privacy.
  2. More Control:- We can customise our resources as per our needs, which means more control over the hardware than in a public cloud, because it is accessed only within an organisation.

Drawbacks of Private Cloud:-

  1. High Price and Restricted Area of Operation:- A private cloud is not as cost-effective as a public one; to set up a private cloud we need hardware and have to invest in it. Also, it is only accessible within the organisation, which makes it very difficult to deploy globally.
  2. Limited Scalability:- A private cloud can be scaled only within the capacity of the internally hosted resources.
  3. Needs Additional Skills:- To maintain a private cloud, an organisation requires skilled people who can maintain the infrastructure.


Now you understand what public and private clouds are, but wait: what is a hybrid cloud?

Hybrid Cloud

As we know, business is increasing day by day. Existing businesses are scaling up to gain, connect with, and reach more and more customers, and hundreds or thousands of new startups come up every year. Rather than investing money in creating their own infrastructure, companies use cloud computing services to save cost and stay free from maintaining infrastructure. But while using public cloud services, companies realised a problem: on the public cloud there is a chance their sensitive data may be leaked, and they feel insecure about their data. Consider a newly opened bank that opts for the public cloud for its infrastructure. After using the public cloud, some security or IT person at the bank feels insecure, because the bank's sensitive data is on the public cloud and there is a chance it could be leaked. The question becomes: why should I host my sensitive data on the public cloud, or with any third-party vendor? Also, some services are very costly on the public cloud. From there, the private cloud concept came into the market. But the issue with the private cloud is that we have to maintain our whole infrastructure ourselves, which is also costly. Now what do we do? For this, a new concept came up: a blend of public and private clouds, a mixture of public + private. That is why we call this concept the Hybrid Cloud.

So I hope you understand what a hybrid cloud is, but let me explain with the bank example from above. Banks, or any organisation holding very sensitive user/client data, cannot rely on a third-party cloud service or public cloud, so they can store sensitive data on their private cloud or local data center, and simultaneously give heavy computational jobs to the resources of a managed public cloud. A hybrid cloud relies on a single plane of management, so there is no need to manage each cloud environment separately.


Benefits of Hybrid cloud:-

  1. Scalability:- A hybrid cloud offers a mixture of public and private cloud scalability.
  2. Flexibility:- It offers secure private resources as well as scalable public cloud resources.
  3. Cost Efficiency:- A hybrid cloud is a setup of public and private clouds, and we know public clouds are more cost-effective than private clouds. Therefore, a hybrid cloud can be cost-effective.
  4. Security:- It strategically gives IT leaders increased control over their sensitive data.
  5. Most businesses do not utilise the same level of computing power every day, so rather than paying for additional resources to sit idle for most of the year, an organisation can save on costs by using the additional resources only when necessary.

Disadvantages of Hybrid cloud:-

  1. Network Issues:- Setting up the networking can be complex due to the presence of both public and private clouds.
  2. Infrastructure Dependency:- The hybrid cloud model depends on internal IT infrastructure, so it is necessary to ensure redundancy across data centers.

Community Cloud

A community cloud shares its resources and services among a group of several communities or organisations. It shares data and information within that specific community, and the whole cloud system is owned and operated by more than one organisation in the community.


Benefits of community cloud:-

  1. Cost-effective:- A community cloud is cost-effective because the whole cloud is shared by several organisations or communities.
  2. Security:- A community cloud is more secure than a public cloud but less secure than a private cloud.
  3. It provides a collaborative and distributed environment.

Disadvantages:-

  1. Since all the data is located in one place, you must be careful when storing data in a community cloud, because it might be accessible to others.
  2. It is also challenging to allocate the responsibilities of security and cost among organisations.


There are a bunch of other things about cloud, like cloud service models, cloud architecture, and cloud computing technologies, which we will see in another article on cloud computing basics, so stay tuned.


Now, from here we are going to build our infrastructure with the help of IaC (Infrastructure as Code). Using it, I am deploying my website on the AWS cloud with the help of TERRAFORM. Before telling you what Terraform is, first let me explain IaC.

About Infrastructure as Code

A long time back, in company data centers, a system administrator's work was very tough and time-consuming: they had to set up servers, networks, software, and hardware manually and repetitively. This repetition caused human errors resulting in misconfiguration. Also, any issue caused by hardware failure was very critical, and users and staff would face downtime.

Times have changed. Today every company is opting for the cloud, and this cloud is managed by cloud engineers or system administrators using web interfaces and APIs to interact with cloud resources through commands and code. These interaction options reduce human errors and keep the state of the infrastructure well maintained.

Infrastructure as Code is also a key foundation for DevOps practices. Basically, IaC is the management and provisioning of IT infrastructure using machine-readable configuration files, so it is both human- and machine-readable. In simple terms, Infrastructure as Code is the technique where we eliminate the manual effort of IT resource management, like manually provisioning and managing servers and other infrastructure resources, by expressing it as code.

In this era, thousands of applications are deployed onto real or production servers every day, and hundreds or thousands of applications are removed too. We know that running an application needs infrastructure, and that infrastructure is constantly scaled up and down in response to developer and user/client demand. You can imagine how difficult it would be to manage all of this manually; Infrastructure as Code makes things easier and makes automation possible.
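To make this concrete, here is a minimal sketch (illustrative only, not part of the project code later in this article) of what "infrastructure as code" looks like in Terraform's HCL; the AMI ID is a placeholder:

# The entire server is described in a plain text file that a tool can
# read, version-control, and apply repeatedly without manual steps.
provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "example" {
  ami           = "ami-xxxxxxxx"   # placeholder AMI ID
  instance_type = "t2.micro"       # smallest general-purpose instance
}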


Things to remember-
Infrastructure as Code and automation are two very similar terms, but they are different things. If I explain-

Infrastructure as Code is used when you want to maintain the configuration or state of the data center infrastructure,
but
automation deals more with the process of automatically pushing that state into the infrastructure and maintaining it.


If you are planning to use IaC, the most important thing to remember is to understand the difference between a declarative and an imperative approach to infrastructure automation.

  • If you are a developer writing code in the declarative style, you are most concerned with WHAT you want as the answer, or what should be returned. You only need to express what you want to do, without explicitly listing the commands or steps that must be performed in order to achieve it. Let’s look at an example.

Example ( real life ):- Declarative is about the WHAT. Building a car declaratively would include the following:

  • I don’t care how you build it, but I want a nice comfortable seat, a silent engine, and a big SUV-type car.

Simple, right? In this declarative program, I have told you the outputs that I want. I know that if I give you inputs in the form of money, I will get the desired outputs.

  • In an imperative language, we care about HOW we get to the answer, step by step. We want the same result ultimately, but we tell the compiler to do things a certain way in order to achieve that correct answer. Let’s look at an example.

Example ( real life ):- Imperative is about the HOW. If I were writing an imperative program for building a car, it would go something like this:

  1. Build the foundation
  2. Put in the framework
  3. Install the basic utilities
  4. Add the engine
  5. Finish the final touches

In this imperative program, I have told you the exact steps to take in order to build the car. These instructions aren’t the most detailed in the world, but I have told you all the steps you need to take in order to arrive at a finished product.

I'm sure you are wondering why I explained the imperative vs. declarative concept above, right?

Because in the DevOps and automation world there are a bunch of tools, and some of them are Infrastructure as Code tools, so it is necessary to know whether the IaC tool you are using works declaratively or imperatively.
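For example, here is a hedged sketch (the resource name and AMI ID are illustrative) of how Terraform's HCL is declarative: you state the end state, and Terraform works out the steps. An imperative script, in contrast, would spell out every command and blindly re-run them all.

# Declarative: state WHAT you want; Terraform computes HOW.
# If one instance already exists, Terraform creates only the missing
# one instead of repeating every step from scratch.
resource "aws_instance" "web" {
  count         = 2                # desired end state: two servers
  ami           = "ami-xxxxxxxx"   # placeholder AMI ID
  instance_type = "t2.micro"
}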

IaC Examples:-

  • Terraform
  • CloudFormation (AWS)
  • Google Deployment Manager
  • Chef
  • Puppet
  • Ansible

If you have heard of the Chef, Puppet, and Ansible tools in the DevOps toolchain: these tools are mainly used for configuration management, whereas Terraform is primarily an Infrastructure as Code provisioning tool, though it has some ability to do configuration as well.

Infrastructure as Code (IaC) Benefits:-

  • Improved productivity and automation
  • Cost savings
  • Speed and simplicity
  • Documentation of Infrastructure in form of code
  • Disaster recovery
  • Quickly provisioned development environments
  • Standardised environment builds
  • Shareable: Code can be shared among developers
  • Auditable: If something goes wrong, with the help of version control we can roll back and rebuild, since the infrastructure is code.

Our whole work is done with the Terraform tool, so let me explain what Terraform is:-

What is Terraform ?

Terraform is a software product/tool from HashiCorp that works on the Infrastructure as Code principle. Terraform has its own language called HCL (HashiCorp Configuration Language), which is similar to JSON.

Why are we using Terraform? Because it has the capability to communicate with all the major cloud services like AWS, GCP, Azure, etc., and some private clouds like OpenStack: over 160 different providers in all, for building, changing, managing, and versioning our infrastructure safely and efficiently.

Benefits of Terraform, and why we use it:-

There are several tools offering Infrastructure as Code, but Terraform has some advantages over them, like:-

  • Build infrastructure
  • Change infrastructure
  • Versioning the infrastructure
  • Multi-Provider
  • Immutable infrastructure
  • Easy hybrid-cloud architecture creation



Now, from here I am going to start the practical. These are the steps we will perform:-

  1. Set the provider.
  2. Launch an EC2 instance, with a firewall/security-group configuration that allows port 80, and create a key for logging in to the instance.
  3. Launch an EBS volume and mount it on the web server document root directory /var/www/html.
  4. The developer has uploaded the website code to a GitHub repository, which also holds some images. I just pull the code from the GitHub repo and put it into the /var/www/html folder.
  5. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to readable.
  6. Create a CloudFront distribution using the S3 bucket (which contains the images) and update the code in /var/www/html with the CloudFront URL. Using CloudFront makes the content globally accessible to the whole world.

Now you may be wondering what these words like EC2, CloudFront, EBS, and Security Groups mean. Let me explain these terms:-

EC2 instances:-

EC2 stands for Elastic Compute Cloud. This service provides scalable/elastic computing capacity in the Amazon Web Services (AWS) cloud; basically it gives us a compute unit, i.e. RAM and CPU. Using this service eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.

AWS EBS:-

EBS stands for Elastic Block Store. This service provides block storage for our instances. We can think of block storage as a hard disk on which we can create partitions, format them with a file system, and mount them on a certain folder.

AWS S3:-

Amazon Simple Storage Service (Amazon S3) is an object storage service; here, object means "file". It is a global service by Amazon where we can upload and download objects, and this storage is used for persistent storage. It gives us availability and durability.

Security Groups:-

It is a type of virtual firewall for your EC2 instances or your network that controls incoming (inbound/ingress) and outgoing (outbound/egress) traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance.

CloudFront:-

CloudFront is the service that provides CDN as a service (Content Delivery Network as a service). It works through edge networks at various locations across the globe, so media content like images and videos is fetched very quickly. With this service we can scale our business worldwide, and users will face very low latency.

As we know, in the real world website code changes, is modified, and is updated daily; it is kind of dynamic. But some content, like pictures, videos, and images, rarely changes; it is static. When we surf websites, caches are generated on the client-side system, so if the developer changes some static content like images on the website, the client/user may not get any update on their page, because the old copy is saved in their local cache.

To solve this issue, we create an S3 bucket and put our static content like images and videos in it. CloudFront creates a local cache of the static data from the S3 bucket and distributes it to all the edge locations of AWS, so the client gets very low latency as well as the updated content.

Requirements:-

  1. First we have to create an account on AWS and create an IAM user.
  2. Download Terraform from the HashiCorp website and install it as per your OS; also add it to your environment path.
  3. Download and install AWS CLI v2 from the AWS documentation; also add it to your environment path.
  4. Configure your AWS IAM profile on the AWS CLI.

Before we start creating the code, first let me tell you some basics about input variables. To make the infrastructure code reusable and version-controlled, we need to parameterise the configuration with the help of variables. Let me show you one example, then we proceed!


variable "key" {
 type = "string"
}

Everything between { and } is the block body.

type: Valid values are string, list, and map. If no type field is provided, the variable type is inferred from the default; if no default is provided either, the type is assumed to be string. In the example above, "key" is the variable's name and "string" is its type.

default: it sets a default value for the variable. If no default is provided, Terraform will raise an error when a value is not supplied by the user.

description: you can give a description field with a human-friendly description of the variable.
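Putting all three fields together, a complete declaration would look like this (the name and values here are only illustrative):

variable "instance_type" {
  type        = "string"                               ( quoted type, matching the old 0.11 syntax used in this article )
  default     = "t2.micro"                             ( used when no value is supplied )
  description = "EC2 instance type for the web server" ( human-friendly note )
}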

Now let's jump to executing our plan on the AWS cloud :)

STEP-1:

Configure AWS IAM Profile On AWS cli

Before configuring the AWS IAM profile on the CLI, first log in to the AWS console with your root account and create your IAM user; in my case I created "ashu" and gave it the desired permissions. Then get your access key and secret key from your AWS account. These are your security credentials, so download them first and keep them safe.


Go to My Security Credentials


Download your access key and secret key from here; they download as a .csv file.


Configure your IAM profile on the AWS CLI by entering the access key and secret key manually. You can also see how many profiles you have with the 'aws configure list-profiles' command.
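The configuration session looks roughly like this (the profile name matches the IAM user created above; the key values are placeholders from the downloaded .csv file):

aws configure --profile ashu
  AWS Access Key ID [None]: <your access key>
  AWS Secret Access Key [None]: <your secret key>
  Default region name [None]: ap-south-1
  Default output format [None]: json

aws configure list-profiles     ( verify that the profile was created )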

STEP-2:

Creating a Workspace

I created a workspace where I manage all the files: a folder on the Desktop named Hybrid_task1. Then, using Visual Studio Code, I created a file named "hybrid_task1.tf"; the .tf extension represents a Terraform file.


STEP-3:

Set provider for terraform as AWS

First we have to set the provider. Here, provider means which platform, cloud, or service Terraform has to contact, like AWS, GCP, etc. Here I give "aws".

Explanation of each attribute, key, and value:-

provider "aws" {
  profile = "ashu"
  region  = "ap-south-1"
}

provider "aws" ( here I give the provider aws, meaning Terraform will contact AWS )

profile = "ashu" ( here I use the profile ashu for contacting my AWS account )

region = "ap-south-1" ( AWS has data centers in many different locations and countries, so we have to specify the region in which we are creating our infrastructure )

STEP-4:

Creating a Security Group

We have to create firewall rules (a security group) for our instance. I configured HTTP (port 80) and HTTPS (port 443) to allow connections to the website hosted on my instance, and SSH (port 22) to allow secure shell remote login.

Explanation of each attribute, key, and value:-


resource "aws_default_vpc" "main" { 
  tags = {
    Name = "Default VPC"
  }
}

( A default VPC is a logically isolated virtual network in the AWS cloud that is automatically created for your AWS account the first time you provision Amazon EC2 resources. When you launch an instance without specifying a subnet-ID, your instance will be launched in your default VPC )

resource "aws_security_group" "project1_first_sg" {
  name        = "sg_for_webserver"
  description = "allow ssh and http, https traffic"
  vpc_id      =  aws_default_vpc.main.id 


( Creating aws security group which allow ssh, http, https traffic on default vpc )

  ingress {                     

( Ingress is traffic that enters from outside world )

    description = "inbound_ssh_configuration" (human readable description)
    from_port   = 22            ( ssh port 22 The start port )
    to_port     = 22            ( ssh port 22 The End port )
    protocol    = "tcp"         ( Type of protocol is "tcp" )
    cidr_blocks = ["0.0.0.0/0"] 
}

( Classless inter-domain routing (CIDR) is a set of Internet protocol (IP) standards that is used to create unique identifiers for networks and individual devices. so here i specify a range of IPv4 addresses for the VPC in the form of a Classless Inter-Domain Routing (CIDR) block 0.0.0.0/0 means all IPv4 addresses will be allowed to routing )


 
egress {

( Egress is the outbound traffic originating from within the network; protocol "-1" with ports 0 means all protocols on all ports )

    description = "all_traffic_outbound"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {

( Ingress is traffic that enters from the outside world )

  description = "http_configuration"  ( human-readable description )
  from_port         = 80  ( HTTP port 80, the start of the port range )
  to_port           = 80  ( HTTP port 80, the end of the port range )
  protocol          = "tcp" ( the protocol type is "tcp" )
  cidr_blocks       = ["0.0.0.0/0"] }

( all IPv4 addresses are allowed )



  ingress {

( Ingress is traffic that enters from the outside world )

  description = "https_configuration"  ( human-readable description )
  from_port         = 443  ( HTTPS port 443, the start of the port range )
  to_port           = 443  ( HTTPS port 443, the end of the port range )
  protocol          = "tcp"  ( the protocol type is "tcp" )

  cidr_blocks       = ["0.0.0.0/0"] }

( all IPv4 addresses are allowed )


  tags = {
    Name = "project1_sg1"
}
}

( Tags give a name to our resources to differentiate them from others, which is useful for management; in my case I give the tag "project1_sg1" )


output "firewall_task1_sg1_info" {
  value = aws_security_group.project1_first_sg.name
} 

( while creating complex infrastructure, terraform stores all attribute values for all your resources, but you may interested in a few values, such as ip, vpn address, id etc so output defines values that will be highlighted to the user when terraform applies and can be queried easily using the "output" command. in my case i used aws_security_group.project1_first_sg.name  )  
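After an apply, the highlighted value can also be read back at any time with the standard Terraform CLI command, run from the same workspace:

terraform output firewall_task1_sg1_info     ( prints the security group name from the state )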




STEP-5:

Create a Key-Pair

Here we generate a key using tls_private_key and create an AWS key pair with the help of Terraform, then save the key locally in my workspace with a .pem extension. It is used for logging in to the instance.

Explanation of each attribute, key, and value:-


resource "tls_private_key" "instance_key1" { 

( Creating the private TLS "transport layer security" keys and give resource name "instance_key1" )

   algorithm = "RSA" ( Used RSA algorithm It is an asymmetric cryptographic algorithm )

   rsa_bits  = 4096 ( key size in bits )

}

resource "aws_key_pair" "key_pair1" 

  ( Creating aws key using aws_key_pair resource and give name to this resource "key_pair1" )

  key_name   = "project_key1" 

( give name to our key "project_key1" this key we attach to our instance )

  public_key = "${tls_private_key.instance_key1.public_key_openssh}" 

( Creating the public key with the help of private key , The public key and private key are generated together and tied together. Both rely on the same very large secret prime numbers so in Public-key cryptography, or asymmetric cryptography, that uses pairs of keys: public keys, which may be disseminated widely, and private keys, which are known only to the owner. )

  depends_on = [  tls_private_key.instance_key1 ] 

( rely or depend on resource tls_private_key.instance_key1 )

}

resource "local_file" "save_project_key1" 

( This local_file resource is used for saving the content into a file and save in your workspace, and i give name to this resource is "save_project_key1" ) 

  content = "${tls_private_key.instance_key1.private_key_pem}" 

( Give content/data what i want to be inside the file so in my case i give private key content )
  

  filename = "project_key1.pem" ( Give filename )

depends_on = [
   tls_private_key.instance_key1, aws_key_pair.key_pair1 ] 

     ( depend on or rely on above two resources      tls_private_key.instance_key1, aws_key_pair.key_pair1 )
 


STEP-6:

Create an EC2 Instance

In this step we create the AWS instance using the AMI (Amazon Machine Image) of Amazon Linux 2 (HVM), SSD volume type. After a successful launch, a connection is made to the instance via SSH using Terraform's "remote-exec" provisioner. Once the connection is established, several commands run to install software: the Apache web server (httpd), the PHP interpreter, and Git. After the installation succeeds, the web server service is started and enabled.

  • What is an AMI? It is just a template from which you can start your EC2 instances. Various types of images are available, like Windows, Linux, etc. An AMI has everything required to boot the OS, applications, etc.
Explanation of each attribute, key, and value:-


variable "ami_instance_id " { 

( i create a variable for ami instance id every AMI have unique id and here i used amazon linux ami )

 default = <ami_id> 

   ( here i give ami id of amazon linux , you can get your AMI id from aws             marketplace  )

}

resource "aws_instance" "project1_instance" { 

( Here i give aws_instance resource type which is for launching the aws instances and give name to resources is "project1_instance" that we can use as a reference )
  
  ami = "${var.ami_instance_id}" ( here i put variable name which i assign ami id )  
  
  instance_type = "t2.micro" ( Specification of instances ex- 1gib ram 2 cpu etc )
  
  key_name = aws_key_pair.key_pair1.key_name  ( Give key to instance for login )
  
  security_groups = [ "${aws_security_group.project1_first_sg.name}" ] 

              ( give security group to instances )
  

  availability_zone  = "ap-south-1a" ( Give zone where my instance should be launch )

  connection {

( provisioners require access to the remote resource, here via SSH, so we define a connection block )

    type = "ssh"  ( the connection type that should be used, here SSH )

    user = "ec2-user" ( the user for the connection; ec2-user is the default user on Amazon Linux instances )

    private_key = tls_private_key.instance_key1.private_key_pem ( the private key used to log in and establish the connection )

    host     = aws_instance.project1_instance.public_ip ( the public IP address of the instance to connect to )

}

provisioner "remote-exec" { 

( The remote-exec provisioner invokes a script on a remote resources after it is created. bascially it means if you want to run any script or command remotely then we can use this )
  
     inline = [

       ( here i give multiple linux commands to execute into the instance )
 
"sudo yum -y install httpd  php git" ( For installing the httpd php and git  software into instance )

"sudo systemctl start httpd" ( starting the httpd service )

"sudo systemctl enable httpd", ( Enable the service of httpd means during reboot we don't have to manually start the service of httpd )

]
}


tags = { Name = "project1_webserver" } }

( Tags give a name to our resource to differentiate it from others; useful for management )


output "instance1_az" {
  value = aws_instance.project1_instance.availability_zone }  

( fetching output values of project1_instance.availability_zone  )  


output "instance1_id" {
  value = aws_instance.project1_instance.id
} 

(for fetching output values of the instance id )

output "public_ip_webserver" {
    value = aws_instance.project1_instance.public_ip
}

( fetching output values of the ip address of instance )


 

 

STEP-7:

Create an AWS EBS Volume

Now I am going to create a 1 GiB block storage volume for my instance using the EBS storage service of AWS. Block storage is a storage type that can hold a file system; you can think of it as a hard disk, which we can partition, format with a file system, and mount on a desired folder. Why are we using it? Because this is how we store our data persistently: the data isn't lost even if the instance is terminated.

Now I'm going to describe each key and attribute:-


resource "aws_ebs_volume" "ebs_vol1" {

( this resource creates the EBS volume on AWS; an EBS volume lives in a single availability zone, so we have to define the zone where it is created )

  availability_zone = "ap-south-1a" ( the zone where our EBS volume is launched; make sure your instance is in the same zone )

  type = "gp2"

( the specification of the storage device: General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads )

  size = 1 ( here I give our volume a size of 1; by default the size is taken in GiB )

  tags = { Name = "project1_ebs1" }

( tags give a name to our resource to differentiate it from others; useful for management )

  depends_on = [ aws_instance.project1_instance,] }

( depends_on creates ordering, like a pipeline: this resource runs only after aws_instance.project1_instance has been created successfully )

output "ebs_vol1_info" { value = aws_ebs_volume.ebs_vol1.id } ( fetching the ID of ebs_vol1 as an output value )

STEP-8:

Attaching the EBS Volume to Instance

In this step I attach my EBS volume to my instance, format the volume with the ext4 file system, mount it on the document root directory of the web server, /var/www/html, and lastly run git clone to download my website code.

Explanation of each attribute, key, and value:-

resource "aws_volume_attachment" "ebs_vol1_attach" { 

( This resources is used for attaching the existed or above i created a ebs volume to attach on running instances )

  device_name = "/dev/xvdh" 

(  Device name like every device in linux having this file All Linux device files are located in the /dev directory which is an integral part of the root (/) filesystem. )

  volume_id   = "${aws_ebs_volume.ebs_vol1.id}"  

( here i define which volume i have to attach so here i define a variable concept which can fetch ebs_vol1 id )

  instance_id = "${aws_instance.project1_instance.id} 

        ( using variable concept i give instance id )

  force_detach = true 

( here i give true parameter to force_detach means either instance terminated or running or device be in running state or idle i can detach any time )


  depends_on = [ aws_instance.project1_instance,
                                                  aws_ebs_volume.ebs_vol1,                                                                       ] 

( First these resource project1_instance and ebs_vol1 successfully created then this resource will run  )


  connection {
     type = "ssh"
     user = "ec2-user"
     private_key = tls_private_key.instance_key1.private_key_pem
     host     = aws_instance.project1_instance.public_ip }


provisioner "remote-exec" {

     inline = [

"sudo mkfs.ext4  /dev/xvdh", ( format the device with the ext4 filesystem )

"sudo mount  /dev/xvdh  /var/www/html", ( mount the /dev/xvdh device on the /var/www/html folder )

"sudo rm -rf /var/www/html/*", ( delete all files inside this folder before git clone, since git clone needs an empty target directory )

"sudo git clone https://github.com/ashu0530/webpage.git /var/www/html/"

     ( downloading my website code )

               ]

}}



  

STEP-9:

Create Snapshot of EBS volume

After successfully mounting my EBS volume, I create an AWS EBS SNAPSHOT. Why? Because it gives us availability and backs up the data on the Amazon EBS volume to Amazon S3 by taking a point-in-time snapshot.

Explanation of each attribute, key, and value:-
 
resource "aws_ebs_snapshot" "project1_snapshot" { 

( This resources is used for creating the snapshots for existed volume )

  volume_id  = "${aws_ebs_volume.ebs_vol1.id}" 

( by using variable concept i define ebs_vol1 id )

  tags       = {
    Name = "project1_ebs_snap"
 } 

                ( give tags to this resources )
 
  depends_on = [
    aws_volume_attachment.ebs_vol1_attach,
  ]
}  

( depends on ebs_vol1_attach means if aws_volume_attachment successfully run then this resource run so it is depending on attachment resources )



output "task1_snapshot_id" {
  value = aws_ebs_snapshot.project1_snapshot.id 
} 

           (Give output value of snapshots id on prompt )

STEP-10:

Creating AWS S3-Bucket

Creating an AWS S3 bucket and changing its permission to public-readable. S3 is highly scalable object storage.

  • Every S3 bucket name must be unique across all existing buckets.
  • Bucket names may contain only lowercase letters, numbers, dots, and hyphens (no underscores).
Explanation of each attribute, key, and value:-


resource "aws_s3_bucket" "project1_bucket" { 

( by this resource i am creating aws s3 bucket, it is global service by amazon web services )

    bucket = "project1_webserver_bucket" 

( you can say bucket means folder, and every bucket should be unique )
 
    acl    = "public-read"  

( set the the bucket ACL "access control list" to “public-read”  means everyone can view it. )
    
    force_destroy = true  

( force_destroy = true   here it means if we destroy our infrastructure then our bucket and its object will forcefully destroyed either bucket is empty or not! )
 
    tags   = {
        Name = "project1-bucket"     
        Environment = "Production"
   }
}

( Give two tags one is Name and second one is Environment for management purpose )

output "project1_bucket_id" {
    value = aws_s3_bucket.project1_webserver_bucket.id
}

( Getting the value of bucket id )


STEP-11:

Give Bucket Public Access Policy

Applying a public access policy to our bucket, which allows public access to my Amazon S3 resources. By default, new buckets, access points, and objects don't allow public access.

Explanation of each attribute, key, and value:-


resource "aws_s3_bucket_public_access_block" "project1_bucket_public_access_policy" {

( Creating a resource for "aws_s3_bucket_public_access_block" and i give name to this resource is "project1_bucket_public_access_policy"  

    bucket = "${aws_s3_bucket.project1_bucket.id}"  

 ( i provide here bucket id  where this resource implement the permission )

    block_public_acls = false ( Disabling the block public for your buckets and objects. )


    block_public_policy = false ( Disabling the public block policy and allowing them to publicly share the bucket or the objects whatever it contains. )


    restrict_public_buckets = false
  }     

   
( allowing access to an access point or bucket with a public policy to only AWS services and authorized users within the bucket owner's account. This setting allow all cross-account access to the access point or bucket (except by AWS services), while still allowing users within the account to manage the access point or bucket. )


STEP-12:

Uploading the image/object to S3-bucket

Explanation of each attribute, key, and value:-


resource "aws_s3_bucket_object" "project1_object" {

( This resource will used for managing the object like uploading the object into bucket etc )


    bucket = aws_s3_bucket.project1_bucket.bucket ( here i bucket name assign )
    
    key    = "project1_image" (  name of the object , any name you can give )
    
    acl    = "public-read" ( It is canned ACLs. Each canned ACL has a predefined set of grantees and permissions. here i give "public-read" Bucket and object are The AllUsers group gets READ access. )

    source = "C:/Users/Ashutosh/Desktop/pic1.jpg" ( give local location of our file/object which i want to upload in my bucket )
      

    depends_on = [
    aws_s3_bucket.project1_bucket,  ( Rely on previous resources "aws_s3_bucket.project1_bucket")
  ]      
}


output "project1_bucket_domain_name" {
  value = aws_s3_bucket.project1_bucket.bucket_regional_domain_name
}

( Give output The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent redirect issues from CloudFront to S3 Origin URL. )

STEP-13:

Creating an AWS CloudFront Distribution

Creating an AWS CloudFront distribution, which provides us a CDN (Content Delivery Network) for delivering the content faster.



Explanation of each attribute, key, and value:-




locals {
  s3_origin_id = aws_s3_bucket.project1_bucket.bucket
}

( I assign one local variable named "s3_origin_id" and put my bucket name in it )



resource "aws_cloudfront_distribution" "project1_cloudfront" {
  origin {
      domain_name = "${aws_s3_bucket.project1_bucket.bucket_regional_domain_name}"
      origin_id   = "${local.s3_origin_id}"
      custom_origin_config {
          http_port = 80
          https_port = 443
          origin_protocol_policy = "match-viewer"
          origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"] 
    }
  }

( This resource creates the AWS CloudFront distribution; I name the resource "project1_cloudfront". The full block is shown above, and each part is explained below )

origin {

( Here we configure the origin. CloudFront gets the content from origins and serves it to clients via a worldwide network of edge servers; each origin is either an S3 bucket or an HTTP server )

domain_name = "${aws_s3_bucket.project1_bucket.bucket_regional_domain_name}"

( S3 is a global service by AWS, but it uses regional endpoints, so here I give my bucket's regional domain name; CloudFront fetches the static objects for this origin from it )

origin_id   = "${local.s3_origin_id}" ( give the unique origin_id, here the bucket name )

custom_origin_config {   ( configuration of the origin )
          http_port = 80  ( CloudFront talks to the origin on port 80 for HTTP )
          https_port = 443 ( CloudFront requests to the origin use port 443 for HTTPS )

          origin_protocol_policy = "match-viewer"

( with origin_protocol_policy = "match-viewer", CloudFront communicates with our origin using HTTP or HTTPS depending on the protocol of the viewer request )

         origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]

( specify the older as well as newer TLS/SSL protocols that CloudFront can use when it establishes an HTTPS connection to our origin )

   }

  }




  enabled         = true ( whether the distribution is enabled to accept end-user requests for content )

  is_ipv6_enabled = true  ( if enabled, users coming from IPv6 addresses are also served content )

  comment             = "building_cf"  ( any comments you want to include about the distribution )


  default_root_object = "index.php"

( if you designate the file index.php as your default root object, then a request for the CloudFront URL https://dxxxxxxxxxx.cloudfront.net/ returns https://dxxxxxxxxxx.cloudfront.net/index.php )



  default_cache_behavior {

( In most cases a website is a collection of static and dynamic pages, and we have to plan a strategy that accommodates both. CloudFront caches data at the edge locations to speed up access to the website across the world. Once content is cached at CloudFront, it stays there until its Time To Live (TTL) expires, which makes static pages ideal for this situation. For this we configure the default cache behavior )


      allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]

( controls which HTTP methods CloudFront processes and forwards to your S3 bucket or custom origin; in my case I allowed the DELETE, GET, HEAD, OPTIONS, PATCH, POST, and PUT methods )

      cached_methods   = ["GET", "HEAD"]

( controls for which HTTP methods CloudFront caches the responses )

      target_origin_id = "${local.s3_origin_id}" ( the origin ID this cache behavior applies to )
      


      forwarded_values {
          query_string = false
          cookies {
              forward = "none"
          }
      }
( these options forward no query strings or cookies to the origin )



      viewer_protocol_policy = "redirect-to-https"

( the viewer protocol policy that you want viewers to use to access your content at CloudFront edge locations; in my case HTTP requests are automatically redirected to HTTPS )

      min_ttl                = 0

( the minimum time to live in seconds; with the default of 0, content is not forced to stay cached and CloudFront may contact the origin server each time you refresh the page )

      default_ttl            = 3600

( how long you want the files to be cached by default; in my case 3600 seconds )

      max_ttl                = 86400

( the maximum amount of time, in seconds, that your static content stays in CloudFront caches before CloudFront queries your origin to see whether the object has been updated )

}


  price_class = "PriceClass_All"

( choose the price class that corresponds to the maximum price you want to pay for the CloudFront service )


  
  restrictions {
      geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN"]
    }
  }

( use this if you want to allow or block users in selected countries from accessing your content; I give restriction_type "whitelist" with location "IN", so only users in India can access my content. Helpful for restricting users )

  
  tags = {
    Name        = "project1_cloudfront"
    Environment = "production"
  }

( two tags for my CloudFront resource, Name and Environment )

  

  viewer_certificate {
      cloudfront_default_certificate = true

  }

( use this if you want users to request your objects/static content over HTTPS and you are using the CloudFront domain name for your distribution; CloudFront then serves it with its default SSL certificate )
  
 
 depends_on = [
      aws_s3_bucket_object.project1_object
  ]

( depends on the aws_s3_bucket_object.project1_object resource above )

}


output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.project1_cloudfront.domain_name

}

( Give output of cloudfront domain name in terminal ) 


STEP-14:

Saving the AWS CloudFront distribution domain name locally

resource "null_resource" "cf_ip"  {
 provisioner "local-exec" {
     command = "echo  ${aws_cloudfront_distribution.project1_cloudfront.domain_name} > domain_name.txt"


( If you need to run provisioners that aren't directly associated with a specific resource, you can associate them with a null_resource. )

( provisioner "local-exec"  run on local machine so it will execute locally ) 






   }
  depends_on = [   aws_cloudfront_distribution.project1_cloudfront, ]


}

( rely on above resource )


STEP-15:

Updating/modifying website code

Updating/modifying my website code by adding the CloudFront distribution URL of the object, for fast and smooth delivery.

Explanation of each attribute, key, and value:-

resource "null_resource" "project1_add_image"  {
    connection {
        type = "ssh"
        user = "ec2-user"
        private_key = tls_private_key.instance_key1.private_key_pem
        host     = aws_instance.project1_instance.public_ip
  } 


    
    provisioner "remote-exec" {  
        inline = [ 
           
( If you need to run provisioners that aren't directly associated with a specific resource, you can associate them with a null_resource.

Here we connect to the instance via SSH to execute some commands remotely )



"sudo sed -i '1i<img src='https://${aws_cloudfront_distribution.project1_cloudfront.domain_name}/project1_image.jpg' alt='ME' width='380' height='240' align='right'>' /var/www/html/index.php",

( this command goes into the web server document root /var/www/html and inserts a line at the top of the web page. The important tag here is <img src=...>, where I give my CloudFront domain name, so the page fetches the image content through CloudFront )
              


"sudo sed -i '2i<p align='right'> <a href='https://www.dhirubhai.net/in/ashutosh-pandey-43b94b18b'>Visit To My LinkedIn Profile >>>> :) </a></p>' /var/www/html/index.php",

( it inserts a second line into the web page, adding a link to my LinkedIn profile via the href attribute )

        ]                      
          
  } 
    depends_on = [    
aws_cloudfront_distribution.project1_cloudfront, 
 ]
 }

( depends on the resource above )


Step-16:

Opening my web-page in Chrome browser automatically

This resource automatically opens my web page in the Chrome browser.

resource "null_resource" "ChromeOpen"  { 

( If you need to run provisioners that aren't directly associated with a specific resource, you can associate them with a null_resource. here i give name ChromeOpen to this resource )
     

     provisioner "local-exec" { 

            (  it will locally execute on the system  )
           
          command = "start chrome ${aws_instance.project1_instance.public_ip}"  
     }

         ( Give command to open chrome with my instance url )
     

depends_on = [ null_resource.project1_add_image,
  
   ]  

    ( Rely on above project1_add_image resource )     
}


There are some Terraform commands required to follow best practice and set up the whole configuration:-



Step-17:

Initialising the Terraform Plugins

Initialising the Terraform plugins by running a Terraform command.

The terraform init command downloads the necessary plugins from the internet, based on what is written in the code in the working directory.



Step-18:

The terraform plan command

When you execute terraform plan, Terraform scans all *.tf files in your directory and creates an execution plan. This allows you to see which actions Terraform will perform prior to making any changes.



Step-19:

The terraform validate command

Checking the Terraform file I created, "hybrid_task1.tf", with the terraform validate command. This command validates the syntax of the Terraform files; if a syntax error is found, it displays an error on the prompt.


Note:- It shows some warnings because I am using an old version of Terraform (0.11), so some newer syntax may not be recognised, but it will work.


Step-20:

Final Command For Creating the infrastructure over the cloud

The terraform apply command

Now we have a desired state, so we can execute the plan; this creates the infrastructure in the cloud. There are two ways to apply the execution: 1) terraform apply (asks yes or no) and 2) terraform apply -auto-approve (runs without asking "yes" or "no").


That's all, guys. Our infrastructure has been created successfully and our website is deployed.



Step-21:

Destroy the Entire Infrastructure

Just as we created the entire infrastructure on the AWS cloud with the terraform apply command, there is one destructive command that can destroy the whole infrastructure in one go.

We use the terraform destroy command to destroy the Terraform-managed infrastructure.
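Putting steps 17 to 21 together, the whole lifecycle comes down to these standard Terraform CLI commands, run from the workspace folder:

terraform init                  ( download the provider plugins the code needs )
terraform validate              ( check the .tf files for syntax errors )
terraform plan                  ( preview the actions Terraform will perform )
terraform apply -auto-approve   ( create the whole infrastructure in one go )
terraform destroy               ( tear the entire infrastructure down again )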



Thank you, guys, for reading my article.

I hope my article helps; leave your valuable feedback. For any queries or suggestions, feel free to ask.

Stay tuned for my next article, where we will enhance this setup with another AWS service that will make it even better.

GitHub :- URL