AWS Automation With Terraform

Hello and welcome, everyone. In this article we will get some hands-on experience with one of the leading technologies: cloud computing, automated with Terraform. Before moving further with the task, let us first understand the agenda.

This hands-on task will do the following:

  1. Create a key pair and a security group (port 80 open for HTTP requests).
  2. Launch an Amazon Elastic Compute Cloud (Amazon EC2) instance.
  3. Create an Amazon Elastic Block Store (EBS) volume and mount it on the instance created in step 2.
  4. Get the web server code from GitHub and copy it to /var/www/html.
  5. Create an Amazon Simple Storage Service (Amazon S3) bucket and deploy the static content (mainly images) to it.
  6. Using Amazon CloudFront, a fast content delivery network (CDN) from AWS, create a URL that will be updated in the code afterwards.

What is Public Cloud?

The public cloud is a cloud service hosted by third-party cloud service providers on hardware shared by multiple customers. The cloud service provider handles all responsibilities associated with managing and maintaining cloud services.

Benefits of the Public Cloud

For companies using the public cloud, outsourcing cloud management and infrastructure saves time and uses fewer resources than building, owning, and operating their own cloud infrastructure. Other perks include scalability and ease of access.

What Is AWS?

AWS (Amazon Web Services) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS), and packaged software as a service (SaaS) offerings. AWS services can offer an organization tools such as compute power, database storage, and content delivery services. More than 100 services make up Amazon Web Services, including:

  • Compute
  • Storage
  • Databases
  • Data management
  • Hybrid cloud
  • Networking
  • Security
  • Big data management
  • Artificial intelligence (AI)

What Is Terraform?

Terraform (https://www.terraform.io/) is an open source project by HashiCorp written in Go (https://golang.org/). It lets you define cloud resources (servers, S3 buckets, Lambda functions, IAM policies, etc.) in code and check them into source control. You can then "execute" the configuration and create, modify, or delete all the cloud resources with a single command.

If you have any resources in AWS, Google Cloud, Azure, etc., there is a high likelihood that Terraform can improve your workflow and make management of your cloud resources a breeze! I have used it with AWS, so most of this post discusses Terraform in the context of AWS, but it works just as well with Google Cloud, Azure, Alibaba Cloud, etc.

Using Terraform:

  • terraform init : initializes the working directory, creating a .terraform directory and downloading the required provider plugins.
  • terraform plan : outputs how Terraform interprets the main.tf file and which resources it will create, modify, or delete. It is a dry run, which is critical because you want to know exactly what changes Terraform will make to your cloud resources. Surprises are bad!
  • terraform apply : reads main.tf and makes all the changes in the cloud. This step writes a .tfstate file that contains identifiers of the cloud resources. This generated file is very important, and it is recommended that you never edit it manually.
A best practice is to set up a Terraform role in IAM on AWS, use that role to control which resources Terraform can manage, and then execute Terraform on a machine with that role.
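On the configuration side, such a role can be assumed directly in the provider block. A minimal sketch, assuming a role already exists in IAM (the account ID and role name below are placeholders, not values from this article):

provider "aws" {
    region = "ap-south-1"

    # Assume a dedicated IAM role instead of using long-lived user keys.
    assume_role {
        role_arn = "arn:aws:iam::123456789012:role/terraform-role"
    }
}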

Steps toward our work -

Before proceeding with Terraform, use your command prompt to configure your IAM user's access keys as a named profile in the AWS CLI.

C:\Users\Azeemushan>aws configure --profile ProfileName
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
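Under the hood, aws configure --profile ProfileName writes an entry like the following to the ~/.aws/credentials file (the key values shown here are abbreviated placeholders):

[ProfileName]
aws_access_key_id     = AKIA................
aws_secret_access_key = ....................

Terraform's AWS provider can then pick up these credentials by name via the profile argument.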
  1. Now create a file with the extension .tf in a separate folder, add the provider block below, and run terraform init to initialize the Terraform environment and download and install the provider plugins.
provider "aws" {
    region  = "ap-south-1"
    profile = "ProfileName"
}

2. Creating the key pair

resource "aws_key_pair" "t1" {
    key_name   = var.keyName
    public_key = "ssh-rsa .......(SomeRandomString)"
}

variable "keyName" {
    type    = string
    default = "mykey"
}
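As an aside, if you prefer not to paste an existing public key, Terraform can generate the key pair itself via the hashicorp/tls provider. A sketch under that assumption (the resource names here are my own, not from the original configuration):

resource "tls_private_key" "generated" {
    algorithm = "RSA"
    rsa_bits  = 4096
}

resource "aws_key_pair" "generated" {
    key_name   = var.keyName
    # Feed the generated OpenSSH-format public key to AWS.
    public_key = tls_private_key.generated.public_key_openssh
}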

3. Creating the security group and setting ingress rules to allow SSH connections and HTTP requests through port 80 for the web server.

resource "aws_security_group" "mySecuritygroup" {
    name        = "mySecurityGroup"
    description = "Allow SSH and HTTP"
    vpc_id      = ".......(to be replaced by your ID)"

    ingress {
        description = "SSH"
        from_port   = 22
        to_port     = 22
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
        description = "HTTP"
        from_port   = 80
        to_port     = 80
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    # Allow all outbound traffic; the user_data script in step 7 needs
    # to reach the yum repositories and GitHub from this instance.
    egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }

    tags = {
        Name = "mySecuritygroup"
    }
}

4. Creating the S3 bucket

resource "aws_s3_bucket" "myS3bucket" {
    # S3 bucket names must be globally unique and all lowercase.
    bucket = "mys3bucket"
    acl    = "public-read"

    tags = {
        Name = "myS3bucket"
    }

    versioning {
        enabled = true
    }
}
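To actually deploy the static content (the images mentioned in the agenda), each file can be uploaded with an aws_s3_bucket_object resource. A sketch, assuming a hypothetical local file images/myimage.png:

resource "aws_s3_bucket_object" "myImage" {
    bucket       = aws_s3_bucket.myS3bucket.bucket
    key          = "myimage.png"
    source       = "images/myimage.png"   # hypothetical local path
    acl          = "public-read"
    content_type = "image/png"
}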

5. It's time for content delivery through CloudFront.

resource "aws_cloudfront_distribution" "myCloudFront" {
	    origin {
	        domain_name = "yourdomain.s3.amazonaws.com"
	        origin_id   = "S3-myS3bucket"

	        custom_origin_config {
	            http_port              = 80
	            https_port             = 443
	            origin_protocol_policy = "match-viewer"
	            origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
	        }
	    }
	       
	    enabled = true
	

	    default_cache_behavior {
	        allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
	        cached_methods   = ["GET", "HEAD"]
	        target_origin_id = "S3-myS3bucket"
	

	        # This is for Forwarding all query strings, cookies and headers
	        forwarded_values {
	            query_string = false
	        
	            cookies {
	               forward = "none"
	            }
	        }
	        viewer_protocol_policy = "allow-all"
	        min_ttl = 0
	        default_ttl = 3600
	        max_ttl = 86400
	    }
	    # Setting up Restriction for accessing the content
	    restrictions {
	        geo_restriction {
	            # type of restriction, blacklist, whitelist or none
	            restriction_type = "none"
	        }
	    }
	
	    # SSL certificate for the service.
	    viewer_certificate {
	        cloudfront_default_certificate = true
	    
       }
}
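The agenda calls for a URL to be updated in the code afterwards. An output block is a convenient way to print the distribution's domain name after terraform apply (this block is my own addition, not part of the original configuration):

output "cloudfront_url" {
    # Printed after apply; paste this domain into the web page code.
    value = aws_cloudfront_distribution.myCloudFront.domain_name
}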

6. Creating the EBS volume that will store the content for the web server.

resource "aws_ebs_volume" "myEBS" {
	  availability_zone = "ap-south-1a"
	  size              = 1

	  tags = {
	    Name = "myEBS"
	  }
}
resource "aws_volume_attachment" "myAttachment" {
    device_name = "/dev/sdf"
    volume_id   = aws_ebs_volume.myEBS.id
    instance_id = aws_instance.myInstance.id
}

7. Creating the EC2 instance and making it ready to serve as a web server by installing httpd and git and cloning the web page into the desired folder, i.e. /var/www/html. Note that user_data runs once at first boot, which can be before the volume attachment from step 6 completes; for a production setup, a remote-exec provisioner or a retry loop around the mount is more robust.

resource "aws_instance" "myInstance" {
    ami               = "ami-0447a12f28fddb066"
    instance_type     = "t2.micro"
    availability_zone = "ap-south-1a"
    key_name          = var.keyName
    security_groups   = [aws_security_group.mySecuritygroup.name]

    user_data = <<-EOF
                #!/bin/bash
                sudo yum install httpd -y
                sudo systemctl start httpd
                sudo systemctl enable httpd
                sudo yum install git -y
                # The volume attached as /dev/sdf appears inside the
                # instance as the whole, unpartitioned device /dev/xvdf.
                sudo mkfs.ext4 /dev/xvdf
                sudo mount /dev/xvdf /var/www/html
                # Clear the mount point (mkfs leaves a lost+found
                # directory) so that git can clone into it.
                sudo rm -rf /var/www/html/*
                sudo git clone https://github.com/azeemushanali/Portfolio.git /var/www/html
                EOF

    tags = {
        Name = "WebServer"
    }
}
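Similarly, an output for the instance's public IP makes it easy to open the web server in a browser once apply finishes (again, an optional addition of mine):

output "webserver_public_ip" {
    value = aws_instance.myInstance.public_ip
}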


