AWS Cloud Automation using Terraform

AWS

Amazon Web Services (AWS) is a cloud service from Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments on a metered, pay-as-you-go basis. Using AWS we can create and deploy any type of application in the cloud.

TERRAFORM

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Steps:

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance use the existing/provided key and the security group created in step 1.

4. Launch one volume using the EFS service and attach it to your VPC, then mount that volume into /var/www/html.

5. The developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Pre-requisites:

  • You should have an AWS account.
  • Create an IAM user and download the credentials file, which contains the access key and secret key we use for logging in through the CLI.
  • Download the AWS CLI and add it to your PATH by editing the environment variables.
  • Use the credentials file to log into AWS with the aws configure command.
  • Download Terraform and add it to your PATH as well.
  • Create a profile using the aws configure --profile profilename command. We create a profile because if we share our code through an SCM (Source Code Management) tool we don't want to reveal the access and secret keys, so it is good practice to use a named profile.

So Let's Begin:

Firstly, we have to tell Terraform about the provider, which we can do with the following code:

provider "aws" {
  region     = "ap-south-1"
  profile    = "task"
}
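
On newer Terraform versions (0.13 and above) it is also common to pin the AWS provider version next to the provider block. A minimal, optional sketch (the version constraint shown here is an assumption; adjust it to your setup):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws" # official AWS provider
      version = "~> 3.0"        # assumed version constraint
    }
  }
}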


Next I have created an S3 bucket in AWS and uploaded an image to it. The uploaded object is given a public-read ACL, and the bucket's public access block is relaxed so the object is publicly readable.

resource "aws_s3_bucket" "bucket" {
  bucket = "mybucket112211"
  acl    = "private"


  tags = {
    Name = "Mybucket"
  }
}

resource "aws_s3_bucket_object" "image" {
depends_on = [
aws_s3_bucket.bucket,
]
bucket = "mybucket112211"
	key = "tsk1.png"
        source = "C:/Users/Ridham/Desktop/hybrid multi-cloud/tasks/task_2/tsk2.png"
	etag = filemd5("C:/Users/Ridham/Desktop/hybrid multi-cloud/tasks/task_2/tsk2.png")
	
	acl = "public-read"
}

resource "aws_s3_bucket_public_access_block" "type" {
  bucket = "${aws_s3_bucket.bucket.id}"
  block_public_acls   = false
  block_public_policy = false
}
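
As an alternative to per-object ACLs, public read access could also be granted with a bucket policy. A hypothetical sketch (the resource name is mine, and block_public_policy must stay false for the policy to take effect):

# Hypothetical alternative: grant public read on every object via a bucket policy
resource "aws_s3_bucket_policy" "public_read" {
  bucket = aws_s3_bucket.bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "PublicReadGetObject"
      Effect    = "Allow"
      Principal = "*"
      Action    = ["s3:GetObject"]
      Resource  = ["${aws_s3_bucket.bucket.arn}/*"]
    }]
  })
}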


After creating the S3 bucket I have created a CloudFront distribution, which gives us a CDN (Content Delivery Network) for delivering the content faster. CloudFront caches content in many small data centers known as edge locations, which are solely meant for content delivery, so users receive the content from the edge nearest to them. I have also saved the domain name of the distribution in the publicip.txt file so that we can update our HTML/PHP code.

locals {
  s3_origin_id = "myS3Origin"
}

resource "aws_cloudfront_distribution" "webcloud" {
  origin {
    domain_name = aws_s3_bucket.bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    custom_origin_config {
      http_port              = 80
      https_port             = 443 # S3 serves HTTPS on 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

resource "null_resource" "wepip" {
  provisioner "local-exec" {
    command = "echo ${aws_cloudfront_distribution.webcloud.domain_name} > publicip.txt"
  }
}

Next I have used the CloudFront distribution URL to update the code and then pushed it to GitHub.

Then I have created a key pair which we will attach to our instance.

resource "aws_key_pair" "key" {
key_name = "mykeyy"
public_key="ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAryh7wbLe3IvfHCLmrc1fbXw1d1dwM7VQN029wAphsKi/gzzWdTLlafUi+Teuo1Ze84sPAb3IxUw5ewwED/N0hTy/7YgvBEX08FTU8X1eH06AtD8Zyf6kAbwXrjO2SGkz/TJ3gebhqfrDu3iYEG1Uo1JKgg284ce8cAd9G3/U5FD/LKdajGmLTAHLIoxp3WHBpRW9ciOK9+JQL9SGnYYF62+++h4fMCc/lyX4A/Sy7UJ7pCFP+ZjsRZ8V6SOXTpy+4PrrdqoDC/NMqs/5pBdBn8ORRk43WjUP8LsvTBEw3AvkMSMgazWl/Ov68tVN3UiwUE9vEQbB0mExsZrixvNYJw== rsa-key-20200713"
}
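
Instead of pasting a pre-generated public key, the key pair could also be generated by Terraform itself with the tls provider. A hypothetical sketch (the resource names are mine, and note the private key then ends up in the Terraform state file):

# Hypothetical alternative: let Terraform generate the key pair
resource "tls_private_key" "generated" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "aws_key_pair" "generated_key" {
  key_name   = "mykeyy-generated"
  public_key = tls_private_key.generated.public_key_openssh
}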


Then I have created a security group allowing port 80 for the HTTP protocol, port 22 for the SSH protocol and port 2049 for the NFS protocol. I have also added outbound/egress rules (ports 80 and 443) so that we can use the yum command for installing the Apache webserver and the git clone command for downloading our code into the /var/www/html folder.

resource "aws_security_group" "t2_sg" {
  name        = "add_rules"
  description = "Allow HTTP inbound traffic"
  vpc_id      = "vpc-00968b68"

  ingress {
    description = "HTTP from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp" 
    cidr_blocks=["0.0.0.0/0"]
 }
  ingress {
    description = "SSH from VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp" 
    cidr_blocks=["0.0.0.0/0"]
 }
  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
   egress {
    description = "HTTP from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp" 
    cidr_blocks=["0.0.0.0/0"]
  }
   egress {
    description = "HTTP from VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp" 
    cidr_blocks=["0.0.0.0/0"]
  }
  tags = {
    Name = "t2_sg"
  }
}
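
The three ingress blocks are nearly identical, so they could also be generated with a dynamic block. A hypothetical refactor (the variable and resource names are mine, not part of the original setup):

# Hypothetical refactor: build the ingress rules from a list of ports
variable "ingress_ports" {
  default = [80, 22, 2049]
}

resource "aws_security_group" "t2_sg_dynamic" {
  name   = "add_rules_dynamic"
  vpc_id = "vpc-00968b68"

  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}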


Now I have retrieved the VPC ID and subnet ID, which will be used when creating the mount target for our EFS file system.

data "aws_vpc" "default" {
  default="true"
}
data "aws_subnet" "subnet" {
  vpc_id = data.aws_vpc.default.id
  availability_zone="ap-south-1a"
}

Next I am creating an EFS file system to provide persistent storage to our instance.

EFS: Amazon EFS is a fully-managed service that makes it easy to set up, scale, and cost-optimize file storage in the Amazon Cloud. With a few clicks in the AWS Management Console, you can create file systems that are accessible to Amazon EC2 instances via a file system interface (using standard operating system file I/O APIs) and support full file system access semantics (such as strong consistency and file locking).

Why we use EFS instead of EBS: The main limitation of EBS is that a volume lives in a single Availability Zone, so if we create the EBS volume in one AZ/subnet and launch our instance in another, we cannot attach that volume to the instance. Also, a standard EBS volume can be attached to only one instance at a time. Neither limitation applies to EFS: it can be mounted by many instances, even across Availability Zones.

resource "aws_efs_file_system" "t2_efs" {
  creation_token = "t2_efs"

  tags = {
    Name = "efssystem"
  }
}
resource "aws_efs_mount_target" "mount" {
  depends_on = [aws_efs_file_system.t2_efs]
  file_system_id = aws_efs_file_system.t2_efs.id
  subnet_id      = data.aws_subnet.subnet.id
  security_groups = [ aws_security_group.t2_sg.id ]
}
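
To actually get the cross-AZ benefit described above, one mount target per subnet/AZ would be needed. A hypothetical sketch (the subnet IDs are placeholders):

# Hypothetical sketch: one mount target per subnet lets instances in
# different Availability Zones mount the same file system
resource "aws_efs_mount_target" "per_az" {
  for_each        = toset(["subnet-aaaa1111", "subnet-bbbb2222"]) # placeholder subnet IDs
  file_system_id  = aws_efs_file_system.t2_efs.id
  subnet_id       = each.value
  security_groups = [aws_security_group.t2_sg.id]
}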


Next I have created the instance using the EC2 service of AWS, attaching the key pair and the security group created above. I have used a remote-exec provisioner to connect to the instance and run the commands that install the required software. I have also saved the public IP of the instance in the publicip.txt file, through which we can access our website.

resource "aws_instance" "inst" {
  ami           = "ami-052c08d70def0ac62"
  instance_type = "t2.micro"
  key_name = "mykeyy"
  vpc_security_group_ids=[aws_security_group.t2_sg.id]
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("C:/Users/Ridham/Desktop/hybrid multi-cloud/mykeyy.pem")
    host     = aws_instance.inst.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo setenforce 0",
      "sudo yum install amazon-efs-utils -y",
      "sudo yum install nfs-utils -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
      ]
  }

  tags = {
    Name = "task2os"
  }
}
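
The remote-exec provisioner needs a working SSH connection from the machine running Terraform. As a hypothetical alternative, the same packages could be installed at boot time through user_data instead (the resource name and script below are mine, not part of the original setup):

# Hypothetical alternative: install the software at boot via user_data
resource "aws_instance" "inst_userdata" {
  ami                    = "ami-052c08d70def0ac62"
  instance_type          = "t2.micro"
  key_name               = "mykeyy"
  vpc_security_group_ids = [aws_security_group.t2_sg.id]

  user_data = <<-EOF
    #!/bin/bash
    yum install -y httpd php git amazon-efs-utils nfs-utils
    systemctl enable httpd
    systemctl start httpd
  EOF
}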

resource "null_resource" "local1"  {
	provisioner "local-exec" {
	    command = "echo  ${aws_instance.inst.public_ip} > publicip.txt"
  	}
}
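
Instead of (or in addition to) writing the values to publicip.txt, Terraform outputs could print them after every apply. A small optional sketch:

# Optional: print the addresses after apply (also available later via `terraform output`)
output "instance_public_ip" {
  value = aws_instance.inst.public_ip
}

output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.webcloud.domain_name
}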


Next I have mounted the EFS file system to the /var/www/html folder so that even if our instance is terminated our data stays stored permanently (EFS is a network file system, so unlike EBS it does not need to be formatted first). I have also cloned the code from GitHub into the /var/www/html folder because it is the working directory of our webserver. For mounting the file system and cloning the code I have again used a remote-exec provisioner to connect to the instance and run the required commands.

resource "null_resource" "remote2"  {

  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("C:/Users/Ridham/Desktop/hybrid multi-cloud/mykeyy.pem")
    host     = aws_instance.inst.public_ip
  }

provisioner "remote-exec" {
    inline = [
       "sudo mount -t efs ${aws_efs_file_system.t2_efs.id}:/ /var/www/html",
       "echo '${aws_efs_file_system.t2_efs.id}:/ /var/www/html efs _netdev 0 0' | sudo tee -a sudo tee -a /etc/fstab",
       "sudo rm -rf /var/www/html/*",
       "sudo git clone https://github.com/ridham-ux/hybrid_task_2.git /var/www/html/",
    ]
  }
}


After creating the instance, configuring the webserver and mounting the file system to the folder, I used the public IP saved in the publicip.txt file to access the website.


For downloading the required provider plugins we can use the following command:

terraform init

For creating the whole infrastructure we can use the following command:

terraform apply --auto-approve

We can destroy the whole infrastructure using the following command:

terraform destroy --auto-approve

GitHub repository for the code:

THANK YOU!!

