Deploy Infrastructure(website) On AWS and integrating with EFS(storage) using Terraform

Here I have updated my Task 1 by performing the same steps as before, but instead of the EBS storage class I am using the EFS service.

Amazon Elastic File System (EFS) :

Amazon Elastic File System is a cloud storage service provided by Amazon Web Services, designed to provide scalable, elastic, concurrent (with some restrictions), and encrypted file storage for use with both AWS cloud services and on-premises resources.

Amazon EFS is a regional service that stores data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.
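For example, an EC2 instance running Amazon Linux can mount an EFS file system over NFSv4.1 with a single command (the file system ID and region below are placeholders, not from this task):

 sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.ap-south-1.amazonaws.com:/ /mnt/efs

Port 2049 (NFS) must be open in the security group for this to work, which is why we allow it in step 1 below.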

Problem Statement:

1. Create a security group that allows port 80.

2. Launch an EC2 instance.

3. For this EC2 instance, use an existing or provided key pair and the security group created in step 1.

4. Create a volume using the EFS service, attach it to your VPC, then mount that volume on /var/www/html.

5. A developer has uploaded the code to a GitHub repo; the repo also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Flowchart of the problem statement:


Prerequisite for this Task:

  1. Set up Terraform on your OS.
  2. Know some basic commands of the Linux operating system.
  3. Have an account on AWS and GitHub.
  4. Know some basic Terraform commands:

 terraform apply -auto-approve   -> builds or changes the infrastructure

 terraform destroy -auto-approve -> destroys the infrastructure created by Terraform
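A typical Terraform workflow also starts with initialization and a preview before applying; as a quick reference:

 terraform init     -> downloads the provider plugins (run once per project directory)
 terraform validate -> checks the configuration files for syntax errors
 terraform plan     -> previews the changes without applying them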

Software Required :

  • AWS CLI
  • Terraform

If you want to know more about these tools, please go through this article. I hope it gives you lots of useful background.

Let's do the task :

First, create a profile with the aws configure command, run in cmd; the Terraform code below uses that profile's credentials to log in to AWS.

Code Explanation:

The code begins here; this block selects the provider, region, and credentials used to log in to AWS.

provider "aws" {
  region     = "ap-south-1"
  access_key = "your_access_key"
  secret_key = "your_secret_key"
}
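Hard-coding keys in the configuration is risky if the file is ever shared or committed to Git. A safer variant (assuming you created a named profile with aws configure --profile myprofile; the profile name here is a placeholder) references the shared credentials file instead:

provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}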

Step 1: Create a security group that allows SSH (22), HTTP (80), and NFS (2049).

resource "aws_security_group" "task2sg" {
  name        = "task2_sg"
  description = "allow ssh, http and nfs traffic"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  # NFS, so the instance can reach the EFS mount target
  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Output:


Step 2: Launch an EC2 instance, install the Apache web server, and start the httpd service.

resource "aws_instance" "task2aim" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "mykey"
  security_groups = [aws_security_group.task2sg.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/hackcoderr/Desktop/hmc/mykey.pem")
    host        = aws_instance.task2aim.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "task2_os"
  }
}

Output:


Step 3: Create a file system using the EFS service and attach a mount target in your VPC, so it can later be mounted on /var/www/html.

resource "aws_efs_file_system" "task2efs" {
  depends_on = [
    aws_instance.task2aim
  ]
  creation_token = "volume"

  tags = {
    Name = "task2_efs"
  }
}

resource "aws_efs_mount_target" "alpha" {
  depends_on = [
    aws_efs_file_system.task2efs
  ]
  file_system_id  = aws_efs_file_system.task2efs.id
  subnet_id       = aws_instance.task2aim.subnet_id
  security_groups = [aws_security_group.task2sg.id]
}

Output:


Step 4: Mount the EFS file system on /var/www/html and clone the GitHub repo (which also contains some images) into it.

resource "null_resource" "null2" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.task2aim.public_ip} > public_ip.txt"
  }
}

resource "null_resource" "null3" {
  depends_on = [
    aws_efs_mount_target.alpha
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/hackcoderr/Desktop/hmc/mykey.pem")
    host        = aws_instance.task2aim.public_ip
  }

  # amazon-efs-utils provides the "efs" mount type used below;
  # mount -t needs a filesystem type before the device argument
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y amazon-efs-utils",
      "sudo mount -t efs ${aws_efs_file_system.task2efs.id}:/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/hackcoderr/Mini-Project /var/www/html/"
    ]
  }
}

Output:


Step 5: Creating an S3 bucket and bucket policy.

// Creating S3 bucket
resource "aws_s3_bucket" "task2s3" {
  bucket = "task2s3bucket"
  acl    = "private"
  tags = {
    Name = "task2_s3"
  }
}

locals {
  s3_origin_id = "myS3Origin"
}

output "task2s3" {
  value = aws_s3_bucket.task2s3
}

// Creating Origin Access Identity
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "Some comment"
}

output "origin_access_identity" {
  value = aws_cloudfront_origin_access_identity.origin_access_identity
}

// Creating bucket policy
data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.task2s3.arn}/*"]
    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }
  statement {
    actions   = ["s3:ListBucket"]
    resources = [aws_s3_bucket.task2s3.arn]
    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "example" {
  bucket = aws_s3_bucket.task2s3.id
  policy = data.aws_iam_policy_document.s3_policy.json
}

Output:

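As a side note, instead of copying the image with the AWS CLI later (step 7), the upload could also be expressed natively in Terraform; a sketch using the same local path as step 7 (adjust it to your machine):

resource "aws_s3_bucket_object" "task2image" {
  bucket = aws_s3_bucket.task2s3.id
  key    = "image.png"
  source = "C:/Users/hackcoderr/Desktop/hmc/task2/ss/image.png"
  acl    = "public-read"
}

This keeps the upload in the Terraform state, so terraform destroy also cleans the object up.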

Step 6: Create a CloudFront distribution using the S3 bucket (which contains the images).

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.task2s3.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }

  enabled         = true
  is_ipv6_enabled = true

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
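To avoid hunting for the distribution's URL in the AWS console, an output (the name cloudfront_url is my own choice) can print the domain name right after terraform apply:

output "cloudfront_url" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}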

Output:


Step 7: Upload the image to the S3 bucket and use the CloudFront URL in the code in /var/www/html.

resource "null_resource" "null4" {
  provisioner "local-exec" {
    command = "aws s3 cp C:/Users/hackcoderr/Desktop/hmc/task2/ss/image.png s3://task2s3bucket --acl public-read"
  }
}
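The step's title also calls for updating the code with the CloudFront URL. One way to sketch that (this resource and the img tag are my own illustration, not from the original code) is another remote-exec that appends an image tag pointing at the distribution:

resource "null_resource" "null5" {
  depends_on = [aws_cloudfront_distribution.s3_distribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/hackcoderr/Desktop/hmc/mykey.pem")
    host        = aws_instance.task2aim.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # hypothetical: append an image tag that loads from CloudFront
      "echo '<img src=\"https://${aws_cloudfront_distribution.s3_distribution.domain_name}/image.png\">' | sudo tee -a /var/www/html/index.html"
    ]
  }
}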

Now the task is complete. This is the final view of my webpage.


If you face any difficulty with this task, you can look at the GitHub code; the link is below. In case you have any query about this task, you can ping me; contact details are given on my LinkedIn profile.

Github Link: https://github.com/hackcoderr/Deploy-Infrastructure-website-On-AWS-and-integrating-with-EFS-using-Terraform

Thank you for reading.



