Creating AWS infrastructure with AWS: EFS using Terraform


Task details:

  • Create a key pair and a security group that allows port 80.
  • Launch an EC2 instance.
  • In this EC2 instance, use the key and security group created in step 1.
  • Create one volume using the EFS service, attach it to your VPC, and then mount that volume onto /var/www/html.
  • The developer has uploaded the code to a GitHub repo, which also contains some images.
  • Copy the GitHub repo code into /var/www/html.
  • Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.
  • Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Amazon Elastic File System (EFS) provides a simple, scalable, fully managed, elastic NFS file system for use with AWS Cloud services and on-premises resources. Amazon EFS is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily.

Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances.

Connecting to the AWS provider.

provider "aws" {
       profile= "shradha_seth"
       region = "ap-south-1"
}
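
If you are on Terraform 0.13 or later, you may also want to pin the provider version explicitly. A minimal sketch (this block is my addition, and the "~> 3.0" constraint is only an example that matches the 0.12-era syntax used in this article):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}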

creating vpc

resource "aws_vpc" "myvpc" {
  cidr_block       = "192.168.0.0/16"
  instance_tenancy = "default"
  enable_dns_hostnames = true

}

creating subnets

resource "aws_subnet" "publicsubnet" {
  vpc_id     = "${aws_vpc.myvpc.id}"
  cidr_block = "192.168.0.0/24"
  availability_zone = "ap-south-1a"
  map_public_ip_on_launch = true

}


creating internet gateway

resource "aws_internet_gateway" "mygate" {
  vpc_id = "${aws_vpc.myvpc.id}"

}

create route tables

resource "aws_route_table" "my-rt" {
  vpc_id = "${aws_vpc.myvpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.mygate.id}"

  }

}

resource "aws_route_table_association" "public-rt" {
  subnet_id      = aws_subnet.publicsubnet.id
  route_table_id = aws_route_table.my-rt.id

}

create security groups

resource "aws_security_group" "efs-sg" {
  vpc_id      = "${aws_vpc.myvpc.id}"




  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }




  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  } 
  
  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }




  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

create efs

resource "aws_efs_file_system" "myefs" {
}




resource "aws_efs_mount_target" "efs-tar" {
  file_system_id = "${aws_efs_file_system.myefs.id}"
  subnet_id      = "${aws_subnet.publicsubnet.id}"
  security_groups = ["${aws_security_group.efs-sg.id}"]
}
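
Optionally, an output such as the one below (a small addition of mine, not in the original article) prints the file system's DNS name, which is handy if you want to verify or mount it manually from the instance:

output "efs_dns_name" {
  value = aws_efs_file_system.myefs.dns_name
}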

create instance

resource "aws_instance" "efs-demo" {
ami           = "ami-08706cb5f68222d09"
instance_type = "t2.micro"
key_name      = "tfkey"
subnet_id     = "${aws_subnet.publicsubnet.id}" 
vpc_security_group_ids = ["${aws_security_group.efs-sg.id}"]
user_data = <<-EOF




   	#! /bin/bash
	sudo su - root
	sudo yum install httpd -y
        sudo service httpd start
	sudo service httpd enable
 	sudo yum install git -y
        sudo yum install -y amazon-efs-utils 
        sudo mount -t efs "${aws_efs_file_system.myefs.id}":/ /var/www/html
	mkfs.ext4 /dev/sdf	
	mount /dev/sdf /var/www/html
	cd /var/www/html
	git clone https://github.com/SyedWasilAbidi/CHILLI
	  
EOF
}
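
The instance refers to a key pair named tfkey, which is assumed to already exist in the region. If you would rather create it from Terraform as well (as step 1 of the task suggests), a rough sketch using the tls provider could look like the following; this is my own assumption and is not part of the original code:

# hypothetical key-pair sketch: generate a key locally and register it as "tfkey"
resource "tls_private_key" "tfkey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "tfkey" {
  key_name   = "tfkey"
  public_key = tls_private_key.tfkey.public_key_openssh
}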

After this, we will create an S3 bucket and a CodePipeline that takes the data from the GitHub repository and transfers it directly into the S3 bucket.

code for s3 bucket

resource "aws_s3_bucket" "mybucketfortask2" {
  bucket = "mybucketfortask2"
  acl    = "public-read"
}
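
Note that acl = "public-read" applies to the bucket itself; objects uploaded later do not automatically become public. One way to make the deployed images publicly readable (a sketch I am adding here, not something from the original article) is a bucket policy:

resource "aws_s3_bucket_policy" "public_read" {
  bucket = aws_s3_bucket.mybucketfortask2.id

  # grant anonymous read access to every object in the bucket
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "${aws_s3_bucket.mybucketfortask2.arn}/*"
    }
  ]
}
EOF
}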

code for pipeline

resource "aws_iam_role" "codepipeline_role" {
  name = "test-role1"




  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codepipeline.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "codepipeline_policy" {
  name = "codepipeline_policy"
  role = "${aws_iam_role.codepipeline_role.id}"




  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect":"Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketVersioning",
        "s3:PutObject"
      ],
      "Resource": [
        "${aws_s3_bucket.mybucketfortask2.arn}",
        "${aws_s3_bucket.mybucketfortask2.arn}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "codebuild:BatchGetBuilds",
        "codebuild:StartBuild"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}


code for code-pipeline


resource "aws_codepipeline" "codepipeline" {
  name     = "shradhapipe"
  role_arn = "${aws_iam_role.codepipeline_role.arn}"




  artifact_store {
    location = "${aws_s3_bucket.mybucketfortask2.bucket}"
    type     = "S3"
    }
  




  stage {
    name = "Source"




    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["SourceArtifact"]




      configuration = {
        Owner  = "ss1998-seth"
        Repo   = "picture"
        Branch = "master"
	OAuthToken = "92155a8dddcdfa6f2f15e6d20dbdeb87a85cf1d9"
      }
    }
  }




  stage {
    name = "Deploy"




    action {
      name            = "Deploy"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "S3"
      input_artifacts = ["SourceArtifact"]
      version         = "1"




      configuration = {
        BucketName = "${aws_s3_bucket.mybucketfortask2.bucket}"
	Extract = "true"
        ObjectKey = "/*.jpg"
      }
    }
  }


}
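
One caution: the OAuthToken above is hardcoded, and a GitHub token should be treated as a secret. A safer pattern, sketched here as a suggestion rather than something from the original code, is to declare a variable and reference it in the source action as OAuthToken = var.github_oauth_token:

variable "github_oauth_token" {
  description = "GitHub personal access token used by the CodePipeline source stage"
  type        = string
}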

creating cloudfront distribution

resource "aws_cloudfront_distribution" "s3_distribution" {
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "aws_s3_bucket.mybucketfortask2.bucket"




    forwarded_values {
      query_string = false




      cookies {
        forward = "none"
      }
    }




    viewer_protocol_policy = "allow-all"
  }
enabled = true
origin {
        domain_name = "${aws_s3_bucket.mybucketfortask2.bucket_domain_name}"
        origin_id   = "aws_s3_bucket.mybucketfortask2.bucket"
    }




restrictions {
        geo_restriction {
        restriction_type = "none"
    }
  }




  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
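
The final task item asks us to use the CloudFront URL inside the code in /var/www/html. An output like the following (my addition) makes the distribution's domain name easy to copy once the apply finishes:

output "cloudfront_url" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}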

After writing this code, save it in a folder and run the following commands from the CLI:

  • terraform init (to download the plugins)
  • terraform validate (to check for errors)
  • terraform apply (to run the code)

Then, when you check the AWS console, you will see that all the required resources have been launched.

Hence, the given task is successfully completed.

Don't forget to run the following command to destroy the whole infrastructure that was launched:

  • terraform destroy

THANK YOU!

