Creating AWS infrastructure with AWS EFS using Terraform
- Create a key pair and a security group that allows port 80.
- Launch an EC2 instance.
- In this EC2 instance, use the key pair and security group created in step 1.
- Launch a volume using the EFS service, attach it to your VPC, and mount it on /var/www/html.
- The developer has uploaded the code to a GitHub repo; the repo also contains some images.
- Copy the GitHub repo code into /var/www/html.
- Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and make them publicly readable.
- Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
Amazon Elastic File System provides a simple, scalable, fully managed, elastic NFS file system for use with AWS Cloud services and on-premises resources. Amazon EFS is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily.
Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently accessible storage for up to thousands of Amazon EC2 instances.
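Before connecting to the provider, it helps to pin the AWS provider version so the run behaves the same on every machine. A minimal sketch, assuming Terraform 0.13 or later (the version constraint is an example; pin it to whatever release you test with):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # assumed constraint; adjust to the provider release you actually use
      version = "~> 3.0"
    }
  }
}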
Connecting with the "aws" provider.
provider "aws" { profile= "shradha_seth" region = "ap-south-1"
}
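If you want to reuse the same code with a different profile or region, the hard-coded values can be pulled into variables. A small optional sketch (the variable names aws_profile and aws_region are my own):
variable "aws_profile" {
  type    = string
  default = "shradha_seth"
}

variable "aws_region" {
  type    = string
  default = "ap-south-1"
}
With these in place, the provider block would reference var.aws_profile and var.aws_region instead of the literal strings.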
creating vpc
resource "aws_vpc" "myvpc" { cidr_block = "192.168.0.0/16" instance_tenancy = "default" enable_dns_hostnames = true }
creating subnets
resource "aws_subnet" "publicsubnet" { vpc_id = "${aws_vpc.myvpc.id}" cidr_block = "192.168.0.0/24" availability_zone = "ap-south-1a" map_public_ip_on_launch = true
}
creating internet gateway
resource "aws_internet_gateway" "mygate" { vpc_id = "${aws_vpc.myvpc.id}"
}
create route tables
resource "aws_route_table" "my-rt" { vpc_id = "${aws_vpc.myvpc.id}" route { cidr_block = "0.0.0.0/0" gateway_id = "${aws_internet_gateway.mygate.id}" }
}
resource "aws_route_table_association" "public-rt" { subnet_id = aws_subnet.publicsubnet.id route_table_id = aws_route_table.my-rt.id
}
create security groups
resource "aws_security_group" "efs-sg" { vpc_id = "${aws_vpc.myvpc.id}" ingress { description = "HTTP" from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "SSH" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "NFS" from_port = 2049 to_port = 2049 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] }
}
create efs
resource "aws_efs_file_system" "myefs" { } resource "aws_efs_mount_target" "efs-tar" { file_system_id = "${aws_efs_file_system.myefs.id}" subnet_id = "${aws_subnet.publicsubnet.id}" security_groups = ["${aws_security_group.efs-sg.id}"] }
create instance
resource "aws_instance" "efs-demo" { ami = "ami-08706cb5f68222d09" instance_type = "t2.micro" key_name = "tfkey" subnet_id = "${aws_subnet.publicsubnet.id}" vpc_security_group_ids = ["${aws_security_group.efs-sg.id}"] user_data = <<-EOF #! /bin/bash sudo su - root sudo yum install httpd -y sudo service httpd start sudo service httpd enable sudo yum install git -y sudo yum install -y amazon-efs-utils sudo mount -t efs "${aws_efs_file_system.myefs.id}":/ /var/www/html mkfs.ext4 /dev/sdf mount /dev/sdf /var/www/html cd /var/www/html git clone https://github.com/SyedWasilAbidi/CHILLI EOF }
After this we will create an S3 bucket and a CodePipeline that takes the images from the GitHub repository and transfers them directly into the S3 bucket.
code for s3 bucket
resource "aws_s3_bucket" "mybucketfortask2" { bucket = "mybucketfortask2" acl = "public-read" }
code for pipeline
resource "aws_iam_role" "codepipeline_role" { name = "test-role1" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "codepipeline.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } EOF } resource "aws_iam_role_policy" "codepipeline_policy" { name = "codepipeline_policy" role = "${aws_iam_role.codepipeline_role.id}" policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect":"Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion", "s3:GetBucketVersioning", "s3:PutObject" ], "Resource": [ "${aws_s3_bucket.mybucketfortask2.arn}", "${aws_s3_bucket.mybucketfortask2.arn}/*" ] }, { "Effect": "Allow", "Action": [ "codebuild:BatchGetBuilds", "codebuild:StartBuild" ], "Resource": "*" } ] } EOF } #code to create a code-pipeline resource "aws_codepipeline" "codepipeline" { name = "shradhapipe" role_arn = "${aws_iam_role.codepipeline_role.arn}" artifact_store { location = "${aws_s3_bucket.mybucketfortask2.bucket}" type = "S3" } stage { name = "Source" action { name = "Source" category = "Source" owner = "ThirdParty" provider = "GitHub" version = "1" output_artifacts = ["SourceArtifact"] configuration = { Owner = "ss1998-seth" Repo = "picture" Branch = "master" OAuthToken = "92155a8dddcdfa6f2f15e6d20dbdeb87a85cf1d9" } } } stage { name = "Deploy" action { name = "Deploy" category = "Deploy" owner = "AWS" provider = "S3" input_artifacts = ["SourceArtifact"] version = "1" configuration = { BucketName = "${aws_s3_bucket.mybucketfortask2.bucket}" Extract = "true" ObjectKey = "/*.jpg" } } } }
creating cloudfront distribution
resource "aws_cloudfront_distribution" "s3_distribution" { default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] target_origin_id = "aws_s3_bucket.mybucketfortask2.bucket" forwarded_values { query_string = false cookies { forward = "none" } } viewer_protocol_policy = "allow-all" } enabled = true origin { domain_name = "${aws_s3_bucket.mybucketfortask2.bucket_domain_name}" origin_id = "aws_s3_bucket.mybucketfortask2.bucket" } restrictions { geo_restriction { restriction_type = "none" } } viewer_certificate { cloudfront_default_certificate = true } }
After writing this code, save it in a folder and run the following commands from the CLI:
- terraform init (to download the plugins)
- terraform validate (to check for errors)
- terraform apply (to run the code)
Then, when you check the AWS console, all the required resources will have been launched.
output:
Hence, the given task is successfully completed.
Don't forget to run the following command to destroy the whole infrastructure once you are done:
- terraform destroy
THANK YOU!