Launching Web-Server using AWS-EFS and Terraform

TASK DESCRIPTION

1. Create a Security Group that allows port 80.

2. Launch an EC2 instance.

3. For this EC2 instance, use an existing key (or a newly provided one) and the Security Group created in step 1.

4. Launch one volume using the EFS service, attach it to your VPC, then mount that volume on /var/www/html.

5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change their permission to publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and update the code in /var/www/html with the CloudFront URL.

Why is EFS better than EBS?

EBS

Every server needs a drive. EBS is essentially cloud-based storage for the drives of your virtual machines: it stores data in blocks attached to an Amazon EC2 instance, similar to a local disk drive on a physical machine. You need to mount an EBS volume onto an Amazon EC2 instance before you can use it.

EFS

EFS, on the other hand, is automatically scalable. You need not worry about your running applications: even if the workload suddenly spikes, the storage scales itself up without any problems. If the workload decreases, the storage scales back down, so you don't pay for capacity you don't use.

Amazon EFS is especially helpful for running servers, shared volumes, big data analysis, and any scalable workload you can think of.
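
This difference shows up directly in Terraform: an EBS volume must be given a fixed size up front, while an EFS file system takes no size argument at all. A minimal sketch (the resource names and availability zone below are illustrative assumptions, not part of this task):

# EBS: capacity is provisioned explicitly and resized manually later
resource "aws_ebs_volume" "example" {
  availability_zone = "ap-south-1a"
  size              = 10 # GiB, fixed at creation time
}

# EFS: no size argument; capacity grows and shrinks with actual usage
resource "aws_efs_file_system" "example" {
  creation_token = "example"
}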


Following are some of the differences:

Storage Type

EBS: Block storage

EFS: File storage (it exposes a file system over NFS, not objects)

Performance

EBS: Hardly scalable

- Volume size must be scaled manually (though this can be done without stopping the instance).

- Baseline performance of 3 IOPS per GB for General Purpose volumes.

- Use Provisioned IOPS for increased performance.

EFS: Scalable

- Highly scalable managed service.

- Supports up to 7,000 file system operations per second.

Data Stored

EBS:

- Data stored stays in the same Availability Zone.

- Replicas are made within the AZ for higher durability.

EFS:

- Data stored in AWS EFS stays within the region.

- Replicas are made across multiple Availability Zones within the region.

Data Access

EBS: Can be attached to only a single Amazon EC2 instance at a time

EFS: Can be accessed by 1 to 1000s of EC2 instances from multiple AZs, concurrently

File System

EBS: Supports various file systems, including ext3 and ext4

EFS: File storage service for use with Amazon EC2. EFS can also be used as a network file system for on-premises servers via AWS Direct Connect.

Encryption

EBS: Uses an AWS KMS–Managed Customer Master Key (CMK) and AES 256-bit Encryption standards

EFS: Uses an AWS KMS–Managed Customer Master Key (CMK) and AES 256-bit Encryption standards

Availability

EBS: 99.99% available

EFS: Highly available (No public SLA)

Durability

EBS: 20 times more reliable than normal hard disks

EFS: Highly durable (No public SLA)

Availability Zone Failure

EBS: Cannot withstand AZ failure without point-in-time EBS snapshots

EFS: Every file system object is redundantly stored across multiple Availability Zones so it can survive one AZ failure.

Storage Size

EBS: Maximum volume size of 16 TiB

EFS: No limitation on the size of the file system

File Size Limitation

EBS: No limitation on file size in EBS disk

EFS: Single files have a maximum size of 47.9 TiB

Data Throughput and I/O

EBS: SSD- and HDD-backed storage types. SSD-backed and Provisioned IOPS volumes are recommended where dedicated I/O performance is needed

EFS: Default throughput of 3 GB/s for all connected clients



TERRAFORM CODE -
1. Tell Terraform which provider to use, and set the profile and the region where the infrastructure will be built.

provider "aws" {
  region     = "ap-south-1"
  profile    = "yashika"
}
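
Optionally, the provider version can be pinned so the code keeps working as written. The constraint below is my own assumption (this code uses arguments such as acl on aws_s3_bucket, which the 4.x provider reworked, so a pre-4.0 release is the safe choice; the required_providers source syntax needs Terraform 0.13+):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0" # assumption: any 3.x release works with the resources below
    }
  }
}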

2. Create a VPC with a certain range of IP addresses.

resource "aws_vpc" "vpc_task2" {
                cidr_block = "192.168.0.0/16"
                instance_tenancy = "default"
                enable_dns_hostnames = true
                tags = {
                  Name = "vpc_task2"
                }
              
            }

3. Create a Public Subnet.

 resource "aws_subnet" "subnet1" {
                vpc_id = aws_vpc.vpc_task2.id
                cidr_block = "192.168.0.0/24"
                availability_zone = "ap-south-1a"
                map_public_ip_on_launch = "true"
                tags = {
                  Name = "subnet1"
                }
              
         }

4. Create an Internet Gateway.

resource "aws_internet_gateway" "gw" {
                vpc_id = aws_vpc.vpc_task2.id
                tags = {
                  Name = "gw"
                }
              
          }

5. Create a Route Table and associate it with our Public Subnet.

             resource "aws_route_table" "rt" {
                vpc_id = aws_vpc.vpc_task2.id


                route {
                  cidr_block = "0.0.0.0/0"
                  gateway_id = aws_internet_gateway.gw.id
                }


                tags = {
                  Name = "rt"
                }
              }




              resource "aws_route_table_association" "rt_asso" {
                subnet_id = aws_subnet.subnet1.id
                route_table_id = aws_route_table.rt.id
              }

6. Create a Security Group with inbound rules that allow port 22 for SSH, port 80 so that clients can reach the website, and port 2049 for NFS so that the instance can mount the EFS volume.

resource "aws_security_group" "tcp" {
	name      = "allow_tcp"
	vpc_id      = aws_vpc.vpc_task2.id


	
	ingress { 
		from_port    = 80
		to_port      = 80 
		protocol     = "tcp"
		cidr_blocks = ["0.0.0.0/0"]
	}

	ingress { 

		from_port    = 22
		to_port      = 22
		protocol     = "tcp"
		cidr_blocks = ["0.0.0.0/0"]
	}	


	 ingress {

        from_port   = 2049
        to_port     = 2049
        protocol    = "tcp"
        cidr_blocks = [ "0.0.0.0/0"]


         }

	 egress {

         from_port       = 0
         to_port         = 0
    	 protocol        = "-1"
    	 cidr_blocks     = ["0.0.0.0/0"]
  	}

	
	tags = {
		Name = "allow_tcp"
	}
  }
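
Opening NFS to 0.0.0.0/0 is fine for a demo, but a tighter alternative (a suggestion, not part of the original setup) is to replace the third ingress block above so that port 2049 only accepts traffic originating inside the VPC:

ingress {
  from_port   = 2049
  to_port     = 2049
  protocol    = "tcp"
  cidr_blocks = [aws_vpc.vpc_task2.cidr_block] # only hosts inside the VPC can reach NFS
}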

7. Create the EFS storage, its file system policy, and the mount target. The policy allows mount, write, and root access only over encrypted transport (the aws:SecureTransport condition), which is why the fstab entry in step 8 uses the tls mount option.

resource "aws_efs_file_system" "efs_task2" {
   creation_token = "efs"
   performance_mode = "generalPurpose"
   throughput_mode = "bursting"
   encrypted = "true"
   tags = {
     Name = "efs_task2"
   }
 }


resource "aws_efs_file_system_policy" "policy" {
  file_system_id = aws_efs_file_system.efs_task2.id
  policy = <<POLICY
{
    "Version": "2012-10-17",
    "Id": "efs-policy-wizard-c45981c9-af16-441d-aa48-0fbd69ffaf79",
    "Statement": [
        {
            "Sid": "efs-statement-20e4323c-ca0e-418d-8490-3c3880f60788",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Resource": "${aws_efs_file_system.efs_task2.arn}",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite",
                "elasticfilesystem:ClientRootAccess"
            ],
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "true"
                }
            }
        }
    ]
}
POLICY
}


resource "aws_efs_mount_target" "efs_mount" {
  file_system_id  = aws_efs_file_system.efs_task2.id
  subnet_id = aws_instance.myins.subnet_id
  security_groups = [aws_security_group.tcp.id]
}
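
If you want to see the DNS name that clients actually mount, a small output helps (the output name here is my own addition):

output "efs_dns_name" {
  value = aws_efs_file_system.efs_task2.dns_name
}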

8. Launch an EC2 instance and mount the EFS volume on /var/www/html.

resource  "aws_instance"  "myins" {
	ami             = "ami-0447a12f28fddb066"
	instance_type   = "t2.micro"
	key_name        =  "mykey"
  	subnet_id       = aws_subnet.subnet1.id
        security_groups = [aws_security_group.tcp.id]
	tags = {
		Name = "myos2"	
	}
}


resource "null_resource" "mount"  {
  depends_on = [
    aws_efs_mount_target.efs_mount,
  ]
	connection {
    		type        = "ssh"
    		user        = "ec2-user"
   		private_key = file("C:/Users/Ragesh Tambi/Downloads/mykey.pem")
    		host        = aws_instance.myins.public_ip
  	}


	provisioner "remote-exec" {
    		inline = [
      			    "sleep 30",
			    "sudo setenforce 0",
      			    "sudo yum install -y httpd git php amazon-efs-utils nfs-utils",
      			    "sudo systemctl start httpd",
      			    "sudo systemctl enable httpd",
     			    "sudo chmod ugo+rw /etc/fstab",
      			    "sudo echo '${aws_efs_file_system.efs_task2.id}:/ /var/www/html efs tls,_netdev' >> /etc/fstab",
      			    "sudo mount -a -t efs,nfs4 defaults",
     			    "sudo rm -rf /var/www/html/*",
    			    "sudo git clone https://github.com/Yashika-Khandelwal/cloud_task2.git /var/www/html/"
    ]
  }
}
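
To confirm that /var/www/html is really backed by EFS rather than the root volume, you can check the mounted file system type. A small optional sketch (the resource name verify_mount is my own; it reuses the same key and connection details as above):

resource "null_resource" "verify_mount" {
  depends_on = [
    null_resource.mount,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Ragesh Tambi/Downloads/mykey.pem")
    host        = aws_instance.myins.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "df -hT /var/www/html", # the Type column should read nfs4 when the EFS mount succeeded
    ]
  }
}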

9. Create an S3 bucket and upload the image to it with public-read permission.

resource "aws_s3_bucket" "s3-bucket-task2" {
	bucket = "s3-bucket-task2"
	acl    = "public-read"
	force_destroy  = true
  	cors_rule {
    	  allowed_headers = ["*"]
    	  allowed_methods = ["PUT", "POST"]
    	  allowed_origins = ["https://s3-bucket-task2"]
    	  expose_headers  = ["ETag"]
    	  max_age_seconds = 3000
  }
	depends_on = [
   	  null_resource.mount,
  ]
}




resource "aws_s3_bucket_object" "upload" {
	bucket 		 = aws_s3_bucket.s3-bucket-task2.id
	key    		 = "dog.jpg"
	source 		 = "C:/Users/Ragesh Tambi/Desktop/dog.jpg"
	acl	         = "public-read"
	
}
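
The task mentions that the repo has several images. Instead of one resource per file, a for_each over a local folder uploads all of them. A sketch only: the folder path and the images resource name are assumptions, and fileset requires Terraform 0.12.8 or later:

resource "aws_s3_bucket_object" "images" {
  for_each = fileset("C:/Users/Ragesh Tambi/Desktop/images", "*.jpg") # assumed local folder of images

  bucket = aws_s3_bucket.s3-bucket-task2.id
  key    = each.value
  source = "C:/Users/Ragesh Tambi/Desktop/images/${each.value}"
  acl    = "public-read"
}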

10. Create a CloudFront distribution with the S3 bucket as its origin.

resource "aws_cloudfront_distribution" "s3_distribution" {
	origin {
    		domain_name = aws_s3_bucket.s3-bucket-task2.bucket_regional_domain_name
   		origin_id   = "S3-${aws_s3_bucket.s3-bucket-task2.bucket}"
		custom_origin_config {
            		http_port = 80
            		https_port = 443
            		origin_protocol_policy = "match-viewer"
            		origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]
        	}
	}
    	default_root_object = "dog.jpg"
    	enabled = true




  
    	custom_error_response {
        	error_caching_min_ttl = 3000
        	error_code = 404
        	response_code = 200
        	response_page_path = "/dog.jpg"
    	}




    	default_cache_behavior {
        	allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
        	cached_methods = ["GET", "HEAD"]
        	target_origin_id = "S3-${aws_s3_bucket.s3-bucket-task2.bucket}"




        	#Not Forward all query strings, cookies and headers
        	forwarded_values {
            		query_string = false
	    		cookies {
				forward = "none"
	    	}
            
        }




        	viewer_protocol_policy = "redirect-to-https"
        	min_ttl = 0
        	default_ttl = 3600
        	max_ttl = 86400
    }






    	restrictions {
        	geo_restriction {
            		# type of restriction, blacklist, whitelist or none
            		restriction_type = "none"
        }
    }






    	viewer_certificate {
        	cloudfront_default_certificate = true
    }
}
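
Because the origin is an S3 bucket, an alternative worth knowing (a sketch, not part of the original deployment) is to declare it as a native S3 origin behind an Origin Access Identity, so the images no longer have to be public; the bucket policy would additionally have to grant the OAI read access:

resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "OAI for s3-bucket-task2"
}

# The origin block in the distribution would then use s3_origin_config
# instead of custom_origin_config:
#
#   s3_origin_config {
#     origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
#   }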

11. Create a null resource that appends the CloudFront image URL to our code.

resource "null_resource" "mypic"  {
  depends_on = [
    null_resource.mount,
    aws_cloudfront_distribution.s3_distribution,
  ]
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("C:/Users/Ragesh Tambi/Downloads/mykey.pem")
    host     = aws_instance.myins.public_ip
  }
  provisioner "remote-exec" {
    inline = [
        "sudo chmod ugo+rw /var/www/html/index.php",
        "sudo echo '<img src=https://${aws_cloudfront_distribution.s3_distribution.domain_name}/dog.jpg alt='YASHIKA' width='500' height='600'</a>' >> /var/www/html/index.php"
    ]
  }
}

12. Create another null resource that automatically retrieves the public IP of our instance and opens it in Chrome (start chrome is Windows-specific; on Linux, a command such as xdg-open would be the equivalent). This lands us on the home page of our website served from /var/www/html.

# -- Starting chrome for output


resource "null_resource" "nulllocal1"  {




	provisioner "local-exec" {
	    command = "start chrome  ${aws_instance.myins.public_ip}"
  	}
}

13. Output the CloudFront domain name.

output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}

After performing the above steps, run the following commands -

terraform init

terraform validate

terraform apply --auto-approve

We can see all the deployments in our AWS Management Console -

VPC

SUBNET

INTERNET GATEWAY

ROUTE TABLES

EFS STORAGE

EC2 INSTANCE

S3 STORAGE

CLOUDFRONT

Look of the Website -


THANK YOU !!

