Launching AWS instance with EFS using Terraform
Hybrid Multi Cloud Computing Task-2
This article describes how to use Terraform to launch an instance on AWS Cloud with an NFS file system for persistent data storage, i.e., creating an EFS file system and attaching it to the instance. The instance will be used as a webserver deployed for clients on the public internet; the Document Root (/var/www/html for Apache) will be mounted on EFS for persistent storage. Terraform will also create an S3 bucket to store static data such as images. Finally, to minimize latency and transfer the data securely, we will create a CloudFront distribution.
Terraform is a powerful tool created by HashiCorp that implements Infrastructure as Code (IaC) to provision and manage any cloud, infrastructure, or service, and helps automate the workflow. The Terraform code used in this project is available on the GitHub repository mentioned at the end.
To use Terraform, first create an IAM user in an AWS account and attach policies; for personal use-cases, 'PowerUserAccess' is sufficient. Download the Access Key provided. Also, on the EC2 dashboard, create a new key-pair to use for the instance, or let Terraform itself create one (a sketch is shown after the provider block below).
Create a file with the extension '.tf' in some workspace on the local machine. The code is also available on the GitHub repository linked at the end.
At the top of the file, declare AWS as the provider, the IAM user created above as the profile, and the region to use for launching the infrastructure.
// mention IAM user in profile
provider "aws" {
  region  = "ap-south-1"
  profile = "<IAM User>"
}
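As mentioned above, Terraform itself can create the key-pair instead of the EC2 dashboard. Here is a minimal sketch using the 'tls' and 'local' providers; the resource names and the file name 'keyos1.pem' are illustrative, not part of the original code:

// generate an RSA key locally with the tls provider
resource "tls_private_key" "webserver_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

// register the public half with AWS as a key-pair
resource "aws_key_pair" "generated_key" {
  key_name   = "keyos1"
  public_key = tls_private_key.webserver_key.public_key_openssh
}

// save the private half to disk so ssh and the 'connection' blocks can use it
resource "local_file" "private_key_pem" {
  filename        = "keyos1.pem"
  content         = tls_private_key.webserver_key.private_key_pem
  file_permission = "0400"
}

The generated 'keyos1.pem' can then be passed to the file() function wherever a key path is required below.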
Before launching an instance, a security group must exist to attach to it. This security group acts as a firewall, allowing inbound traffic via the SSH and HTTP protocols only. Use the code below to create one; mention the VPC ID assigned by AWS.
// give the security group a name eg., sg_group
resource "aws_security_group" "sg_group" {
  name        = "TLS"
  description = "Allow TLS inbound traffic"
  vpc_id      = "<VPC ID>"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "sg_group"
  }
}
Use the code below to launch an instance with the given specifications: the AMI, the instance type, the subnet ID, and the key to use (the same one created above). Specify the path of the key file as the value of 'private_key' under 'connection', using the file() function.
// give the instance a name eg., tfinst
resource "aws_instance" "tfinst" {
  ami           = "ami-0447a12f28fddb066"   // provided by Amazon or create one
  instance_type = "<Instance type>"         // eg., t2.micro
  key_name      = "<Key Name>"              // eg., keyos1 (same as above)
  subnet_id     = "<Subnet ID>"             // assigned by AWS
  // with an explicit subnet_id, security groups are passed by ID
  vpc_security_group_ids = [aws_security_group.sg_group.id]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("<Key Path>")   // file-path of key
    host        = self.public_ip      // 'self' avoids a dependency cycle
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "tfinst"
  }
}
The 'remote-exec' provisioner used above installs the software required to configure the instance as a webserver. To provide persistent storage to the instance, we will use the AWS EFS service instead of EBS. Amazon Elastic File System provides a scalable, managed, elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand without disrupting applications and can be used by multiple instances at the same time as centralized storage.
To create an EFS file system, use the code given below. EFS requires a mount target to be created in an Availability Zone; to create a mount target, a VPC with a subnet must also exist.
resource "aws_efs_file_system" "task2-efs" { creation_token = "task2-efs" tags = { Name = "Task-2-volume" } } resource "aws_efs_access_point" "efs-access" { file_system_id = aws_efs_file_system.task2-efs.id } resource "aws_vpc" "task2-vpc-efs" { cidr_block = "10.0.0.0/16" } resource "aws_subnet" "task2-subnet" { vpc_id = aws_vpc.task2-vpc-efs.id availability_zone = "ap-south-1a" // mentions the AZ to use cidr_block = "10.0.1.0/24" } resource "aws_efs_mount_target" "task2-efs-mount" { file_system_id = aws_efs_file_system.task2-efs.id subnet_id = aws_subnet.task2-subnet.id }
Mount the EFS volume to the instance just launched using the code given below. The 'remote-exec' provisioner below first mounts the EFS volume on the webserver's Document Root (eg., /var/www/html for Apache), then clones a GitHub repository (containing the webpage code to be deployed) into a workspace and copies that code to the Document Root.
resource "null_resource" "mount_vol" { depends_on = [ aws_efs_mount_target.task2-efs-mount, ] connection { type = "ssh" user = "ec2-user" private_key = file("<Key-Path>") // file-path of key and name host = aws_instance.tfinst.public_ip } provisioner "remote-exec" { inline = [ "sudo mount ${aws_efs_mount_target.task2-efs-mount.mount_target_dns_name} /var/www/html", "sudo rm -rf /var/www/html/*", "sudo git clone https://github.com/sarthakSharma5/cloud2.git /workspace", "sudo cp -r /workspace/* /var/www/html/", ] } }
Either create a repository of your own or use the same GitHub repository for testing purposes.
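Since the later verification steps use the instance's public IP, an optional output block (added here for convenience; not part of the original code) prints it once 'terraform apply' finishes:

output "instance_public_ip" {
  value = aws_instance.tfinst.public_ip
}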
The code so far is enough to launch the webserver with persistent storage. But to deploy static data such as images, we will use the S3 and CloudFront services as mentioned above.
Create an S3 bucket using the code below; to upload an image as an object in the bucket, provide the path of a local image as the value of 'source'. Note that the S3 bucket is made publicly readable.
resource "aws_s3_bucket" "terraform_bucket_task_2" { bucket = "task2-tf-efs" // give a name to the bucket acl = "public-read" versioning { enabled = true } tags = { Name = "terraform_bucket_task_2" Env = "Dev" } } resource "aws_s3_bucket_public_access_block" "s3BlockPublicAccess" { bucket = aws_s3_bucket.terraform_bucket_task_2.id block_public_acls = true block_public_policy = true restrict_public_buckets = true } resource "aws_s3_bucket_object" "terraform_bucket_task_2_object" { depends_on = [ aws_s3_bucket.terraform_bucket_task_2, ] bucket = aws_s3_bucket.terraform_bucket_task_2.bucket key = "cldcomp.jpg" // provide key-name eg., image name acl = "public-read" source = "<Path of Local_Image>" // image: cldcomp.jpg } // Provide Path of Local Image to upload as an object as value of source
To create a CloudFront distribution, use the code given below. You may update the geo-restrictions if you need to limit viewers to certain locations.
resource "aws_cloudfront_distribution" "terraform_distribution_2" { origin { domain_name = "cldcomp.jpg" // key used for s3 bucket object origin_id = "Cloud_comp" custom_origin_config { http_port = 80 https_port = 80 origin_protocol_policy = "match-viewer" origin_ssl_protocols = [ "TLSv1", "TLSv1.1", "TLSv1.2" ] } } enabled = true default_cache_behavior { allowed_methods = [ "DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT", ] cached_methods = [ "GET", "HEAD" ] target_origin_id = "Cloud_comp" // use same as above for origin_id forwarded_values { query_string = false cookies { forward = "none" } } viewer_protocol_policy = "allow-all" min_ttl = 0 default_ttl = 3600 max_ttl = 86400 } restrictions { geo_restriction { restriction_type = "none" } } viewer_certificate { cloudfront_default_certificate = true } }
Finally, the complete Infrastructure as Code (IaC) is ready to use. Run the following commands in a terminal or command line after moving to the same workspace.
terraform init
terraform validate
terraform apply
- The first command initializes a working directory containing Terraform configuration files.
- The second command validates the files and checks for any errors or warnings.
- The third command applies the configuration. Type 'yes' when prompted for approval.
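When the infrastructure is no longer needed, running 'terraform destroy' in the same workspace tears everything down again.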
It may take a few minutes, but finally the complete infrastructure will be launched successfully, as the output of the 'terraform apply' command confirms.
Log in to the AWS console to cross-check. On the EC2 dashboard, the instance 'tfinst' is up and running. Open the instance's public IP as the URL in a web-browser to view the webpage.
A file system named 'Task-2-volume' has also been created and can be seen in the list of file systems under the EFS service.
Note that a CloudFront distribution with the origin ID 'Cloud_comp' has also been created and can be viewed under the CloudFront service.
The S3 bucket named 'task2-tf-efs' appears in the S3 bucket list, containing the image uploaded from the local system.
To use the image stored as an object in the S3 bucket, the webpage needs an update. Log in to the instance over SSH using the same key provided in the code (command below), then edit the webpage in the Document Root with the vi editor.
ssh -l ec2-user <Instance_PublicIP> -i <key>.pem
Click the object (the image) in the S3 bucket, copy the link shown as 'Object URL', and use it in the webpage code, e.g., as the src of an <img> tag.
Save and exit the file. Now open a web-browser window and use the same public IP as the URL.
The webpage is updated with the image from the S3 bucket. Note that the image loads quickly; this is possible because of the CloudFront service.
Finally, the main goal is achieved: using Terraform to create Infrastructure as Code (IaC) that launches a webserver with persistent storage. The same task was performed in an earlier article, but here the EFS service is used for persistent storage instead of EBS. The Terraform code and a sample webpage for testing are available on the GitHub repository given below.
I would like to thank Mr. Vimal Daga and LinuxWorld Informatics Pvt. Ltd. for guiding me and helping me learn multiple concepts of Hybrid Cloud Computing in the easiest way possible.