Deploying Infrastructure Using Terraform
Terraform enables users to define and provision datacenter infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. In this article we will use AWS as our cloud service and deploy an EC2 instance that uses some very popular AWS services like CloudFront and S3. We will cover the following:
- Setting AWS profile and region
- Creating a security group that allows SSH and HTTP
- Creating an EBS volume
- Creating an EC2 instance
- Attaching and mounting the EBS volume to your EC2 instance
- Creating an S3 bucket
- Uploading project files to S3 from GitHub
- Creating a CloudFront distribution
Setting AWS profile and region
You can make a new directory for your Terraform files. Inside it, make a new file with the .tf extension.
profile - This is the AWS profile name as set in the shared credentials file.
region - This sets the default region for AWS.
provider "aws" { profile = "default" region = "us-east-1" }
By setting the profile key to default, Terraform will automatically look for your credentials file in the .aws folder. For more information, you can visit the AWS CLI documentation.
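If you would rather not hard-code these values, a minimal sketch using input variables (the variable names here are illustrative, not part of the original configuration) could look like this:

```
# Variables so the profile and region can be overridden per environment
variable "aws_profile" {
  default = "default"
}

variable "aws_region" {
  default = "us-east-1"
}

provider "aws" {
  profile = var.aws_profile
  region  = var.aws_region
}
```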
Creating a security group that allows SSH and HTTP
Security groups act as a firewall for our deployments. You can set many rules for inbound and outbound traffic. As an example, we are allowing SSH and HTTP.
```
#Security group with HTTP and SSH
resource "aws_security_group" "web_with_tcp" {
  name        = "web_with_tcp"
  description = "allow ssh and http traffic"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```
name - to set the name of the security group.
ingress - to set the inbound rules.
egress - to set the outbound rules.
```
cidr_blocks = ["0.0.0.0/0"]
```
The above setting means that connections are allowed from anywhere in the world.
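If you want to allow SSH only from your own network rather than the whole world, the SSH ingress block inside the security group above can be narrowed to a specific range (the CIDR below is just a placeholder documentation range):

```
# Example: allow SSH only from a specific network range
# 203.0.113.0/24 is a placeholder; replace it with your own CIDR
ingress {
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["203.0.113.0/24"]
}
```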
Creating an EBS volume
An Amazon EBS volume is a durable, block-level storage device that you can attach to your instances. After you attach a volume to an instance, you can use it as you would use a physical hard drive. EBS volumes are flexible. For current-generation volumes attached to current-generation instance types, you can dynamically increase size, modify the provisioned IOPS capacity, and change volume type on live production volumes. You can create an EBS volume using Terraform:
```
#EBS Volume
resource "aws_ebs_volume" "web_storage" {
  availability_zone = "us-east-1a"
  type              = "gp2"
  size              = 1

  tags = {
    Name = "web_storage"
  }
}
```
availability_zone - to set the AZ
type - to set the type of EBS volume
size - to set the size
tags - to provide a suitable tag
Creating an EC2 instance
EC2 is AWS's compute service. You can launch an EC2 instance using Terraform:
```
#ec2 instance
resource "aws_instance" "ec2_web" {
  ami               = "ami-01d025118d8e760db"
  availability_zone = "us-east-1a"
  instance_type     = "t2.micro"
  key_name          = "enterme"
  security_groups   = ["${aws_security_group.web_with_tcp.name}"]

  user_data = <<-EOF
    #! /bin/bash
    sudo su - root
    yum install httpd -y
    yum install git -y
    yum update -y
    service httpd start
    chkconfig --add httpd
  EOF

  tags = {
    Name = "web_httpd"
  }
}
```
You can choose the required AMI by changing the ami key. Also notice that httpd and git are installed in the OS using yum to serve your website.
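If you prefer not to hard-code an AMI ID, one common pattern (sketched here under the assumption that you want the latest Amazon Linux 2 image, which is not something the original configuration does) is to look it up with a data source and reference it from the instance:

```
# Look up the latest Amazon Linux 2 AMI instead of hard-coding the ID
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Then, inside the aws_instance resource:
#   ami = data.aws_ami.amazon_linux_2.id
```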
Attaching and mounting the EBS volume to your EC2 instance
```
#Attach EBS volume
resource "aws_volume_attachment" "ebs_attach" {
  device_name  = "/dev/sdc"
  volume_id    = "${aws_ebs_volume.web_storage.id}"
  instance_id  = "${aws_instance.ec2_web.id}"
  force_detach = true
}

#Mount EBS and add data files.
resource "null_resource" "get_files" {
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/chayy/Downloads/private_key.pem")
    host        = aws_instance.ec2_web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs -t ext4 /dev/xvdc",
      "sudo mount /dev/xvdc /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/chay2199/bootstrap_101.git /var/www/html/",
    ]
  }

  depends_on = ["aws_volume_attachment.ebs_attach"]
}
```
A lot of things are happening in the above code snippet. First we attach the volume and set force_detach to true. Force detach lets Terraform destroy the EBS volume without errors if required; on a Linux system the volume becomes busy once it is mounted, so a forced detach is needed.
For the mount we use a null_resource with a remote-exec provisioner, which connects to the instance over SSH and runs the commands on it. Notice that the location of the private key used for this connection is provided.
Next, remote-exec is used to format and mount the drive inside the OS, and then git clone copies all the GitHub files to the web server directory, i.e. /var/www/html.
Creating an S3 bucket
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9's) of durability, and stores data for millions of applications for companies all around the world. To create an S3 bucket:
```
#Create S3 bucket
resource "aws_s3_bucket" "buckymakesabucket" {
  bucket        = "buckymakesabucket"
  acl           = "public-read"
  force_destroy = true

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST"]
    allowed_origins = ["https://buckymakesabucket"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}
```
Note that the bucket name must be globally unique, since a unique URL will be generated for every object later on. Also, the PUT and POST methods are allowed so objects can be added to and updated in the bucket.
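Because bucket names are global, one alternative (it relies on the random provider, which is not part of the original setup, so treat this as a sketch) is to append a random suffix so the name never collides with an existing bucket:

```
# Generate a short random suffix so the bucket name is globally unique
# (sketch only: the random provider is an assumption, not used in the article)
resource "random_id" "bucket_suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "buckymakesabucket" {
  bucket        = "buckymakesabucket-${random_id.bucket_suffix.hex}"
  acl           = "public-read"
  force_destroy = true
}
```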
Uploading project files to S3 from GitHub
For this we'll use a bat file that runs a few commands locally: it clones the GitHub project and then syncs the project folder to the S3 bucket.
The bat file:
```
git clone https://github.com/chay2199/bootstrap_101.git
aws s3 sync C:/Users/chayy/terraform-aws-instance/bootstrap_101/ s3://buckymakesabucket/
aws s3api put-object-acl --bucket buckymakesabucket --key index.html --acl public-read
aws s3api put-object-acl --bucket buckymakesabucket --key aboutus.html --acl public-read
aws s3api put-object-acl --bucket buckymakesabucket --key contactus.html --acl public-read
```
The .tf file code snippet:
```
#Upload to S3 bucket
resource "null_resource" "remove_and_upload_to_s3" {
  provisioner "local-exec" {
    command = "C:/Users/chayy/terraform-aws-instance/s3_upload.bat"
  }

  depends_on = ["aws_s3_bucket.buckymakesabucket"]
}
```
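If you want to avoid the Windows-specific bat file, a rough alternative is to upload individual files straight from Terraform with the aws_s3_bucket_object resource (the local path and key below are assumptions based on the cloned repository; you would repeat the resource, or use for_each, for the other pages):

```
# Upload a single file to the bucket directly from Terraform
resource "aws_s3_bucket_object" "index_page" {
  bucket       = aws_s3_bucket.buckymakesabucket.bucket
  key          = "index.html"
  source       = "bootstrap_101/index.html"   # assumed local path to the cloned repo
  acl          = "public-read"
  content_type = "text/html"
}
```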
Creating a CloudFront distribution
Now, finally, we are moving on to creating a CloudFront distribution. Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2 or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
```
# Create CloudFront distribution
resource "aws_cloudfront_distribution" "web_dist" {
  origin {
    domain_name = "${aws_s3_bucket.buckymakesabucket.bucket_regional_domain_name}"
    origin_id   = "S3-${aws_s3_bucket.buckymakesabucket.bucket}"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  # By default, show the index.html file
  default_root_object = "index.html"
  enabled             = true

  # If there is a 404, return index.html with an HTTP 200 response
  custom_error_response {
    error_caching_min_ttl = 3000
    error_code            = 404
    response_code         = 200
    response_page_path    = "/index.html"
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-${aws_s3_bucket.buckymakesabucket.bucket}"

    # Do not forward query strings, cookies and headers
    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Distributes content to all edge locations
  price_class = "PriceClass_All"

  # Restricts who is able to access this content
  restrictions {
    geo_restriction {
      # type of restriction: blacklist, whitelist or none
      restriction_type = "none"
    }
  }

  # SSL certificate for the service.
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```
Here notice that our origin is our S3 bucket.
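The bucket here is public, which is why the custom origin configuration above works. A common alternative for private buckets (sketched here as an assumption; it is not part of the original setup) is to create an origin access identity and reference it from an s3_origin_config block instead of custom_origin_config:

```
# Alternative: keep the bucket private and let CloudFront reach it via an OAI
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "Access identity for the web distribution"
}

# Inside the distribution's origin block, in place of custom_origin_config:
#   s3_origin_config {
#     origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
#   }
```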
And finally, to get the CloudFront domain name as output, add:
output "show_cloudfront_ip" { value = aws_cloudfront_distribution.web_dist.domain_name }
To run the configuration, open cmd and go to the Terraform directory. Type:
```
terraform init
terraform apply
```
That's it. Opening the output domain name in a browser will take you to the index.html page.
You can also destroy your deployment using the terraform destroy command.