Automated Website Deployment Using Terraform on AWS
DISHA BHATTACHARYA
Associate Software Engineer @NRIFT | B. Tech in CSE from STCET | DevOps Enthusiast
When a client hits a website, how does the page show up almost instantly from anywhere, even though the server may be on the other side of the world? The contents of the web page, including all the images and graphics, load in less than a second.
What makes it possible?
All of this is possible with the advent of cloud services. With the different services that cloud computing technologies provide, we can build a distributed infrastructure: the code of our website can come from an EBS volume directly attached to the server, while heavy static content like images and videos can go to CloudFront, a fast content delivery network (CDN) service of AWS that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
Why do we need Infrastructure as Code?
Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. By employing infrastructure as code, you can deploy the same infrastructure architecture across many stages. That makes the whole software development life cycle more efficient, raising the team’s productivity to new levels.
IaC tools vary in the specifics of how they work, but we can generally divide them into two main types: those that follow the imperative approach, and those that follow the declarative approach. If you think these categories have something to do with programming language paradigms, then you’re spot on!
The imperative approach “gives orders.” It defines a sequence of commands or instructions so the infrastructure can reach the final result.
A declarative approach, on the other hand, “declares” the desired outcome. Instead of explicitly outlining the sequence of steps the infrastructure needs to reach the final result, the declarative approach shows what the final result looks like.
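For example, a minimal declarative snippet in Terraform’s HCL (the AMI ID below is just a placeholder) only states the desired end result; Terraform works out the steps to get there:

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"
}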
Examples of IaC tools — Terraform, Chef, Ansible, Puppet etc.
What is Terraform?
Terraform is an open-source tool by HashiCorp, written in Go. It enables the declarative configuration of an infrastructure in structured text files, so they can be managed like any other source code in a version control system. This configuration can be used to plan, set up, change, and even dismantle an environment.
Workflow of the Task:
Using Terraform we will do multiple things in a proper sequence:
- Create an AWS Instance — EC2
- Install the required dependencies, modules, and software
- Create an EBS volume for persistent storage
- Attach, format, and mount it to a folder on the instance
- Clone the code pushed by the developer to GitHub into that folder
- Create an S3 bucket for storage of static data
- Serve this content from all the edge locations using CloudFront
- Finally, load the webpage in your favourite browser automatically
Building Terraform Code:
1. Authentication Using IAM:
provider "aws"{ region = var.region profile = "dishab" }
The provider keyword tells Terraform to download the plugins needed for the cloud provider used to build the infrastructure.
The profile is given so that you need not put credentials in the code. The cloud engineer just provides the profile name, and the Terraform code will pick up the credentials from the local system.
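As an illustration (with dummy values), the dishab profile would live in the shared AWS credentials file, typically ~/.aws/credentials on Linux/macOS or C:\Users\&lt;name&gt;\.aws\credentials on Windows:

[dishab]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx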
2. Security Group Creation:
resource "aws_security_group" "my_sg"{ name = var.security_group_name description = "Allow TLS inbound traffic" ingress { description = "HTTP Access" from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "SSH Access" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "ping" from_port = -1 to_port = -1 protocol = "icmp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = var.security_group_name } }
Ingress specifies the incoming traffic to the web server. Three inbound rules are given to the security group:
- SSH, for testing purposes, so that we can connect to the AWS EC2 instance remotely.
- HTTP, so that web traffic can reach the website.
- ICMP, to check ping connectivity.
Egress has been opened on all ports so that outbound traffic originating from within the network can leave.
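The variables referenced in this code (var.region, var.security_group_name, and later var.ami_id and var.bucket_name) are assumed to be declared in a variables file; a minimal sketch with placeholder defaults could look like this:

variable "region" {
  type    = string
  default = "ap-south-1" # placeholder region
}

variable "security_group_name" {
  type    = string
  default = "my_sg"
}

variable "ami_id" {
  type    = string
  default = "ami-0123456789abcdef0" # placeholder Amazon Linux 2 AMI
}

variable "bucket_name" {
  type    = string
  default = "my-website-images-bucket" # placeholder; bucket names must be globally unique
}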
3. Launching the Web Server as an EC2 Instance:
resource "aws_instance" "web_os" { depends_on = [ aws_security_group.my_sg ] ami = var.ami_id instance_type = "t2.micro" key_name = "mykey" security_groups = [ "my_sg" ] connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/Disha/Downloads/mykey.pem") host = aws_instance.web_os.public_ip } provisioner "remote-exec" { inline = [ "sudo yum install httpd php git -y", "sudo systemctl restart httpd", "sudo systemctl enable httpd" ] } tags = { Name = "web_os" } }
Next, we have to install the required tools so that our webpage can be deployed, managed, and viewed by the client.
- So first we create an EC2 instance and specify all the required details.
- Then we create a connection block so that we can SSH into the instance to install the tools.
- Then, using the "remote-exec" provisioner, we go to the remote system and use the inline method to run multiple commands compatible with the Linux flavour I am using.
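Note that the code assumes a key pair named mykey already exists in AWS and that its private key is saved locally. Alternatively, Terraform itself can register a key pair from an existing public key; a small sketch (the path here is hypothetical):

resource "aws_key_pair" "mykey" {
  key_name   = "mykey"
  public_key = file("C:/Users/Disha/Downloads/mykey.pub") # hypothetical path to the public key
}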
4. Create, Format, and Mount the EBS Volume &amp; Clone the Website Code to It:
resource "aws_ebs_volume" "my_ebs" { availability_zone = aws_instance.web_os.availability_zone size = 1 tags = { Name = "my_ebs" }
}
Then we create an EBS volume to make the data persistent. Make sure that the EBS volume is created in the same availability zone in which the instance is launched.
resource "aws_volume_attachment" "my_ebs_attach" { depends_on = [ aws_ebs_volume.my_ebs ] device_name = "/dev/sdh" volume_id = aws_ebs_volume.my_ebs.id instance_id = aws_instance.web_os.id force_detach = true connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/Disha/Downloads/mykey.pem") host = aws_instance.web_os.public_ip } provisioner "remote-exec" { inline = [ "sudo mkfs.ext4 /dev/xvdh", "sudo mount /dev/xvdh /var/www/html", "sudo rm -rf /var/www/html/*", "sudo git clone https://github.com/disha1822/Website-From-AWS.git /var/www/html/", ] }
}
Again, using the "remote-exec" provisioner, we use the inline method to run the commands:
- sudo mkfs.ext4 /dev/xvdh : format the volume with an ext4 filesystem
- sudo mount /dev/xvdh /var/www/html : mount the volume on the web root
- sudo rm -rf /var/www/html/* : empty the folder, because git clone works only in an empty directory
- sudo git clone https://github.com/disha1822/Website-From-AWS.git /var/www/html/ : clone the repository to get the webpage files

Although the volume is attached as /dev/sdh, Xen-based instances expose it as /dev/xvdh, which is why the commands refer to /dev/xvdh.
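If the mount step misbehaves, you can SSH into the instance and verify it manually with standard Linux commands, for example:

lsblk                # the 1 GiB volume attached as /dev/sdh shows up as xvdh
df -h /var/www/html  # confirms the ext4 filesystem is mounted on the web root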
5. Creating an S3 Bucket &amp; Uploading Static Content as an S3 Object:
resource "aws_s3_bucket" "web-image" { bucket = var.bucket_name acl = "public-read" region = var.region tags = { Name = var.bucket_name }
}
resource "aws_s3_bucket_object" "web_image_object" { depends_on = [ aws_s3_bucket.web-image ] bucket = aws_s3_bucket.web-image.bucket acl = "public-read" key = "/images/cloud.jpg" source = "C:/Users/Disha/Pictures/cloud/AWS_Terraform.jpg" tags = { Name = var.bucket_name } }
We have used the keyword depends_on because Terraform does not execute resources in the order they are written. Declaring the dependency explicitly prevents the condition where the object tries to upload before the bucket exists: the object will start to upload only after the bucket creation is done.
key is the name and path the file will have inside the bucket, and source is the local path of the file I want to upload to the bucket.
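To double-check the upload, you could also add an output that prints the object’s URL (a small optional sketch; note that the leading slash in key is already part of the path):

output "image_url" {
  value = "https://${aws_s3_bucket.web-image.bucket_regional_domain_name}${aws_s3_bucket_object.web_image_object.key}"
}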
6. CloudFront Creation:
variable "oid" { type = string default = "S3-" } locals { s3_origin_id = "${var.oid}${aws_s3_bucket.web-image.id}" }
resource "aws_cloudfront_distribution" "tf_cloudfront" { depends_on = [ aws_s3_bucket.web-image, aws_s3_bucket_object.web_image_object ] origin { domain_name = aws_s3_bucket.web-image.bucket_regional_domain_name origin_id = local.s3_origin_id } enabled = true default_root_object = "index.html" default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] target_origin_id = "${local.s3_origin_id}" forwarded_values { query_string = false cookies { forward = "none" } } viewer_protocol_policy = "allow-all" min_ttl = 0 default_ttl = 3600 max_ttl = 86400 } restrictions { geo_restriction { restriction_type = "none" } } viewer_certificate { cloudfront_default_certificate = true } connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/Disha/Downloads/mykey.pem") host = aws_instance.web_os.public_ip } provisioner "remote-exec" { inline = [ "sudo su <<END", "echo \"<img src='https://${aws_cloudfront_distribution.tf_cloudfront.domain_name}${aws_s3_bucket_object.web_image_object.key}'>\" >> /var/www/html/index.php", "END", ] } }
We create the rest of the distribution by providing the origin’s domain name and origin ID.
- I have created a variable set to the value “S3-”. This is because aws_s3_bucket.web-image.id gives only the bucket name, but from the GUI we know that the origin ID CloudFront uses starts with S3-.
- Then we set the default_cache_behavior, which is a required block.
- Then we set the viewer_protocol_policy along with the minimum, default, and maximum TTLs.
- Then we can set any geo restrictions if required (whitelist &amp; blacklist).
- Then we set cloudfront_default_certificate to true inside the viewer_certificate block.
Now one last thing remains: the CloudFront URL generated for the bucket object has to be put into the code sent to us by the developer, so that the client can see the image.
For this, I have again made an SSH connection using a connection block and a provisioner.
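To see the generated CloudFront URL after the apply finishes, an optional output block like this small sketch can help:

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.tf_cloudfront.domain_name
}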
7. Browsing the Public IP of the Web Server:
resource "null_resource" "browse_server"{ depends_on = [ aws_volume_attachment.my_ebs_attach, aws_cloudfront_distribution.tf_cloudfront ] provisioner "local-exec"{ command = "start chrome https://${aws_instance.web_os.public_ip}/" } }
I am running the chrome command on my local machine using the "local-exec" provisioner, which opens the website in the browser using the public IP of the instance.
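Note that start chrome is Windows-specific; on Linux or macOS you could swap in the platform’s opener, for example:

provisioner "local-exec" {
  # xdg-open on Linux, open on macOS; both launch the default browser
  command = "xdg-open http://${aws_instance.web_os.public_ip}/"
}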
Building the Whole Infrastructure Created Above:
1. Without plugins, Terraform will not work. To download the plugins:
> terraform init
2. Then validate your Terraform script by running the following command:
> terraform validate
3. To apply the desired changes, run the following command:
> terraform apply
4. To confirm the planned actions, type: yes
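5. Finally, when you no longer need the environment, the same code can dismantle everything it created:
> terraform destroy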