Automating a secure web-server deployment using Terraform, AWS and Jenkins
Let me first take you on a short tour of the infrastructure I'm talking about here.
The base system will be an EC2 instance on AWS accepting inbound TCP traffic on port 81. Docker will be installed on top of this instance, and inside it a container will run, with port 81 of the base system forwarded to port 80 of the container. This container is pre-configured as a web server. A Jenkins job will be triggered whenever a new commit happens in the respective GitHub repository. It will clone the Terraform code and the website code, then initialize Terraform in that working directory and deploy the whole infrastructure on AWS. The moment that job is done, another job will download the images from the GitHub repo and upload them to an S3 bucket on AWS that has a public-read ACL.
Now, why Docker? Simply because the container adds a layer of isolation between the web server and the base system, which makes the setup more secure. The /var/www/html folder of the Docker container will be bind-mounted to /web on the base system. Okay, let's go through it all.
First, I created an IAM user with administrative access and generated its access key and secret key. Then I added the credentials locally with the aws configure command.
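For reference, this is roughly what that step looks like on the workstation that runs Terraform (the key values below are placeholders, not real credentials):

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: ap-south-1
Default output format [None]: json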
A security group defines the ingress and egress rules. Here, I allowed SSH on port 22 and HTTP on port 81.
resource "aws_security_group" "allow_http" { name = "allow_http" ingress{ description = "allowing tcp for http port:80" from_port = 81 to_port = 81 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress{ description = "allowing ssh" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress{ from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = "allow_http81_and_ssh" } }
Time to define the EC2 instance now.
resource "aws_instance" "web_infra"{ ami = "ami-005956c5f0f757d37" instance_type = "t2.micro" security_groups = ["allow_http"] key_name = "mykey" tags = { Name = "webos" } }
Use an output block to print whatever values you need. I printed the public IP allocated to my instance.
output "ip_output" { value = aws_instance.web_infra.public_ip }
Now I create a null_resource to configure Docker inside the instance. I used the remote-exec provisioner to install Docker, pull the container image from Docker Hub, and run it. You could also use Ansible for this step, and it would actually be more efficient.
resource "null_resource" "config_docker" { provisioner "remote-exec" { inline = [ "sudo yum -y install httpd", "sudo service httpd start", "sudo yum -y install docker", "sudo service docker start", "sudo docker pull vimal13/apache-webserver-php", "sudo docker run -dit --name webos -p 81:80 -v /web:/var/www/html vimal13/apache-webserver-php" ] connection { type = "ssh" user = "ec2-user" private_key = file("/home/eric/Downloads/mykey.pem") host = "${aws_instance.web_infra.public_ip}" } } depends_on = [null_resource.to_deploy_backup] }
Then I created a separate 1 GB EBS volume and attached it to the running instance. Why another volume? I'll explain in the next step.
resource "aws_ebs_volume" "web_infra_ebs"{ availability_zone = aws_instance.web_infra.availability_zone size = 1 tags = { Name = "webos_ebs" } } resource "aws_volume_attachment" "web_vol_attach" { device_name = "/dev/sdg" volume_id = aws_ebs_volume.web_infra_ebs.id instance_id = aws_instance.web_infra.id force_detach = true }
I created another null_resource to format the new EBS volume and mount it on the /web folder in the base OS. This does two things: the data stays on this volume even after the instance is terminated, and the volume can be snapshotted and copied to other regions.
resource "null_resource" "to_deploy_backup" { provisioner "remote-exec" { inline = [ "sudo mkfs.ext4 /dev/sdg", "sudo mkdir /web", "sudo mount /dev/xvdg /web", "sudo rm -rf /web/*", "sudo yum -y install git", "sudo git clone https://github.com/sourabh-burnwal/hybrid_cloud_training /web" ] connection { type = "ssh" user = "ec2-user" private_key = file("/home/eric/Downloads/mykey.pem") host = "${aws_instance.web_infra.public_ip}" } } }
Next, I created a separate S3 bucket where the website images will be uploaded.
resource "aws_s3_bucket" "image_source" { bucket = "mywebimages" acl = "public-read" region = "ap-south-1" tags = { Name = "bucket for web" } }
Then I created a CloudFront distribution, which puts a CDN in front of the bucket.
locals {
  s3_origin_id = "yours3origin"
}

resource "aws_cloudfront_distribution" "webcloudfront" {
  origin {
    domain_name = aws_s3_bucket.image_source.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    custom_origin_config {
      # these ports are between CloudFront and the S3 origin, not the EC2
      # instance, so they stay at the standard 80/443 rather than 81
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
Now, it's time to automate with Jenkins.
I created a job that gets triggered whenever a new commit happens in the corresponding GitHub repo. Jenkins polls the repo every 2 minutes and checks whether a new commit has arrived; you can change this interval according to your needs. The job clones the repo and runs terraform init and terraform apply to deploy the whole infrastructure on AWS. Cool, right ;) A minimal sketch of the job is below.
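For reference, here is roughly how such a job can be configured: a poll-SCM schedule plus an execute-shell build step (the -auto-approve flag skips Terraform's interactive confirmation, which an unattended job needs):

# Build trigger -> Poll SCM, schedule:
#   H/2 * * * *      (check the GitHub repo every 2 minutes)

# Build step -> Execute shell:
terraform init
terraform apply -auto-approve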
Finally, I created another job that gets triggered when the previous job's build is stable. It downloads the images needed for the website from the GitHub repo and uploads them to my S3 bucket. To do this, I first installed and configured the S3 publisher plugin; then, while creating the job, the plugin let me automate the upload step. If you prefer, the same upload can also be scripted, as sketched below.
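As an alternative to the plugin, a plain execute-shell step can do the same upload with the AWS CLI. In this sketch the images/ folder inside the repo is an assumption for illustration; the repo URL and bucket name are the ones used earlier:

# Clone the repo holding the website images and push them to the bucket
git clone https://github.com/sourabh-burnwal/hybrid_cloud_training repo
aws s3 cp repo/images s3://mywebimages/ --recursive --acl public-read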
Voila! An almost fully automated web server is ready. Now, whenever the website developers push new code and commit it, Jenkins will know and deploy the new web server on AWS. Interesting, right? Anyway, if I missed something or you have any queries, drop a comment. I'm still learning and will be more than happy to get some suggestions too.