Creating a WebServer Infrastructure on AWS using Terraform

A few years ago, the world started moving its infrastructure to the cloud, since it saves a great deal of hardware cost and setup time for networks and infrastructure; and with the shift to agile development, time is crucial for every organization. In this article, I will demonstrate how to build a complete Infrastructure as Code (IaC) setup that, from a single piece of code and in one click, creates a key pair for connecting to our running EC2 instance, creates a security group to allow SSH and client requests to our web server, launches an EC2 instance, sets up our web server on it, creates an S3 bucket, and configures CloudFront for it. So, let's start.

Terraform is a provisioning tool that lets you manage any cloud service from a single centralized system without deep knowledge of each service you want to integrate, and it makes working with multi-cloud environments easy; here we will use it to build our first Infrastructure as Code project. To follow along, you need Terraform installed on your system and a named profile configured in the AWS CLI. After that, we will write the code for every resource we need from AWS (Amazon Web Services).

Terraform files use the .tf extension. The code for our resources is described below, section by section.

First of all, we create some variables whose values will be entered by the user, such as which profile to use and what names to give the key, instance, EBS volume, security group, and bucket. We begin by telling Terraform that we want resources from the "aws" provider, set our region to "ap-south-1" (Mumbai), and use the profile entered by the user for the credentials to access their AWS account.
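A minimal sketch of this provider-and-variables setup could look like the following (the variable names here are illustrative, not necessarily the ones in the original code):

```hcl
# Values supplied by the user at apply time (names are illustrative).
variable "profile" {
  description = "AWS CLI profile to use for credentials"
  type        = string
}

variable "key_name" {
  description = "Name to give the EC2 key pair"
  type        = string
}

provider "aws" {
  region  = "ap-south-1"   # Mumbai
  profile = var.profile
}
```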

After that, we create a private key, which we will use to log in over SSH to the EC2 instance launched in AWS. We save it to a file named "mykey.pem" and, for our security, set its permissions to read-only using "chmod 400".
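One way to express this step, assuming the tls and local providers (the resource names are my own, not the article's exact code):

```hcl
# Generate an RSA private key and save it locally with read-only
# permissions -- the Terraform equivalent of `chmod 400 mykey.pem`.
resource "tls_private_key" "webkey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "local_file" "private_key_pem" {
  content         = tls_private_key.webkey.private_key_pem
  filename        = "mykey.pem"
  file_permission = "0400"
}
```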

After that, we set up a public key for SSH and give it a name, which then appears in the Key Pairs section of the EC2 service in AWS. Now, the default port of a web server is 80, which lets clients connect and access the website, and the default port for SSH is 22, which allows SSH connections to our system. So we create a security group that allows inbound connections on both of these ports from any IP address in the world (["0.0.0.0/0"]), and we also allow all outbound traffic so that we can perform our desired operations from the EC2 instance.
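A sketch of the key pair and security group, under the assumptions above (resource names are illustrative; the article's screenshots show the group named "allow_tls"):

```hcl
# Register the public half of the generated key with AWS.
resource "aws_key_pair" "webkey" {
  key_name   = var.key_name
  public_key = tls_private_key.webkey.public_key_openssh
}

# Open SSH (22) and HTTP (80) to the world; allow all outbound.
resource "aws_security_group" "allow_tls" {
  name        = "allow_tls"
  description = "Allow SSH and HTTP inbound, all outbound"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"   # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```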

Now that we have our key and security group ready, we can launch the EC2 instance. For that we create a resource of type "aws_instance", in which we specify:

  1. Which AMI (machine image, i.e. the operating system) we want for the instance, via its AMI ID
  2. Which instance type to launch, which determines the technical specification such as RAM and CPU
  3. The key we want to use to log in to the instance, and the security group we want to attach to it (both created above)
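The three items above can be sketched like this (the AMI ID is a placeholder for an Amazon Linux 2 image in ap-south-1, not necessarily the one used in the article):

```hcl
# Launch the instance using the key pair and security group above.
resource "aws_instance" "webos" {
  ami             = "ami-0447a12f28fddb066"   # placeholder AMI ID
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.webkey.key_name
  security_groups = [aws_security_group.allow_tls.name]

  tags = {
    Name = "webos"
  }
}
```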

After the instance is launched, we create an "EBS volume", which is like an external hard disk providing persistent storage for our code: even if the instance gets terminated, the code survives intact. We make the volume depend on the instance so that it is created only after the instance is launched. Then we attach it, or plug it in, to the EC2 instance using "aws_volume_attachment", which we make depend on the volume creation so that Terraform does not try to attach the volume before it exists. You can give the device any valid name; here I used "/dev/sdh". Finally, we open a remote SSH connection to the EC2 instance in order to run some commands on it.
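The volume and its attachment might look like this (a sketch; the size of 1 GB and device name "/dev/sdh" come from the article, the rest is assumed):

```hcl
# 1 GiB persistent volume in the same availability zone as the instance.
resource "aws_ebs_volume" "webebs" {
  availability_zone = aws_instance.webos.availability_zone
  size              = 1

  tags = {
    Name = "webebs"
  }

  depends_on = [aws_instance.webos]
}

# Attach the volume to the instance only after the volume exists.
resource "aws_volume_attachment" "webebs_attach" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.webebs.id
  instance_id  = aws_instance.webos.id
  force_detach = true

  depends_on = [aws_ebs_volume.webebs]
}
```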

On the instance running in AWS, we install the httpd web server and git. We then create a partition on the EBS volume, format it, and mount it on the web server's document root, "/var/www/html", so that we never lose its data. After mounting, we download the code from GitHub into the web server folder and start the server so that the website goes live for clients.
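Sketched as a remote-exec provisioner run over SSH after the volume is attached (the device appears as /dev/xvdh on Amazon Linux for an sdh attachment; the repository URL is a placeholder, not the article's repo):

```hcl
# Run the setup commands on the instance once the volume is attached.
resource "null_resource" "configure_web" {
  depends_on = [aws_volume_attachment.webebs_attach]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.webkey.private_key_pem
    host        = aws_instance.webos.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd git",
      "sudo mkfs.ext4 /dev/xvdh",                       # format the volume
      "sudo mount /dev/xvdh /var/www/html",             # mount on the docroot
      "sudo rm -rf /var/www/html/*",
      "sudo git clone <your-repo-url> /var/www/html/",  # placeholder URL
      "sudo systemctl start httpd",
    ]
  }
}
```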

With the main part done, we now create an "S3 bucket" to store our static data such as images, and then set up the CloudFront service to deliver that content from S3 at higher speed to clients across the globe. We create a private S3 bucket in the same region where we launched the instance, then upload a photo into it using the "aws_s3_bucket_object" resource and set its permission so the public can view and download it. We make the object depend on the bucket so that Terraform does not try to upload the data before the bucket is created.
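A sketch of the bucket and object (the bucket name "dheeth-bucket" is from the article; the object key and local file path are illustrative):

```hcl
# Private bucket plus one publicly readable image object.
resource "aws_s3_bucket" "image_bucket" {
  bucket = "dheeth-bucket"   # must be globally unique
  acl    = "private"
}

resource "aws_s3_bucket_object" "image" {
  bucket = aws_s3_bucket.image_bucket.bucket
  key    = "myphoto.jpg"     # illustrative object key
  source = "myphoto.jpg"     # assumed local file to upload
  acl    = "public-read"

  depends_on = [aws_s3_bucket.image_bucket]
}
```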

Now comes the main part: here we configure CloudFront for our S3 bucket to deliver the content at higher speed to clients in other regions. We give the distribution the S3 bucket's domain name and an origin ID, and set up the rules under which it can be accessed. We redirect HTTP to HTTPS for the security of the data, and we set no geo restriction because we don't want to miss any client from any part of the world. Then we define a "null_resource" just to execute some more commands according to our need.
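The distribution described above might be sketched as follows, using the required blocks of the AWS provider of that era (resource and origin names are assumptions):

```hcl
# CloudFront distribution in front of the S3 bucket.
resource "aws_cloudfront_distribution" "image_cdn" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.image_bucket.bucket_regional_domain_name
    origin_id   = "s3-dheeth-bucket"
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "s3-dheeth-bucket"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    # Redirect plain HTTP to HTTPS for security.
    viewer_protocol_policy = "redirect-to-https"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"   # no geo restriction: serve everyone
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```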

Here we replace the image URL in our code with the CloudFront URL using the sed command, so that the image is displayed on the website with no extra latency when it is downloaded on the client side, and then we restart the web server for the change to take effect. With that, we have achieved everything we were looking for using a single provisioning tool, Terraform.
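One way to express that final step (the sed pattern "OLD_IMAGE_URL" and the file name are placeholders; adapt them to your own HTML):

```hcl
# Point the page at the CloudFront domain, then restart httpd.
resource "null_resource" "use_cdn_url" {
  depends_on = [aws_cloudfront_distribution.image_cdn]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.webkey.private_key_pem
    host        = aws_instance.webos.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo sed -i 's|OLD_IMAGE_URL|https://${aws_cloudfront_distribution.image_cdn.domain_name}/myphoto.jpg|g' /var/www/html/index.html",
      "sudo systemctl restart httpd",
    ]
  }
}
```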

After writing the code, initialize Terraform with the command terraform init. It downloads all the plugins required to provision our resources from AWS, and then you are good to go: in one click, your infrastructure will be launched. You can use this same code or file to launch exactly the same infrastructure on any system without wasting any time.

After initialization, create the infrastructure by running the command terraform apply. Enter the values for all the variables, then just sit back and relax: Terraform handles everything for you, and at the end it prints the IP address in the "webip" output, where you can access your website. Screenshots of the resources are described below.

For the key, I entered the value "mykey".

I named the security group "allow_tls".

The EBS volume is named "webebs"; its size is set to 1 GB in the code.

The instance is named "webos".

Once the code is downloaded from GitHub, the website is live.

The S3 bucket is named "dheeth-bucket"; the name must be globally unique, since it forms the bucket's URL.

The uploaded image is visible inside the S3 bucket.

The CloudFront distribution for the S3 bucket is set up in AWS.

After replacing the image URL with the CloudFront URL, the final website is complete.

The Terraform code for this infrastructure is available on GitHub.

I hope you liked this article. Please leave your feedback and suggestions in the comments below; I'll be eagerly waiting for them.
