Launching a Complete Web-Application Infrastructure on AWS with Terraform
Cloud computing with AWS
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 175 fully-featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.
What is Terraform?
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.
The problem
So, you need some kind of cloud-deployed software and you've already decided on an Infrastructure as a Service provider (IaaS, one of the many something-as-a-something acronyms that are all the rage) such as AWS or Azure. If you require a large set of infrastructure, for a complex distributed application for instance, all of a sudden you find yourself spending a lot of your free time and weekends in the AWS console.
Live configuration of services and bringing resources up and down become a frequent endeavor, especially during the development and experimentation stages. Moreover, with all those services it's much more likely you'll forget to destroy one of them, leaving it hiding out of sight while the AWS bill creeps up insidiously.
Rather than trying to rigidly document the exact provisioning process by hand, surely there must be a better way…
The Solution
What is infrastructure as code?
Infrastructure as Code, in simple terms, is a means by which we write declarative definitions for the infrastructure we want to exist and hand them to a provisioning tool that deals with the actual deployment. This means we can code what we want built, provide the necessary credentials for the given IaaS provider, kick off the provisioning process, pop the kettle on, and come back to find all our services purring along nicely in the cloud… or a terminal screen full of ominous warnings about failed deployments and "unrecoverable state" and a deep sense of growing unease (but not often, don't worry).
Let's see how we can build our infrastructure with Terraform
Before getting started, we should be ready with the following:
- AWS account
- IAM user created
- AWS CLI tool installed
- Terraform installed
- Environment variable set for Terraform (the terraform binary available on your PATH)
Provider
Before Terraform can do anything useful, you must specify which IaaS provider you're using. This is the cue for it to download the plugins necessary to read from and write to the hosting service. To access AWS, Terraform needs your credentials, but it is not good practice to put credentials directly in the configuration. The better way is to create a named profile on your local machine and tell Terraform to get access from this profile.
C:\Users\Sourabh Miraje>aws configure --profile myprofile
AWS Access Key ID [****************GZON]:
AWS Secret Access Key [****************dHxd]:
Default region name [ap-south-1]:
Default output format [None]:
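With the profile configured, a minimal provider block along these lines (using the profile name and region entered above) tells Terraform which credentials and region to use:

provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}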
Creating a key pair:
Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public key cryptography uses a public key to encrypt a piece of data, and then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair.
resource "tls_private_key" "task1_key" { algorithm = "RSA" } module "key_pair" { source = "terraform-aws-modules/key-pair/aws" key_name = "Task1_Key" public_key = tls_private_key.task1_key.public_key_openssh }
Creating a security group:
We create a security group that allows inbound port 22 for SSH login and inbound port 80 for HTTP traffic.
resource "aws_security_group" "task1-ssh-http" { name = "task1-ssh-http" description = "allow ssh and http traffic" ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } }
Create an AWS instance:
Here we are going to launch one EC2 instance for our server. As soon as the instance is up, we use our SSH credentials to enter the OS and install git and the Apache web server, i.e. httpd.
On the Amazon Linux 2 AMI, a yum command installs the software.
Provisioner:
To run commands on the CLI of a local or remote machine, we use a provisioner. It is always used inside a resource.
resource "aws_instance" "mytask1" { ami = "ami-0447a12f28fddb066" instance_type = "t2.micro" key_name = "Task1_Key" security_groups = ["${aws_security_group.task1-ssh-http.name}"] tags = { Name = "mywebserver" } connection { type = "ssh" user = "ec2-user" private_key = tls_private_key.task1_key.private_key_pem host = aws_instance.mytask1.public_ip } provisioner "remote-exec"{ inline = [ "sudo yum install httpd git -y", "sudo systemctl start httpd", "sudo systemctl enable httpd", ] } }
Creating an EBS volume:
An Amazon EBS volume is a durable, block-level storage device that you can attach to one instance or to multiple instances at the same time. You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application.
resource "aws_ebs_volume" "myvolume" { depends_on = [ aws_instance.mytask1 ] availability_zone = aws_instance.mytask1.availability_zone size = 1 tags = { Name = "ebsval_for_task" } }
Attaching the newly created volume to the EC2 instance:
To use the created volume, we have to attach the EBS volume to the EC2 instance. We need the volume ID and the instance ID for the attachment.
resource "aws_volume_attachment" "ebs_att" { depends_on = [ aws_ebs_volume.myvalume ] device_name = "/dev/sdh" volume_id = aws_ebs_volume.myvalume.id instance_id = aws_instance.mytask1.id }
Mounting the volume:
After attaching the volume, we have to format it with a filesystem and then mount it; without a filesystem, we can't make use of anything stored on the volume. The directory we mount onto must be empty. This again needs root power, so we do an SSH login with the required credentials.
resource "null_resource" "partition_and_mount"{ depends_on = [ aws_volume_attachment.ebs_att ] connection { type = "ssh" user = "ec2-user" private_key = tls_private_key.task1_key.private_key_pem host = aws_instance.mytask1.public_ip } provisioner "remote-exec"{ inline = [ "sudo mkfs.ext4 /dev/sdh", "sudo mount /dev/sdh /var/www/html", "sudo rm -rf /var/www/html/*", "sudo git clone https://github.com/Sourabh-The-Creator/mycode.git /var/www/html" ] } }
After mounting, we need the code to run our web application, so we use git to get it. We clone our code into /var/www/html because the Apache server searches for requested pages there.
Creating a bucket:
Amazon Simple Storage Service (Amazon S3) is an object storage service built to store and retrieve any amount of data from anywhere, offering industry-leading scalability, data availability, security, and performance. Note that bucket names must be globally unique across all of AWS.
resource "aws_s3_bucket" "taskbucket1" { bucket = "taskbucket1" acl = "public-read" tags = { Name = "taskbucket1" } } locals { s3_origin_id = "myS3Origin" }
Uploading an object from the local machine to the S3 bucket:
resource "aws_s3_bucket_object" "taskbucket1" { bucket = "taskbucket1" key = "sampleimage.jpeg" source = "D:/terraform/task-1/ec2/sampleimage.jpeg" }
Creating a CloudFront distribution:
Amazon CloudFront is a content delivery network (CDN) offered by Amazon Web Services. Content delivery networks provide a globally distributed network of proxy servers that cache content, such as web videos or other bulky media, closer to consumers, thus improving access speed when downloading the content.
resource "aws_cloudfront_distribution" "task_dist" { origin { domain_name = aws_s3_bucket.taskbucket1.bucket_regional_domain_name origin_id = local.s3_origin_id custom_origin_config { http_port = 80 https_port = 80 origin_protocol_policy = "match-viewer" origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"] } } enabled = true default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] target_origin_id = local.s3_origin_id forwarded_values { query_string = false cookies { forward = "none" } } viewer_protocol_policy = "allow-all" min_ttl = 0 default_ttl = 3600 max_ttl = 86400 } restrictions { geo_restriction { restriction_type = "none" } } viewer_certificate { cloudfront_default_certificate = true } }
Now we are good to go and can launch our infrastructure.
Note: it is good practice to validate Terraform code before launching it directly.
- Go to the directory where you saved your Terraform code and open a command prompt there.
- Start with terraform init; it is necessary to download the required plugins.
- terraform validate: (optional) checks that our code is valid.
- terraform apply: runs the Terraform code and builds the infrastructure.