An infrastructure built on AWS with Terraform, where every activity is performed purely through code snippets, in other words, automation.
TERRAFORM INTRO
Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It enables users to define and provision data center infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. Terraform supports a number of cloud infrastructure providers such as Amazon Web Services, IBM Cloud (formerly Bluemix), Google Cloud Platform, DigitalOcean, Linode, Microsoft Azure, Oracle Cloud Infrastructure, OVH, Scaleway, VMware vSphere, and Open Telekom Cloud, as well as OpenNebula and OpenStack.
HashiCorp also maintains the Terraform Module Registry, launched at HashiConf 2017. In 2019 HashiCorp introduced a paid version called Terraform Enterprise for larger organizations.
Terraform has four major commands: terraform init, terraform plan, terraform apply, and terraform destroy; a typical workflow is shown below.
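As a quick reference, a typical run of this project from the folder containing the .tf files looks roughly like this (only the command names and flags come from the Terraform CLI; the comments are just an illustration):

# download the provider plugins (here, the AWS provider) declared in the configuration
terraform init

# preview what Terraform is about to create without touching anything
terraform plan

# build the whole infrastructure; -auto-approve skips the interactive confirmation
terraform apply -auto-approve

# tear everything down again when the demo is over
terraform destroy -auto-approve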
The Use-Case
1. Create a key pair and a security group that allows port 80.
2. Launch an EC2 instance.
3. In this EC2 instance, use the key and security group created in step 1.
4. Launch one EBS volume and mount it onto /var/www/html.
5. The developer has uploaded the code to a GitHub repo, which also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and make them publicly readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
Code Snippets
1. Here we declare the AWS provider; to authenticate, we use a locally configured profile.
provider "aws" { region = "ap-south-1" profile = "ranjit" }
2. Next we create a key pair with the TLS provider using the RSA algorithm. This exports both the private and public keys: the public key is used to launch the instance, and the private key lets us reach the OS over SSH. We also save the private key in a local file for future use.
resource "tls_private_key" "generated_key" { algorithm = "RSA" rsa_bits = 2048 } resource "local_file" "the_key" { content = tls_private_key.generated_key.private_key_pem filename = "mykey.pem" file_permission = 0400 } resource "aws_key_pair" "key" { //key_name = "my_private_key" public_key = tls_private_key.generated_key.public_key_openssh }
3. Using the default VPC.
resource "aws_default_vpc" "default" { tags = { Name = "Default VPC" } }
4. Creating a security group that allows port 80. Since we will also connect over SSH, we add another ingress rule for port 22.
resource "aws_security_group" "security_groups_created" { //name = "allow_ssh_http" description = "Allow ssh inbound traffic" vpc_id = aws_default_vpc.default.id ingress { description = "http" from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "ssh" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = "allowing http ssh" } } //output "security"{ //value=aws_security_group.allow_tls //}
5. Launching the EC2 instance with the security group and key created above. We also connect to the OS over SSH so that the required software (a web server) can be installed through the remote-exec provisioner.
resource "aws_instance" "web" { ami = "ami-0447a12f28fddb066" instance_type = "t2.micro" key_name = aws_key_pair.key.key_name security_groups = [ aws_security_group.security_groups_created.name ] connection { type = "ssh" user = "ec2-user" private_key = tls_private_key.generated_key.private_key_pem host = aws_instance.web.public_ip } provisioner "remote-exec" { inline = [ "sudo yum install git httpd php -y", "sudo systemctl restart httpd", "sudo systemctl enable httpd" ] } tags = { Name = "ranjit AWS EC2 Instance" } }
6. Creating an EBS volume and attaching it to the launched instance. The instance's built-in storage is lost when the OS terminates, so to avoid data loss we use this separate block storage, which holds all the data and is unaffected by any termination of the OS. To store data on the EBS volume we still have to format and mount it, which is done in step 7.
// Creation of the EBS volume
resource "aws_ebs_volume" "ebs1" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1
  tags = {
    Name = "EBS volume"
  }
}

// Attaching the EBS volume to the instance
resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.ebs1.id
  instance_id  = aws_instance.web.id
  force_detach = true
}
7. Formatting the attached EBS volume. Ideally we would partition it first, but it also works without partitioning. We then mount /dev/xvdh (inside the instance the attached "/dev/sdh" shows up as "/dev/xvdh") onto /var/www/html, so that everything written to /var/www/html lands on the EBS volume, and finally clone the GitHub code into /var/www/html.
resource "null_resource" "null1"{ depends_on=[ aws_volume_attachment.ebs_att ] connection { type = "ssh" user = "ec2-user" private_key = tls_private_key.generated_key.private_key_pem host = aws_instance.web.public_ip } provisioner "remote-exec" { inline = [ "sudo mkfs.ext4 /dev/xvdh", "sudo mount /dev/xvdh /var/www/html", "sudo rm -rf /var/www/html/*", "sudo git clone https://github.com/ranjitben10/multicloud.git /var/www/html" ] } }
8. Creating an S3 bucket to store static files such as images and videos. We clone all the images from GitHub into a local folder and then upload them to the S3 bucket. We also check whether the target folder already exists on the local machine and delete it first if it does, because git will not clone into a folder that already has content.
//resource "null_resource" "loc"{ //provisioner "local-exec" { //command="IF EXIST C:/Users/ranji/tera/mytest/cloud-images RMDIR /S /Q //C:/Users/ranji/tera/mytest/cloud-images" // } //} resource "aws_s3_bucket" "my-bucket" { bucket = "ui-ux-designs" acl = "public-read" provisioner "local-exec" { command = "git clone https://github.com/ranjitben10/Cloudimages.git cloud-images" } tags = { Name = "S3-Bucket" Environment = "Production" } versioning { enabled= true } }
9. Uploading the images to the S3 bucket.
resource "aws_s3_bucket_object" "ui-ux-cloud-image1" { bucket = aws_s3_bucket.my-bucket.bucket key = "EventManagerSignUp.png" source = "cloud-images/EventManagerSignUp.png" acl = "public-read" } resource "aws_s3_bucket_object" "ui-ux-cloud-image2" { bucket = aws_s3_bucket.my-bucket.bucket key = "Training.png" source = "cloud-images/Training.png" acl = "public-read" } resource "aws_s3_bucket_object" "ui-ux-cloud-image3" { bucket = aws_s3_bucket.my-bucket.bucket key = "Manage Profile and can post talents.png" source = "cloud-images/Manage Profile and can post talents.png" acl = "public-read" } resource "aws_s3_bucket_object" "ui-ux-cloud-image4" { bucket = aws_s3_bucket.my-bucket.bucket key = "PostEventByEvent Manager.png" source = "cloud-images/PostEventByEvent Manager.png" acl = "public-read" } resource "aws_s3_bucket_object" "ui-ux-cloud-image5" { bucket = aws_s3_bucket.my-bucket.bucket key = "solution Architecture.png" source = "cloud-images/solution Architecture.png" acl = "public-read" } resource "aws_s3_bucket_object" "ui-ux-cloud-image6" { bucket = aws_s3_bucket.my-bucket.bucket key = "Register To An Event.png" source = "cloud-images/Register To An Event.png" acl = "public-read" }
10. Finally, creating a CloudFront distribution backed by the S3 bucket to serve the stored files with very low latency. CloudFront is a service that delivers content with minimal latency; in a nutshell, it is a Content Delivery Network.
resource "aws_cloudfront_distribution" "s3_bucket_distribution" { origin { domain_name = aws_s3_bucket.my-bucket.bucket_regional_domain_name origin_id = local.s3_origin_id custom_origin_config { http_port = 80 https_port = 80 origin_protocol_policy = "match-viewer" origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"] } } enabled = true default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] target_origin_id = local.s3_origin_id forwarded_values { query_string = false cookies { forward = "none" } } min_ttl = 0 default_ttl = 3600 max_ttl = 86400 viewer_protocol_policy = "allow-all" } restrictions { geo_restriction { restriction_type = "blacklist" locations = ["CA", "GB", "DE"] } } viewer_certificate { cloudfront_default_certificate = true } }
Finally, let us have a look at the result!!!
resource "null_resource" "final"{ depends_on=[ aws_instance.web,aws_s3_bucket.my-bucket,null_resource.null1,aws_s3_bucket_object.ui-ux-cloud-image1,aws_s3_bucket_object.ui-ux-cloud-image2,aws_s3_bucket_object.ui-ux-cloud-image3,aws_s3_bucket_object.ui-ux-cloud-image4,aws_s3_bucket_object.ui-ux-cloud-image5,aws_s3_bucket_object.ui-ux-cloud-image6,aws_cloudfront_distribution.s3_bucket_distribution ] provisioner "local-exec"{ command= "start chrome ${aws_instance.web.public_ip}" }
}
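Optionally, a pair of outputs makes the resulting endpoints easy to grab once terraform apply finishes; these blocks are an addition to the original snippets:

// public IP of the web server
output "instance_public_ip" {
  value = aws_instance.web.public_ip
}

// domain name of the CloudFront distribution serving the images
output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.s3_bucket_distribution.domain_name
}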
Task Completed...