Hybrid Computing Task 1
Naitik Shah
Why Cloud?
Many companies have a hard time maintaining their own data centers, and it's impractical for new startups to spend a huge amount on infrastructure up front. Running a data center means buying a whole system with lots of RAM, CPU & other necessary hardware, and then hiring experts to set it up & maintain it. Security, electricity, etc. add further to the expenditure.
To make things easy, many companies rely on cloud computing. They just have to think about their work & not worry about unnecessary expenditure. Most cloud providers work on a pay-as-you-go agreement, which means that startups don't need a huge amount of capital to set up their business.
Now that the idea of the cloud has you excited, let's answer the next question: how do I use the cloud?
Almost all major cloud computing providers offer a GUI for a more user-friendly experience for new users. But most companies prefer the CLI because it allows more customization, and once you get used to the CLI, the work is faster.
The solution
The solution lies in using a single tool that works with all the clouds. One such tool is Terraform. Terraform code looks much the same for every cloud, and Terraform also keeps a record of everything that has been provisioned.
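For reference, the standard Terraform workflow is the same regardless of which cloud the code targets. These are the stock Terraform CLI commands, shown here only for context:

# download the provider plugins referenced in the .tf files
terraform init

# preview what Terraform is about to create or change
terraform plan

# build the infrastructure described in the code
terraform apply

# tear everything down when it is no longer needed
terraform destroy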
The Question
In this project, I have launched a web server using Terraform code.
Step 1: First of all, configure your AWS profile on your local system using the command prompt. Fill in your details & press Enter.
aws configure --profile naitik2
AWS Access Key ID [****************NQTY]:
AWS Secret Access Key [****************b/hJ]:
Default region name [ap-south-1]:
Default output format [None]:
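If you want to confirm the profile works before running Terraform, you can optionally run the standard AWS CLI identity check, which returns the account and user the profile authenticates as:

aws sts get-caller-identity --profile naitik2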
Step 2: Use Terraform to launch an EC2 instance. Here, I used a Red Hat 8 AMI. I also installed and configured the Apache web server using the remote-exec provisioner. I have used a key pair and security group that were created beforehand. You may do the same, or create new ones if you want. Make sure the security group allows SSH on port 22 and HTTP on port 80.
The Terraform code is given below:
provider "aws" { region = "ap-south-1" profile = "naitik2" } resource "aws_instance" "test_ins" { ami = "ami-052c08d70def0ac62" instance_type = "t2.micro" key_name = "newk11" security_groups = [ "launch-wizard-1" ] connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/AAAA/Downloads/newk11.pem") host = aws_instance.test_ins.public_ip } provisioner "remote-exec" { inline = [ "sudo yum install httpd php git -y", "sudo systemctl restart httpd", "sudo systemctl enable httpd", "sudo setenforce 0" ] } tags = { Name = "my_os" } }
Step 3: Create an EBS volume. Here, I have created a 1 GiB volume. One question that arises here is that we don't know in advance which availability zone our instance will be launched in. But we need to create our EBS volume in the same availability zone, otherwise it's not possible to attach it. To fix this, I retrieved the instance's availability zone and used it here.
resource "aws_ebs_volume" "my_vol" { availability_zone = aws_instance.test_ins.availability_zone size = 1 tags = { Name = "my_ebs" } }
Step 4: Now attach the EBS volume you just created to your instance.
resource "aws_volume_attachment" "ebs_att" { device_name = "/dev/sdd" volume_id = "${aws_ebs_volume.my_vol.id}" instance_id = "${aws_instance.test_ins.id}" force_detach = true }
I have also retrieved the public IP of my instance and stored it in a local file, as it may be needed later.
resource "null_resource" "ip_store" { provisioner "local-exec" { command = "echo ${aws_instance.test_ins.public_ip} > public_ip.txt" } }
Step 5: Now, we need to mount our EBS volume on /var/www/html so that its contents are served by the Apache web server. I have also cloned the website code from GitHub into the same location.
resource "null_resource" "mount" { depends_on = [ aws_volume_attachment.ebs_att, ] connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/AAAA/Downloads/newk11.pem") host = aws_instance.test_ins.public_ip } provisioner "remote-exec" { inline = [ "sudo mkfs.ext4 /dev/xvdd", "sudo mount /dev/xvdd /var/www/html", "sudo rm -rf /var/www/html/*", "sudo git clone https://github.com/naitik2pnd23/Cloud_task1.git /var/www/html/" ] } }
I have also cloned all the code & images from GitHub to my local system so that I can automate uploading the images to S3 later.
resource "null_resource" "git_copy" { provisioner "local-exec" { command = "git clone https://github.com/naitik2pnd23/Cloud_task1.git C:/Users/AAAA/Pictures/" } }
Step 6: Now, we create an S3 bucket on AWS. The code snippet for this is as follows -
resource "aws_s3_bucket" "sp_bucket" { bucket = "naitik23" acl = "private" tags = { Name = "naitik2314" } } locals { s3_origin_id = "myS3Origin" }
Step 7: Now that the S3 bucket has been created, we upload the images that we downloaded from GitHub to our local system in the step above. Here, I have uploaded just one picture. You can upload more if you wish.
resource "aws_s3_bucket_object" "object" { bucket = "${aws_s3_bucket.sp_bucket.id}" key = "test_pic" source = "C:/Users/naitik/Pictures/pic1.jpg" acl = "public-read" }
Step 8: Now, we create a CloudFront distribution and link it to our S3 bucket. CloudFront ensures fast delivery of content around the world by leveraging AWS's edge locations.
resource "aws_cloudfront_distribution" "my_front" { origin { domain_name = "${aws_s3_bucket.sp_bucket.bucket_regional_domain_name}" origin_id = "${local.s3_origin_id}" custom_origin_config { http_port = 80 https_port = 80 origin_protocol_policy = "match-viewer" origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"] } } enabled = true default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] target_origin_id = "${local.s3_origin_id}" forwarded_values { query_string = false cookies { forward = "none" } } viewer_protocol_policy = "allow-all" min_ttl = 0 default_ttl = 3600 max_ttl = 86400 } restrictions { geo_restriction { restriction_type = "none" } } viewer_certificate { cloudfront_default_certificate = true } }
Now, we go to /var/www/html & update the image links with the CloudFront URL. As of now, this is the only manual part of my project. I'm trying my best to automate it & will update here as soon as I succeed. Any help in this regard is strongly welcome.
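One possible way to automate this step (a rough sketch, not part of the working code above) is another remote-exec provisioner that rewrites the image URL once the CloudFront distribution exists. This assumes the HTML in the repository references the image through a known placeholder string (IMAGE_URL_PLACEHOLDER here is hypothetical):

resource "null_resource" "update_links" {
  depends_on = [
    aws_cloudfront_distribution.my_front,
    null_resource.mount,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/AAAA/Downloads/newk11.pem")
    host        = aws_instance.test_ins.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # IMAGE_URL_PLACEHOLDER is an assumed placeholder inside index.html
      "sudo sed -i 's|IMAGE_URL_PLACEHOLDER|https://${aws_cloudfront_distribution.my_front.domain_name}/test_pic|g' /var/www/html/index.html"
    ]
  }
}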
Step 9: Now, we write a Terraform snippet to automatically retrieve the public IP of our instance and open it in Chrome. This lands us on the home page of our website served from /var/www/html.
resource "null_resource" "local_exec" { depends_on = [ null_resource.mount, ] provisioner "local-exec" { command = "start chrome ${aws_instance.test_ins.public_ip}" } }
Finally, you'll see your home page open up.
Any suggestions are always welcome.