Hybrid Multi Cloud Task-1
Tushar Dighe
Launching a web server on AWS using Terraform
Steps required to launch the app using Terraform:
1. Create the key pair and a security group that allows port 80.
2. Launch an EC2 instance.
3. In this EC2 instance, use the key and security group created in step 1.
4. Create an EBS volume and mount it onto /var/www/html.
5. The developer has uploaded the code into a GitHub repo, which also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to publicly readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
The process required to launch the app, in detail:
Step 1: Create an account on AWS (Amazon Web Services). Create a key under Key Pairs in Network & Security, and download the key file for login purposes. The details of the key I have used are shown below.
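All of the Terraform snippets below assume that the AWS provider has been configured and that the key file downloaded in this step is available locally. A minimal sketch of that provider setup, assuming an AWS CLI profile named “default” and the ap-south-1 region (both the profile name and the region are my assumptions; use whatever matches your own account and VPC):

provider "aws" {
  region  = "ap-south-1"   # assumed region; use the region that contains your VPC and key pair
  profile = "default"      # assumed AWS CLI profile, created earlier with `aws configure`
}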
Step 2: Create a security group that allows the SSH and HTTP protocols. The code is as follows.
resource "aws_security_group" "allow_my_http" { name = "launch-wizard-7" description = "Allow my HTTP SSH inbound traffic" vpc_id = "vpc-dbe1fcb3" ingress { description = "HTTP" from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "SSH" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = [ "0.0.0.0/0" ] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = "httpsecurity" }
}
Step 3: Launch an EC2 instance. The Terraform code for launching the instance is as follows.
resource "aws_instance" "os" { ami = "ami-07db4adf15d7719d1" instance_type = "t2.micro" key_name = "task2" security_groups = [ "launch-wizard-7" ]
Step 4: Now create an EBS volume of size 1 GiB and attach it to the EC2 instance.
resource "aws_ebs_volume" "ebs_vol" { availability_zone = aws_instance.os.availability_zone size = 1 tags = { Name = "myfirstos" } } resource "aws_volume_attachment" "Ebs_Att" { device_name = "/dev/sdf" volume_id = aws_ebs_volume.ebs_vol.id instance_id = aws_instance.os.id force_detach = true } output "myos_ip" { value = aws_instance.os.public_ip }
Step 5: Mount that volume onto /var/www/html. For this, an SSH connection to the EC2 instance is needed, and the “remote-exec” provisioner is used to perform this step. The code is as follows.
resource "null_resource" "nullremote1" { depends_on = [ aws_volume_attachment.Ebs_Att, ] connection { type = "ssh" user = "ec2-user" private_key = file("C:/Users/Admin/Downloads/task2.pem") host = aws_instance.os.public_ip } provisioner "remote-exec" { inline = [ "sudo mkfs.ext4 /dev/xvdf", "sudo mount /dev/xvdf /var/www/html", "sudo rm -rf /var/www/html/*", "sudo git clone https://github.com/dighetushar654/Cloud_Task1.git /var/www/html/" ] } }
→ ‘sudo mount’ mounts the volume onto /var/www/html, and ‘sudo git clone’ copies the GitHub repo code into /var/www/html. Note that the volume attached as /dev/sdf shows up inside the instance as /dev/xvdf, which is why the mkfs and mount commands refer to /dev/xvdf.
Step 6: Create a new repository on the GitHub account and upload ‘index.html’ to it. This file is used to serve the webpage.
Step 7: Create a bucket in the S3 service on AWS, deploy the images from the GitHub repo into the S3 bucket, and modify the permissions to be publicly readable.
resource "aws_s3_bucket" "job171" { bucket = "job171" acl = "public-read" tags = { Name = "job171" } versioning { enabled =true } } resource "aws_s3_bucket_object" "s3object" { bucket = "${aws_s3_bucket.job171.id}" key = "download.png" source = "C:/Users/Admin/Pictures/download.png" }
Step 8: Create a CloudFront distribution using the S3 bucket, and use the CloudFront URL to update the code. The Terraform code is as follows.
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" { comment = "This is origin access identity" } resource "aws_cloudfront_distribution" "imgcf" { origin { domain_name = "job171.s3.amazonaws.com" origin_id = "S3-job171" s3_origin_config { origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path } } enabled = true is_ipv6_enabled = true default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] target_origin_id = "S3-job171" # Forward all query strings, cookies and headers forwarded_values { query_string = false cookies { forward = "none" } } viewer_protocol_policy = "allow-all" min_ttl = 0 default_ttl = 10 max_ttl = 30 } # Restricts who is able to access this content restrictions { geo_restriction { # type of restriction, blacklist, whitelist or none restriction_type = "none" } } # SSL certificate for the service. viewer_certificate { cloudfront_default_certificate = true } }
→ CloudFront service on AWS.
→ The command “terraform apply --auto-approve” is used to run the ‘.tf’ code (run “terraform init” first to download the required providers). The result of this command is as follows.
→ The webpage is displayed using the instance's public IP address (http://35.154.206.235/), since the security group only allows HTTP on port 80.
→ The resulting webpage is as follows.
For the complete code, go through my GitHub repo: https://github.com/dighetushar654/Cloud_Task1.