Hybrid Multi Cloud Task-1
Richard Nadar
Cyber Security Enthusiast | SOC Analyst | Threat Hunting & Threat Intelligence Enthusiast | Learner
Task description:
1. Create a key pair and a security group that allows port 80.
2. Launch an EC2 instance.
3. For this EC2 instance, use the key and security group created in step 1.
4. Create an EBS volume and mount it on /var/www/html.
5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the bucket, and change their permission to public readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
Key pair:
First, I generated a new key with the help of the PuTTYgen software and saved it as a private key.
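If the key is managed from Terraform instead, the public half exported from PuTTYgen can be registered with AWS roughly like this (a sketch; `task1-key` and the file path are assumed names, not taken from the original code):

```hcl
# Register the PuTTYgen-generated public key with AWS as a key pair.
resource "aws_key_pair" "task_key" {
  key_name   = "task1-key"           # assumed key name
  public_key = file("task1-key.pub") # public key exported from PuTTYgen
}
```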
S3 bucket:
A bucket is a kind of folder in AWS S3 into which we can upload files, images, documents, etc.
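A minimal sketch of the bucket plus one image uploaded from the repo with a public-read ACL, as the task requires. The bucket name and file paths are assumptions (bucket names must be globally unique):

```hcl
# S3 bucket that will hold the images for CloudFront.
resource "aws_s3_bucket" "image_bucket" {
  bucket = "task1-image-bucket-12345" # assumed globally unique name
  acl    = "public-read"
}

# One image from the GitHub repo, made publicly readable.
resource "aws_s3_bucket_object" "image" {
  bucket = aws_s3_bucket.image_bucket.bucket
  key    = "image1.jpg"
  source = "images/image1.jpg" # assumed local path to the repo image
  acl    = "public-read"
}
```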
Configure:
Here, we specify which cloud provider we want to use along with the region. In my case it is N. Virginia, whose API name is us-east-1. Since I have an AWS Educate account, I have used shared credentials.
For accessing the instance we require a private key. Here I have created a variable to make the code a bit more dynamic.
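The provider configuration and the key-name variable could look like the following sketch; the profile name and credentials path are assumptions that mirror the shared-credentials setup described above:

```hcl
# Provider: AWS in N. Virginia, authenticated via shared credentials.
provider "aws" {
  region                  = "us-east-1"          # N. Virginia
  shared_credentials_file = "~/.aws/credentials" # AWS Educate shared credentials (assumed path)
  profile                 = "default"            # assumed profile name
}

# Key name kept in a variable so the code stays a bit more dynamic.
variable "key_name" {
  type    = string
  default = "task1-key" # assumed name; replace with your own key
}
```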
Cloud Front:
CloudFront takes the images from the S3 bucket and delivers them to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. Behind the scenes, CloudFront uses edge locations to provide its CDN service.
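A minimal CloudFront distribution fronting the image bucket could be sketched as follows. The origin ID, cache behaviour, and certificate settings are generic defaults, not values taken from the original code; the output exposes the domain name that gets pasted into the code in /var/www/html:

```hcl
# CloudFront distribution with the S3 image bucket as its origin.
resource "aws_cloudfront_distribution" "cdn" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.image_bucket.bucket_regional_domain_name
    origin_id   = "s3-image-origin" # assumed origin ID
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-image-origin"
    viewer_protocol_policy = "allow-all"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

# The URL to reference in the web page served from /var/www/html.
output "cloudfront_url" {
  value = aws_cloudfront_distribution.cdn.domain_name
}
```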
Security group:
It's a firewall that allows only SSH and HTTP inbound traffic.
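The rules described above can be sketched like this (the group name is an assumption; opening SSH and HTTP to 0.0.0.0/0 matches the task's intent but is permissive):

```hcl
# Firewall: SSH (22) and HTTP (80) in, everything out.
resource "aws_security_group" "web_sg" {
  name        = "task1-sg" # assumed name
  description = "Allow SSH and HTTP"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```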
Provisioning Instance:
Here we provide the AMI we want to use along with the private key. If we want Terraform to connect to the remote OS, we also have to use the connection block. We then use a provisioner to execute commands on the base OS (the remote host) to set up our web server and start its services.
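A sketch of the instance with the connection block and a remote-exec provisioner. The AMI ID is a placeholder for an Amazon Linux 2 image, and the private-key path is assumed:

```hcl
# EC2 instance using the key pair and security group created earlier.
resource "aws_instance" "web" {
  ami             = "ami-0123456789abcdef0" # placeholder; use a real Amazon Linux 2 AMI
  instance_type   = "t2.micro"
  key_name        = var.key_name
  security_groups = [aws_security_group.web_sg.name]

  # Lets Terraform SSH into the instance to run the provisioner below.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("task1-key.pem") # assumed path to the private key
    host        = self.public_ip
  }

  # Install and start the web server on the remote host.
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y", # web server plus git for the repo
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "task1-webserver"
  }
}
```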
Creating EBS volume:
Here I have created an EBS volume, a form of persistent storage that will later be attached to the instance. If we want to use this volume after attachment, we have to partition > format > mount it.
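The volume itself is short; placing it in the same availability zone as the instance is required for attachment. The 1 GiB size is an assumed value:

```hcl
# Persistent EBS volume, created in the instance's availability zone.
resource "aws_ebs_volume" "web_vol" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1 # size in GiB; assumed value

  tags = {
    Name = "task1-ebs"
  }
}
```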
Attaching EBS volume:
Here I have used force_detach so that if I destroy the whole setup without unmounting the volume first, no error is raised.
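The attachment described above can be sketched as (the device name is a common choice, not taken from the original code):

```hcl
# Attach the volume; force_detach lets `terraform destroy` succeed even
# if the volume is still mounted inside the instance.
resource "aws_volume_attachment" "web_attach" {
  device_name  = "/dev/sdf" # typically appears inside the instance as /dev/xvdf
  volume_id    = aws_ebs_volume.web_vol.id
  instance_id  = aws_instance.web.id
  force_detach = true
}
```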
Partitioning, formatting and mounting the volume, along with copying the contents of the GitHub repo to our web server:
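One way to sketch this step is a null_resource that runs after the attachment: format the device, mount it on the web root, then clone the repo into it. The repo URL is a placeholder, and for brevity this formats the whole device rather than creating a partition first:

```hcl
# Runs on the instance after the volume is attached.
resource "null_resource" "mount_and_deploy" {
  depends_on = [aws_volume_attachment.web_attach]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("task1-key.pem") # assumed path to the private key
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdf",           # format the attached volume
      "sudo mount /dev/xvdf /var/www/html", # mount it on the web root
      "sudo rm -rf /var/www/html/*",        # clear the mount point
      "sudo git clone https://github.com/<user>/<repo>.git /var/www/html", # placeholder repo URL
    ]
  }
}
```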
Output:
Important note:
First, create a separate workspace, and inside it create directories for writing the Terraform code. When the code is written for the first time, we need to run terraform init so that Terraform installs the required plugins. After that, we apply the work with terraform apply. We can apply after each block of code for step-by-step results; for example, after creating the CloudFront distribution we can apply and then check the result in the GUI for reference.
We can also destroy the whole setup with a single command: terraform destroy.
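The workflow above boils down to three commands, run from the directory containing the .tf files:

```shell
terraform init     # download the required provider plugins
terraform apply    # build the setup (add -auto-approve to skip the prompt)
terraform destroy  # tear the whole setup down again
```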