AWS Cloud Project-1

I have successfully completed my AWS Cloud project integrated with Terraform, in which I created the complete AWS infrastructure with the help of Terraform.

MY TASK

Create/launch an application using Terraform:

1. Create a key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch an EBS volume and mount it onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public-readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Prerequisite for this project

  • Terraform uses HashiCorp Configuration Language (HCL), so we should have some knowledge of it.
  • We should know some basic commands of the Linux operating system.
  • Remember some basic Terraform commands:

--> terraform apply/destroy

--> terraform apply -auto-approve

--> terraform destroy -auto-approve
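
One command missing from this list is terraform init, which has to be run once in the project directory before the first apply; it downloads the AWS provider plugin and any modules used in the code.

--> terraform init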


Now I am going to start explaining my project.

Step-1

First, specify that we are using the AWS provider, filling in the region and the profile name.

provider "aws" {
 region = "ap-south-1"
 profile = "satyam"
}

Step-2

This Terraform module dynamically generates an SSH key pair, imports it into AWS, and provides the keypair via outputs.

This can be used to create AWS resources with a key pair that doesn't have to exist prior to the Terraform run, which is great for demos. Be warned that the private key will be stored in the Terraform state, so this currently isn't well suited for production environments.

resource "tls_private_key" "key" {
  algorithm = "RSA"
}
  module "key_pair" {
   source = "terraform-aws-modules/key-pair/aws"
    key_name   = "key123"
    public_key = tls_private_key.key.public_key_openssh

}


The private key can be printed via the following output block:

output "private_key" {
  value = tls_private_key.key.private_key_pem
}

Output of the above code:

Key pair created
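
Since the generated private key otherwise lives only in the Terraform state, a convenient addition is to also write it to a local .pem file for manual SSH. This is a minimal sketch, assuming the hashicorp/local provider; the resource name key_file and the filename key123.pem are my own choices:

resource "local_file" "key_file" {
  # Persist the generated private key locally so we can SSH in by hand.
  content         = tls_private_key.key.private_key_pem
  filename        = "key123.pem"
  file_permission = "0400" # SSH clients refuse keys with loose permissions
}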

Step-3

A security group acts as a virtual firewall for your instance to control incoming and outgoing traffic. When Amazon EC2 decides whether to allow traffic to reach an instance, it evaluates all the rules from all the security groups that are associated with the instance.

Creating a security group that allows port 80 (for the webserver) and also port 22 (for SSH).

 resource "aws_security_group" "mysg" {
  vpc_id      = "vpc-758f921d"
    ingress {
    description = "Creating SSH security group"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
 
 }

 ingress {
    description = "Creating HTTP security group"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  
}
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
tags = {
 Name = "mysg"

}
}

}

Output of the above code:

Security group created

Step-4

An EC2 instance is a virtual server in Amazon's Elastic Compute Cloud (EC2) for running applications on the Amazon Web Services (AWS) infrastructure.

The remote-exec provisioner supports both SSH and WinRM connections.

Launching the EC2 instance from Terraform using the key and security group we created in step-2 and step-3, then going inside the instance via SSH and installing PHP, Git, and httpd (for web services).

resource "aws_instance" "TeraOS" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  availability_zone = "ap-south-1b"
  vpc_security_group_ids = ["${aws_security_group.mysg.id}"]
    key_name = "key123" 
  tags = {
    Name = "Amazon OS"
         }
connection {
           type     = "ssh"
           user     = "ec2-user"
           private_key = tls_private_key.key.private_key_pem
           host     = aws_instance.TeraOS.public_ip
                   }
        provisioner "remote-exec" {
           inline = [
             "sudo yum install httpd php git -y ",
             
                    ]
            }
}

Output of the above code:

Instance launched successfully
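
To verify, you can also SSH into the instance manually, assuming the private key was saved locally as key123.pem (the hypothetical filename from the sketch in step-2) and substituting the instance's real public IP:

--> ssh -i key123.pem ec2-user@<instance-public-ip>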

Step-5

An Amazon EBS volume is a durable, block-level storage device that you can attach to your instances. EBS volumes persist independently from the running life of an EC2 instance. You can attach multiple EBS volumes to a single instance. The volume and instance must be in the same Availability Zone.

Launching the EBS volume and attaching it to the instance, in the same Availability Zone as the instance.

resource "aws_ebs_volume" "EbsVol" {
  availability_zone = "ap-south-1b"
  size              = 1
  tags = {
    Name = "EbsVol"
  }
}

resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdd"
  volume_id   = aws_ebs_volume.EbsVol.id
  instance_id = aws_instance.TeraOS.id
  force_detach = true
}

Note:- When you try to detach/destroy the EBS volume from Terraform using the command "terraform destroy", it won't delete the EBS volume, because the volume will be busy at that time. So use this attribute in your code to force detachment:

  • force_detach = true

Output of the above code:

EBS volume created and attached to instance
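
Inside the instance, the attached disk can be verified with lsblk. On Xen-based instance types such as t2.micro, a volume attached as /dev/sdd generally shows up as /dev/xvdd, which is why the next step formats and mounts /dev/xvdd:

--> lsblk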

Step-6

The null_resource resource implements the standard resource lifecycle but takes no further action.

I am formatting the EBS volume and mounting it onto the /var/www/html directory so that the data persists. I am also cloning the GitHub repo where I uploaded my code, so that it can be accessed by the web services.

resource "null_resource" "null2" {
depends_on = [
    aws_volume_attachment.ebs_att,
  ]
 connection {
           type     = "ssh"
           user     = "ec2-user"
           private_key = tls_private_key.key.private_key_pem
           host     = aws_instance.TeraOS.public_ip
                   }
      
provisioner "remote-exec" {
           inline = [
             "sudo mkfs.ext4   /dev/xvdd",
             "sudo mount /dev/xvdd  /var/www/html",
             "sudo rm -rf /var/www/html/*",
             "sudo git clone https://github.com/satyamskic/terraform-repo.git  /var/www/html"
                    ]
                           }
}

Note:- Whenever you clone GitHub code into a directory, that directory/folder must be empty.
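
Otherwise git refuses to clone, with an error along the lines of: fatal: destination path '/var/www/html' already exists and is not an empty directory.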

Step-7

An Amazon S3 bucket is a public cloud storage resource available in Amazon Web Services' (AWS) Simple Storage Service (S3), an object storage offering. Amazon S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata.

Creating the S3 bucket and dynamically uploading an image into it from the GitHub repo. The name of the image is nature.jpg, which I uploaded to GitHub.

resource "aws_s3_bucket" "mybucket" {
depends_on = [
    null_resource.null2,
  ]
    bucket  = "satyam9625318589"
    acl = "private"
    force_destroy = true
provisioner "local-exec" {
        command     = "git clone https://github.com/satyamskic/terra-image.git   terra-image"
}
     provisioner "local-exec" {
        when        =   destroy
        command     =   "echo Y | rmdir /s terra-image"
    }
}

resource "aws_s3_bucket_object" "image-upload" {
    bucket  = aws_s3_bucket.mybucket.bucket
    key     = "nature.jpg"
    source  = "terra-image/nature.jpg"
    acl = "public-read"
}

Note:- When you try to destroy the S3 bucket from Terraform using the command "terraform destroy", it won't delete the S3 bucket, because the bucket is not empty at that time, so you have to include this attribute in your code:

  • force_destroy = true

Output of the above code:

S3 bucket created and image uploaded


Step-8

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as HTML, .CSS, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

Creating a CloudFront distribution that serves my image from the S3 bucket at all the nearby edge locations.

locals {
  s3_origin_id = aws_s3_bucket.mybucket.id
}

resource "aws_cloudfront_distribution" "s3_distribution" {

  origin {
    domain_name = aws_s3_bucket.mybucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

  }
  enabled             = true
  is_ipv6_enabled     = true
  comment             = "This is nature image"
  default_root_object = "nature.jpg"
  logging_config {
    include_cookies = false
    bucket          = aws_s3_bucket.mybucket.bucket_domain_name
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }
    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"
  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN"]
    }
  }

  tags = {
    Environment = "production"
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

Output of the above code:

CloudFront distribution created

Step-9

After step-8, we get the CloudFront URL, which I updated in my code by logging in to the instance over SSH.

Using the CloudFront URL to update the code in /var/www/html:

resource "null_resource" "null3" {
depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]
 connection {
           type     = "ssh"
           user     = "ec2-user"
           private_key = tls_private_key.key.private_key_pem
           host     = aws_instance.TeraOS.public_ip
                   }
      
provisioner "remote-exec" {
           inline = [
    "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/nature.jpg' width='300' lenght='400' >\"  | sudo tee -a /var/www/html/index.html",
 
    "sudo systemctl start httpd"
                    ]
                           }
}

Note:- "sudo tee -a" where sudo use for providing administrator power and tee -a means the append the data.

Step-10

The CloudFront URL and the public IP of the instance can be printed with the following code:

output "out1" {
value = aws_cloudfront_distribution.s3_distribution.domain_name
}
output  "out2" {
value = aws_instance.TeraOS.public_ip
}
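
After terraform apply completes, an individual output value can also be read back at any time:

--> terraform output out1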

FINAL OUTPUT



So finally I have completed my task: AWS Cloud integrated with Terraform.

Providing the entire Terraform code:

provider "aws" {
 region = "ap-south-1"
 profile = "satyam"
}


resource "tls_private_key" "key" {
  algorithm = "RSA"
}


  module "key_pair" {
   source = "terraform-aws-modules/key-pair/aws"
    key_name   = "key123"
    public_key = tls_private_key.key.public_key_openssh
}



 resource "aws_security_group" "mysg" {
  vpc_id      = "vpc-758f921d"
      ingress {
    description = "Creating SSH security group"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
 ingress {
    description = "Creating HTTP security group"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
tags = {
 Name = "mysg"
}
}




resource "aws_instance" "TeraOS" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  availability_zone = "ap-south-1b"
  vpc_security_group_ids = ["${aws_security_group.mysg.id}"]
    key_name = "key123" 
  tags = {
    Name = "Amazon OS"
         }
connection {
           type     = "ssh"
           user     = "ec2-user"
           private_key = tls_private_key.key.private_key_pem
           host     = aws_instance.TeraOS.public_ip
                   }
        provisioner "remote-exec" {
           inline = [
             "sudo yum install httpd php git -y ",
             
                    ]
                                   }
}




resource "aws_ebs_volume" "EbsVol" {
  availability_zone = "ap-south-1b"
  size              = 1
  tags = {
    Name = "EbsVol"
  }
}




resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdd"
  volume_id   = aws_ebs_volume.EbsVol.id
  instance_id = aws_instance.TeraOS.id
  force_detach = true
}





resource "null_resource" "null2" {
depends_on = [
    aws_volume_attachment.ebs_att,
  ]
 connection {
           type     = "ssh"
           user     = "ec2-user"
           private_key = tls_private_key.key.private_key_pem
           host     = aws_instance.TeraOS.public_ip
                   }
      
provisioner "remote-exec" {
           inline = [
             "sudo mkfs.ext4   /dev/xvdd",
             "sudo mount /dev/xvdd  /var/www/html",
             "sudo rm -rf /var/www/html/*",
             "sudo git clone https://github.com/satyamskic/terraform-repo.git  /var/www/html"
                    ]
                           }
}






resource "aws_s3_bucket" "mybucket" {
depends_on = [
    null_resource.null2,
  ]
    bucket  = "satyam9625318589"
    acl = "private"
    force_destroy = true
provisioner "local-exec" {
        command     = "git clone https://github.com/satyamskic/terra-image.git   terra-image"
}
     provisioner "local-exec" {
        when        =   destroy
        command     =   "echo Y | rmdir /s terra-image"
    }
}




resource "aws_s3_bucket_object" "image-upload" {
    bucket  = aws_s3_bucket.mybucket.bucket
    key     = "nature.jpg"
    source  = "terra-image/nature.jpg"
    acl = "public-read"
}





locals {
  s3_origin_id = aws_s3_bucket.mybucket.id
}
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.mybucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }
  enabled             = true
  is_ipv6_enabled     = true
  comment             = "This is nature image"
  default_root_object = "nature.jpg"
  logging_config {
    include_cookies = false
    bucket          = aws_s3_bucket.mybucket.bucket_domain_name
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }
    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }
  price_class = "PriceClass_200"
  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN"]
    }
  }
  tags = {
    Environment = "production"
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}



resource "null_resource" "null3" {
depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]
 connection {
           type     = "ssh"
           user     = "ec2-user"
           private_key = tls_private_key.key.private_key_pem
           host     = aws_instance.TeraOS.public_ip
                   }
      
provisioner "remote-exec" {
           inline = [
    "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/nature.jpg' width='300' lenght='400' >\"  | sudo tee -a /var/www/html/index.html",
 
    "sudo systemctl start httpd"
                    ]
                           }
}






output "out1" {
value = aws_cloudfront_distribution.s3_distribution.domain_name
}
output  "out2" {
value = aws_instance.TeraOS.public_ip
}



Note:- I have uploaded all the content related to my project to my GitHub repo.

Click on the link below.



Thank you
