Creating a Complete Cloud Infrastructure Using AWS EFS Storage

Task Description:

>>Perform Task 1 using the EFS service instead of EBS on AWS, as follows:

>>Create/launch the application using Terraform

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. For this EC2 instance, use an existing or provided key and the security group created in step 1.

4. Launch one volume using the EFS service, attach it in your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permissions to publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Check out my article for Task 1: https://www.dhirubhai.net/pulse/article-how-we-can-use-terraform-create-cloud-asish-patnaik

Pre-requisite:

  • An AWS account and a GitHub account
  • Terraform installed and configured on your system

Process:

1: Create a profile for AWS in your command prompt (refer to the Task 1 link for this) and create a Terraform file. Specify the provider.

provider "aws" {
  profile = "Asish"
  region  = "ap-south-1"
}

2: Create a security group which allows HTTP (port 80), SSH (port 22), and NFS (port 2049, needed for the EFS file system):

resource "aws_security_group" "task2_sg" {
  name        = "task2_sg"
  description = "Allow port 80"
  vpc_id      = "vpc-df9489b7"


  ingress {
    description = "PORT 80"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "allow port 80"
  }
}

3: Create a private key for this task and store it in your local system for later use

resource "tls_private_key" "mytask2key" {
  algorithm = "RSA"
}




resource "aws_key_pair" "generated_key" {
  key_name   = "mytask2key"
  public_key = "${tls_private_key.mytask2key.public_key_openssh}"

  depends_on = [
    tls_private_key.mytask2key
  ]
}




resource "local_file" "store_key_value" {
  content  = "${tls_private_key.mytask2key.private_key_pem}"
  filename = "mytask2key.pem"

  depends_on = [
    tls_private_key.mytask2key
  ]
}
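On Linux or macOS, ssh refuses a private key file that is readable by other users. As an optional addition (the resource name key_permissions and the chmod step are my own, assuming a Unix-like workstation), the file mode can be tightened right after the key is written:

resource "null_resource" "key_permissions" {
  # restrict the key file so ssh will accept it (Unix-like systems only)
  provisioner "local-exec" {
    command = "chmod 400 mytask2key.pem"
  }

  depends_on = [
    local_file.store_key_value
  ]
}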

4: Now write Terraform code to create the EFS file system, and use the mount target resource to expose the EFS storage inside our subnet:

resource "aws_efs_file_system" "allow-nfs" {
  creation_token = "allow-nfs"

  tags = {
    Name = "allow-nfs"
  }
}


resource "aws_efs_mount_target" "efs_mount" {
  file_system_id  = "${aws_efs_file_system.allow-nfs.id}"
  subnet_id       = "subnet-24ebd14c"
  security_groups = [aws_security_group.task2_sg.id]
}

5: Launch an instance, connect to it, and install the software required for the task:

resource "aws_instance" "task2os" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "mytask2key"
  security_groups = ["task2_sg"]


  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = "${tls_private_key.mytask2key.private_key_pem}"
    host        = "${aws_instance.task2os.public_ip}"
  }




  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd   git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }


  tags = {
    Name = "task2os"
  }
}


output "myos_ip" {
  value = aws_instance.task2os.public_ip
}

We might need the public IP of the instance later, so store it in a file on your local system:

resource "null_resource" "nulllocal2" {
  provisioner "local-exec" {
    # the attribute must be interpolated, otherwise the literal
    # string "aws_instance.task2os.public_ip" is written to the file
    command = "echo ${aws_instance.task2os.public_ip} > publicip.txt"
  }
}

6: Now write Terraform code to mount the EFS storage on our instance, clone the GitHub repo, and attach it to our Apache server (clone the repo into the /var/www/html folder):

resource "null_resource" "nullremote3" {
  depends_on = [
    aws_efs_mount_target.efs_mount,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = "${tls_private_key.mytask2key.private_key_pem}"
    host        = "${aws_instance.task2os.public_ip}"
  }


  # EFS is a network file system: it is mounted over NFS using the
  # file system's DNS name, not formatted and attached like a block device
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y nfs-utils",
      "sudo mount -t nfs4 -o nfsvers=4.1 ${aws_efs_file_system.allow-nfs.dns_name}:/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Pheonix-reaper/Task2_Cloud.git /var/www/html/"
    ]
  }
}

7: Now create an S3 bucket and give it a public access policy so that it can be accessed publicly:

resource "aws_s3_bucket" "task2_bucket" {
  bucket        = "task2cloud-bucket-asish-007-s3bucket"
  acl           = "public-read"
  force_destroy = true

  tags = {
    Name = "My bucket"
  }
}


resource "aws_s3_bucket_public_access_block" "aws_public_access" {
  bucket = "${aws_s3_bucket.task2_bucket.id}"

  block_public_acls   = false
  block_public_policy = false
}
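Task step 7 also asks us to copy the images from the GitHub repo into the bucket as publicly readable objects. A minimal sketch, assuming the repo has been cloned locally and contains an image at images/picture.png (the local path and object key are placeholders; aws_s3_bucket_object matches the provider-version style of the bucket resource above):

resource "aws_s3_bucket_object" "task2_image" {
  bucket       = "${aws_s3_bucket.task2_bucket.id}"
  key          = "picture.png"          # placeholder object key
  source       = "images/picture.png"   # placeholder local path to a repo image
  acl          = "public-read"
  content_type = "image/png"
}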

8: Now create a CloudFront distribution for our bucket:

resource "aws_cloudfront_distribution" "imgcloudfront" {
    origin {
        # reference the bucket created above so the origin domain
        # always matches the bucket that actually holds the images
        domain_name = "${aws_s3_bucket.task2_bucket.bucket_regional_domain_name}"
        origin_id   = "S3-asishpatnaik_task2_bucket"

        custom_origin_config {
            http_port              = 80
            https_port             = 443
            origin_protocol_policy = "match-viewer"
            origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
        }
    }
       
    enabled = true

    default_cache_behavior {
        allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
        cached_methods = ["GET", "HEAD"]
        target_origin_id = "S3-asishpatnaik_task2_bucket"


        forwarded_values {
            query_string = false
        
            cookies {
               forward = "none"
            }
        }
        viewer_protocol_policy = "allow-all"
        min_ttl = 0
        default_ttl = 3600
        max_ttl = 86400
    }
    # Restricts who is able to access this content
    restrictions {
        geo_restriction {
            
            restriction_type = "none"
        }
    }


# SSL certificate for the service.
    viewer_certificate {
        cloudfront_default_certificate = true
    }
}
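Task step 8 updates the code in /var/www/html with the CloudFront URL. An output block (the name cloudfront_url is mine) prints the distribution's domain name after apply so it can be pasted into the code:

output "cloudfront_url" {
  value = aws_cloudfront_distribution.imgcloudfront.domain_name
}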

9: Save the file, then initialize Terraform using the terraform init command to download the required plugins:


Then check the validity of our Terraform code using the terraform validate command:


Now use the terraform apply -auto-approve command to create the infrastructure:


Now you can go to your AWS web UI and check that the infrastructure was created successfully:


10: I have used the AWS CodePipeline service to automate my web server. CodePipeline continuously monitors our GitHub repo; when we edit our webpage index.html, it automatically deploys it again and updates the web server:



These are the step-by-step details of how I completed this task.

Check out my github repository for this task: https://github.com/Pheonix-reaper/Task2_Cloud
