Integrating AWS with Terraform

Task 1 : How to create/launch an application on top of AWS using Terraform.

1. Create the key and a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance use the key and security group which we created in step 1.

4. Launch one volume (EBS) and mount that volume onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Let us start building this.

As we will be using Terraform for writing the code, we have to use HCL (HashiCorp Configuration Language). It is not necessary to know every detail of HCL syntax in order to use Terraform, since it is all available in the Terraform documentation. We can copy and adapt the code according to our use.

First of all we have to configure the required user by using the credentials file created while creating the user. We can do this with the aws configure command. Also, as we are using code to create our infrastructure, we have to give the access key and secret key to the code in order to tell it on which account the infrastructure is to be built. As directly hard-coding keys is not good practice, we can point to the credentials file instead.

provider "aws" {
	region = "us-east-1"
	shared_credentials_file = "C:/Users/DELL/Downloads/credentials.csv"
	profile = "default"
}

Through the above code Terraform understands that it has to work with AWS, so it will download the plugins for the same. We also have to mention the region in which we want to create the setup.
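
One note: Terraform reads the shared credentials file in the AWS CLI's INI format, not the raw CSV the console lets you download. If aws configure has been run, the file it writes usually looks something like this (the key values below are placeholders):

[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx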

As we will be using the same region for the other tasks, such as the creation of the S3 bucket, it is good to capture it as a data source so Terraform can refer to it.

data "aws_region" "current" { name = "us-east-1" }

Step 1 : Create the security group.

Our requirement is that the developer should be allowed to connect to our instance via SSH, hence we allow the TCP protocol on port 22. Clients also need to reach the website over HTTP, so port 80 is opened as well.

resource "aws_security_group" "tcp_access4" {
	name = "tcp_access4"
	description = " for developers connecting to instance and clients getting the  website !!! "
	
	
	ingress{
		description = "Making TCP port 80 available for HTTP connection "
		from_port = 80
		to_port = 80
		protocol = "tcp"
		cidr_blocks = ["0.0.0.0/0"] #this is extremely important to let everybody come to port 80
	}
	
	ingress{
		description = "For Secure Shell Access"
		from_port = 22
		to_port =22
		protocol = "tcp"
		cidr_blocks = ["0.0.0.0/0"]
	}




	ingress{
			from_port=8080
			to_port = 8080
			protocol = "tcp"
			cidr_blocks = ["0.0.0.0/0"]


	}
	
	egress {
		from_port = 0
		to_port = 0
		protocol = "-1" #rule to connect to instances from any instances
		cidr_blocks = ["0.0.0.0/0"]
	}
	tags = {
	Name = "tcp_access3"	
    }

}
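
The task's step 1 also mentions a key. In this article the key pair Firstkey1 was created beforehand from the AWS console, but it could equally be generated by Terraform itself; a minimal sketch, assuming the tls provider is available (the resource and key names here are hypothetical):

# optional: generate an SSH key pair inside Terraform instead of using the console
resource "tls_private_key" "webkey" {
	algorithm = "RSA"
	rsa_bits  = 4096
}

resource "aws_key_pair" "webkey" {
	key_name   = "Firstkey1-generated" # hypothetical name
	public_key = tls_private_key.webkey.public_key_openssh
}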
                   


Step 2 : Launch EC2 instance

Terraform code has no guaranteed execution order, so when ordering matters we have to use the depends_on argument to state which resource must be created first. As this instance must be launched after the security group is created, we mention that in the aws_instance block.

Then we give the information about which instance to launch: the AMI image id, the instance type, the key name, the private key path, and the security group created above. Finally we give a tag; a tag is like giving a name to the instance.

resource "aws_instance" "webserver" {
	
	depends_on = [
		aws_security_group.tcp_access4 
	]
	ami = "ami-09d95fab7fff3776c"
	instance_type = "t2.micro"
	key_name = "Firstkey1"
	security_groups = ["${aws_security_group.tcp_access4.name}"] 
	#must be in list
	
	connection{
		type = "ssh"
		user = "ec2-user"
		private_key = file("C:/Users/DELL/Downloads/Firstkey1.pem")
		host = aws_instance.webserver.public_ip
	
	}
	
	provisioner "remote-exec" {
		inline =[
			"ls"
		]
	
	}
	tags = {
		Name = "Cloud_task_1"	
	}
}

Step 3 : Launch one volume (EBS) and mount that volume into /var/www/html.

We have to create the EBS volume in the same availability zone in which the EC2 instance is launched (the code below takes it directly from the instance). We also mention the size of the EBS volume to be created.

resource "aws_ebs_volume" "volume1" {
	availability_zone = aws_instance.webserver.availability_zone
	size = 2          
	tags = {
		Name = "EBSVOLUME"   
	}	
}

We also have to attach the created EBS volume to the instance.

resource "aws_volume_attachment" "ebs_att" {
	device_name = "/dev/sdh"  
	#device naming for volumes is dependent on which os we are attaching
	#in my case it is /dev/sdh[f --to-- p][1 --to--6] Note : /dev/sda1 is reserved for root
	#here below two argument for what to attach and where to attach
	volume_id = "${aws_ebs_volume.volume1.id}"
	instance_id = "${aws_instance.webserver.id}"
	
	force_detach = true
}
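
Worth knowing: on Amazon Linux (Xen-based instances such as t2.micro) a volume attached as /dev/sdh typically shows up inside the instance as /dev/xvdh, which is why the commands in the next step format /dev/xvdh. This can be confirmed from inside the instance:

# on the instance: list block devices and look for the 2 GiB volume (xvdh)
lsblk
# optionally check whether it already has a filesystem ("data" means it does not)
sudo file -s /dev/xvdh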

Step 4 : We have to download the GitHub repo made by the developer and start the webserver, but before that we also have to format and mount the EBS volume.

We can use the remote-exec provisioner for running commands on the remote instance.

resource "null_resource" "nullremote3" {
	depends_on = [
		aws_volume_attachment.ebs_att
	]
	
	connection {
		type = "ssh"
		user = "ec2-user"
		private_key = file("C:/Users/DELL/Downloads/Firstkey1.pem")
		host = aws_instance.webserver.public_ip
	}
	
	provisioner "remote-exec" {
		inline = [ 
			"sudo yum install php httpd git -y",
			"sudo systemctl restart httpd",
			"sudo systemctl enable httpd",
			"sudo mkfs.ext4 -F /dev/xvdh",
			"sudo mount /dev/xvdh /var/www/html",
			"sudo rm -rf /var/www/html/*",
			"sudo git clone https://github.com/shyamwin/webforcloudtask.git /var/www/html/",
			"sudo systemctl restart httpd"
			]
    }

}
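
Once this provisioner finishes, the webserver should already be answering on port 80; a quick sanity check from the local machine (substitute the instance's public IP from the output defined later):

curl http://<ec2-public-ip>/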

Step 5 : Creation of S3 bucket

resource "aws_s3_bucket" "tera_bucket"{
	bucket = "shyam-123"
	acl = "public-read"
	
	tags = {
		Name = "Bucket for Website"
	}
}

Next we manage the bucket's public access block settings. Since we gave the bucket a public-read ACL and will also serve it through CloudFront, we leave all the public access blocks disabled so the objects stay reachable.

resource "aws_s3_bucket_public_access_block" "Block_Public_Access"{
	bucket="${aws_s3_bucket.tera_bucket.id}"
	block_public_acls = false
	block_public_policy = false
	restrict_public_buckets = false
	#rember above we gave acl private


}

Now we have to put the data from the local system into the S3 bucket.

resource "aws_s3_bucket_object" "just_image" {
	bucket = "${aws_s3_bucket.tera_bucket.id}"
	key = "motivation.png"  
	#name of the object when it is in the bucket
	source = "C:/Users/DELL/Downloads/terrafm/terraform.png"
	
}
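
The task actually mentions images in the plural; if they were gathered in a local folder, one way (a sketch, assuming a hypothetical images/ directory next to the Terraform files) would be to upload them all with for_each:

# hypothetical: upload every PNG from a local images/ folder in one resource
resource "aws_s3_bucket_object" "site_images" {
	for_each = fileset("${path.module}/images", "*.png")

	bucket = aws_s3_bucket.tera_bucket.id
	key    = each.value
	source = "${path.module}/images/${each.value}"
	acl    = "public-read"
}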

Now we have to create the CloudFront distribution.

locals {
  s3_origin_id = "myS3Origin"
}
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
	comment = "Access Identity"
}


resource "aws_cloudfront_distribution" "s3_distribution" {
	depends_on = [
		aws_s3_bucket_object.just_image
	]
  origin {
    domain_name = aws_s3_bucket.tera_bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }


  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Access Identity"
  default_root_object  = "motivation.png"
 


  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = true


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }


  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = true
      headers      = ["Origin"]


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "allow-all"
  }


  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "allow-all"
  }


  price_class = "PriceClass_200"


  restrictions {
    geo_restriction {
      restriction_type = "blacklist"
      # geo restriction: a blacklist blocks viewers from the listed locations
      locations        = ["US"]
    }
  }


  tags = {
    Environment = "production"
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
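
One thing worth noting: because the origin uses an origin access identity, CloudFront itself needs permission to read the objects. Here it works because the objects are public-read, but a cleaner variant (a sketch; the resource name is hypothetical) would grant the OAI read access explicitly via a bucket policy:

# optional sketch: let the CloudFront OAI read the bucket even if objects are private
resource "aws_s3_bucket_policy" "oai_read" {
	bucket = aws_s3_bucket.tera_bucket.id

	policy = jsonencode({
		Version = "2012-10-17"
		Statement = [{
			Sid       = "AllowCloudFrontOAIRead"
			Effect    = "Allow"
			Principal = { AWS = aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn }
			Action    = "s3:GetObject"
			Resource  = "${aws_s3_bucket.tera_bucket.arn}/*"
		}]
	})
}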

At this point the CloudFront distribution and its connection to the S3 bucket are done.

We just need the public IP of the EC2 instance and the URL of the CloudFront distribution.

resource "null_resource" "nulllocal1" {
	depends_on = [
		null_resource.nullremote3,
		]
	provisioner "local-exec" {
		command = "start  chrome  ${aws_instance.webserver.public_ip}"
	}


}
output "ec2_ip" {
	value= aws_instance.webserver.public_ip


}
output "cloudfront_url" {
	value = aws_cloudfront_distribution.s3_distribution.domain_name
}

Now all the code is done; the only thing left is to run it.

To run this code we have to run these commands:

terraform init

terraform apply
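
terraform init downloads the required provider plugins, and terraform apply builds the whole infrastructure. When we are done, the entire setup can be torn down just as easily:

terraform destroy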


Here is the website, served from the EC2 instance with the image delivered through CloudFront.

Thank you.
