Introduction to Terraform — EC2 Instance, S3, CloudFront Creation using Terraform

This is my first article on Terraform. In it, I'll give an overview of Terraform and then create an EC2 instance, with an S3 bucket and CloudFront distribution for storing and serving static objects, all using Terraform.

So the first question that comes to mind: what is Terraform, and what is it used for?

Terraform is an Infrastructure as Code tool for building and managing infrastructure from code. It can manage resources on popular cloud providers (AWS, Azure, GCP, Alibaba Cloud) as well as custom in-house solutions, and it is one of the best-known DevOps tools on the market.

For example, suppose you need an EC2 instance together with a security group and a key pair. You could create all of this manually in the AWS Console, CLI, or SDK, but if you need the same setup many times, that quickly becomes tedious. This is where Terraform comes in: write the code once, then reuse it (with modifications if needed) as many times as you like.

So in this article, I will do the following things with Terraform.

Create/launch the application using Terraform:

1. Create the key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key pair and security group created in step 1.

4. Launch one EBS volume and mount it onto /var/www/html.

5. A developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change their permission to public-readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

I have divided this task into multiple subtasks.
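
Every resource below assumes an AWS provider block at the top of the configuration. Here is a minimal sketch, assuming the ap-south-1 region and credentials supplied through a named CLI profile (both the region and the profile name are assumptions; use your own):

provider "aws" {
  region  = "ap-south-1"  # assumption: pick your own region
  profile = "default"     # assumption: a profile set up with `aws configure`
}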

1. Creating a key pair

resource "aws_key_pair" "deploy" {
  key_name   = "mykeypair"
  public_key = "ssh-rsa <yourpublickey>"
}

This will create a key pair named mykeypair. For the public key, you can generate your own key pair with a tool such as PuTTYgen or ssh-keygen.
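
Alternatively, Terraform itself can generate the key pair through the tls provider. A minimal sketch (the resource names and the local output file are my own assumptions):

resource "tls_private_key" "deploy_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "deploy" {
  key_name   = "mykeypair"
  public_key = tls_private_key.deploy_key.public_key_openssh
}

# Save the private key locally so you can SSH into the instance later
resource "local_file" "private_key" {
  content         = tls_private_key.deploy_key.private_key_pem
  filename        = "mykeypair.pem"
  file_permission = "0400"
}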


2. Creating a Security Group

resource "aws_security_group" "examplesg" {
  name = "My  Security Group"
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
 ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
}
egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

This will create a security group named My Security Group, with ingress on ports 80 and 22 open to the internet and egress allowed on all ports.
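
If you later need to open more ports, the two ingress rules can also be generated from a variable with a dynamic block, in keeping with the write-once-reuse idea above. A small sketch (the variable name allowed_ports is my own assumption):

variable "allowed_ports" {
  type    = list(number)
  default = [22, 80]
}

resource "aws_security_group" "examplesg" {
  name = "My Security Group"

  dynamic "ingress" {
    for_each = var.allowed_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}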


3. Creating an EC2 Instance

resource "aws_instance" "myenv" {
  ami           = "ami-0b44050b2d893d5f7"
  instance_type = "t2.micro"
  key_name = aws_key_pair.deploy.key_name
  security_groups = [aws_security_group.examplesg.name]
  user_data = file("install_apache.sh")
  tags = {
    Name = "MyFirstos"
  }
}

This will create an EC2 instance named MyFirstos, using the key pair and security group created in the earlier steps. The user_data is read from a file into the .tf configuration, and it runs only at instance launch time. install_apache.sh is a shell script that installs Apache and mounts the EBS volume we create in the next step.
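
install_apache.sh itself is not shown in this article, so here is a rough sketch of what it might contain, written inline as a heredoc instead of file() (the package names, the /dev/xvdf device name, and the repo placeholder are assumptions; since user_data runs at first boot, the format/mount step may need to be re-run once the volume attachment from the next step completes):

  user_data = <<-EOF
              #!/bin/bash
              # Install Apache and git (assumes a yum-based Amazon Linux AMI)
              yum install -y httpd git
              systemctl enable httpd
              systemctl start httpd
              # Format and mount the attached EBS volume on /var/www/html
              # (assumes the volume shows up as /dev/xvdf)
              mkfs.ext4 /dev/xvdf
              mount /dev/xvdf /var/www/html
              # Copy the developer's code from GitHub into the web root
              git clone https://github.com/<your-github-repo> /tmp/webcode
              cp -r /tmp/webcode/* /var/www/html/
              EOF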


4. Launching an EBS volume and attaching it to the instance

resource "aws_ebs_volume" "myebsvol" {
  availability_zone = aws_instance.myenv.availability_zone
  size              = 1
  tags = {
    Name = "myebsvol"
  }
}

# Attach the EBS volume to the instance

resource "aws_volume_attachment" "ebs_att" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.myebsvol.id
  instance_id = aws_instance.myenv.id
}

This will create an EBS volume named myebsvol and attach it to the instance we created earlier.
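
If you prefer to keep the format/mount step out of user_data (the volume is only attached after the instance has already booted), one option is a remote-exec provisioner that runs once the attachment exists. A minimal sketch, assuming the private key file mykeypair.pem, the ec2-user login, and the /dev/xvdf device name:

resource "null_resource" "mount_volume" {
  depends_on = [aws_volume_attachment.ebs_att]

  connection {
    type        = "ssh"
    user        = "ec2-user"             # assumption: Amazon Linux default user
    private_key = file("mykeypair.pem")  # assumption: your private key file
    host        = aws_instance.myenv.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdf",
      "sudo mount /dev/xvdf /var/www/html",
    ]
  }
}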


5. Creating an S3 Bucket

resource "aws_s3_bucket" "mybucket" {
  bucket = "myraghavbucket"
  acl    = "public-read"


  tags = {
    Name = "My bucket"
  }
}

This will create an S3 bucket tagged My bucket. Note that the bucket name itself (myraghavbucket) has to be globally unique across all of AWS.
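
Task 7 also asks us to copy the images from the GitHub repo into this bucket and make them public-readable. A minimal sketch using aws_s3_bucket_object (the local file name image.png is an assumption; in newer AWS provider versions this resource is called aws_s3_object):

resource "aws_s3_bucket_object" "myimage" {
  bucket       = aws_s3_bucket.mybucket.bucket
  key          = "image.png"  # object key in the bucket (assumed name)
  source       = "image.png"  # local path of the image (assumed name)
  acl          = "public-read"
  content_type = "image/png"
}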


6. Creating a CloudFront Distribution

resource "aws_cloudfront_distribution" "mycloudfrontdistribution" {
  origin {
    domain_name = aws_s3_bucket.mybucket.bucket_regional_domain_name
    origin_id   = "mybucketid"
  }


  enabled             = true
  default_root_object = "index.html"


  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "mybucketid"


    forwarded_values {
      query_string = true


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }




  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
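
This creates a CloudFront distribution with the S3 bucket as its origin. To get the CloudFront URL that task 8 asks us to put into the code under /var/www/html, an output is handy (a small sketch):

output "cloudfront_url" {
  value = aws_cloudfront_distribution.mycloudfrontdistribution.domain_name
}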

Now we merge all these resources into a single file with the .tf extension, which will create the key pair, security group, and EC2 instance (Apache installed, EBS volume mounted on /var/www/html), plus the S3 bucket and its CloudFront distribution, in one go.
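
The usual workflow to run it from the directory containing the .tf file is:

terraform init
terraform validate
terraform plan
terraform apply -auto-approve

When you are done, terraform destroy tears everything down again.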

Furthermore, I also integrated Jenkins for deploying the code to the EC2 instance. The code itself is a simple HTML page, but the image it uses is stored in Amazon S3, so all static assets (images, videos) live in S3, and to reduce latency I serve them through Amazon CloudFront.