HA, multi-region, low latency app infrastructure on AWS using Terraform (Part-2)

(This is the 2nd part of the article; the 1st part is linked here.)

To write the infrastructure code for the backup region (Singapore), I copied the following resource blocks into new .tf files:

aws_security_group (2), aws_lb, aws_lb_target_group, aws_lb_listener, aws_lb_listener_rule, aws_acm_certificate, aws_route53_record (2), aws_acm_certificate_validation, aws_launch_template, aws_placement_group, aws_autoscaling_group, aws_autoscaling_policy

I also copied the following data blocks: aws_vpc, aws_subnets, aws_subnet, and aws_s3_bucket.

Then, on each of these blocks (except the aws_route53_record ones), I added the argument provider = aws.Singapore.

You have to use your own second AWS provider's alias name here.
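For clarity, the provider setup looks roughly like this (a minimal sketch, assuming "Singapore" as the alias name and an illustrative resource name):

provider "aws" {
    region = "us-east-1" # primary region (N. Virginia), default provider
}
provider "aws" {
    alias  = "Singapore"
    region = "ap-southeast-1" # backup region
}
resource "aws_lb" "my_alb2" {
    provider = aws.Singapore
    # ...other arguments copied from the primary ALB...
}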

I have updated all the resource names, resource references, depends_on entries, and region-specific variables to avoid conflicts and errors. I also updated the HTTP header name and value for the backup region.

To verify the failover later, I slightly changed the userdata.sh file for the backup region's Launch Template.
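As a rough sketch of that change (the template name and web root are illustrative assumptions), the backup template's userdata simply appends a marker line that only the backup region serves:

resource "aws_launch_template" "my_launch_template2" {
    provider = aws.Singapore
    # ...same arguments as the primary region's template...
    user_data = base64encode(<<-EOF
        #!/bin/bash
        # identical bootstrap to the primary region, plus a visible marker
        echo "(Singapore)" >> /var/www/html/index.html
    EOF
    )
}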

Now finally it’s time to write the CloudFront resource block.

First, I have created a locals block to define 2 ALB origin IDs, 2 S3 origin IDs, and 2 origin group IDs. Then created an aws_cloudfront_origin_access_identity resource block.

Then on the aws_cloudfront_distribution resource block, added depends_on meta-argument followed by 2 origin group blocks (for ALB & S3), 2 ALB origin blocks, and 2 S3 origin blocks.

On each origin group block (ALB and S3), I added two member blocks for the corresponding origin blocks, followed by a failover_criteria block (with the status codes '403, 404, 416, 500, 502, 503, 504' set in the tfvars file).

locals {
    alb_origin_id1 = "AlbOriginNVirginia"
    alb_origin_id2 = "AlbOriginSingapore"
    alb_origingroup_id = "myAlbOGroup"
    s3_origin_id1 = "S3OriginNVirginia"
    s3_origin_id2 = "S3OriginSingapore"
    s3_origingroup_id = "s3OGroup"
}
resource "aws_cloudfront_origin_access_identity" "s3assetid" {
    comment = "for S3 static assets"
}
resource "aws_cloudfront_distribution" "mycloudfront" {
    depends_on = [ aws_route53_record.alb_alias_record, aws_route53_record.alb_alias_record2, aws_acm_certificate.cloudFront_acm_cert ]
    origin_group {
        origin_id = local.alb_origingroup_id
        member {
            origin_id = local.alb_origin_id1
        }
        member {
            origin_id = local.alb_origin_id2
        }
        failover_criteria {
            status_codes = var.cloudfront_origin_failover_status_codes
        }
    }
    origin_group {
        origin_id = local.s3_origingroup_id
        member {
            origin_id = local.s3_origin_id1
        }
        member {
            origin_id = local.s3_origin_id2
        }
        failover_criteria {
            status_codes = var.cloudfront_origin_failover_status_codes
        }
    }        
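The status-code list comes from a variable; it might be declared and populated like this (a minimal sketch; the type and tfvars layout are my assumptions, only the variable name and the values come from above):

# variables.tf
variable "cloudfront_origin_failover_status_codes" {
    description = "Origin HTTP status codes that make CloudFront fail over to the secondary origin"
    type        = list(number)
}

# terraform.tfvars
cloudfront_origin_failover_status_codes = [403, 404, 416, 500, 502, 503, 504]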

Then, on each of the ALB origin blocks, I added the ALB subdomain as the domain name, the origin ID, connection attempts and timeout, a custom_header block (for the HTTP header name and value), and a custom_origin_config block, as shown.

Note: Since I am connecting to the ALB over HTTPS (i.e. an HTTPS listener), I've used 'https-only' as the origin protocol policy. If you're using an HTTP listener, select 'http-only' instead.

    origin {
        domain_name = "${var.alb_subdomain_name}"
        origin_id = local.alb_origin_id1
        connection_attempts = 3
        connection_timeout = 10
        custom_header {
            name = var.custom_header_name1   #Sensitive
            value = var.custom_header_value1 #Sensitive
        }
        custom_origin_config {
            origin_protocol_policy = "https-only" #as using alb subdomain with https
            origin_ssl_protocols = ["TLSv1.2"]
            http_port = 80
            https_port = 443
            origin_keepalive_timeout = 5
            origin_read_timeout = 30
        }
    }
    origin {
        domain_name = "${var.alb_subdomain2_name}"
        origin_id = local.alb_origin_id2
        connection_attempts = 3
        connection_timeout = 10
        custom_header {
            name = var.custom_header_name2   #Sensitive
            value = var.custom_header_value2 #Sensitive
        }
        custom_origin_config {
            origin_protocol_policy = "https-only" #as using alb subdomain with https
            origin_ssl_protocols = ["TLSv1.2"]
            http_port = 80
            https_port = 443
            origin_keepalive_timeout = 5
            origin_read_timeout = 30
        }
    }        
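Since the header name and value are sensitive, the corresponding variables might be declared like this (a sketch; only the variable names come from the code above):

variable "custom_header_name1" {
    type      = string
    sensitive = true
}
variable "custom_header_value1" {
    type      = string
    sensitive = true
}
# ...and likewise custom_header_name2 / custom_header_value2 for the backup region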

Then, on each of the S3 origin blocks, I added the S3 bucket's regional domain name, the origin ID, connection attempts and timeout, and an s3_origin_config block (referencing the aws_cloudfront_origin_access_identity resource). As I am using existing S3 buckets, I manually added bucket policies on both buckets (reference document).

    origin {
        domain_name = "${data.aws_s3_bucket.mybucket.bucket_regional_domain_name}"
        origin_id = local.s3_origin_id1
        connection_attempts = 3
        connection_timeout = 10
        s3_origin_config {
            origin_access_identity = aws_cloudfront_origin_access_identity.s3assetid.cloudfront_access_identity_path
        }
    }
    origin {
        domain_name = "${data.aws_s3_bucket.mybucket2.bucket_regional_domain_name}"
        origin_id = local.s3_origin_id2
        connection_attempts = 3
        connection_timeout = 10
        s3_origin_config {
            origin_access_identity = aws_cloudfront_origin_access_identity.s3assetid.cloudfront_access_identity_path
        }
    }        
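I attached the bucket policies manually in the console, but for reference the Terraform equivalent would look roughly like this (a sketch; the resource name is illustrative):

resource "aws_s3_bucket_policy" "assets_oai_read" {
    bucket = data.aws_s3_bucket.mybucket.id
    policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
            Sid       = "AllowCloudFrontOAIRead"
            Effect    = "Allow"
            Principal = { AWS = aws_cloudfront_origin_access_identity.s3assetid.iam_arn }
            Action    = "s3:GetObject"
            Resource  = "${data.aws_s3_bucket.mybucket.arn}/*"
        }]
    })
}
# ...the same policy goes on data.aws_s3_bucket.mybucket2 in the backup region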

Next, in the default cache behavior block, I added the ALB origin group's ID as the target origin ID, the allowed methods, cached methods, cache policy ID (CachingDisabled, reference document), and the viewer protocol policy. Then I disabled compression, as is recommended for APIs.

Note: If you're using a CloudFront origin group, you can only use GET, HEAD, and OPTIONS as the allowed HTTP methods, not PUT, POST, DELETE, or PATCH. With a single CloudFront origin, you can use all of them. Also, CachingDisabled is the recommended cache policy for APIs.

Next, in the ordered cache behavior block, I added the S3 origin group's ID as the target origin ID, the path pattern ("assets/*", because I keep the media files inside the 'assets' folder in the S3 buckets), the allowed methods, cached methods, cache policy ID (CachingOptimized), and the viewer protocol policy. Then I enabled compression, as is recommended for assets and media files.

    default_cache_behavior {
        target_origin_id = local.alb_origingroup_id
        allowed_methods = var.cloudfront_behaviour_allowed_methods_api
        cached_methods = [ "GET", "HEAD" ]
        cache_policy_id = "4135ea2d-6df8-44a3-9df3-4b5a84be39ad"
        viewer_protocol_policy = "redirect-to-https"
        compress = "false"
    }
    ordered_cache_behavior {
        target_origin_id = local.s3_origingroup_id
        path_pattern = "assets/*"
        allowed_methods = var.cloudfront_behaviour_allowed_methods_s3
        cached_methods = ["GET", "HEAD"]
        cache_policy_id = "658327ea-f89d-4fab-a63d-7e88639e58f6" #CachingOptimized
        viewer_protocol_policy = "redirect-to-https"
        compress = true
    }        
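The two allowed-methods variables might be set like this in the tfvars file (a sketch based on the origin-group restriction from the note above):

cloudfront_behaviour_allowed_methods_api = ["GET", "HEAD", "OPTIONS"] # origin groups support only these
cloudfront_behaviour_allowed_methods_s3  = ["GET", "HEAD"]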

Then I added an alias (domain/subdomain) that end users will use to access the application.

Similar to the ALB certificates, I created aws_acm_certificate, two aws_route53_record (for ACM validation and the subdomain alias), and aws_acm_certificate_validation resource blocks for the CloudFront certificate (a sketch of these appears after the code below).

In the viewer_certificate block, I added the ACM certificate ARN, 'TLSv1' as the minimum protocol version, and 'sni-only' as the SSL support method (reference document).

Note: If you are not using an alias, add cloudfront_default_certificate = true; the SSL support method argument is then optional.

Finally, I added the enabled argument (true), the price class, and the restrictions block (restriction type 'none', so there is no geo-restriction).

    aliases = [var.cloudfront_subdomain_name]
    viewer_certificate {
        acm_certificate_arn = aws_acm_certificate.cloudFront_acm_cert.arn
        minimum_protocol_version = "TLSv1"
        ssl_support_method = "sni-only"
    }
    enabled = true
    price_class = var.cloudfront_price_class
    restrictions {
        geo_restriction {
            restriction_type = "none"
        }
    }
}        
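As mentioned above, the CloudFront certificate resources look roughly like this (a sketch; var.hosted_zone_id and the validation resource names are my assumptions; note that CloudFront requires its ACM certificate in us-east-1, which is already the default region here):

resource "aws_acm_certificate" "cloudFront_acm_cert" {
    domain_name       = var.cloudfront_subdomain_name
    validation_method = "DNS"
}
resource "aws_route53_record" "cloudfront_acm_validation" {
    for_each = {
        for dvo in aws_acm_certificate.cloudFront_acm_cert.domain_validation_options :
        dvo.domain_name => dvo
    }
    zone_id = var.hosted_zone_id
    name    = each.value.resource_record_name
    type    = each.value.resource_record_type
    records = [each.value.resource_record_value]
    ttl     = 60
}
resource "aws_acm_certificate_validation" "cloudfront_cert_validation" {
    certificate_arn         = aws_acm_certificate.cloudFront_acm_cert.arn
    validation_record_fqdns = [for r in aws_route53_record.cloudfront_acm_validation : r.fqdn]
}
resource "aws_route53_record" "cloudfront_alias_record" {
    zone_id = var.hosted_zone_id
    name    = var.cloudfront_subdomain_name
    type    = "A"
    alias {
        name                   = aws_cloudfront_distribution.mycloudfront.domain_name
        zone_id                = aws_cloudfront_distribution.mycloudfront.hosted_zone_id
        evaluate_target_health = false
    }
}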

Now, it’s time to run the Terraform commands.

terraform init -> terraform validate -> terraform plan -> terraform apply --auto-approve

After all the resources were created, I opened the subdomain (i.e. the CloudFront alias) in a browser incognito tab to see the page.

Now, to check the ALB subdomain, I opened it in another incognito tab with the browser console open. The GET request returned a 403 status code along with the custom error message; this is expected, since direct access to the ALB is blocked by the custom HTTP header check.

To test the failover of the ALB origin group, I edited the custom listener rule of the N. Virginia ALB (the one that validates the HTTP header) and changed its action so that it returns a 503 error code. Then I refreshed the application's incognito tab again. The "(Singapore)" line appeared at the bottom, which I had added earlier to the backup region Launch Template's userdata.sh.
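The temporary change roughly amounts to swapping the rule's forward action for a fixed response (a sketch; the rule name, listener reference, and priority are illustrative):

resource "aws_lb_listener_rule" "custom_header_rule" {
    listener_arn = aws_lb_listener.my_https_listener.arn
    priority     = 1
    # temporarily replaced the usual forward action with a fixed 503 response
    action {
        type = "fixed-response"
        fixed_response {
            content_type = "text/plain"
            status_code  = "503"
            message_body = "Simulated failure for failover testing"
        }
    }
    condition {
        http_header {
            http_header_name = var.custom_header_name1
            values           = [var.custom_header_value1]
        }
    }
}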

Next, to test the failover of the S3 origin group, I deleted the bucket policy of the S3 bucket in the N. Virginia region so that it would block incoming requests. I also created a cache invalidation in the CloudFront console.

Then I refreshed the application's incognito tab again, and the video changed as well.

Thanks for reading the article. Please check out my other articles on LinkedIn too. And a humble request: I'm looking for a new job and would appreciate your support. I have 5.5+ years of experience with AWS, Azure, Azure DevOps, Terraform, Kubernetes, and more, and I am currently serving as a DevOps Engineer at Accenture.
