Mastering GCP Infrastructure with Terraform: Regional HTTPS Load Balancer with Cloud DNS - Part 2
Reza Chegini
Welcome to Part 2 of the series, "Mastering GCP Infrastructure with Terraform: Regional HTTPS Load Balancer with Cloud DNS." In Part 1, we laid the foundation for our infrastructure by configuring Terraform, defining variables, and setting up reusable local values.
In Part 2, we’ll move on to the networking layer. A strong network setup ensures your infrastructure is secure, scalable, and reliable. We’ll create a custom Virtual Private Cloud (VPC), define subnets, and configure firewall rules to control traffic.
By the end of this part, you’ll have a custom network ready to support your scalable application.
Setting Up Networking
Custom networking in GCP gives you control over IP ranges, traffic flow, and resource isolation. Let’s dive into the configurations.
1. Creating a Custom VPC and Subnets
resource "google_compute_network" "myvpc" {
name = "${local.name}-vpc"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "mysubnet" {
name = "${var.gcp_region1}-subnet"
region = var.gcp_region1
ip_cidr_range = "10.128.0.0/24"
network = google_compute_network.myvpc.id
}
resource "google_compute_subnetwork" "regional_proxy_subnet" {
name = "${var.gcp_region1}-regional-proxy-subnet"
region = var.gcp_region1
ip_cidr_range = "10.0.0.0/24"
network = google_compute_network.myvpc.id
purpose = "REGIONAL_MANAGED_PROXY"
role = "ACTIVE"
}
Custom VPC (google_compute_network): Setting auto_create_subnetworks = false creates a custom-mode VPC, so no default subnets are generated and we control every IP range ourselves.
Subnets (google_compute_subnetwork): mysubnet (10.128.0.0/24) hosts the application instances, while regional_proxy_subnet (10.0.0.0/24) is a proxy-only subnet. Its purpose = "REGIONAL_MANAGED_PROXY" and role = "ACTIVE" reserve it for the Envoy proxies used by the regional external HTTPS load balancer we'll build later in the series.
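These resources reference var.gcp_region1 and local.name from Part 1. As a reminder, here is a minimal sketch of what those definitions might look like; the exact names and default values are assumptions, so use whatever you declared in Part 1:

# Assumed definitions from Part 1 -- adjust to match your own variables and locals
variable "gcp_region1" {
  description = "Primary GCP region for the deployment"
  type        = string
  default     = "us-central1"
}

locals {
  # Short prefix used to name every resource in this series
  name = "myapp"
}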
2. Configuring Firewall Rules
Firewall rules allow or deny specific traffic to your network. Here’s how we set them up:
resource "google_compute_firewall" "fw_ssh" {
name = "${local.name}-fwrule-allow-ssh22"
allow {
protocol = "tcp"
ports = ["22"]
}
direction = "INGRESS"
network = google_compute_network.myvpc.id
priority = 1000
source_ranges = ["0.0.0.0/0"]
target_tags = ["ssh-tag"]
}
Firewall Rule for SSH (fw_ssh): This ingress rule allows TCP traffic on port 22, but only to instances carrying the ssh-tag network tag. Note that source_ranges = ["0.0.0.0/0"] opens SSH to the entire internet, which is convenient for a demo but should be narrowed for production use.
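If you want to lock SSH down without hard-coding addresses, one option is to drive source_ranges from a variable. This is a minimal sketch and not part of the original series; the variable name ssh_allowed_cidrs and its default are assumptions:

# Hypothetical variable for trusted SSH sources (not defined in Part 1)
variable "ssh_allowed_cidrs" {
  description = "CIDR blocks allowed to reach port 22"
  type        = list(string)
  default     = ["203.0.113.0/24"] # example range only -- replace with your own
}

resource "google_compute_firewall" "fw_ssh_restricted" {
  name          = "${local.name}-fwrule-allow-ssh22-restricted"
  network       = google_compute_network.myvpc.id
  direction     = "INGRESS"
  priority      = 1000
  source_ranges = var.ssh_allowed_cidrs
  target_tags   = ["ssh-tag"]

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
}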
Here’s another example for HTTP traffic:
resource "google_compute_firewall" "fw_http" {
name = "${local.name}-fwrule-allow-http80"
allow {
protocol = "tcp"
ports = ["80"]
}
direction = "INGRESS"
network = google_compute_network.myvpc.id
priority = 1000
source_ranges = ["0.0.0.0/0"]
target_tags = ["webserver-tag"]
}
Firewall Rule for HTTP (fw_http): This rule mirrors the SSH rule but opens TCP port 80 to instances tagged webserver-tag, so the web servers we create later can receive HTTP traffic.
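The target_tags in these rules only take effect on instances that carry matching network tags. Part 3 will attach them through an instance template; as a preview, here is a minimal, hypothetical standalone instance showing where the tags go (the instance name, machine type, and image are assumptions, not part of the series):

# Hypothetical test instance -- Part 3 replaces this with an instance template and MIG
resource "google_compute_instance" "tag_demo" {
  name         = "${local.name}-tag-demo"
  machine_type = "e2-micro"
  zone         = "${var.gcp_region1}-a"

  # These network tags are what the firewall rules' target_tags match against
  tags = ["ssh-tag", "webserver-tag", "allow-health-checks"]

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.mysubnet.id
  }
}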
Finally, we need a rule for health checks:
resource "google_compute_firewall" "fw_health_checks" {
name = "fwrule-allow-health-checks"
network = google_compute_network.myvpc.id
allow {
protocol = "tcp"
ports = ["80"]
}
source_ranges = [
"1.1.1.1/16",
"2.2.2.2./22"
]
target_tags = ["allow-health-checks"]
}
Firewall Rule for Health Checks (fw_health_checks): Google's load balancer health-check probes originate from the 35.191.0.0/16 and 130.211.0.0/22 ranges, so this rule lets them reach port 80 on instances tagged allow-health-checks. Without it, the load balancer would mark every backend unhealthy and stop routing traffic to it.
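Later parts of the series need to reference the network and subnets created here. In a single root module you can reference the resources directly, but exposing them as outputs is a handy way to verify the plan. This is an optional sketch, not part of the original configuration:

# Optional outputs -- useful for verification; later parts can also reference the resources directly
output "vpc_id" {
  description = "ID of the custom VPC"
  value       = google_compute_network.myvpc.id
}

output "subnet_id" {
  description = "ID of the workload subnet"
  value       = google_compute_subnetwork.mysubnet.id
}

output "proxy_subnet_id" {
  description = "ID of the proxy-only subnet used by the regional load balancer"
  value       = google_compute_subnetwork.regional_proxy_subnet.id
}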
Why These Steps Matter
A custom-mode VPC keeps you in control of every IP range instead of relying on Google's defaults. The proxy-only subnet is a hard requirement for the regional external HTTPS load balancer we'll configure later, and tag-based firewall rules open only the ports each group of instances actually needs, including the health-check traffic that keeps the load balancer routing to healthy backends.
What’s Next?
In Part 3, we’ll set up instance templates and managed instance groups (MIGs) to automate scaling and ensure high availability. This will be the backbone of our backend infrastructure.
Conclusion
By completing this part, you've built the networking layer for your infrastructure: a custom VPC, a workload subnet plus a proxy-only subnet for the regional load balancer, and firewall rules for SSH, HTTP, and Google health-check traffic.
These configurations provide a secure and scalable foundation for deploying application resources.