Multi-Cloud Project
Naitik Shah
A quick summary of the project:
The purpose is to deploy WordPress on Kubernetes using Terraform. I used Google Cloud Platform's GKE service to deploy the WordPress server, and AWS's RDS service to deploy the MySQL database that WordPress connects to. This project is therefore a perfect example of a multi-cloud deployment.
Project Description:
- Write an Infrastructure as code using Terraform, which automatically deploys the WordPress application.
- On AWS, use RDS service for the relational database for WordPress application.
- Deploy WordPress as a container on top of Google Kubernetes Engine running on GCP.
- The WordPress application should be accessible from the public world.
Before heading on to how to create this project, let's first know a few things.
About WordPress:
WordPress (WP, WordPress.org) is a free and open-source content management system (CMS) written in PHP and paired with a MySQL or MariaDB database. Features include a plugin architecture and a template system, referred to within WordPress as Themes. WordPress was created as a blog-publishing system but has evolved to support other types of web content, including more traditional mailing lists and forums, media galleries, membership sites, learning management systems (LMS), and online stores. Used by more than 60 million websites, including 33.6% of the top 10 million websites as of April 2019, WordPress is one of the most popular content management systems in use. It has also been used in other application domains, such as pervasive display systems (PDS).
About GKE:
Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machines (specifically, Compute Engine instances) grouped to form a cluster.
If you are familiar with Amazon Elastic Kubernetes Service (EKS), GKE is the equivalent managed Kubernetes service provided by Google.
GKE clusters are powered by the Kubernetes open-source cluster management system. Kubernetes provides the mechanisms through which you interact with your cluster: you use Kubernetes commands and tools to deploy and manage your applications, perform administration tasks, set policies, and monitor the health of your deployed workloads.
Kubernetes builds on the same design principles that run popular Google services and provides the same benefits: automatic management, monitoring and liveness probes for application containers, automatic scaling, rolling updates, and more. When you run your applications on a cluster, you are using technology based on Google's 10+ years of experience running production workloads in containers.
When you run a GKE cluster, you also gain the benefit of advanced cluster management features that Google Cloud provides. These include:
- Google Cloud's load-balancing for Compute Engine instances.
- Node pools to designate subsets of nodes within a cluster for additional flexibility.
- Automatic scaling of your cluster's node instance count.
- Automatic upgrades for your cluster's node software.
- Node auto-repair to maintain node health and availability.
- Logging and monitoring with Google Cloud's operations suite for visibility into your cluster.
About Amazon RDS:
Amazon Relational Database Service (Amazon RDS) is a distributed relational database service by Amazon Web Services (AWS). It is a web service running "in the cloud" designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases, and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed with a single API call; AWS does not offer an SSH connection to RDS instances.
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security, and compatibility they need.
Amazon RDS is available on several database instance types - optimized for memory, performance, or I/O - and provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. You can use the AWS Database Migration Service to easily migrate or replicate your existing databases to Amazon RDS.
All right, that's all for the definitions; let's jump into creating our project:
Step 1: As you might have seen in my previous cloud computing projects, I always configure the IAM profile first, and you have to configure and log in from the command line so that AWS and GCP know it is you.
The only difference this time is that we also configure GCP, because this is a multi-cloud project.
AWS:
aws configure --profile naitik2
AWS Access Key ID [****************AHNK]:
AWS Secret Access Key [****************n/lk]:
Default region name [ap-south-1]:
Default output format [None]:
GCP:
You might not have the Google Cloud SDK yet, but it is super easy to install, just like the AWS CLI.
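After installing the SDK, authentication is typically done through the gcloud CLI. The exact commands below are a sketch of the usual flow, not taken from the original write-up; adjust them to your own account and project:

```shell
# Initialize the SDK and log in with your Google account
gcloud init

# Create Application Default Credentials that Terraform's google provider can pick up
gcloud auth application-default login

# Point gcloud at the project used in this article
gcloud config set project iconic-project-287539
```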
provider "google" {
  credentials = file("${var.gcp_credentials_path}")
  project     = var.gcp_project_id
  region      = var.gcp_cur_region
}
Step 2: Create a var.tf file that contains all the variables our code needs. That way, the code will be easy to change in the future: we only need to update the values in this one variable file instead of modifying them in all locations.
variable "gcp_credentials_path" {
  default = "C:\\Users\\Naitik\\AppData\\Roaming\\gcloud\\My First Project-a04023du5af5.json"
}
variable "gcp_project_id" {
  default = "iconic-project-287539"
}
variable "gcp_cur_region" {
  default = "asia-south1"
}
variable "aws_profile" {
  default = "naitik2"
}
variable "aws_region" {
  default = "ap-south-1"
}
variable "gcp_vpc_name" {
  default = "gcp-vpc"
}
variable "subnet_gcp_name" {
  default = "subnet-vpc"
}
variable "subnet_ip_cidr_range" {
  default = "10.0.2.0/24"
}
variable "gcp_subnet_region" {
  default = "asia-southeast1"
}
variable "gcp_compute_firewall" {
  default = "firewall-gcp"
}
variable "allowed_ports" {
  type    = list
  default = ["80", "22"]
}
variable "google_container_cluster_name" {
  default = "gcp-cluster"
}
variable "google_container_cluster_location" {
  default = "asia-southeast1"
}
variable "gcp_node_config_machine_type" {
  default = "n1-standard-1"
}
variable "aws_db_instance_storage_type" {
  default = "gp2"
}
variable "aws_db_instance_engine" {
  default = "mysql"
}
variable "aws_db_instance_engine_version" {
  # Quoted so Terraform treats the version as a string rather than a number
  default = "5.7"
}
variable "aws_db_instance_instance_class" {
  default = "db.t2.micro"
}
variable "aws_db_instance_db_name" {
  default = "db"
}
variable "aws_db_instance_username" {
  default = "admin"
}
variable "aws_db_instance_password" {
  default = "naitik2"
}
variable "aws_db_instance_publicly_accessible" {
  default = true
}
variable "aws_db_instance_skip_final_snapshot" {
  default = true
}
Step 3:
Now let's start building our main file. I used the module approach here, creating separate modules for the various pieces of work. Since I put them in a folder called modules, I referenced that folder in my main.tf as the source.
module "gcp_aws" {
  source                              = "./modules"
  gcp_project_id                      = var.gcp_project_id
  gcp_vpc_name                        = var.gcp_vpc_name
  subnet_gcp_name                     = var.subnet_gcp_name
  subnet_ip_cidr_range                = var.subnet_ip_cidr_range
  gcp_subnet_region                   = var.gcp_subnet_region
  gcp_compute_firewall                = var.gcp_compute_firewall
  allowed_ports                       = var.allowed_ports
  google_container_cluster_name       = var.google_container_cluster_name
  google_container_cluster_location   = var.google_container_cluster_location
  gcp_node_config_machine_type        = var.gcp_node_config_machine_type
  aws_db_instance_storage_type        = var.aws_db_instance_storage_type
  aws_db_instance_engine              = var.aws_db_instance_engine
  aws_db_instance_engine_version      = var.aws_db_instance_engine_version
  aws_db_instance_instance_class      = var.aws_db_instance_instance_class
  aws_db_instance_db_name             = var.aws_db_instance_db_name
  aws_db_instance_username            = var.aws_db_instance_username
  aws_db_instance_password            = var.aws_db_instance_password
  aws_db_instance_publicly_accessible = var.aws_db_instance_publicly_accessible
  aws_db_instance_skip_final_snapshot = var.aws_db_instance_skip_final_snapshot
}
Step 4: Now we have to create a VPC, a subnet, and a firewall in GCP. If you have read my previous articles on AWS, you may remember how to create a VPC, subnet, and firewall there; in this article, I will show you how to create those services on GCP.
A Virtual Private Cloud (VPC) is a global private isolated virtual network partition that provides managed networking functionality for your Google Cloud Platform (GCP) resources.
Instances in a VPC have internal IP addresses and can communicate privately across the globe.
A subnet is a regional resource that defines a range of IP addresses. Traffic to and from instances can be controlled with network firewall rules.
Google Cloud Platform (GCP) firewall rules let you allow or deny traffic to and from your virtual machine (VM) instances based on a configuration you define. When defining a firewall rule, you specify a Virtual Private Cloud (VPC) network and a set of components, such as protocols and ports, that determine what traffic the rule matches.
Terraform Code:
variable "gcp_vpc_name" {}
variable "subnet_gcp_name" {}
variable "subnet_ip_cidr_range" {}
variable "gcp_subnet_region" {}
variable "gcp_compute_firewall" {}
variable "allowed_ports" {}
variable "gcp_project_id" {}

// Creating a VPC
resource "google_compute_network" "vpc_gcp" {
  name                    = var.gcp_vpc_name
  auto_create_subnetworks = false
  project                 = var.gcp_project_id
}

// Creating a subnetwork
resource "google_compute_subnetwork" "subnet_vpc" {
  depends_on    = [google_compute_network.vpc_gcp]
  name          = var.subnet_gcp_name
  ip_cidr_range = var.subnet_ip_cidr_range
  region        = var.gcp_subnet_region
  network       = google_compute_network.vpc_gcp.id
}

// Creating a firewall
resource "google_compute_firewall" "default" {
  depends_on = [google_compute_network.vpc_gcp]
  name       = var.gcp_compute_firewall
  network    = google_compute_network.vpc_gcp.name
  allow {
    protocol = "icmp"
  }
  allow {
    protocol = "tcp"
    ports    = var.allowed_ports
  }
}
Step 5: Launching the GKE cluster using Terraform.
Terraform Code:
variable "google_container_cluster_name" {}
variable "google_container_cluster_location" {}
variable "gcp_node_config_machine_type" {}

resource "google_container_cluster" "gcp_cluster" {
  depends_on         = [google_compute_network.vpc_gcp]
  name               = var.google_container_cluster_name
  location           = var.google_container_cluster_location
  initial_node_count = 1
  master_auth {
    username = ""
    password = ""
    client_certificate_config {
      issue_client_certificate = false
    }
  }
  node_config {
    # Use the declared variable instead of hard-coding the machine type
    machine_type = var.gcp_node_config_machine_type
  }
  network    = google_compute_network.vpc_gcp.name
  project    = var.gcp_project_id
  subnetwork = google_compute_subnetwork.subnet_vpc.name
}

// Running the command to update the local kubeconfig file
resource "null_resource" "cluster" {
  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${google_container_cluster.gcp_cluster.name} --region ${google_container_cluster.gcp_cluster.location} --project ${google_container_cluster.gcp_cluster.project}"
  }
}
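Once the cluster is up and get-credentials has run, it is worth sanity-checking access from the command line. This quick check is my suggestion rather than part of the original code; note that because the cluster location is a region, GKE creates initial_node_count nodes in each of the region's zones, so you will typically see three nodes rather than one:

```shell
# Confirm kubectl is now pointed at the new GKE cluster
kubectl config current-context

# List the worker nodes registered with the cluster
kubectl get nodes
```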
Step 6: Now we need to launch our RDS database on the AWS cloud. For that, we create the required VPC, subnets, internet gateway, and AWS security groups.
Amazon Virtual Private Cloud (Amazon VPC) gives you a logically isolated portion of the AWS Cloud in which you can deploy AWS resources on a virtual network you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.
A subnet is a "part of the network," i.e. a range of IP addresses within the VPC. Each subnet must reside entirely within one Availability Zone and cannot span zones.
An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.
A security group acts as a virtual firewall that controls incoming and outgoing traffic for your EC2 instances. Inbound rules control the traffic coming into your instances, and outbound rules control the traffic leaving them. Security groups are associated with network interfaces.
Terraform Code:
variable "aws_db_instance_storage_type" {}
variable "aws_db_instance_engine" {}
variable "aws_db_instance_engine_version" {}
variable "aws_db_instance_instance_class" {}
variable "aws_db_instance_db_name" {}
variable "aws_db_instance_username" {}
variable "aws_db_instance_password" {}
variable "aws_db_instance_publicly_accessible" {}
variable "aws_db_instance_skip_final_snapshot" {}

resource "aws_vpc" "defaultvpc" {
  cidr_block           = "192.168.0.0/16"
  instance_tenancy     = "default"
  enable_dns_hostnames = true
  tags = {
    Name = "naitik2_vpc"
  }
}

resource "aws_subnet" "naitik2_public_subnet" {
  vpc_id                  = aws_vpc.defaultvpc.id
  cidr_block              = "192.168.0.0/24"
  availability_zone       = "ap-south-1a"
  map_public_ip_on_launch = "true"
  tags = {
    Name = "naitik2_public_subnet"
  }
}

resource "aws_subnet" "naitik2_public_subnet2" {
  vpc_id                  = aws_vpc.defaultvpc.id
  cidr_block              = "192.168.1.0/24"
  availability_zone       = "ap-south-1b"
  map_public_ip_on_launch = "true"
  tags = {
    Name = "naitik2_public_subnet2"
  }
}

// RDS requires a subnet group covering at least two Availability Zones
resource "aws_db_subnet_group" "default" {
  name       = "main"
  subnet_ids = [aws_subnet.naitik2_public_subnet.id, aws_subnet.naitik2_public_subnet2.id]
  tags = {
    Name = "My DB subnet group"
  }
}

resource "aws_internet_gateway" "naitik2_gw" {
  vpc_id = aws_vpc.defaultvpc.id
  tags = {
    Name = "naitik2_gw"
  }
}

resource "aws_security_group" "naitik2_public_sg" {
  depends_on  = [google_container_cluster.gcp_cluster]
  name        = "HTTP_SSH_PING"
  description = "It allows HTTP SSH PING inbound traffic"
  vpc_id      = aws_vpc.defaultvpc.id
  ingress {
    description      = "Allow all inbound traffic"
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  tags = {
    Name = "HTTP_SSH_PING"
  }
}

resource "aws_db_instance" "wp_db" {
  depends_on             = [aws_security_group.naitik2_public_sg]
  allocated_storage      = 20
  storage_type           = var.aws_db_instance_storage_type
  engine                 = var.aws_db_instance_engine
  engine_version         = var.aws_db_instance_engine_version
  instance_class         = var.aws_db_instance_instance_class
  name                   = var.aws_db_instance_db_name
  username               = var.aws_db_instance_username
  password               = var.aws_db_instance_password
  parameter_group_name   = "default.mysql5.7"
  publicly_accessible    = var.aws_db_instance_publicly_accessible
  skip_final_snapshot    = var.aws_db_instance_skip_final_snapshot
  vpc_security_group_ids = [aws_security_group.naitik2_public_sg.id]
  db_subnet_group_name   = aws_db_subnet_group.default.name
}
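Once the database instance is available, its endpoint and status can be confirmed with the AWS CLI. This is a sketch of my own, assuming the same naitik2 profile and ap-south-1 region configured in Step 1:

```shell
# Query the endpoint address and status of each RDS instance in the account
aws rds describe-db-instances \
    --profile naitik2 \
    --region ap-south-1 \
    --query "DBInstances[].{Endpoint:Endpoint.Address,Status:DBInstanceStatus}" \
    --output table
```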
Step 7: Now that everything is set, we're ready to launch our WordPress server on top of the GKE cluster.
Terraform Code:
provider "kubernetes" {
  config_context_cluster = "gke_${google_container_cluster.gcp_cluster.project}_${google_container_cluster.gcp_cluster.location}_${google_container_cluster.gcp_cluster.name}"
}

resource "kubernetes_service" "k8s" {
  depends_on = [aws_db_instance.wp_db, google_container_cluster.gcp_cluster]
  metadata {
    name = "wp"
    labels = {
      env  = "test"
      name = "wp"
    }
  }
  spec {
    type = "LoadBalancer"
    selector = {
      app = "wp"
    }
    port {
      port        = 80
      target_port = 80
    }
  }
}

output "ip_add" {
  value = kubernetes_service.k8s.load_balancer_ingress[0].ip
}

resource "kubernetes_deployment" "wp_deploy" {
  depends_on = [aws_db_instance.wp_db, google_container_cluster.gcp_cluster]
  metadata {
    name = "wp-deploy"
    labels = {
      name = "wp-deploy"
      app  = "wp"
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "wp"
      }
    }
    template {
      metadata {
        name = "wp-deploy"
        labels = {
          app = "wp"
        }
      }
      spec {
        container {
          name  = "wp"
          image = "wordpress"
          // Point WordPress at the RDS instance through environment variables
          env {
            name  = "WORDPRESS_DB_HOST"
            value = aws_db_instance.wp_db.address
          }
          env {
            name  = "WORDPRESS_DB_USER"
            value = aws_db_instance.wp_db.username
          }
          env {
            name  = "WORDPRESS_DB_PASSWORD"
            value = aws_db_instance.wp_db.password
          }
          env {
            name  = "WORDPRESS_DB_NAME"
            value = aws_db_instance.wp_db.name
          }
        }
      }
    }
  }
}

// Open the WordPress site in the browser (Windows-specific command)
resource "null_resource" "open_wordpress" {
  provisioner "local-exec" {
    command = "start chrome ${kubernetes_service.k8s.load_balancer_ingress[0].ip}"
  }
}
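If the browser does not open automatically (for example, on a machine without Chrome or outside Windows), the load balancer's public IP can be fetched manually. This is a suggested check of mine using standard kubectl and terraform commands:

```shell
# Read the external IP assigned to the LoadBalancer service by Google Cloud
kubectl get service wp

# Or read the Terraform output declared above
terraform output ip_add
```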
Step 8: Now that we have written the Terraform code for everything, run the terraform init command from your terminal. I use Visual Studio Code for writing Terraform code, so I will run it from the Visual Studio Code terminal.
As you can see, it shows Terraform has been successfully initialized.
Finally, we run terraform apply --auto-approve. This executes the code against both the AWS and GCP clouds and provisions all of the resources listed above.
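Put together, the whole workflow from this project's directory looks like the following. The validate and plan steps are optional checks I recommend before apply; they are not mentioned in the steps above:

```shell
terraform init                    # download the google, aws, and kubernetes providers
terraform validate                # check the syntax of the .tf files
terraform plan                    # preview the resources that will be created
terraform apply --auto-approve    # build everything on GCP and AWS
terraform destroy --auto-approve  # tear everything down when finished
```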
Finally, in our web browser, we can see the Terraform local-exec provisioner open up our WordPress site.
Congratulations, fellas! We've actually done it. Using Google Cloud's Google Kubernetes Engine together with AWS's Amazon Relational Database Service, we finally deployed our WordPress platform on a multi-cloud framework.
Also check out my website, which I just hosted; my LinkedIn profile didn't do justice to the knowledge I have gained thanks to Vimal Daga Sir.
Any suggestions are welcome. Thank You!