Setting up Kuma Service Mesh across Kubernetes and VMs
Introduction
In the modern cloud-native landscape, managing applications across multiple zones in a scalable and reliable manner is crucial for ensuring high performance and availability. Organizations often leverage service mesh technologies to enhance traffic management, observability, and security. This guide walks you through setting up a Kuma multi-zone mesh that spans a Kubernetes zone and a Universal (VM) zone, covering everything from prerequisites to testing the multi-zone deployment.
Prerequisites:
Before we begin, ensure the following prerequisites are met:
Kumactl Installation:
# make sure you're in your home directory
curl -L https://kuma.io/installer.sh | VERSION=2.3.0 sh -
echo 'export PATH=$HOME/kuma-2.3.0/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
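To confirm the installation, you can check the version that kumactl reports (it should print 2.3.0):
# verify kumactl is on your PATH
kumactl version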
With kumactl installed, we can set up the multi-zone deployment step by step:
Step 1: Set up the global control plane
The global control plane coordinates and synchronizes the configuration between different control plane instances across zones. It ensures consistent policies, traffic routing, and service discovery across the entire multi-zone deployment.
To get started, we will deploy Kuma global control plane on our global-cp Virtual Machine.
# Generate TLS certificates with the global-cp public IP
kumactl generate tls-certificate \
--type=server \
--hostname=< global-cp VM PublicIP > \
--cert-file=/tmp/tls.crt \
--key-file=/tmp/tls.key
cp /tmp/tls.crt /tmp/ca.crt
# Run kuma-cp in global mode
KUMA_MULTIZONE_GLOBAL_KDS_TLS_CERT_FILE=/tmp/tls.crt \
KUMA_MULTIZONE_GLOBAL_KDS_TLS_KEY_FILE=/tmp/tls.key \
KUMA_MODE=global \
kuma-cp run
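With kuma-cp running in the foreground, a quick sanity check from a second terminal on the same VM is to query the HTTP API, which listens on port 5681 by default:
# expect a JSON index describing the control plane
curl http://localhost:5681/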
Step 2: Access the Kuma user interface
Next, retrieve the public IP address of the Kuma global control plane virtual machine to access the Kuma GUI (http://< global-cp VM PublicIP >:5681/gui).
Note: Make sure your cloud firewall or security group allows inbound traffic to the control plane ports (5681 for the GUI/API, 5685 for KDS sync).
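For example, if the VM's firewall is managed with ufw, the control plane ports could be opened like this (if your provider uses security groups instead, add the equivalent rules there):
# allow the GUI/API port (5681) and the KDS sync port (5685)
sudo ufw allow 5681/tcp
sudo ufw allow 5685/tcp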
Step 3: Set up Zone Control-Planes — kumazone1 on K8s and kumazone2 on Virtual Machine
The control planes are responsible for managing and configuring the service mesh. In a multi-zone deployment, you can have multiple instances of the control plane running across different zones or regions. Each control plane instance operates independently but shares the same configuration and policies.
Make sure you have copied the ca.crt file from the global-cp VM to the zone VM.
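If SSH access is available between the machines, scp is a simple way to copy it (the username here is a placeholder for your environment):
# copy the CA certificate from the global-cp VM to the zone VM
scp /tmp/ca.crt ubuntu@< kumazone2 VM PublicIP >:/tmp/ca.crt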
-------------------------- zone-k8s -----------------------
# Install K3s to provide the Kubernetes environment
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="server --disable=traefik" sh -
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
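Before installing the zone control plane, you can confirm the K3s node is ready:
# the node should report STATUS Ready
kubectl get nodes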
# Create zone control plane kumazone1
kumactl install control-plane \
--mode=zone \
--zone=kumazone1 \
--ingress-enabled \
--kds-global-address grpcs://< global-cp VM PublicIP >:5685 \
--set controlPlane.tls.kdsZoneClient.skipVerify=true | kubectl apply -f -
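You can then watch the zone control plane and zone ingress pods come up:
# all pods in kuma-system should reach Running
kubectl get pods -n kuma-system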
------------------------- zone-vm ------------------------
# Create zone control plane kumazone2
KUMA_MULTIZONE_ZONE_KDS_ROOT_CA_FILE=/tmp/ca.crt \
KUMA_MODE=zone \
KUMA_MULTIZONE_ZONE_NAME=kumazone2 \
KUMA_MULTIZONE_ZONE_GLOBAL_ADDRESS=grpcs://< global-cp VM PublicIP >:5685 \
kuma-cp run
---------------------- global-cp ---------------------------
# On the global-cp terminal, generate a zone token once zone control plane connectivity has been verified
kumactl generate zone-token --zone="kumazone2" --scope egress --scope ingress --valid-for=720h > zone-vm-token
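Copy the token file to the kumazone2 VM, for example with scp (the username and destination path are placeholders):
# copy the zone token to the kumazone2 VM
scp zone-vm-token ubuntu@< kumazone2 VM PublicIP >:~/zone-vm-token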
# Update the default mesh to enable mTLS so we can test cross-zone communication.
echo "type: Mesh
name: default
mtls:
enabledBackend: ca-1
backends:
- name: ca-1
type: builtin
dpCert:
rotation:
expiration: 1d
conf:
caCert:
RSAbits: 2048
expiration: 10y " | kumactl apply -f -
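You can confirm the change was applied; the mesh listing should show mTLS enabled with the ca-1 backend:
# verify mTLS on the default mesh
kumactl get meshes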
Note: After executing the kumazone2 zone control plane command, kuma-cp will run in the foreground, enabling you to monitor its output and interact with it in real time.
Step 4: Verify control plane connectivity
Run kumactl get zones on the global control plane terminal, or check the list of zones in the web UI, to verify that the zone control planes (kumazone1 and kumazone2) are connected. When a zone control plane connects to the global control plane, a Zone resource is created automatically in the global control plane.
The Zone Ingress tab of the web UI also lists the zone control planes that you deployed with zone ingress enabled. You will also notice that Kuma automatically creates a mesh entity named default.
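The same checks are available from the CLI on the global-cp terminal:
# list connected zones and their zone ingresses
kumactl get zones
kumactl get zone-ingresses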
Step 5: Configure Data Plane with Kuma Mesh and deploy sample applications on Zone-CPs
kumazone1 CP:
-------------------------- zone-k8s -----------------------
# Create the all-in-one application resource file aio.yaml
cat > aio.yaml << "EOF"
apiVersion: v1
kind: Namespace
metadata:
  name: kuma-demo
  namespace: kuma-demo
  labels:
    kuma.io/sidecar-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-master
  namespace: kuma-demo
  labels:
    app: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: master
          image: kvn0218/postgres:latest
          env:
            - name: POSTGRES_USER
              value: kumademo
            - name: POSTGRES_PASSWORD
              value: kumademo
            - name: POSTGRES_DB
              value: kumademo
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 150m
              memory: 256Mi
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: kuma-demo
  labels:
    app: postgres
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  selector:
    app: postgres
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  namespace: kuma-demo
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
        - name: master
          image: kvn0218/kuma-redis
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 150m
              memory: 256Mi
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: kuma-demo
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: kuma-demo
  annotations:
    3001.service.kuma.io/protocol: "http"
spec:
  selector:
    app: kuma-demo-backend
  ports:
    - name: api
      port: 3001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuma-demo-backend-v0
  namespace: kuma-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kuma-demo-backend
      version: v0
      env: prod
  template:
    metadata:
      labels:
        app: kuma-demo-backend
        version: v0
        env: prod
    spec:
      containers:
        - image: kvn0218/kuma-demo-be:latest
          name: kuma-be
          env:
            - name: POSTGRES_HOST
              value: postgres_kuma-demo_svc_5432.mesh
            - name: POSTGRES_PORT_NUM
              value: "80"
            - name: SPECIAL_OFFER
              value: "false"
            - name: REDIS_HOST
              value: redis_kuma-demo_svc_6379.mesh
            - name: REDIS_PORT
              value: "80"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuma-demo-backend-v1
  namespace: kuma-demo
spec:
  replicas: 0
  selector:
    matchLabels:
      app: kuma-demo-backend
      version: v1
      env: intg
  template:
    metadata:
      labels:
        app: kuma-demo-backend
        version: v1
        env: intg
    spec:
      containers:
        - image: kvn0218/kuma-demo-be:latest
          name: kuma-be
          env:
            - name: POSTGRES_HOST
              value: postgres_kuma-demo_svc_5432.mesh
            - name: POSTGRES_PORT_NUM
              value: "80"
            - name: REDIS_HOST
              value: redis_kuma-demo_svc_6379.mesh
            - name: REDIS_PORT
              value: "80"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuma-demo-backend-v2
  namespace: kuma-demo
spec:
  replicas: 0
  selector:
    matchLabels:
      app: kuma-demo-backend
      version: v2
      env: dev
  template:
    metadata:
      labels:
        app: kuma-demo-backend
        version: v2
        env: dev
    spec:
      containers:
        - image: kvn0218/kuma-demo-be:latest
          name: kuma-be
          env:
            - name: POSTGRES_HOST
              value: postgres_kuma-demo_svc_5432.mesh
            - name: POSTGRES_PORT_NUM
              value: "80"
            - name: TOTAL_OFFER
              value: "2"
            - name: REDIS_HOST
              value: redis_kuma-demo_svc_6379.mesh
            - name: REDIS_PORT
              value: "80"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3001
EOF
# Create the resources by applying the aio.yaml file
kubectl apply -f aio.yaml
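Once applied, verify that the demo pods start and that each one gets a Kuma sidecar (pods should report 2/2 containers ready, since sidecar injection is enabled on the namespace):
# check the demo workloads in the kuma-demo namespace
kubectl get pods -n kuma-demo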
kumazone2 CP:
Make sure you have copied the zone token file "zone-vm-token" from the global-cp virtual machine.
-------------------------- zone-vm -----------------------
# Create the zone ingress resource file ingress.yaml
cat > ingress.yaml << "EOF"
type: ZoneIngress
name: ingress-vm
networking:
  address: < kumazone2 VM PrivateIP > # address that is routable within the zone
  port: 10001
  advertisedAddress: < kumazone2 VM PublicIP > # an address which other zones can use to consume this zone-ingress
  advertisedPort: 10001 # a port which other zones can use to consume this zone-ingress
admin:
  port: 9902
EOF
# Create Zone-ingress on kumazone2
sudo ./kuma-2.3.0/bin/kuma-dp run \
--proxy-type=ingress \
--cp-address=https://localhost:5678 \
--dataplane-token-file=zone-vm-token \
--dataplane-var name=`hostname -s` \
--dataplane-var address=< kumazone2 VM PrivateIP > \
--dataplane-file=ingress.yaml
# Make sure Docker is installed on the machine
sudo apt update && sudo apt install -y docker.io
# Run the frontend service on the kumazone2 VM (-P points at the backend through the data plane's outbound listener on port 8080)
sudo docker run --network host kvn0218/kuma-demo-fe:latest -P http://localhost:8080 -p 9080
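From another terminal on the VM, you can check that the frontend container is serving locally before wiring it into the mesh:
# expect an HTML response from the demo frontend
curl -s http://localhost:9080 | head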
# Create the data plane resource file dp.yaml for the frontend service
cat > dp.yaml << "EOF"
type: Dataplane
mesh: default
name: frontend
networking:
  advertisedAddress: {{ address }}
  address: {{ address }}
  inbound:
    - port: 80
      servicePort: 9080
      serviceAddress: {{ address }}
      tags:
        kuma.io/service: frontend
        kuma.io/protocol: http
      health:
        ready: true
  outbound:
    - port: 8080
      tags:
        kuma.io/service: backend_kuma-demo_svc_3001
EOF
# Create a data plane token for the frontend service
kumactl generate dataplane-token \
--name frontend \
--mesh default \
--tag kuma.io/service=frontend \
--valid-for 720h > dataplane-token
sudo ./kuma-2.3.0/bin/kuma-dp run --cp-address=https://localhost:5678 \
--dataplane-file=dp.yaml \
--dataplane-token-file=dataplane-token \
--dataplane-var name=`hostname -s` \
--dataplane-var address=< kumazone2 VM PrivateIP >
Note: The kumazone2 zone ingress, data plane, and kuma-demo frontend service all run in the foreground, so you can monitor their output and interact with them in real time. Use a separate terminal for each of these commands so you can watch their individual outputs simultaneously.
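Once everything is running, you can confirm from the global-cp terminal that the frontend data plane has registered in the mesh:
# list data planes in the default mesh
kumactl get dataplanes --mesh default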
# Access and verify the kuma-demo application user interface in a browser (make sure inbound port 9080 is open)
http://< kumazone2 VM PublicIP >:9080
Conclusion
By the end of this guide, you'll have a solid understanding of how to set up a Kuma multi-zone mesh across Kubernetes and Universal (VM) zones. You'll also know how to verify your setup and troubleshoot common issues that may arise. With this knowledge, you can leverage Kuma Mesh to deploy and manage applications across multiple zones, ensuring high availability and performance in your cloud-native environment.
About Zelar
Zelarsoft is a trusted partner specializing in Kong API Gateway solutions and cloud services. As an official Kong partner, we offer end-to-end consulting, implementation, and licensing services to help businesses maximize their API management capabilities. Our Kong licensing solutions ensure that organizations can leverage the full potential of Kong's enterprise-grade features, including enhanced security, traffic management, and performance optimization.
In addition to Kong's powerful API Gateway, we provide seamless integration with cloud platforms like Google Cloud and AWS, delivering cost-effective and scalable solutions. Our expertise ensures businesses can simplify their infrastructure, maintain compliance, and improve operational efficiency. Whether you're looking to secure your APIs, scale your services, or future-proof your IT environment, Zelarsoft offers tailored solutions that accelerate innovation and reduce complexity.
Schedule a complimentary consultation with Zelarsoft to assess your Kong API Gateway setup and optimize your API management strategy for enhanced security, scalability, and performance.
For more information: https://zelarsoft.com/
Email: [email protected]
Phone: 040-42021524 ; 510-262-2801