Working with Oracle Container Engine for Kubernetes (OKE), Microservices, and Docker


Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerised applications.

Run K8s Anywhere

Kubernetes is open source, giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, and letting you effortlessly move workloads to where they matter to you.

Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) is a managed Kubernetes service that simplifies the operations of enterprise-grade Kubernetes at scale. It reduces the time, cost, and effort needed to manage the complexities of the Kubernetes infrastructure. Container Engine for Kubernetes lets you deploy Kubernetes clusters and ensure reliable operations for both the control plane and the worker nodes with automatic scaling, upgrades, and security patching. Additionally, OKE provides a fully serverless Kubernetes experience with virtual nodes.


In this article, we will create an Oracle Container Engine for Kubernetes cluster with 3 nodes and work through core Kubernetes concepts such as namespaces, Pods, Deployments, high availability, and microservices. Along the way, we will view and monitor the logs of Pods, create an OKE deployment from a repository, expose a service through an Oracle Load Balancer, and create Tomcat and Nginx services on Kubernetes. We will also create a PHP and Redis Kubernetes service on OKE and access it through a load balancer, using the same code example of an application that runs on Google Kubernetes Engine (GKE), thereby demonstrating that Kubernetes follows the principle of write once, run anywhere, on any cloud service infrastructure.

Please note: if you are new to Kubernetes, check the end of this article for terms such as Pod, Deployment, Cluster, Node, ReplicaSet, and Controller.


Let's get started with Kubernetes.

Step 1: Create Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)

Assumption: you have administrative access over the given tenancy. If not, ask your tenancy admin to set up the required policies.

Creating an Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) cluster is very easy. Log in to cloud.oracle.com, and in the left navigation select Developer Services; then, under Containers and Artifacts, select Kubernetes Clusters.

Select Quick create, as this creates everything we need: the required network, subnets, routing rules, internet gateway, Kubernetes cluster, worker nodes, and node pool.

Provide a name for the cluster, leave the rest of the settings at their defaults, and click Create Cluster.

Choose the number of OCPUs, the amount of memory in GB, and an Oracle Linux 8 image for the worker nodes.

In the next few minutes, we can see that the Kubernetes nodes are created and ready.

Review the summary information once the OKE Cluster is Active.

Step 2: Access Kubernetes Cluster

In the left navigation you can see Quick Start; click on that, and then click Access Cluster.

This opens the Access Cluster dialog. Click Launch Cloud Shell, and then copy-paste the oci ce cluster create-kubeconfig command, as shown below.

-- Open Oracle Cloud Shell --
-- Copy this command from the Access Cluster dialog --

$ oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.phx.aaaaaXXX5cvpyemvicxq --file $HOME/.kube/config --region us-phoenix-1 --token-version 2.0.0  --kube-endpoint PUBLIC_ENDPOINT        

Great, now our OKE cloud infrastructure is available and ready for use.
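
Before moving on, you can confirm that kubectl in Cloud Shell can reach the new cluster. A quick check (node names and IP addresses will differ in your tenancy):

$ kubectl cluster-info
$ kubectl get nodes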


Step 3: List and create a new namespace

First, list the namespaces that exist by default:

$ kubectl get namespaces

-- Output --
NAME              STATUS   AGE
default           Active   19h
kube-node-lease   Active   19h
kube-public       Active   19h
kube-system       Active   19h        

You can create deployments under the default namespace; however, it is good practice to create your own namespaces, such as development and production.

To create your own namespace, first create a YAML file containing the name of the namespace. In this case, let us create a namespace called developmentns.

vi namespace.yaml        
---
apiVersion: v1
kind: Namespace
metadata:
  name: developmentns        

If you prefer a graphical editor, you can also use the built-in cloud Code Editor, as shown below.

Back in Cloud Shell, kubectl apply with the namespace file creates the new namespace, as shown below.

$ kubectl apply -f namespace.yaml
-- Output --
namespace/developmentns created        
List the namespaces:
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   19h
developmentns     Active   3m44s
kube-node-lease   Active   19h
kube-public       Active   19h
kube-system       Active   19h        
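
As an aside, you can also skip the YAML file and create a namespace imperatively. The declarative approach above is preferable when manifests live in version control, but the following one-liner would have had the same effect:

$ kubectl create namespace developmentns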

Step 4: Creating the first Pod Deployment

Create a YAML Deployment file for a Pod with 3 replicas. This helps with high availability: if one of the Pods goes down, another is automatically created.

We can find many images at hub.docker.com; here is an example hello-world image. Save the following Deployment manifest as first-deployment.yaml.

--- 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-pod-info-deployment
  namespace: developmentns
  labels:
    app: first-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: first-pod
  template:
    metadata:
      labels:
        app: first-pod
    spec:
      containers:
      - name: first-pod-container
        image: testcontainers/helloworld:latest
        ports:
        - containerPort: 3000
        env:
          - name: FIRST_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: FIRST_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: FIRST_POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
        

Apply your first Deployment, and then check the deployments under the given namespace.

$ kubectl apply -f first-deployment.yaml
deployment.apps/first-pod-info-deployment created        

List the deployments with kubectl get deployments -n <namespace-name>:

$ kubectl get deployments -n developmentns

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
first-pod-info-deployment   3/3     3            3           18s        

List the Pods with kubectl get pods -n <namespace-name>:

$ kubectl get pods -n developmentns

NAME                                         READY   STATUS    RESTARTS   AGE
first-pod-info-deployment-65d89d55fb-5qzlm   1/1     Running   0          7m20s
first-pod-info-deployment-65d89d55fb-ff4m5   1/1     Running   0          7m20s
first-pod-info-deployment-65d89d55fb-qhkhs   1/1     Running   0          7m20s        
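
The env section of our manifest uses the Kubernetes Downward API (fieldRef) to inject the Pod's name, namespace, and IP address as environment variables. As a quick sanity check, you can print them from inside one of the running containers (substitute one of your own Pod names):

$ kubectl exec first-pod-info-deployment-65d89d55fb-5qzlm -n developmentns -- env | grep FIRST_POD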

Step 5: How to check the health of a given Pod?

kubectl describe pod <pod-name> -n <namespace-name>

$ kubectl describe pod first-pod-info-deployment-65d89d55fb-5qzlm -n developmentns

-- Output will be as shown --
Name:             first-pod-info-deployment-65d89d55fb-5qzlm
Namespace:        developmentns
Priority:         0
Service Account:  default
Node:             10.0.10.175/10.0.10.175
Start Time:       Sat, 16 Mar 2024 05:47:35 +0000
Labels:           app=first-pod
                  pod-template-hash=65d89d55fb
Annotations:      <none>
Status:           Running
IP:               10.0.10.200
IPs:
  IP:           10.0.10.200
Controlled By:  ReplicaSet/first-pod-info-deployment-65d89d55fb
Containers:
  first-pod-container:
    Container ID:   cri-o://66e208e40b69e152800f00e01a33f0907da4208db26ede9cef0155b82bc8f8ba
    Image:          testcontainers/helloworld:latest
    Image ID:       6974669be52b12a9103072cbad3e13fbf119b76aa09747f19a821a5eaad34be1
    Port:           3000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 16 Mar 2024 05:47:39 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      FIRST_POD_NAME:       first-pod-info-deployment-65d89d55fb-
 --- * ---                            node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned developmentns/first-pod-info-deployment-65d89d55fb-5qzlm to 10.0.10.175
  Normal  Pulling    13m   kubelet            Pulling image "testcontainers/helloworld:latest"
  Normal  Pulled     13m   kubelet            Successfully pulled image "testcontainers/helloworld:latest" in 2.742s (2.742s including waiting)
  Normal  Created    13m   kubelet            Created container first-pod-container
  Normal  Started    13m   kubelet            Started container first-pod-container        
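
To view and monitor the logs of a Pod, use kubectl logs; the -f flag streams new log lines as they arrive (again, substitute your own Pod name):

$ kubectl logs first-pod-info-deployment-65d89d55fb-5qzlm -n developmentns
$ kubectl logs -f first-pod-info-deployment-65d89d55fb-5qzlm -n developmentns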

Step 6: How to get the IP address of each of these Pods?

kubectl get pods -n <Namespace name> -o wide

$ kubectl get pods -n developmentns -o wide

NAME                                         READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
first-pod-info-deployment-65d89d55fb-5qzlm   1/1     Running   0          24m   10.0.10.200   10.0.10.175   <none>           <none>
first-pod-info-deployment-65d89d55fb-ff4m5   1/1     Running   0          24m   10.0.10.186   10.0.10.226   <none>           <none>
first-pod-info-deployment-65d89d55fb-qhkhs   1/1     Running   0          24m   10.0.10.93    10.0.10.27    <none>           <none>        

The output now contains the IP address of each of the running Pods.


Deploy a simple Nginx application

Next, let us deploy a simple Nginx application and expose it through a load balancer. Start by running a single Pod from the nginx image (note that kubectl run creates a bare Pod, not a Deployment):

$ kubectl run nginx  --image=nginx --port=80
pod/nginx created

-- View deployments

$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           23h

Note that the nginx Pod we just created does not appear here, because a bare Pod is not managed by a Deployment; nginx-deployment is a separate two-replica Deployment that was created earlier.
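
For reference, a Deployment like nginx-deployment can also be created with a single imperative command; a sketch (the name and replica count here are illustrative):

$ kubectl create deployment nginx-deployment --image=nginx --replicas=2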

-- Get IP address of PODs

$ kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
nginx                               1/1     Running   0          48s   10.0.10.157   10.0.10.226   <none>           <none>
nginx-deployment-86dcfdf4c6-c9vs6   1/1     Running   0          23h   10.0.10.187   10.0.10.226   <none>           <none>
nginx-deployment-86dcfdf4c6-zdfv8   1/1     Running   0          23h   10.0.10.111   10.0.10.175   <none>           <none>        

Step 7: Expose the Pod through a Load Balancer

$ kubectl expose pod nginx --port=80 --type=LoadBalancer

service/nginx exposed

$ kubectl get services
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP      10.96.0.1    <none>        443/TCP,12250/TCP   24h
nginx        LoadBalancer   10.96.X.Y    129.146.X.Y   80:30549/TCP        5m18s
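
The kubectl expose command above is shorthand for a Service manifest. A roughly equivalent declarative version (a sketch, relying on the run=nginx label that kubectl run puts on the Pod) would be:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80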

This starts creating a load balancer. Once it is Active, make a note of the external IP address; we will use that IP address in Step 9 to access the application.

Step 8: Describe services

We can use kubectl describe services to check the IP of the load balancer and get more details about the service.

kubectl describe services

madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl describe services

Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.0.1
IPs:               10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         10.0.0.5:6443
Port:              proxymux  12250/TCP
TargetPort:        12250/TCP
Endpoints:         10.0.0.5:12250
Session Affinity:  None
Events:            <none>


Name:                     nginx
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.X.210
IPs:                      10.96.X.210
LoadBalancer Ingress:     129.146.X.Y
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30549/TCP
Endpoints:                10.0.10.157:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type     Reason                  Age                    From                Message
  ----     ------                  ----                   ----                -------
  Normal   EnsuringLoadBalancer    7m58s                  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  ...                    service-controller  Error syncing load balancer: failed to ensure load balancer: An operation for the lb: default/nginx already exists.
  Normal   EnsuredLoadBalancer     7m26s                  service-controller  Ensured load balancer
  Normal   EnsuringLoadBalancer    7m23s (x4 over 7m58s)  service-controller  Ensuring load balancer
  Normal   EnsuredLoadBalancer     7m22s                  service-controller  Ensured load balancer
madhusudha@cloudshell:~ (us-phoenix-1)$ kubectl describe services/nginx
Name:                     nginx
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.253.210
IPs:                      10.96.253.210
LoadBalancer Ingress:     129.146.X.Y
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30549/TCP
Endpoints:                10.0.10.157:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type     Reason                  Age                    From                Message
  ----     ------                  ----                   ----                -------
  Normal   EnsuringLoadBalancer    8m12s                  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  7m57s (x3 over 8m12s)  service-controller  Error syncing load balancer: failed to ensure load balancer: An operation for the lb: default/nginx already exists.
  Normal   EnsuredLoadBalancer     7m40s                  service-controller  Ensured load balancer
  Normal   EnsuringLoadBalancer    7m37s (x4 over 8m12s)  service-controller  Ensuring load balancer
  Normal   EnsuredLoadBalancer     7m36s                  service-controller  Ensured load balancer        

Step 9: Access the Application through the Load Balancer

From the browser, access the load balancer through its public IP address; we should see the default Nginx welcome page.
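
You can also verify from Cloud Shell with curl, substituting the EXTERNAL-IP reported by kubectl get services (the address below is the redacted placeholder from the output above):

$ curl -I http://129.146.X.Y/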

Step 10: Deleting a service

kubectl delete service nginx

madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl delete service nginx

service "nginx" deleted        

Step 11: High Availability

What happens when a Pod gets deleted or becomes unavailable?

Answer: a new Pod is automatically re-created by the ReplicaSet. Here is an example, using a second deployment (second-pod-info-deployment) created in the same way as the first:

madhusudha@cloudshell:~ (us-phoenix-1)$ kubectl get pods -o wide -n developmentns
NAME                                         READY   STATUS    RESTARTS   AGE     IP            NODE          NOMINATED NODE   READINESS GATES 
second-pod-info-deployment-9fd7b49f9-4q78g   1/1     Running   0          6h19m   10.0.10.22    10.0.10.226   <none>           <none>
second-pod-info-deployment-9fd7b49f9-blph8   1/1     Running   0          43s     10.0.10.20    10.0.10.175   <none>           <none>
second-pod-info-deployment-9fd7b49f9-m55k7   1/1     Running   0          6h19m   10.0.10.117   10.0.10.27    <none>           <none>

$ kubectl delete pod second-pod-info-deployment-9fd7b49f9-4q78g -n developmentns

pod "second-pod-info-deployment-9fd7b49f9-4q78g" deleted
         

The ReplicaSet immediately starts re-creating a replacement Pod. This supports high availability: when one of the Pods goes down, the other Pods remain available to keep serving the incoming traffic.

madhusudha@cloudshell:~ (us-phoenix-1)$ kubectl get pods -o wide -n developmentns
NAME                                         READY   STATUS    RESTARTS   AGE     IP            NODE          NOMINATED NODE   READINESS GATES 
second-pod-info-deployment-9fd7b49f9-blph8   1/1     Running   0          87s     10.0.10.20    10.0.10.175   <none>           <none>
second-pod-info-deployment-9fd7b49f9-hbtrc   1/1     Running   0          7s      10.0.10.10    10.0.10.226   <none>           <none>
second-pod-info-deployment-9fd7b49f9-m55k7   1/1     Running   0          6h20m   10.0.10.117   10.0.10.27    <none>           <none>        
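
To watch the replacement Pod appear in real time, add the -w (watch) flag to kubectl get pods:

$ kubectl get pods -n developmentns -w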

Step 12: Deploying a Tomcat Application on Kubernetes

Create a tomcat.yaml file as shown below. It spins up Tomcat version 9 with 3 replicas, mounts a ConfigMap named app-bundle into Tomcat's webapps directory, and exposes the Deployment through a LoadBalancer Service on port 80.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: tomcat:9
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: app-volume
              mountPath: /usr/local/tomcat/webapps/
      volumes:
        - name: app-volume
          configMap:
            name: app-bundle
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  ports:
  - port: 80
    name: http
    targetPort: 8080
  selector:
    app: tomcat
  type: LoadBalancer        

Save and apply the tomcat.yaml file. (Note that the Pods will not become ready until the app-bundle ConfigMap referenced by the volume is created, which we do in the next step.)

madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl apply -f tomcat.yaml
deployment.apps/tomcat created
service/tomcat created        

Download the Tomcat sample WAR file:

madhusudha@cloudshell:~ (us-phoenix-1)
$ wget https://tomcat.apache.org/tomcat-9.0-doc/appdev/sample/sample.war
--2024-03-17 04:59:18--  https://tomcat.apache.org/tomcat-9.0-doc/appdev/sample/sample.war
Resolving tomcat.apache.org (tomcat.apache.org)... 151.101.2.132, 2a04:4e42::644
Connecting to tomcat.apache.org  
Saving to: ‘sample.war’

100%[===== =====>] 4,606       --.-K/s   in 0.001s  

2024-03-17 04:59:18 (8.51 MB/s) - ‘sample.war’ saved [4606/4606]
        

Create a ConfigMap from the sample WAR file:

madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl create configmap app-bundle --from-file sample.war
configmap/app-bundle created        
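
Optionally, confirm that the ConfigMap holds the WAR file and wait for the Tomcat rollout to complete:

$ kubectl describe configmap app-bundle
$ kubectl rollout status deployment/tomcat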

Get the deployments and services:

madhusudha@cloudshell:~ (us-phoenix-1)$ kubectl get deploy,svc
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat             1/3     3            1           40s

NAME                                TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)             AGE
service/tomcat                      LoadBalancer   10.96.64.141    129.146.X.Y    80:32464/TCP        39s        

Access the Tomcat application at the load balancer's public IP address.

Congrats, our Tomcat application is now available.
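
Since Tomcat auto-deploys webapps/sample.war under the /sample context path, a quick check from Cloud Shell (substituting your load balancer's EXTERNAL-IP) would look like:

$ curl -I http://129.146.X.Y/sample/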

Now let us delete our application to save resources; this also terminates the load balancer that was created for Tomcat.

madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl delete -f tomcat.yaml
deployment.apps "tomcat" deleted
service "tomcat" deleted        

Optional Learnings

Deploying a PHP Guestbook application with Redis

This tutorial shows you how to build and deploy a simple (not production ready), multi-tier web application using Kubernetes and Docker. The guestbook application uses Redis to store its data. This example consists of the following components:

  • A single-instance Redis to store guestbook entries
  • Multiple web frontend instances

Objectives

  • Step 13: Start up a Redis leader.
  • Step 14: Create Redis leader service
  • Step 15: Start up two Redis followers.
  • Step 16: Create Redis follower service
  • Step 17: Start up the guestbook frontend.
  • Step 18: Create frontend service.
  • Step 19: Expose and view the Frontend Service.
  • Clean up.

Step 13: Creating the Redis Deployment

The manifest file, included below, specifies a Deployment controller that runs a single replica Redis Pod.

redis-leader-deployment.yaml

The following manifests are Google Cloud source code examples from the GKE guestbook tutorial (see the SOURCE link in each file).

# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: "docker.io/redis:6.0.5"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379        

Apply the Redis Deployment from the redis-leader-deployment.yaml file:

madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml

deployment.apps/redis-leader created        

Query the list of Pods to verify that the Redis Pod is running:

madhusudha@cloudshell:~ (us-phoenix-1)$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          29m
nginx-deployment-86dcfdf4c6-c9vs6   1/1     Running   0          24h
nginx-deployment-86dcfdf4c6-zdfv8   1/1     Running   0          24h
redis-leader-6cc46676d8-89q2p       1/1     Running   0          38s        

Run the following command to view the logs from the Redis leader Pod:

madhusudha@cloudshell:~ (us-phoenix-1)$ kubectl logs -f deployment/redis-leader
1:C 16 Mar 2024 09:01:14.070 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 16 Mar 2024 09:01:14.070 # Redis version=6.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 16 Mar 2024 09:01:14.070 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 16 Mar 2024 09:01:14.071 * Running mode=standalone, port=6379.
1:M 16 Mar 2024 09:01:14.071 # Server initialized
1:M 16 Mar 2024 09:01:14.071 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 16 Mar 2024 09:01:14.071 * Ready to accept connections        

Step 14: Creating the Redis leader Service

The guestbook application needs to communicate with Redis to write its data. You need to apply a Service to proxy the traffic to the Redis Pod. A Service defines a policy to access the Pods.

redis-leader-service.yaml

# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: leader
    tier: backend        

Apply the Redis Service from redis-leader-service.yaml:

madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml

service/redis-leader created        

Query the list of Services to verify that the Redis Service is running:
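
For example (the Service should be listed with a ClusterIP and port 6379):

$ kubectl get service redis-leader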

Step 15: Set up Redis followers

Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas.

redis-follower-deployment.yaml

# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: follower
        tier: backend
    spec:
      containers:
      - name: follower
        image: us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379        

Apply the Redis Deployment from redis-follower-deployment.yaml:

madhusudha@cloudshell:~ (us-phoenix-1)$ kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml

deployment.apps/redis-follower created        
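
Query the list of Pods to verify that the two follower replicas are running; the role=follower label from the Pod template makes them easy to filter:

$ kubectl get pods -l role=follower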

Step 16: Creating the Redis follower service

The guestbook application needs to communicate with the Redis followers to read data. To make the Redis followers discoverable, you must set up another Service.

redis-follower-service.yaml

# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
  selector:
    app: redis
    role: follower
    tier: backend        

Apply the Redis Service from redis-follower-service.yaml:

$ kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml

service/redis-follower created

 
$ kubectl get service
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
kubernetes     ClusterIP      10.96.0.1     <none>        443/TCP,12250/TCP   24h
nginx          LoadBalancer   10.96.X.Y     129.153.X.Y   8080:31124/TCP      13m
redis-leader   ClusterIP      10.96.18.61   <none>        6379/TCP            43s

Step 17: Set up and Expose the Guestbook Frontend

Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.

The guestbook app uses a PHP frontend. It is configured to communicate with either the Redis follower or leader Services, depending on whether the request is a read or a write. The frontend exposes a JSON interface, and serves a jQuery-Ajax-based UX.


frontend-deployment.yaml

# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
        app: guestbook
        tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5
        env:
        - name: GET_HOSTS_FROM
          value: "dns"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80        

Apply the frontend Deployment from frontend-deployment.yaml:

$ kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml

deployment.apps/frontend created        

Step 18: Creating the Frontend Service

The Redis Services you applied are only accessible within the Kubernetes cluster, because the default type for a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods the Service points to; this IP address is accessible only within the cluster.

frontend-service.yaml

# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
    # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend        
Apply the frontend Service from frontend-service.yaml:

$ kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml

service/frontend created        

Query the list of Services to verify that the frontend Service is running:

madhusudha@cloudshell:~ (us-phoenix-1)$ kubectl get services
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)             AGE
frontend         ClusterIP      10.96.46.237    <none>           80/TCP              25s
kubernetes       ClusterIP      10.96.0.1       <none>           443/TCP,12250/TCP   24h
nginx            LoadBalancer   10.96.178.100   129.X.Y.44   8080:31124/TCP      17m
redis-follower   ClusterIP      10.96.95.111    <none>           6379/TCP            2m20s
redis-leader     ClusterIP      10.96.18.61     <none>           6379/TCP            5m6s        
madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl get service frontend
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
frontend   ClusterIP   10.96.46.X   <none>        80/TCP    114s        

 
madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          33m
nginx-deployment-86dcfdf4c6-c9vs6   1/1     Running   0          24h
nginx-deployment-86dcfdf4c6-zdfv8   1/1     Running   0          24h
redis-follower-7dddf7c979-h98nf     1/1     Running   0          29s
redis-follower-7dddf7c979-mbtz8     1/1     Running   0          29s
redis-leader-6cc46676d8-89q2p       1/1     Running   0          4m35s

 
madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl get pods -l app=guestbook -l tier=frontend
NAME                        READY   STATUS    RESTARTS   AGE
frontend-795b566649-45j4d   1/1     Running   0          41s
frontend-795b566649-lhkx6   1/1     Running   0          41s
frontend-795b566649-qzr5w   1/1     Running   0          41s        

Step 19: Expose the service through a Load Balancer

madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl expose pod frontend-795b566649-45j4d --port=80 --type=LoadBalancer

service/frontend-795b566649-45j4d exposed        
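
Note that exposing a single Pod ties the Service to that one replica. In practice you would usually expose the Deployment instead, so the load balancer spreads traffic across all frontend replicas; a sketch (--name avoids clashing with the existing frontend ClusterIP Service):

$ kubectl expose deployment frontend --port=80 --type=LoadBalancer --name=frontend-lb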

The expose command starts creating a load balancer; once it is Active, check for the public IP address.

From your web browser, access the public IP address to get into the Guestbook application.


Step 20: High Availability

Scale the deployment by setting the desired number of replicas:

madhusudha@cloudshell:~ (us-phoenix-1)
$ kubectl scale deployment frontend --replicas=3

deployment.apps/frontend scaled        

View the new frontend replicas running:

madhusudha@cloudshell:~ (us-phoenix-1)$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
frontend-795b566649-45j4d           1/1     Running   0          10m
frontend-795b566649-lhkx6           1/1     Running   0          10m
frontend-795b566649-qzr5w           1/1     Running   0          10m
nginx                               1/1     Running   0          45m
nginx-deployment-86dcfdf4c6-c9vs6   1/1     Running   0          24h
nginx-deployment-86dcfdf4c6-zdfv8   1/1     Running   0          24h
redis-follower-7dddf7c979-h98nf     1/1     Running   0          12m
redis-follower-7dddf7c979-mbtz8     1/1     Running   0          12m
redis-leader-6cc46676d8-89q2p       1/1     Running   0          16m        
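
Instead of scaling manually, you can also let Kubernetes scale the frontend on CPU load with a HorizontalPodAutoscaler; a sketch, assuming the metrics server is available in the cluster:

$ kubectl autoscale deployment frontend --min=3 --max=6 --cpu-percent=80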

Clean up: run the following commands to delete all Pods, Deployments, and Services.

kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment frontend
kubectl delete service frontend

The response should look similar to this:

deployment.apps "redis-follower" deleted
deployment.apps "redis-leader" deleted
deployment.apps "frontend" deleted
service "frontend" deleted

Finally, query the list of Pods (kubectl get pods) to verify that no guestbook Pods are still running.

Optional: Google Kubernetes Engine (GKE)

Most of the above steps remain the same on GKE; the final step would be:

kubectl get service frontend

-- Output --
NAME       CLUSTER-IP      EXTERNAL-IP        PORT(S)        AGE
frontend   10.51.X.136   109.197.X.Y     80:32372/TCP   1m
        

Kubernetes Terminology

Pod: Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context.

ReplicaSet: Its purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

Cluster: A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

Controller: In Kubernetes, controllers are control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state.

Deployment: Provides declarative updates for Pods and ReplicaSets.

You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

Service: Expose an application running in your cluster behind a single outward-facing endpoint, even when the workload is split across multiple backends.

Ingress: Make your HTTP (or HTTPS) network service available using a protocol-aware configuration mechanism, that understands web concepts like URIs, hostnames, paths, and more. The Ingress concept lets you map traffic to different backends based on rules you define via the Kubernetes API.

Containers: Each container that you run is repeatable; the standardization from having dependencies included means that you get the same behavior wherever you run it. Containers decouple applications from the underlying host infrastructure. This makes deployment easier in different cloud or OS environments. Containers in a Pod are co-located and co-scheduled to run on the same node.

Nodes: Kubernetes runs your workload by placing containers into Pods to run on nodes. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods. Typically you have several nodes in a cluster; in a resource-limited environment, you might have only one.

(Diagram: what a Kubernetes node is really made up of.)

ImagePullBackOff: If we see POD status as ImagePullBackOff rather than Running then there is an error. The ImagePullBackOff error is a common error message in Kubernetes that occurs when a container running in a pod fails to pull the required image from a container registry. When a pod is created, Kubernetes attempts to pull the container image specified in the pod definition from the container registry. If the image is not available or cannot be pulled, Kubernetes marks the pod as “ImagePullBackOff” and stops attempting to pull the image. Thus, the pod will not be able to start and will remain in a pending state.


Thanks for reading, liking, sharing, and reposting. Have a great day!

Regards, Madhusudhan Rao

