OpenShift 4.X Operators - Kubernetes API Fundamentals
For the next few weeks, starting this week, I'll be writing about Operators - software that installs and manages Kubernetes applications. Before diving into the Operator Framework, this article gives an overview of Kubernetes API fundamentals, taking a completely hands-on approach (no theory, sorry!).
Getting started and prerequisites
I assume you have access to an OpenShift 4.X cluster. If not, give OpenShift Playground a try! Log in to your cluster with cluster-admin access.
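The examples below create everything in a project (namespace) called myproject. If it doesn't already exist on your cluster, create it first - a quick step, assuming your user is allowed to create projects:
oc new-project myproject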
The Kubernetes API Server is the central management entity and provides the following core functionality:
- Serves the Kubernetes API, used cluster-internally by the worker nodes as well as externally by kubectl
- Proxies cluster components such as the Kubernetes UI
- Allows the manipulation of the state of objects, for example pods and services
- Persists the state of objects in a distributed storage (etcd)
The diagram above shows the architecture of the Kubernetes API Server. I'm using oc, OpenShift's CLI, to talk to the OpenShift Master API Server, just as kubectl talks to the Kubernetes API Server.
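If you'd like to confirm which API server oc is pointed at, and issue a raw request against it, here's a quick sketch (oc get --raw sends a plain GET to the given API path):
oc whoami --show-server
oc get --raw /api/v1 | head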
Getting friendly with manifests and YAMLs
Whether you're developing on or administering Kubernetes/OpenShift, you'll be working with a lot of YAMLs. In this part, we'll be creating a multi-container pod using a pod manifest file.
Create a new file called pod-multi-container.yaml and copy the following contents into it:
apiVersion: v1
kind: Pod
metadata:
  name: my-two-container-pod
  namespace: myproject
  labels:
    environment: dev
spec:
  containers:
    - name: server
      image: nginx:1.13-alpine
      ports:
        - containerPort: 80
          protocol: TCP
    - name: side-car
      image: alpine:latest
      command: ["/usr/bin/tail", "-f", "/dev/null"]
  restartPolicy: Never
Create the pod by specifying the manifest:
oc create -f pod-multi-container.yaml
View the details for the pod and look at the events:
oc describe pod my-two-container-pod
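If the describe output is too long, one way to narrow things down is to ask for just this pod's events with a field selector - a sketch, filtering on the Event object's involvedObject.name field:
oc get events -n myproject --field-selector involvedObject.name=my-two-container-pod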
The two containers that were created are a server and a side-car. Let's first start a shell session inside the server container by using the -c flag:
oc exec -it my-two-container-pod -c server -- /bin/sh
Run some commands inside the server container:
ip address
netstat -ntlp
hostname
ps
exit
You can execute similar commands inside the side-car container (try it yourself).
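For example, the same pattern with the -c flag drops you into the side-car container instead - a minimal sketch:
oc exec -it my-two-container-pod -c side-car -- /bin/sh
hostname
ps
exit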
Basic Operations with the Kubernetes API
Verify the currently available Kubernetes API versions on OpenShift:
oc api-versions
# You can also get the same output from kubectl
kubectl api-versions
Use the --v flag to set the verbosity level. At higher levels, you can see the requests and responses sent to the Kubernetes API:
oc get pods --v=8
Use the oc proxy command to proxy local requests on port 8001 to the Kubernetes API:
oc proxy --port=8001
Open another terminal and send a GET request to the Kubernetes API using curl:
curl -X GET http://localhost:8001
You can also explore the OpenAPI definition file to see the complete API details (this will yield a long output):
curl localhost:8001/openapi/v2
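The OpenAPI document is very large, so it helps to filter it. Assuming jq is installed (it's used later in this article), here's one rough way to peek at the top-level info and count the available paths:
curl -s http://localhost:8001/openapi/v2 | jq '.info'
curl -s http://localhost:8001/openapi/v2 | jq '.paths | keys | length'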
Send a GET request to list all pods in the environment:
curl -X GET http://localhost:8001/api/v1/pods
Delete the pod by sending a DELETE request:
curl -X DELETE http://localhost:8001/api/v1/namespaces/myproject/pods/my-two-container-pod
Verify the pod is in Terminating status:
oc get pods
Verify the pod no longer exists:
curl -X GET http://localhost:8001/api/v1/namespaces/myproject/pods/my-two-container-pod
Replica Sets
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time.
Get a list of all pods in the myproject Namespace:
oc get pods -n myproject
Create a ReplicaSet object manifest file called replica-set.yaml and copy the following contents:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myfirstreplicaset
  namespace: myproject
spec:
  selector:
    matchLabels:
      app: myfirstapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myfirstapp
    spec:
      containers:
        - name: nodejs
          image: openshiftkatacoda/blog-django-py
Create the ReplicaSet:
oc apply -f replica-set.yaml
In a new terminal window, watch all pods that match the label app=myfirstapp:
oc get pods -l app=myfirstapp --show-labels -w
Delete the pods and watch new ones spawn:
oc delete pod -l app=myfirstapp
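You can also check how the ReplicaSet controller reconciles back to the desired replica count - a quick look at its status and events:
oc get replicaset myfirstreplicaset
oc describe replicaset myfirstreplicaset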
Imperatively scale the ReplicaSet to 6 replicas:
oc scale replicaset myfirstreplicaset --replicas=6
The oc scale command works against the /scale subresource, which you can also query directly:
curl -X GET http://localhost:8001/apis/apps/v1/namespaces/myproject/replicasets/myfirstreplicaset/scale
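Under the hood, oc scale updates this same subresource. As an illustration only (assuming the proxy from earlier is still running), here's a sketch of scaling through the API with a JSON merge patch:
curl -X PATCH http://localhost:8001/apis/apps/v1/namespaces/myproject/replicasets/myfirstreplicaset/scale -H "Content-Type: application/merge-patch+json" -d '{"spec": {"replicas": 4}}'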
Deployments
You can also create the manifest directly from the command line, as shown below. While we don't cover Finalizers in depth in this article, you can read more about them in the Kubernetes documentation. Create a manifest for a Deployment with a Finalizer:
cat > finalizer-test.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: finalizer-test
  namespace: myproject
  labels:
    app: finalizer-test
  finalizers:
    - finalizer.extensions/v1beta1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: finalizer-test
  template:
    metadata:
      labels:
        app: finalizer-test
    spec:
      containers:
        - name: hieveryone
          image: openshiftkatacoda/blog-django-py
          imagePullPolicy: Always
          ports:
            - name: helloworldport
              containerPort: 8080
EOF
Create the Deployment:
oc create -f finalizer-test.yaml
Verify the Deployment has been created:
oc get deploy
Verify the ReplicaSet has been created:
oc get replicaset
Verify the pods are running:
oc get pods
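Because the Deployment's metadata includes a finalizer, you can confirm it is set with jsonpath output - a small sketch; keep in mind the object won't be fully deleted until that finalizer is removed:
oc get deployment finalizer-test -n myproject -o jsonpath='{.metadata.finalizers}'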
Custom Resource Definitions
Begin by running a proxy to the Kubernetes API server:
oc proxy --port=8001
From another terminal, create a new Custom Resource Definition (CRD) object manifest for Postgres:
cat >> postgres-crd.yaml <<EOF
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: postgreses.rd.example.com
spec:
  group: rd.example.com
  names:
    kind: Postgres
    listKind: PostgresList
    plural: postgreses
    singular: postgres
    shortNames:
      - pg
  scope: Namespaced
  version: v1alpha1
EOF
Create the CRD resource object:
oc create -f postgres-crd.yaml
You should now see the Kubernetes API reflect a brand-new API group called rd.example.com:
curl http://localhost:8001/apis | jq '.groups[].name'
This will also be reflected in the output of the oc api-versions command. Within the rd.example.com group, there is an API version v1alpha1 (per our CRD object). The database resource resides here:
curl http://localhost:8001/apis/rd.example.com/v1alpha1 | jq
Notice how oc now recognizes postgres as a resource, even though there are no resource objects of this type yet:
oc get postgres
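Another way to confirm the new resource type is registered is to list the resources in the rd.example.com group - a quick check:
oc api-resources --api-group=rd.example.com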
Let's create a new Custom Resource (CR) object manifest for the database:
cat >> wordpress-database.yaml <<EOF
apiVersion: "rd.example.com/v1alpha1"
kind: Postgres
metadata:
  name: wordpressdb
spec:
  user: postgres
  password: postgres
  database: primarydb
  nodes: 3
EOF
Create the new object:
oc create -f wordpress-database.yaml
Verify the resource was created:
oc get postgres
View the details about the wordpressdb object:
oc get postgres wordpressdb -o yaml
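When you're done experimenting, one way to clean up is to delete the custom resource and then the CRD itself (deleting the CRD also removes any remaining Postgres objects):
oc delete postgres wordpressdb
oc delete crd postgreses.rd.example.com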
Summary
In this article, you created a multi-container pod from a YAML manifest, ran common kubectl/oc commands against the API server, and worked with ReplicaSets, Deployments, and Custom Resource Definitions. For more information, check out the links below:
GitHub
- Operator-Framework: https://github.com/operator-framework
- Operator-SDK: https://github.com/operator-framework/operator-sdk/
Chat
- Kubernetes Slack Chat (upstream): #kubernetes-operators at https://kubernetes.slack.com/
- Operator-Framework on Google Groups: https://groups.google.com/forum/#!forum/operator-framework
- OpenShift Operators Special Interest Group (SIG): https://commons.openshift.org/sig/OpenshiftOperators.html
Resources
- learn.openshift.com
- Red Hat Developers YouTube channel and docs
- https://blog.openshift.com/kubernetes-deep-dive-api-server-part-1/