Navigating Kubernetes: A Hands-On Introduction


Kubernetes has become the de facto framework for deploying and operating containerized applications in both private and public clouds. All major public clouds offer managed Kubernetes services (Google GKE, Azure AKS, AWS EKS).

Kubernetes is a very powerful framework, but it is also complex and has a steep learning curve. The objective of this article is to help you handle that learning curve with easy examples that show how to start taming Kubernetes.

NOTE 1: The step-by-step installation of a Kubernetes cluster can vary slightly across Kubernetes versions and host OSes. If you are new to Kubernetes, we recommend playing with the cluster deployed in the free Kubernetes classroom (https://training.play-with-kubernetes.com/) and avoiding installing a cluster of your own until you have gained some experience. You can test the examples provided in this article in the free Kubernetes classroom.

Let’s start by explaining the fundamentals of a Kubernetes cluster. A cluster consists of one or more nodes. A node can be a physical or a virtual machine. Each node has a role; the most common roles are control plane and worker. At least one control plane node is mandatory.

Below we can see our lab has one control-plane node. The other 2 nodes, whose role shows “<none>”, are workers.
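The node listing described above can be reproduced with the following command (the node names shown in the comments are illustrative; your lab will show its own):

```shell
kubectl get nodes
# NAME       STATUS   ROLES           AGE   VERSION     (example output)
# cp-node    Ready    control-plane   10d   v1.28.12
# worker-1   Ready    <none>          10d   v1.28.12
# worker-2   Ready    <none>          10d   v1.28.12
```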

Let’s find more information about our nodes by using the “-o wide” option. We can see all nodes are running Kubernetes 1.28.12 on Ubuntu 20.04.6 LTS. Also, all nodes are using containerd as the container runtime.
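A sketch of the wide listing (the extra columns are where the OS and runtime details appear):

```shell
kubectl get nodes -o wide
# Adds columns such as INTERNAL-IP, OS-IMAGE (Ubuntu 20.04.6 LTS)
# and CONTAINER-RUNTIME (containerd://...)
```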


Now we bring our attention to pods and containers:

  • Kubernetes is a pod orchestrator. But what is a pod? A pod is a collection of one or more containers. Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
  • Kubernetes needs a container runtime in order for the pods to run. Some popular options are containerd and CRI-O.
  • Kubernetes needs networking to bring connectivity to the pods. Some popular options are Calico and Flannel.

NOTE 2: A container runtime downloads container images, leverages OS features such as namespaces and cgroups to isolate and manage resources for each container, and is responsible for managing the execution and lifecycle of containers.


As seen in the image above, there are several “kube-system” pods installed by default on all nodes. We can check the pods with the following command:
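The command in question is the plain pod listing, which queries the “default” namespace; its output in our lab is shown below:

```shell
kubectl get pods
```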


No resources found in the default namespace.

We don’t see any pods at all in the default namespace.

NOTE 3: A namespace is a mechanism to isolate resources in a Kubernetes cluster. It is good practice to create and use different namespaces to organize resources and to define policies specific to each namespace. For example, you can create a “development” namespace and a “production” namespace, then create a user who is denied access to resources in the “production” namespace but allowed access to resources in the “development” namespace.

Let’s find out what namespaces exist by default in our cluster.
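The namespace listing can be obtained as follows (the names in the comment are the usual defaults of a kubeadm-style cluster):

```shell
kubectl get namespaces
# Typical defaults: default, kube-node-lease, kube-public, kube-system
```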


Let’s use the -n flag of the command “kubectl get pods” to check the pods namespace by namespace. Below we can see that right now only the namespace “kube-system” has pods.
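For example, to list the pods of the “kube-system” namespace (as a shortcut, -A / --all-namespaces lists pods across all namespaces at once):

```shell
kubectl get pods -n kube-system
```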


Above we see there are lots of pods, but it is more useful to see which pods are deployed on which node. To do that we use the “-o wide” flag and grep for the name of each node. Let’s begin with the 2 workers. We see each worker has 2 pods: calico-node and kube-proxy.
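The per-node check might look like this (the node name worker-1 and the pod-name suffixes are assumptions for this lab):

```shell
kubectl get pods -n kube-system -o wide | grep worker-1
# calico-node-xxxxx   ...   worker-1
# kube-proxy-xxxxx    ...   worker-1
```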



Let's see the pods in the control plane node. We notice there are many more pods on a control plane node than on a worker node.
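A sketch for the control plane node (the node name cp-node is an assumption):

```shell
kubectl get pods -n kube-system -o wide | grep cp-node
# Expect calico-node and kube-proxy, plus control plane components such as
# etcd, kube-apiserver, kube-controller-manager, kube-scheduler and coredns.
```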




Now let’s find out what networking component is in our lab. Below you can see our lab has Calico as the networking component.
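One simple way to check is to look for the networking pods by name:

```shell
kubectl get pods -n kube-system | grep -i calico
```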




Below we can see Kubernetes is using the IP range “10.244.0.0/16” as the cluster CIDR and subnetting it in order to give connectivity to the pods.
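One common way to confirm the cluster CIDR is to grep the cluster dump for it (a sketch; where the flag appears depends on how the cluster was bootstrapped):

```shell
kubectl cluster-info dump | grep -m 1 cluster-cidr
# "--cluster-cidr=10.244.0.0/16"
```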


We just finished our first reconnaissance of our Kubernetes cluster lab. Now we are ready to deploy our first pod: an NGINX container. A container registry is needed to download images; the registry is external to the Kubernetes cluster. By default, Kubernetes looks for an image on https://hub.docker.com, which is a public and free container registry.


Kubernetes can use a declarative approach: it uses files to describe the desired configuration of a resource, and Kubernetes will try to enforce that configuration at all times.

Kubernetes uses YAML files to describe the configuration. Below we can see a file that will be used to create a deployment with one pod based on the nginx container image.
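A minimal reconstruction of “deployment.yaml”, consistent with the pod names shown later in this article (the deployment name my-nginx and the labels are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Note that “replicas: 1” falls on line 6 of the file, which NOTE 4 refers to.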


NOTE 4: On line 6 of the file “deployment.yaml” we see “replicas: 1”. The replica count is the number of instances of a pod that should be running at any time. Later in this article, we will experiment with changing the number of replicas to 2.

Now we have to apply the file to create our deployment and our pod. At first you will see the status “ContainerCreating” for a minute or two, because it takes some time to download the image from hub.docker.com. Afterwards we should see the status “Running”.
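The apply-and-check sequence looks like this (the pod-name suffix is taken from later in the article; the AGE value is illustrative):

```shell
kubectl apply -f deployment.yaml
kubectl get pods
# NAME                      READY   STATUS              RESTARTS   AGE
# my-nginx-f4f77479-vhjf5   0/1     ContainerCreating   0          15s
```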


Our nginx pod is running, but we can’t access it because it doesn’t have a service associated with it. Below we can see the only service created so far is the default service of the Kubernetes cluster.
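The service listing can be checked as follows (the ClusterIP in the comment is the conventional default and may differ in your lab):

```shell
kubectl get services
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   10d
```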



Now we create a YAML file and apply it to create a Kubernetes service for nginx. This service will select the nginx pod and publish the service on port 80.
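A sketch of the service manifest (the service name my-nginx and the selector label app: my-nginx are assumptions matching the deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: my-nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Save it as, say, service.yaml and apply it with “kubectl apply -f service.yaml”.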




Now we use curl to test the web page of NGINX.
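A sketch of the test, assuming we first look up the ClusterIP assigned to the service:

```shell
# Look up the internal IP of the service, then request the default page
CLUSTER_IP=$(kubectl get service my-nginx -o jsonpath='{.spec.clusterIP}')
curl http://$CLUSTER_IP
# Should return the "Welcome to nginx!" page
```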



It works! But please notice we are using an internal IP to do the testing. What if we want to test it from another PC? There are several ways for Kubernetes to publish a service for external access. The simplest way is to use port-forwarding.

NOTE 5: Port-forwarding should only be used for testing, never for production. It’s better to use a “LoadBalancer” service, and even better to use a Kubernetes “Ingress” resource.

Below we see we are using a host with IP 192.168.150.110.


The flag --address="0.0.0.0" of the command below means any IP of the host, which of course includes 192.168.150.110. Any packet that reaches port 8080 of the host will be forwarded to port 80 of the container.
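The port-forward command is presumably the following (it keeps running in the foreground until interrupted):

```shell
kubectl port-forward --address="0.0.0.0" service/my-nginx 8080:80
```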


Now we can use any PC that has connectivity to our host 192.168.150.110. We browse to http://192.168.150.110:8080 and we get the “Welcome to nginx!” message.


Earlier we said that the declarative approach enforces the configuration at all times. The YAML file of the NGINX deployment says “replicas: 1”. What will happen if we manually delete the only pod? Kubernetes will detect that the deployment is out of compliance and will try to bring up another pod. Below we see that we manually delete the pod my-nginx-f4f77479-vhjf5 and Kubernetes automatically creates another pod, my-nginx-f4f77479-phwtw, to comply with the policy.
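The delete-and-recheck step looks like this (the AGE value is illustrative):

```shell
kubectl delete pod my-nginx-f4f77479-vhjf5
kubectl get pods
# NAME                      READY   STATUS    RESTARTS   AGE
# my-nginx-f4f77479-phwtw   1/1     Running   0          20s
```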


We can also check what just happened by looking at the events, as seen below.
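One way to review the events in chronological order (a sketch; the sort flag is optional):

```shell
kubectl get events --sort-by=.metadata.creationTimestamp
```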


Finally, we can also modify our deployment file to tell Kubernetes to keep 2 replicas instead of 1.


We apply the changes and now we see Kubernetes has 2 NGINX pods deployed.
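A sketch of the scaling step: change line 6 of deployment.yaml from “replicas: 1” to “replicas: 2”, then re-apply (the second pod-name suffix and the AGE values are illustrative):

```shell
kubectl apply -f deployment.yaml
kubectl get pods
# NAME                      READY   STATUS    RESTARTS   AGE
# my-nginx-f4f77479-phwtw   1/1     Running   0          10m
# my-nginx-f4f77479-q2d8k   1/1     Running   0          25s
```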


This was a brief introduction to deploying and managing an application with Kubernetes, and it is just the tip of the iceberg. Kubernetes has many resources to get accurate metrics from your application (for example, CPU usage, memory usage, HTTP requests per second, etc.). Kubernetes can then leverage those metrics to do autoscaling (Vertical Pod Autoscaler, Horizontal Pod Autoscaler). You can use tools like Helm and Kustomize to manage the Kubernetes manifest files of several clusters and to quickly deploy complex applications. The universe of Kubernetes is almost endless.

You can continue your Kubernetes journey with the following resources:

https://kubernetes.io/docs/reference/kubectl/quick-reference/

https://training.linuxfoundation.org/resources/?_sft_content_type=free-course&_sf_s=kubernetes


- Author

Eduardo Aliaga

Telecommunication engineer with expertise in networking technologies for Service Provider, Enterprise and Security markets from Latin-American region. Committed to constant training regarding networking specifications, focusing mainly on high-scale networks design, support, deployment and consultancy.

