From Command to Servicing, the complex process behind a Kubernetes pod creation
Shaji Nair
Enterprise Transformation leader with expert-level hands-on skills in technology architecture, cloud, ML, Gen-AI, and engineering team management. I specialize in the banking, finance, and retail domains.
In #Kubernetes , building and managing a component is a continuous process. A simple command to create a Pod is carried out through several #eventdriven collaborations across different parts of the cluster's control plane and worker nodes.
Every deployable component in Kubernetes, such as a Pod, has a desired-state specification, detailed in the configuration provided in the deployment API call.
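For example, a minimal desired-state specification for a pod might look like the following (a hypothetical nginx example; the names, image, and resource values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo          # illustrative name
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25       # desired container image
    ports:
    - containerPort: 80
    resources:
      requests:             # resource requirements the scheduler considers
        cpu: 100m
        memory: 128Mi
```

Submitting such a specification (for example with `kubectl apply -f pod.yaml`) triggers the event-driven flow described below.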
Kubernetes, in principle, is responsible for creating the component and driving its actual state toward the desired state through continuous monitoring, management, and optimization of the component state and of the cluster's workloads and resources.
Let us see the internal event-driven collaboration steps involved in creating a Pod component in Kubernetes.
The high-level illustrative view
Step 1 - The interaction between the client and the API server
When a user with Kubernetes client API access executes the pod deployment command with the deployment specification, the request is intercepted by the #API server and processed through the following steps.
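The API server's request handling can be sketched as a toy pipeline in Python. This is purely illustrative; the names and data structures below are assumptions for the sketch, not the real API server's internals.

```python
# Toy sketch of the API server's request pipeline for a pod creation call:
# authenticate -> authorize -> admission -> persist desired state.
# Illustrative only; the real API server is far richer.

def authenticate(req):
    return req.get("user") is not None

def authorize(req):
    # Hypothetical allow-list standing in for RBAC checks.
    return req["user"] in {"admin", "deployer"}

def admit(req):
    # A mutating admission step could inject defaults here.
    req["spec"].setdefault("restartPolicy", "Always")
    return True

def handle_create(req, etcd):
    if not (authenticate(req) and authorize(req) and admit(req)):
        return "rejected"
    etcd[req["name"]] = req["spec"]   # desired state persisted in the store
    return "created"

etcd = {}
req = {"user": "deployer", "name": "nginx-demo", "spec": {"image": "nginx:1.25"}}
print(handle_create(req, etcd), etcd["nginx-demo"]["restartPolicy"])  # → created Always
```

Only after the object is persisted does the rest of the cluster react to it, which is what makes the overall flow event-driven.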
Step 2 - The collaboration between the scheduler and the API server
Once the object definition is stored in ETCD, the pod must be scheduled for deployment to a node where the actual compute, #network , and #storage resources are available to deploy the component. The #scheduler component of the #controlplane is responsible for scheduling. The scheduler selects a node for each new pod based on the resource #requirements , #affinity , anti-affinity, and other constraints specified in the pod's deployment descriptor.
The scheduler is a separate process that runs on the Kubernetes cluster's control plane nodes. It operates in a continuous loop, watching the API server for new or updated workload objects that need to be scheduled, deciding which node each unscheduled pod should be placed on, and then updating the API server with the node assignment for the pod.
Here is a high-level overview of the Kubernetes API server scheduler process flow:
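The scheduling decision boils down to a filter-and-score loop, which can be sketched as a toy model in Python. The names and data structures here are illustrative assumptions, not the real kube-scheduler API.

```python
# Toy model of the kube-scheduler's filter/score loop.
# Names and data structures are illustrative, not the real scheduler API.

def filter_nodes(pod, nodes):
    """Keep only nodes that satisfy the pod's resource requests."""
    return [n for n in nodes
            if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]]

def score_node(pod, node):
    """Prefer the node with the most free CPU left after placement."""
    return node["free_cpu"] - pod["cpu"]

def schedule(pod, nodes):
    feasible = filter_nodes(pod, nodes)
    if not feasible:
        return None  # no feasible node: the pod stays Pending
    # Bind the pod to the highest-scoring feasible node.
    return max(feasible, key=lambda n: score_node(pod, n))["name"]

nodes = [
    {"name": "node-a", "free_cpu": 2.0, "free_mem": 4096},
    {"name": "node-b", "free_cpu": 0.5, "free_mem": 8192},
]
pod = {"name": "nginx-demo", "cpu": 1.0, "mem": 512}
print(schedule(pod, nodes))  # → node-a
```

The real scheduler applies many filter and score plugins (affinity, taints, topology spread, and so on), but the shape of the loop is the same.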
Step 3 - The collaboration between the KUBELET and the API server
The #KUBELET is a Kubernetes agent that runs on each node. It retrieves the pod specification from the ETCD datastore through API server calls and ensures that the pod's containers are running on the node's resources.
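At its core the KUBELET runs a reconcile loop: compare the desired pod set reported by the API server with what is actually running, then converge. A minimal sketch, with illustrative data structures (the real kubelet works through the CRI, not Python sets):

```python
# Toy sketch of the kubelet's reconcile step: compare the desired pod set
# (as reported by the API server) with running containers and converge.

def reconcile(desired, running):
    to_start = [p for p in desired if p not in running]   # missing pods
    to_stop = [c for c in running if c not in desired]    # orphaned containers
    return to_start, to_stop

desired = {"nginx-demo", "metrics-agent"}
running = {"metrics-agent", "old-job"}
start, stop = reconcile(desired, running)
print(sorted(start), sorted(stop))  # → ['nginx-demo'] ['old-job']
```

Running this loop continuously is what lets the node self-heal: if a container dies, it reappears in the "to start" set on the next pass.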
Step 4 - The KUBELET's collaboration with the container engine and container runtime
A #containerengine is a component responsible for managing and executing container processes on a host machine. Examples of container engines include Docker, rkt, and CRI-O. These engines provide the low-level functionality required to create, run, and manage containers, including container lifecycle management, networking, storage, and security.
A #containerruntime , on the other hand, is responsible for executing the container images on the host machine. It provides an interface between the container engine and the container images, allowing the engine to interact with the container images to create and manage containers. Examples of container runtimes include containerd, CRI-O, and runc.
Kubernetes can work with multiple container runtimes and engines, depending on specific needs and preferences. When deploying Kubernetes, we can choose the container engine or runtime we prefer; the cluster deployment sets up the corresponding processes on each node host, and the KUBELET uses that engine to create and manage containers for the application.
There are differences among well-known container runtimes such as runc, Kata Containers, and Clear Containers.
In typical scenarios, under the hood, containerd uses runc as the default container runtime to create and manage containers.
A high-level overview of the lifecycle of a container managed by runc is as follows.
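That lifecycle (create, start, stop, delete) can be sketched as a toy state machine. This is an illustration of the state transitions only; real runc operates on OCI bundles and processes, not Python objects.

```python
# Toy state machine mirroring the container lifecycle that runc manages:
# created -> running -> stopped -> deleted.

VALID = {
    ("created", "start"): "running",
    ("running", "stop"): "stopped",
    ("stopped", "delete"): "deleted",
}

def transition(state, op):
    try:
        return VALID[(state, op)]
    except KeyError:
        raise ValueError(f"cannot {op} a {state} container")

state = "created"
for op in ("start", "stop", "delete"):
    state = transition(state, op)
print(state)  # → deleted
```

Invalid transitions (for example deleting a running container without stopping it) are rejected, which mirrors the errors runc reports for out-of-order lifecycle operations.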
Container Engine and Runtime collaboration with host server resources
In #linux , process separation is done through namespaces. #Namespaces are a kernel feature that partitions kernel resources so that one set of processes sees one set of resources while another set of processes sees a different set. There are multiple namespace types in Linux, including the following.
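On a Linux host you can inspect the namespaces a process belongs to under `/proc`; a short, Linux-only sketch:

```python
# Inspect the namespaces of the current process on a Linux host.
# Each entry under /proc/self/ns is a symlink like "pid:[4026531836]";
# two processes in the same namespace see the same inode number.
import os

def namespaces(pid="self"):
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

for name, link in namespaces().items():
    print(f"{name:10s} {link}")
```

Comparing this output for two container processes shows exactly which namespaces they share, which is how pod-level sharing can be verified from the node.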
During the Pod creation, the following steps take place.
Containers in the same pod share the same network, IPC, and UTS namespaces. They can therefore communicate with each other using standard inter-process communication mechanisms such as System V semaphores or POSIX shared memory. Because they share one network namespace, containers in a pod can reach each other via "localhost", and the hostname each container observes is the pod name, since the containers share the same IP address and port space. Each application-specific container (a container created per the pod's configuration specification to meet the client's deployment needs) must therefore use a distinct port for incoming connections.
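The shared-localhost behavior can be simulated with two threads in one process standing in for two containers in one pod (illustrative only; real containers share a network namespace, not a Python process):

```python
# Two "containers" in the same pod share one network namespace, so they
# can reach each other over 127.0.0.1 but must bind distinct ports.
# Simulated here with two threads in a single process.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()
    with conn:
        conn.sendall(b"hello from container A")

# "Container A" listens on an ephemeral localhost port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=server, args=(srv,))
t.start()

# "Container B" connects to the same loopback address.
with socket.create_connection(("127.0.0.1", port)) as c:
    reply = c.recv(1024)
t.join()
srv.close()
print(reply.decode())
```

If "container B" tried to bind the same port as "container A", the bind would fail, which is exactly why containers in one pod must agree on distinct ports.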
Conclusion
Kubernetes has revolutionized IT infrastructure by providing unparalleled #scalability , #reliability , and #flexibility .
It has catalyzed businesses to move away from the traditional VM-based approach to modern container-based technologies and allowed them to substantially reduce costs with its more efficient resource utilization.
As the use of Kubernetes continues to expand, it will remain an essential tool in any organization's arsenal as they continue their journey toward digital transformation.
Kubernetes is a prime example of managing containers through event-driven #architecture : many asynchronous processes collaborating across client, platform, and low-level infrastructure interfaces to run containerized business workloads with high reliability and availability.