Series: Hardcore Kubernetes

Part 1 – The parturition of a pod

In this series, we will delve deep into the intricate mechanics of Kubernetes, scrutinizing every step of the Kubernetes lifecycle journey. Let’s start with the magic that breathes life into a Pod. Brace yourself for a journey steeped in technical marvels!

I won’t sugarcoat it: there may be moments of tedium, but I am committed to providing the most comprehensive insights possible. So settle in and let’s start our hardcore journey into Kubernetes, immersing ourselves in the fascinating process of pod creation to uncover the secrets that lie within.

Stage 1: Pod Specifications

It’s a boy (pod)!

Defining the Desired State

The journey begins with the drafting of a Pod specification, usually written in YAML. It encapsulates particulars such as the container image, resource requirements, environment variables, volume mounts, and network settings, among others. This spec serves as the master plan for the Pod’s lifecycle within the Kubernetes cluster.

apiVersion: v1
kind: Pod
metadata:
  name: apache-pod
spec:
  containers:
  - name: apache-container
    image: httpd:latest
    ports:
    - containerPort: 80

Stage 2: Kubectl Unleashed

Establishing the Link: kubectl Commands and Requests to the Kubernetes API Server

Let’s initiate the following command:

kubectl create -f apache-pod.yaml

In this stage, we engage with the Kubernetes API server using the Kubernetes command-line tool, aptly named “kubectl”.

To start off, kubectl performs client-side validation, its foremost action. This validation ensures that doomed requests, such as those trying to create an unsupported resource or using a malformed image name, are promptly dismissed before they ever reach the kube-apiserver. In this way, kubectl filters out erroneous attempts, conserving the precious time (and resources) of the already busy kube-apiserver.
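A quick way to see this client-side validation in action, as a small sketch against the manifest we wrote earlier, is a client-side dry run, which processes the file locally without sending anything to the cluster:

# Validate the manifest locally; no request is sent to the cluster:
kubectl create -f apache-pod.yaml --dry-run=client

A typo such as an unknown field or an unsupported kind fails here, on our side of the wire, before the kube-apiserver ever sees the request.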

Next, it employs generators to mold the data into an HTTP request, ready to be dispatched to the kube-apiserver. On top of that, kubectl mediates version compatibility, making sure the client and server versions are compatible. In its pursuit of client authentication, kubectl scans for credentials in a defined order, giving precedence to an explicitly specified username. It then embeds the authentication particulars, such as x509 certificates, bearer tokens, or basic authentication credentials, within the HTTP request directed at the kube-apiserver. When OpenID authentication is employed, it falls on the user to obtain the token and embed it as a bearer token in the request. Leveraging these capabilities, kubectl facilitates a smooth dialogue with the Kubernetes control plane.
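For a concrete picture, here is a minimal sketch of a kubeconfig user entry, which is where kubectl finds those credentials; the user name and file paths are purely illustrative:

apiVersion: v1
kind: Config
users:
- name: dev-user                                # hypothetical user entry
  user:
    client-certificate: /path/to/dev-user.crt   # x509 client certificate
    client-key: /path/to/dev-user.key
    # alternatively, a bearer token could be embedded instead:
    # token: <opaque-bearer-token>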

Now, having sent our API request to the kube-apiserver, let’s delve into what transpires next.

Stage 3: Inside the API Server

The API server functions as the front door of the control plane, accepting and handling requests from clients. It validates the incoming request, ensures proper authentication, and kicks off the orchestration process.

Authentication: Establishing Our Identity

Each request proceeds through the authenticator chain until one authenticator succeeds. Whether it is verifying x509 client certificates, inspecting bearer tokens, or validating basic auth credentials, the kube-apiserver is diligent in authenticating the identity of the requester. Once authentication succeeds, the Authorization header is removed from the request and the user details are attached to its context, so that subsequent steps, such as authorization and admission control, can leverage the confirmed identity.
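To make the chain tangible, here is a sketch of how an administrator might enable several authenticators at once; the flags are standard kube-apiserver options, while the file paths and issuer URL are illustrative assumptions:

# Each flag below enables one authenticator in the chain:
kube-apiserver \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --token-auth-file=/etc/kubernetes/known_tokens.csv \
  --oidc-issuer-url=https://accounts.example.com \
  --oidc-client-id=kubernetes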

Authorization: Access Rights Acknowledged

While authentication establishes who we are, authorization determines whether we possess the required permissions to execute the requested action. The kube-apiserver handles this by arranging a chain of authorizers (Webhook, ABAC, RBAC, Node) that each request traverses:

-> If every authorizer denies the request, it is met with a “Forbidden” response.

-> If any single authorizer approves, the process transitions to the subsequent phase...
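A handy way to preview the authorizers’ verdict without actually submitting an object is kubectl auth can-i; the dev-user identity below is the hypothetical user from the kubeconfig sketch earlier:

# Ask the authorization layer whether the current identity may create Pods:
kubectl auth can-i create pods --namespace default

# Impersonate another (hypothetical) user to test their permissions:
kubectl auth can-i create pods --namespace default --as=dev-user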

Admission Control: Enforcing Cluster Protocols

An admission controller is a piece of code that intercepts requests aimed at the Kubernetes API server after authentication and authorization, but before the object is persisted.

The request then passes through the admission controller chain, consisting of a series of plugins: if any one of them rejects the request, the process halts. (Note: the expectation here diverges slightly from authorization; a request must secure approval from every admission controller in the chain, not just one.)

A brief example of admission controller plugins (both sketched below):

LimitRanger: applies default resource requests and limits to containers that do not specify their own.

ResourceQuota: evaluates and rejects requests if the object count within the namespace would surpass the stipulated quota (pods, services, replication controllers, load balancers).
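As a rough sketch of the objects these two plugins act on (all names and values below are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m        # injected when a container omits its requests
    default:
      cpu: 500m        # injected when a container omits its limits
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
spec:
  hard:
    pods: "10"         # the 11th Pod in this namespace gets rejected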

The admission controller might be a topic for a future part of the Hardcore Kubernetes series. Stay tuned if you have enjoyed the series so far and harbor interest in the hardcore Kubernetes way. This is the way!

The kube-apiserver then unpacks the HTTP request, crafts runtime objects from it (somewhat of a reverse operation compared to kubectl’s generators), and persists them in the datastore.

In summary: the Pod resource now has a place in etcd.
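If you are curious, you can peek at the persisted object directly in etcd; this sketch assumes a kubeadm-style control plane, so the certificate paths are assumptions that will differ on other setups:

# Read the Pod's key straight out of etcd:
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/pods/default/apache-pod
# The value comes back as binary protobuf, not JSON.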

Stage 4: Scheduling

Identifying where our Pod will be born:

Monitoring the baby’s progress: at this moment, our Pod is in a Pending state, as it hasn’t been allocated to a node yet. The controller that tackles this issue is the scheduler.

The scheduler undertakes a sequence of steps to arrive at its decision.

Step 1: Node Selection

The scheduler initiates by pinpointing a group of suitable nodes where the Pod might be accommodated. It considers several aspects including resource requirements, node capacity, affinity and anti-affinity specifications, along with taints and tolerations. This guarantees that the Pod lands on a node that is capable of fulfilling its resource needs while adhering to any specific placement prerequisites.
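To make this concrete, here is a sketch extending our Apache Pod with some of the placement knobs mentioned above; the label and taint keys and values are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: apache-pod
spec:
  containers:
  - name: apache-container
    image: httpd:latest
    resources:
      requests:
        cpu: 250m            # only nodes with this much free CPU qualify
        memory: 128Mi
  nodeSelector:
    disktype: ssd            # hypothetical label: only nodes labelled disktype=ssd
  tolerations:
  - key: "dedicated"         # hypothetical taint: tolerate nodes tainted dedicated=web
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"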

Step 2: Filtering

After identifying a roster of prospective nodes, the scheduler implements a range of filters to discard nodes that do not satisfy particular standards. These filters might encompass elements such as node readiness, node selectors, node conditions, resource thresholds, among others. Nodes that do not clear these filters are excluded from the consideration set.

Step 3: Scoring

After the filtering phase, the scheduler allocates scores to the remaining nodes to rank their appropriateness for hosting the Pod. Each node is scored based on criteria like resource availability, proximity to other Pods, inter-pod affinity or anti-affinity rules, quality-of-service requirements, along with any rules or preferences specified by the user. A node with a higher score is viewed as more desirable for scheduling.
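The relative weight of these scoring criteria is tunable. As a sketch, assuming the kubescheduler.config.k8s.io/v1 configuration API, an administrator could bias the default scheduler like this (the weights themselves are illustrative):

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    score:
      enabled:
      - name: TaintToleration
        weight: 3            # taint/toleration preferences count three times as much
      - name: NodeAffinity
        weight: 1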

Step 4: Final Selection

Having distributed the scores, the scheduler proceeds to a conclusive assessment, electing the node with the highest score as the destined host for the Pod. In instances where several nodes share the top score, the scheduler might utilize supplementary tiebreaker strategies, such as random choice or user-imposed rules, to select the ultimate recipient.

Step 5: Binding and Allocation

After the node’s selection, the scheduler communicates its verdict to the Kubernetes control plane. The Pod’s binding information is documented with the name of the chosen node, indicating the Pod’s allotment to that specific node. Consequently, the control plane modifies the cluster’s status to mirror this allocation.
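Once the binding lands, the chosen node becomes visible on the Pod object itself; a quick way to confirm it for our example Pod:

# The NODE column now names the selected node:
kubectl get pod apache-pod -o wide

# The same information lives on the Pod spec:
kubectl get pod apache-pod -o jsonpath='{.spec.nodeName}'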

Step 6: Node Preparation — Setting up the Host

Upon the node’s selection, the designated Kubernetes controller takes charge of the preparation phase. It gears up the node by establishing the required namespaces, setting up network bridges, and allocating the resources necessary for the Pod’s operation. Subsequently, the container runtime, be it Docker or containerd, is invoked to create the containers within the Pod.

Stage 5: Containers Join the Festivity

Here, the process of container creation takes place. Let’s explore how the kubelet, the container runtime, and the CNI plugins are all stitched together.

Exploring the kubelet and its function in supervising Pod lifecycles:

The kubelet functions as an agent situated on each node within a Kubernetes cluster, holding a pivotal role in directing the life cycles of Pods. Among other duties, the kubelet translates the Pod concept into its constituent containers, overseeing aspects like container lifecycle management, volume mounting, container logging, and garbage collection.

Pod synchronization: Regularly, at 20-second intervals by default, the kubelet queries the kube-apiserver for the roster of Pods linked to the node it operates on. This roster is cross-verified with its internal cache to identify new additions or inconsistencies. When a Pod has just been created, the kubelet logs startup metrics and generates a PodStatus object that mirrors the current phase of the Pod. The phase encapsulates the Pod’s position in its lifecycle: Pending, Running, Succeeded, Failed, or Unknown.
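You can watch the phase move along from the outside; for our example Pod:

kubectl get pod apache-pod -o jsonpath='{.status.phase}'
# Prints one of: Pending, Running, Succeeded, Failed, Unknown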

Admission handlers: After the PodStatus is created, the kubelet runs admission handlers to verify that the Pod has the appropriate security permissions. These handlers enforce safety protocols such as AppArmor profiles and NO_NEW_PRIVS. Should a Pod face denial at this stage, it remains in the Pending state until the security conflicts are rectified.
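As a sketch of what those handlers check, here is our Pod annotated with an AppArmor profile and opting out of privilege escalation; the profile name is illustrative, and the annotation shown is the long-standing beta form:

apiVersion: v1
kind: Pod
metadata:
  name: apache-pod
  annotations:
    container.apparmor.security.beta.kubernetes.io/apache-container: runtime/default
spec:
  containers:
  - name: apache-container
    image: httpd:latest
    securityContext:
      allowPrivilegeEscalation: false   # maps to no_new_privs on the container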

CRI and pause containers: The kubelet uses the Container Runtime Interface (CRI) to interact with the underlying container runtime, such as Docker or rkt; this abstraction layer lets it communicate with varied runtime implementations. When a Pod is first started, the kubelet invokes the RunPodSandbox remote procedure call, crafting a “sandbox” that functions as the parental entity for the Pod’s containers. In Docker’s case, this entails creating a “pause” container which harbors the namespaces (IPC, network, PID) shared by the containers in the Pod.
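On the node itself, the resulting sandbox is visible at the CRI level; assuming the crictl debugging tool is installed there, something like this should show it:

# List CRI pod sandboxes matching our Pod's name:
crictl pods --name apache-pod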

CNI and Pod network configuration: The kubelet delegates Pod network operations to Container Network Interface (CNI) plugins. This interface permits various network providers to employ diverse network configurations for containers. The kubelet interacts with CNI plugins by handing them JSON data that structures the network preferences. For instance, the bridge CNI plugin establishes a native Linux bridge within the host’s network namespace, linking it to the pause container’s network namespace via a veth pair. The plugin then allocates an IP address to the pause container’s interface and orchestrates routes, thereby equipping the Pod with a distinctive IP address.
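Since the text above mentions the JSON handed to the plugins, here is a minimal sketch of a bridge plugin configuration; the network name and subnet are illustrative assumptions:

{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}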

Inter-node communication: To foster connectivity between Pods housed on separate hosts, Kubernetes predominantly utilizes overlay networks, such as Flannel, which coordinate route synchronization across numerous nodes in a cluster. Flannel extends a layer-3 IPv4 network amidst nodes, encapsulating outbound packets within UDP datagrams to ensure proper routing.

Container initiation: Once networking is established, the kubelet embarks on starting the containers. Necessary container images are pulled, using any secrets denoted in the PodSpec for private registries. Each container is then created via the CRI, incorporating the relevant details from the PodSpec into a ContainerConfig structure. The CPU manager assigns containers to specific sets of CPUs on the node, and the container is started. If specified, post-start container lifecycle hooks such as Exec or HTTP actions are triggered (see the sketch below).
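As a sketch of such a hook, here is our Apache container with a postStart Exec action; the command itself is purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: apache-pod
spec:
  containers:
  - name: apache-container
    image: httpd:latest
    lifecycle:
      postStart:
        exec:
          # Hypothetical marker command run right after the container starts:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]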

To Wrap It Up

And with that, we are at the end of our first journey into hardcore Kubernetes. How a Pod comes to be may not involve storks or delivery rooms, but it is no less magical. So the next time you witness a Pod taking its first breath in your cluster, be a proud and responsible parent: cut its cord and enjoy as your newborn embarks on its exciting journey in the world of distributed systems.

