Kubernetes Architecture
Rocky Bhatia
Top 1% @LinkedIn | Architect @ Adobe | 350k+ Followers Across Social Media | Global Speaker
Kubernetes is popularly known as K8s.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. "K8s" as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.
Why is K8s so popular?
One of the primary reasons why K8s became so popular is the ever-growing demand for businesses to support their micro-service-driven architectural needs.
Microservice Architecture Supported by Kubernetes:
Kubernetes provides a powerful platform for managing microservices at scale. Its scalability, portability, resource utilization, service discovery, and load balancing make it an ideal platform for organizations adopting a microservices architecture.
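As a minimal sketch of what this looks like in practice, the manifest below runs three replicas of a hypothetical microservice behind a Service that provides a stable name and load balancing (the service name, labels, and image are illustrative placeholders, not from any real deployment):

```yaml
# Hypothetical microservice: a Deployment keeps three replicas running,
# and a Service load-balances traffic across them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service          # illustrative name
spec:
  replicas: 3                   # Kubernetes maintains three Pods at all times
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example.com/orders:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
# The Service gives the Pods a stable DNS name ("orders") for service discovery
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 8080
```

Other microservices in the cluster can then reach this one at `http://orders` regardless of which nodes its Pods land on.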
Fundamental Architecture Of Kubernetes Cluster:
As we can see from the diagram below, Kubernetes follows a master-worker architecture.
Worker Node
As a developer or K8s administrator, most of the time, you will deal with worker nodes. Whether you have to deploy your containerized app, autoscale it, or roll out any new app update on your production-grade server, you will often deal with worker nodes.
The role of the worker node is to execute the application workloads defined by the Kubernetes control plane. When a new workload is created or scaled up, the control plane schedules the workload to run on one or more worker nodes, based on available resources and other constraints.
Every worker node runs the following key processes:
Container Runtime:
Every microservice module (micro-app) you deploy is packaged into a Pod, and each Pod's containers are executed by the node's container runtime. Therefore, a container runtime must be installed on each worker node of the cluster so Pods can run there.
Some examples of container runtimes are containerd, CRI-O, and Docker Engine (via the cri-dockerd adapter).
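As a sketch, the node agent (the kubelet, described next) is pointed at whichever runtime is installed via its CRI socket. This configuration fragment assumes containerd on a recent Kubernetes version; the socket path varies by runtime and distribution:

```yaml
# Kubelet configuration fragment selecting the container runtime endpoint.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # containerd; CRI-O would use unix:///var/run/crio/crio.sock
```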
Kubelet:
The kubelet is the primary node agent on each worker node; it interacts with both the node itself and the containers running on it.
The kubelet is responsible for everything that runs on its node. The main functions of the kubelet service are: registering the node with the API server, creating and managing containers according to the PodSpecs it is given, running liveness and readiness probes against those containers, and reporting node and Pod status back to the control plane.
The kubelet drives the container execution layer through the Container Runtime Interface (CRI): historically Docker, and today typically containerd or CRI-O.
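For example, health probes declared in a Pod spec are executed by the kubelet on that Pod's node. In this hedged sketch (image and timings are illustrative), the kubelet polls the container over HTTP and restarts it if the check fails:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx:1.25            # illustrative image
    livenessProbe:               # the kubelet runs this check on the node
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5     # wait before the first probe
      periodSeconds: 10          # probe every 10 seconds; failures trigger a restart
```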
Kube-proxy:
A K8s cluster can have multiple worker nodes, and each node can run multiple Pods, so if one has to access a Pod, the traffic is routed via kube-proxy.
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
To access a Pod via a K8s Service, certain network rules allow communication to your Pods from network sessions inside or outside of your cluster. These rules are programmed and maintained by kube-proxy.
Kube-proxy uses efficient forwarding mechanisms to route the network traffic required for Pod access, which minimizes overhead and makes service communication more performant.
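On Linux, kube-proxy typically implements these forwarding rules with iptables or IPVS; the mode is chosen in its configuration. A minimal fragment, as a sketch:

```yaml
# kube-proxy configuration fragment selecting the packet-forwarding mode.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"    # or "iptables" (the common default on Linux)
```

IPVS generally scales better with very large numbers of Services, while iptables mode is the long-standing default.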
Pods
A Pod is one or more containers that logically go together. Pods run on nodes and act as a single logical unit: the containers within a Pod share the same IP address, can reach one another via localhost, and can share storage volumes. Different Pods do not all need to run on the same machine, so an application's Pods can span more than one node, and one node can run multiple Pods.
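A hedged sketch of this sharing: the Pod below runs two containers (names and images are illustrative) that mount the same `emptyDir` volume; because they share the Pod's network namespace, the sidecar could also reach the main container on localhost.

```yaml
# Two containers in one Pod sharing an IP and a volume.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                 # scratch volume shared by both containers
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date > /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: sidecar
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]   # could read /data/out.txt written by "app"
    volumeMounts:
    - name: shared-data
      mountPath: /data
```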
So far, we have seen the processes that must be installed and running on your worker nodes in order to manage your containerized applications efficiently.
But who manages the worker nodes themselves?
You rightly guessed it: the master node.
Master Node in K8s cluster:
The master node, also known as the control plane, is responsible for managing the worker nodes efficiently. It interacts with the worker nodes to schedule workloads, monitor node and Pod health, and keep the cluster in its desired state.
Master Node Services
Every master node in the K8s cluster runs the following key processes:
API Server
It is the main gateway to access the K8s cluster. It acts as the primary gatekeeper for client-level authentication; in other words, kube-apiserver is the front end for the Kubernetes control plane.
So whenever you want to deploy an application, query cluster state, or scale a workload, you send a request to the API server of the master node, which in turn validates your request before you get access to the processes in the worker nodes.
The API server is designed to scale horizontally, that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
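Every `kubectl` command is simply an authenticated HTTPS request to this API server. As a sketch, a client's kubeconfig tells it where the API server is and which credentials to present (all addresses, names, and paths below are placeholders):

```yaml
# Minimal kubeconfig sketch: cluster endpoint + user credentials + context.
apiVersion: v1
kind: Config
clusters:
- name: demo-cluster
  cluster:
    server: https://10.0.0.10:6443                 # the kube-apiserver endpoint
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: demo-user
  user:
    client-certificate: /home/dev/.kube/client.crt  # client cert for authentication
    client-key: /home/dev/.kube/client.key
contexts:
- name: demo
  context:
    cluster: demo-cluster
    user: demo-user
current-context: demo
```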
Scheduler
In Kubernetes, the scheduler assigns workloads, or "pods," to worker nodes based on available resources and other constraints. The scheduler is responsible for ensuring that pods are scheduled to run on nodes that can provide the resources needed for the workload, such as CPU and memory.
The scheduler operates on a continuous loop, constantly evaluating the state of the cluster and the availability of resources. It uses various algorithms to determine the best node to assign a workload to, such as bin packing or spread.
The scheduler takes into account various factors when assigning workloads to nodes, such as resource requests and limits, node affinity and anti-affinity rules, taints and tolerations, and data locality.
Overall, the scheduler is a critical component of the Kubernetes control plane, ensuring that workloads are efficiently scheduled to nodes based on available resources and other constraints. This helps optimize resource utilization and ensures that workloads run effectively and efficiently.
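These scheduling constraints are expressed directly in the Pod spec. In this hedged sketch (the `disktype: ssd` label is a hypothetical node label), the scheduler will only place the Pod on a node that has that label and enough free CPU and memory to cover the requests:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  nodeSelector:
    disktype: ssd              # hypothetical label; only matching nodes qualify
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:                # the scheduler uses requests for placement decisions
        cpu: "250m"
        memory: "128Mi"
      limits:                  # limits cap what the container may consume at runtime
        cpu: "500m"
        memory: "256Mi"
```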
kube-controller-manager:
The kube-controller-manager is a component of the Kubernetes control plane that manages various controllers responsible for maintaining the system's desired state. Some of the critical controllers managed by the kube-controller-manager include the Node controller (noticing and responding when nodes go down), the ReplicaSet controller (maintaining the desired number of Pod replicas), the Endpoints controller (populating Service endpoint objects), and the Service Account and Token controllers (creating default accounts and API access tokens for new namespaces).
In addition to these controllers, the kube-controller-manager also performs other important tasks, such as monitoring the overall health of the control plane and detecting and responding to changes in the cluster's configuration.
Overall, the kube-controller-manager plays a critical role in maintaining the desired state of the Kubernetes cluster, ensuring that workloads are running effectively and efficiently, and helping to optimize resource utilization.
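All of these controllers follow the same reconciliation pattern, which can be sketched in pseudocode:

```
# The control loop every controller runs (pseudocode sketch):
loop forever:
    for each object the controller watches:
        desired = read desired state from the API server (backed by etcd)
        actual  = observe the current state in the cluster
        if actual != desired:
            act to move actual toward desired
            # e.g. the ReplicaSet controller creates a replacement Pod
            # when a node failure leaves fewer replicas than requested
```

This "observe, compare, act" loop is why Kubernetes is described as a declarative system: you state the desired end state, and controllers continuously converge the cluster toward it.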
etcd
etcd is a distributed key-value store that is used as the primary data store for Kubernetes. It serves as the "brain" of the Kubernetes cluster, storing the configuration data and state information for all of the resources in the system. Some of the key roles of etcd in Kubernetes include: storing the desired and current state of every cluster resource, providing watch functionality so control-plane components are notified when objects change, and guaranteeing strong consistency across its replicas via the Raft consensus algorithm.
etcd plays a critical role in the functioning of the Kubernetes cluster, serving as the primary data store and ensuring that the configuration data is consistent and up-to-date across all nodes in the system.
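As an illustration, Kubernetes persists its API objects under the `/registry` prefix in etcd. The exact keys below are indicative sketches rather than an exhaustive or authoritative layout:

```
# Indicative etcd key layout for Kubernetes objects:
/registry/pods/default/orders-service-7d4f...     # Pod objects, grouped by namespace
/registry/deployments/default/orders-service      # Deployment objects
/registry/services/specs/default/orders           # Service objects
/registry/minions/worker-node-1                   # Node objects (legacy "minions" key)
```

Only the API server talks to etcd directly; every other component reads and writes state through the API server.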
Overall, Kubernetes is a powerful and flexible platform for deploying and managing containerized applications, making it easier for developers to build and deploy applications in any environment. It provides scalability, portability, automation, flexibility, and an open-source ecosystem, making it a popular choice for container orchestration and management.
We will cover other Kubernetes concepts in future newsletters.