What is Kubernetes

Problem Statement

Deploying and managing containerized applications at scale poses significant challenges in modern software development. Manually coordinating deployment tasks is time-consuming and error-prone, and ensuring optimal performance and reliability across diverse environments remains complex. An automated solution is needed to streamline deployment, manage resources efficiently, and keep containerized applications running reliably.

Solution

Kubernetes originated from Borg, Google's internal cluster-management system. It was open-sourced in 2014 and later donated to the Cloud Native Computing Foundation (CNCF). Drawing on Google's experience running containers at scale, it quickly became the go-to solution for automating the deployment and management of containerized applications.

Key features include container orchestration, automatic scaling, self-healing, rolling updates, and declarative configuration. It's highly portable and supports deployment across various environments, making it a popular choice for managing cloud-native applications.


Let's look at each of these K8s concepts in detail.


What is a Container?

A container encapsulates an entire application or parts of it, along with its dependencies, within a single executable unit of software.

It consists of the application's

  • binary files
  • libraries
  • runtimes
  • configuration files

By doing so, containers isolate the hosted application from the external environment, ensuring compatibility across various deployment environments. Additionally, containers share resources and offer isolated user spaces for running application code without the overhead of full operating systems. They are lightweight, efficient, and can be rapidly started and stopped, providing consistent runtime environments across diverse systems.


What is a Containerized Application?

In cloud computing, a containerized app is one that's made to work specifically within containers, which are like digital boxes for software. These containers can hold either a whole app or just parts of it, called microservices.

Containerization is the process of getting apps ready to run in these containers. When apps are set up this way, they can run on lots of different computers and devices without causing any problems with compatibility.

Advantages of Containers:

  • Containers provide a streamlined method for deploying high-performing, scalable applications, letting teams focus on the code itself.
  • They grant access to dependable hardware and software infrastructure, guaranteeing uniformity across diverse environments.
  • Swift deployment of incremental changes to container images accelerates the development cycle.
  • Containers support the adoption of microservices design patterns, enabling the creation of modular and scalable application structures.

What is a Kubernetes Pod?

In Kubernetes, containers aren't directly hosted on virtual machines; instead, they operate within pods. Pods serve as a mechanism for starting and stopping containers. If containers require direct communication for functionality, they are typically placed within the same pod. Shared storage volumes within pods ensure that data persists across container restarts.

When a pod is created, the platform automatically schedules it onto a node. The pod continues to run until its designated task is completed.

A Kubernetes pod is a group of one or more application containers. It serves as an additional layer of abstraction, offering shared storage (volumes), an IP address, and communication capabilities between containers. Additionally, it includes essential information for running application containers effectively.
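The ideas above can be sketched as a minimal Pod manifest with two containers sharing a volume. The names here (`demo-pod`, `app`, `sidecar`, the `nginx` and `busybox` images, and the `shared-data` volume) are illustrative, not taken from this article:

```yaml
# A hypothetical Pod with two containers sharing an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # illustrative name
spec:
  containers:
    - name: app             # main application container
      image: nginx:1.25     # example image
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar         # helper container placed in the same pod
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}          # shared storage that survives container restarts
```

Both containers share the pod's IP address and the `shared-data` volume, which is what lets tightly coupled containers communicate directly and exchange data.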


What is a Kubernetes Node?

A Kubernetes node represents a virtual or physical machine where one or more Kubernetes pods are executed. It functions as a worker machine that provides the resources pods need to run, such as CPU and memory, along with the services that manage them.

Every node in Kubernetes consists of three essential components:

  • Kubelet: Operating as an agent within each node, Kubelet ensures the proper functioning of pods, facilitating communication between the Master and nodes.
  • Container runtime: This software is responsible for executing containers. It oversees individual containers, handling tasks such as fetching container images from repositories or registries, unpacking them, and running the application.
  • Kube-proxy: Functioning as a network proxy within each node, Kube-proxy manages networking rules within the node (among its pods) and across the entire Kubernetes cluster.

Worker nodes derive their name from their role in running pods, which typically represent single instances of an application. Pods, containing containers, operate within nodes. A Kubernetes cluster always includes at least one worker node.
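Because nodes supply the CPU and memory that pods consume, a pod can declare how much of those resources it needs, and the scheduler uses these requests when choosing a node. A hedged sketch, with illustrative names and numbers:

```yaml
# Hypothetical pod declaring the node resources it needs.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo       # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25     # example image
      resources:
        requests:           # minimum the scheduler reserves on a node
          cpu: "250m"       # a quarter of one CPU core
          memory: "128Mi"
        limits:             # hard cap enforced on the node
          cpu: "500m"
          memory: "256Mi"
```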


What is a Kubernetes Cluster?

Nodes typically collaborate in groups within a Kubernetes cluster, which consists of a collection of Worker Nodes. The workload across the cluster is automatically balanced among these nodes, facilitating smooth scalability.

Additionally, the Kubernetes cluster includes the Kubernetes Control Plane, also known as the Master Node, responsible for managing all nodes within the cluster. Acting as a container orchestration layer, the control plane provides an interface for defining, deploying, and managing container lifecycles via the Kubernetes API.

Master nodes serve as hosts for the K8s control plane components, containing configuration and state data crucial for maintaining the desired operational state. The control plane orchestrates communication with worker nodes to efficiently schedule containers. In a production environment, the control plane spans multiple nodes to ensure redundancy and fault tolerance in case of node failure.

The components of the Kubernetes control plane include:

  • Kube-API Server: This serves as the primary interface for external communication with the cluster, handling all interactions via the API.
  • Kube-Controller-Manager: Responsible for managing a set of controllers that govern various aspects of the cluster's operation, ensuring adherence to desired configurations and policies.
  • Etcd: Serving as the cluster's distributed database, Etcd stores and maintains the state of the entire Kubernetes cluster.
  • Kube Scheduler: Tasked with scheduling activities onto worker nodes based on events stored in Etcd, the scheduler also manages resource allocation to determine the optimal placement of newly scheduled pods on worker nodes.

Characteristics:

  • A cluster typically consists of multiple master nodes.
  • Master nodes are distributed across different availability zones.
  • Only one master node is active at a time, managing the entire set of worker nodes.
  • Inactive master nodes remain on standby, ready to assume control in case the active node becomes unavailable.
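The control plane's declarative model can be illustrated with a Deployment: you state a desired number of pod replicas, and controllers in the Kube-Controller-Manager work continuously to make the actual state match it. All names below are illustrative:

```yaml
# Hypothetical Deployment expressing desired state for the control plane.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy          # illustrative name
spec:
  replicas: 3               # desired state: keep 3 pod replicas running
  selector:
    matchLabels:
      app: web
  template:                 # pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # example image
```

Applying such a manifest goes through the Kube-API Server, the resulting state is recorded in Etcd, the Kube Scheduler places each pod on a worker node, and the Kubelet on that node starts the containers.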
