Day 41 : Kubernetes Introduction #90DaysofDevOps
Ayushi Tiwari
Java Software Developer | Certified Microsoft Technology Associate (MTA)
Kubernetes is the most popular orchestrator for deploying and scaling containerized systems.
In this article, you’ll learn what Kubernetes can do and how to get started running your own containerized solutions.
What is Kubernetes?
Kubernetes is an open-source system that automates container deployment tasks. It was originally developed at Google but is now maintained as part of the Cloud Native Computing Foundation (CNCF).
Kubernetes has risen to prominence because it solves many of the challenges around using containers in production. It makes it easy to launch limitless container replicas, distribute them across multiple physical hosts, and set up networking so users can reach your service.
Most developers begin their container journey with Docker. While this is a comprehensive tool, it’s relatively low-level and relies on CLI commands that interact with one container at a time. Kubernetes provides much higher-level abstractions for defining applications and their infrastructure using declarative schemas you can collaborate on.
Kubernetes Features
Kubernetes has a comprehensive feature set that covers the full spectrum of capabilities for running containers and associated infrastructure: automated rollouts and rollbacks, service discovery and load balancing, self-healing of failed containers, horizontal scaling, storage orchestration, and secret and configuration management.
With so many features available, Kubernetes is ideal for any situation where you want to deploy containers with declarative configuration.
How Kubernetes Works
Kubernetes has a reputation for complexity because it has several moving parts. Understanding the basics of how they fit together will help you start out on your Kubernetes journey.
A Kubernetes environment is termed a cluster. It includes one or more nodes. A node is simply a machine that will run your containers. It could be physical hardware or a VM.
As well as nodes, the cluster also has a control plane. The control plane coordinates the entire cluster’s operations. It schedules new containers onto available nodes and provides the API server that you interact with. It’s possible to run a cluster with multiple control plane instances to create a highly available setup with greater resiliency.
Here are the most important Kubernetes components:
kube-apiserver - The API server that all clients and other components communicate with.
etcd - A distributed key-value store that holds the cluster's state and configuration.
kube-scheduler - Assigns newly created Pods to suitable nodes.
kube-controller-manager - Runs the controllers that reconcile the cluster toward its desired state.
kubelet - The agent on each node that starts containers and reports their status.
kube-proxy - Maintains the network rules on each node that route traffic to Pods.
Container runtime - The software, such as containerd, that actually executes your containers.
Kubectl is usually the final piece in a functioning Kubernetes environment. You’ll need this CLI to interact with your cluster and its objects. Once your cluster’s set up, you can also install the official dashboard or a third-party solution to control Kubernetes from a GUI.
Installation and Setup
There are many different ways to get started with Kubernetes because of the range of distributions on offer. Creating a cluster using the official distribution is relatively involved, so most people use a packaged solution like Minikube, MicroK8s, K3s, or Kind.
Check out how to install Kubernetes using these four different methods.
We’ll use K3s for this tutorial. It’s an ultra-lightweight Kubernetes distribution that bundles all the Kubernetes components into a single binary. Unlike other options, there are no dependencies to install or heavy VMs to run. It also includes the Kubectl CLI that you’ll use to issue Kubernetes commands.
Running the following command will install K3s on your machine:
$ curl -sfL https://get.k3s.io | sh -
...
[INFO] systemd: Starting k3s
It automatically downloads the latest available Kubernetes release and registers a system service for K3s.
After installation, run the following command to copy the auto-generated Kubectl config file into your .kube directory:
$ mkdir -p ~/.kube
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
$ sudo chown $USER:$USER ~/.kube/config
Now tell K3s to use this config file by running the following command:
$ export KUBECONFIG=~/.kube/config
You can add this line to your ~/.profile or ~/.bashrc file to automatically apply the change each time you log in.
Next run this command:
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE    VERSION
ubuntu22   Ready    control-plane,master   102s   v1.24.4+k3s1
You should see a single node appear, named with your machine’s hostname. The node shows as Ready, so your Kubernetes cluster can now be used!
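If you’d like an extra sanity check, you can also list the Pods that K3s starts for its own system services; once they all report a Running or Completed status, the cluster is fully operational:

$ kubectl get pods --all-namespaces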
Kubernetes Basic Terms and Concepts
Your cluster’s running, but what can you do with it? It’s worth getting familiar with some key Kubernetes terms before you continue.
Nodes
Nodes represent the physical machines that form your Kubernetes cluster. They run the containers you create. Kubernetes tracks the status of your nodes and exposes each one as an object. You used Kubectl to retrieve a list of nodes in the example above.
While your fresh cluster has only one node, Kubernetes advertises support for up to 5,000 nodes. It’s theoretically possible to scale even further.
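You can inspect any node in more detail with Kubectl. The node name below is taken from the earlier example output; substitute your own machine’s hostname:

$ kubectl describe node ubuntu22

The output includes the node’s capacity, the resources currently allocated, and the Pods scheduled onto it.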
Namespaces
Namespaces isolate different groups of resources. They avoid name collisions by scoping the visibility of your resources.
Creating two objects with the same name is forbidden within the same namespace. If you’re in the default namespace, you can’t create two Pods that are both called database, for example. Namespaces resolve this by providing logical separation of resources. Two namespaces called app-1 and app-2 could each contain a Pod called database without causing a conflict.
Namespaces are flexible and can be used in many different ways. It’s a good idea to create a namespace for each workload in your cluster. You can also use namespaces to divide resources between users and teams by applying role-based access control.
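As a quick illustration, the namespace names below are arbitrary; after creating them, a Pod called database could exist in each one without a conflict:

$ kubectl create namespace app-1
$ kubectl create namespace app-2
$ kubectl get namespaces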
Pods
Pods are the fundamental compute unit in Kubernetes. A Pod is analogous to a container but with some key differences: a Pod can contain multiple containers, all of which share a context. The entire Pod will always be scheduled onto the same node. The containers within a Pod are tightly coupled, so you should create a new Pod for each distinct part of your application, such as its API and its database.
In simple situations, Pods will usually map one-to-one with the containers your application runs. In more advanced cases, Pods can be enhanced with init containers and ephemeral containers to customize startup behavior and provide detailed debugging.
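Here’s a minimal sketch of starting, listing, and removing a standalone Pod; the Pod name and image are placeholder choices:

$ kubectl run nginx-pod --image=nginx:latest
$ kubectl get pods
$ kubectl delete pod nginx-pod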
ReplicaSets
ReplicaSets are used to consistently replicate a Pod. They provide a guarantee that a set number of replicas will be running at any time. If a node goes offline or a Pod becomes unhealthy, Kubernetes will automatically schedule a new Pod instance to maintain the specified replica count.
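You’ll rarely create a ReplicaSet directly, because Deployments manage them for you, but a minimal manifest looks roughly like this; the name, labels, and image are placeholders:

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-replicaset
spec:
  replicas: 3            # keep three identical Pods running at all times
  selector:
    matchLabels:
      app: web           # manage any Pod carrying this label
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:latest
EOF

Deleting one of the three Pods will cause the ReplicaSet to immediately schedule a replacement.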
Deployments
Deployments wrap ReplicaSets with support for declarative updates and rollbacks. They’re a higher level of abstraction that’s easier to control.
A Deployment object lets you specify the desired state of a set of Pods, including the number of replicas to run. When you modify the Deployment, Kubernetes automatically detects the required changes and scales the underlying ReplicaSet accordingly. You can also pause a rollout or revert to an earlier revision, features that aren’t available with plain ReplicaSets.
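As a sketch, the commands below create a Deployment with a placeholder name and image, scale it, update the image, and roll the change back:

$ kubectl create deployment web --image=nginx:1.23
$ kubectl scale deployment web --replicas=3            # run three replicas
$ kubectl set image deployment/web nginx=nginx:1.25    # trigger a rolling update
$ kubectl rollout undo deployment/web                  # revert to the previous revision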
Services
Kubernetes Services are used to expose Pods to the network. They provide a stable endpoint for accessing a set of Pods, either from within your cluster or externally.
Ingresses are closely related objects. These are used to set up HTTP routes to services via a load balancer. Ingresses also support HTTPS traffic secured by TLS certificates.
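Continuing the hypothetical web Deployment from above, you could expose it inside the cluster like this; use a LoadBalancer type or an Ingress when you need external access:

$ kubectl expose deployment web --port=80 --type=ClusterIP
$ kubectl get services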
Jobs
A Kubernetes Job is an object that creates a set of Pods and waits for them to terminate. It will retry any failed Pods until a specified number have exited successfully. The Job’s then marked as complete.
Jobs provide a mechanism for running ad-hoc tasks inside your cluster. Kubernetes also provides CronJobs that wrap Jobs with cron-like scheduling support. These let you automatically run a job on a regular cadence to accommodate batch activities, backups, and any other scheduled tasks your application requires.
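Here’s an illustrative example with placeholder names: the first command runs a one-off task, while the second schedules the same task to run at 2 AM every day:

$ kubectl create job demo-job --image=busybox -- echo "Hello from a Job"
$ kubectl create cronjob demo-cron --image=busybox --schedule="0 2 * * *" -- echo "Nightly task"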
Volumes
Volumes mount external filesystem storage inside your Pods. They abstract away the differences between different cloud providers’ storage implementations.
Volumes can be shared between your Pods. This allows Kubernetes to run stateful applications where data must be preserved after a Pod gets terminated or rescheduled. You’ll need to use a volume whenever you’re running a database or file server in your cluster.
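As an example, the manifest below requests 1 Gi of persistent storage through a PersistentVolumeClaim; the claim name and size are placeholders, and the storage that backs it depends on your cluster (K3s ships with a default local-path provisioner):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce      # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
EOF

The claim can then be referenced from a Pod’s volumes section and mounted into its containers.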
Secrets and ConfigMaps
Secrets are used to inject sensitive data into your cluster, such as API keys, certificates, and other kinds of credentials. They can be supplied to Pods as environment variables or as files mounted into a volume.
ConfigMaps are a similar concept for non-sensitive information. These objects should store any general settings your app requires.
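A quick sketch with placeholder names and values:

$ kubectl create secret generic api-credentials --from-literal=API_KEY=abc123
$ kubectl create configmap app-settings --from-literal=LOG_LEVEL=info
$ kubectl get secrets,configmaps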
DaemonSets
Kubernetes DaemonSets are used to reliably run a copy of a Pod on each of the Nodes in your cluster. When a new Node joins, it will automatically start an instance of the Pod. You can optionally restrict DaemonSet Pods to only running on specific Nodes in more advanced situations.
DaemonSets are useful when you’re adding global functionality to your cluster. DaemonSets are often used to run monitoring services and log aggregation agents. Placing these workloads into a DaemonSet guarantees they’ll always be running adjacent to your application’s Pods. It ensures metrics and logs will be collected irrespective of the Node a Pod gets scheduled to.
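As a minimal sketch, the DaemonSet below runs a placeholder busybox container on every node; a real deployment would use an actual monitoring or log-collection agent image instead:

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: busybox
          # placeholder long-running process standing in for a real agent
          command: ["sh", "-c", "while true; do sleep 3600; done"]
EOF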
Networking Policies
Kubernetes supports a policy-based system for controlling network traffic between the Pods in your cluster.
Network policies are expressed as objects that target one or more matching Pods. Each Pod can be the subject of both ingress and egress policies: ingress policies define which incoming traffic is allowed, while egress policies affect outbound flows. Once a Pod is selected by a policy, only the traffic that a policy explicitly allows is permitted, so traffic between two Pods must be allowed by the ingress rules on the destination and the egress rules on the source.
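As a rough example, the policy below only allows ingress to Pods labeled app: database from Pods labeled app: api; all names and labels here are illustrative, and enforcement depends on your cluster’s network plugin supporting NetworkPolicies:

$ cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-database
spec:
  podSelector:
    matchLabels:
      app: database      # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api   # only these Pods may connect
EOF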