Kubernetes 101: Kubernetes for Backend Developers

As a backend software developer, you might be familiar with deploying your applications to servers or cloud platforms. However, as your application grows and scales, managing and orchestrating multiple instances of your application can become a complex and time-consuming task. This is where Kubernetes comes into play. Kubernetes is an open-source container orchestration system that simplifies the deployment, scaling, and management of containerized applications.

What is Kubernetes?

The name Kubernetes comes from the Greek word for "helmsman" or "pilot." It was originally developed by Google and later donated to the Cloud Native Computing Foundation (CNCF). Kubernetes provides a platform for automating the deployment, scaling, and management of containerized applications across a cluster of nodes (physical or virtual machines).

Why Kubernetes?

  • Containerization: Kubernetes is designed to work with containerized applications, which package your application code, libraries, and dependencies into a single unit called a container. This ensures consistent and reproducible environments across different platforms.
  • Scalability: Kubernetes allows you to easily scale your application up or down by adding or removing containers based on demand. This helps you optimize resource utilization and manage costs effectively.
  • Self-healing: Kubernetes automatically restarts or replaces failed containers, ensuring high availability and resilience for your applications.
  • Load balancing: Kubernetes automatically distributes traffic across multiple instances of your application, providing load balancing out of the box.
  • Rolling updates: Kubernetes supports rolling updates, allowing you to deploy new versions of your application with zero downtime (see the strategy sketch after this list).
  • Storage orchestration: Kubernetes provides abstraction and management of storage resources, making it easier to persist data and share it across containers.
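
For instance, the pace of a rolling update is controlled through a Deployment's strategy field. A minimal sketch, with hypothetical names and an illustrative image tag:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-rolling
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one extra Pod is created during a rollout
      maxUnavailable: 0   # never drop below the desired replica count
  selector:
    matchLabels:
      app: demo-rolling
  template:
    metadata:
      labels:
        app: demo-rolling
    spec:
      containers:
      - name: demo
        image: demo:v2   # updating this tag triggers a zero-downtime rolling update

With maxUnavailable set to 0, Kubernetes only removes an old Pod once its replacement is ready, which is what makes zero-downtime deploys possible.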

Kubernetes Architecture

[Image: Kubernetes cluster architecture]

The image illustrates the architecture of a Kubernetes cluster, which consists of a master node and worker nodes. The master node acts as the control plane, managing the cluster's operations and making scheduling decisions. It includes components like the API Server, which serves as the entry point for all cluster operations, the Scheduler, responsible for distributing workloads across worker nodes, and the Controller Manager, which ensures the desired state of the cluster is maintained. The etcd component is a distributed key-value store that persists the cluster's configuration data.

On the worker nodes, the Kubelet and Kube-proxy components handle the management and networking of the containers, respectively. The containers themselves are hosted on top of a container runtime, such as Docker, and are organized into Pods, which are the smallest deployable units in Kubernetes. Each worker node can host multiple Pods, each containing one or more containers that share resources and a network namespace.

Kubernetes Hierarchy

[Image: Kubernetes hierarchy diagram]

Cluster: The overall Kubernetes cluster that consists of multiple nodes and manages the deployment and scaling of applications.

Node: A physical or virtual machine that hosts and runs containers as part of the Kubernetes cluster.

Pods: The smallest deployable unit in Kubernetes, consisting of one or more containers that share resources and a network namespace.

Container: A lightweight, standalone, executable package that includes everything needed to run an application, including code, runtime, system tools, and libraries.
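
To make the hierarchy concrete, here is a minimal Pod manifest. This is an illustrative sketch (the name and image are hypothetical); in practice you rarely create bare Pods and instead let a Deployment manage them:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: hello
    image: nginx:1.25   # any container image works here
    ports:
    - containerPort: 80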

Key Kubernetes Concepts

  • Deployments: Describe the desired state of your application and manage the lifecycle of Pods, including rolling updates and rollbacks.
  • Services: Provide a stable endpoint for communicating with a set of Pods, enabling load balancing and service discovery.
  • ConfigMaps and Secrets: Store configuration data and sensitive information separately from your application code (a sample ConfigMap follows this list).
  • Volumes: Persistent storage attached to Pods, allowing data to persist beyond the lifecycle of individual containers.
  • Namespaces: Provide logical isolation and resource partitioning within a single Kubernetes cluster.
  • Ingress: Manages external access to services within the cluster, typically handling load balancing, SSL/TLS termination, and name-based virtual hosting.
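
As referenced above, a ConfigMap is just key/value data that Pods can consume. A minimal sketch (the names and keys are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-webapp-config
data:
  LOG_LEVEL: "info"
  DATABASE_HOST: "db.internal"   # non-sensitive settings only; use a Secret for credentials

A container can then load every key as an environment variable by adding an envFrom entry to its spec:

      envFrom:
      - configMapRef:
          name: my-webapp-config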

Getting Started with Kubernetes

  1. Set up a Kubernetes cluster: You can use a managed Kubernetes service like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or Azure Kubernetes Service (AKS). Alternatively, you can set up a local development cluster using tools like Minikube or Kind.
  2. Learn the Kubernetes CLI (kubectl): The kubectl command-line tool is your primary interface for interacting with Kubernetes. Familiarize yourself with common kubectl commands for managing resources, checking cluster status, and troubleshooting issues (a short cheat sheet follows these steps).
  3. Containerize your application: Package your application and its dependencies into container images (for example, with Docker) that any compatible container runtime can run.
  4. Define Kubernetes resources: Create YAML or JSON files that describe the desired state of your application, including Deployments, Services, ConfigMaps, and other necessary resources.
  5. Deploy your application: Use kubectl to apply your resource definitions and deploy your application to the Kubernetes cluster.
  6. Monitor and manage your application: Use Kubernetes tools and dashboards to monitor the health and performance of your application, and make adjustments as needed.
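
As a starting point for step 2, here are a few everyday kubectl commands (resource names are placeholders):

kubectl get pods                    # list Pods in the current namespace
kubectl describe pod <pod-name>     # inspect a Pod's status and recent events
kubectl logs <pod-name>             # print a container's logs
kubectl apply -f app.yaml           # create or update the resources in a manifest
kubectl delete -f app.yaml          # remove the resources defined in a manifest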

Deployments and Services in K8s

A Deployment in Kubernetes is like a blueprint or a plan that defines how your application should run. It specifies details such as the number of instances (replicas) of your application that should be running, the container image to use, and other configurations.

Once you have defined a Deployment, Kubernetes needs a way to expose your application to the outside world so that users or other services can access it. This is where Services come into play.

Example of a deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-webapp
  template:
    metadata:
      labels:
        app: my-webapp
    spec:
      containers:
      - name: my-webapp
        image: my-webapp:v1
        ports:
        - containerPort: 8080        

  • apiVersion: The Kubernetes API version for the resource.
  • kind: The type of resource being defined (in this case, a Deployment).
  • metadata.name: The name of the Deployment.
  • spec.replicas: The number of replicas (Pods) to be created for the application.
  • spec.selector.matchLabels: Labels used to identify the Pods managed by this Deployment.
  • template.metadata.labels: Labels to be applied to the Pods created by this Deployment.
  • template.spec.containers.name: The name of the container within the Pod.
  • template.spec.containers.image: The Docker image to be used for the container.
  • template.spec.containers.ports.containerPort: The port on which the application listens inside the container.

Example of a service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-webapp-service
spec:
  selector:
    app: my-webapp
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080        

  • apiVersion: The Kubernetes API version for the resource.
  • kind: The type of resource being defined (in this case, a Service).
  • metadata.name: The name of the Service.
  • spec.selector: Labels used to identify the Pods to which the Service should forward traffic.
  • spec.type: The type of Service (in this case, a LoadBalancer service, which exposes the application externally).
  • spec.ports.port: The port on which the Service listens for external traffic.
  • spec.ports.targetPort: The port on the Pods to which the Service forwards traffic.

When you apply these resource definitions to your Kubernetes cluster using kubectl apply -f deployment.yaml -f service.yaml, Kubernetes will create a Deployment with three replicas (Pods) of your web application, and a LoadBalancer Service that exposes the application externally on port 80, forwarding traffic to the Pods on port 8080.
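
After applying, you can confirm the rollout with commands like these (output omitted; on cloud providers the LoadBalancer's external IP can take a minute to appear):

kubectl get deployment my-webapp         # should report 3/3 replicas ready
kubectl get pods -l app=my-webapp        # the three Pods should be Running
kubectl get service my-webapp-service    # shows the external IP once provisioned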

NodePort and ClusterIP

A ClusterIP Service is like an internal gateway or entry point that allows other components within the Kubernetes cluster to communicate with your application. It assigns a stable IP address (the ClusterIP) that other pods or services within the cluster can use to access your application.

However, to reach your application from outside the Kubernetes cluster, you need an externally reachable Service type, such as the LoadBalancer used earlier or a NodePort Service.

A NodePort Service is like a door or a gateway that opens your application to the outside world. It assigns a specific port number (the NodePort) on each node (machine) in the Kubernetes cluster. By accessing this port on any of the nodes, you can reach your application from outside the cluster.
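
For illustration, here is a NodePort variant of the earlier Service; the nodePort value is arbitrary within Kubernetes' default 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: my-webapp-nodeport
spec:
  type: NodePort
  selector:
    app: my-webapp
  ports:
  - port: 80          # the Service's port inside the cluster
    targetPort: 8080  # the container port on the Pods
    nodePort: 30080   # reachable as <any-node-ip>:30080 from outside

Omit spec.type entirely (or set it to ClusterIP) and you get the internal-only ClusterIP behaviour described above.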

In summary, a Deployment defines how your application should run, a ClusterIP Service allows internal components to communicate with your application, and a NodePort Service exposes your application to external access by assigning a port on each node in the cluster.

Conclusion

Kubernetes is a powerful container orchestration system that simplifies the deployment, scaling, and management of modern, cloud-native applications. By understanding its architecture and key concepts and gaining hands-on experience, backend software developers can leverage Kubernetes to build and operate highly available, scalable, and resilient applications.
