Local Disk Storage for Kubernetes

Introduction

For high-performance applications, efficient disk management is crucial to meeting growing data-access demands. The local-path-provisioner is a lightweight storage provisioner for Kubernetes that puts otherwise underutilized local disks on worker nodes to work, improving application performance.

Unlike network-attached cloud storage, which can add latency, the local-path-provisioner serves data from the node's own disks. It integrates directly into Kubernetes clusters and dynamically creates and manages local volumes, reducing latency and improving throughput. In this article, we look at its Kubernetes-centric architecture, how to deploy it, and its benefits for high-performance use cases.

Installation

To install, simply run the following command:

$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml        
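After applying the manifest, you can verify the installation. Assuming the stock manifest above (which creates a `local-path-storage` namespace and a `local-path` StorageClass), the following checks should show a running provisioner pod and the new storage class:

```shell
# Confirm the provisioner pod is running
kubectl -n local-path-storage get pods

# Confirm the StorageClass was created
kubectl get storageclass local-path
```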

Usage

You only need to define a PersistentVolumeClaim (PVC). In the example below, this is done indirectly through a StatefulSet's volumeClaimTemplates field:

# test-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: nginx-www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: nginx-www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi        

In the file above, I've defined a StatefulSet with a volumeClaimTemplates field, which automatically creates a PVC for each replica. You can apply it with the following kubectl command:

$ kubectl apply -f ./test-statefulset.yaml        
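As an alternative to volumeClaimTemplates, the same storage can be requested with a standalone PVC that names the storage class explicitly. A minimal sketch (the claim name here is illustrative):

```yaml
# test-pvc.yaml — hypothetical standalone claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```

Any pod that mounts this claim will get a local-path volume on the node where it is scheduled.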

You can view the created PVC using the following command:

$ kubectl get pvc
NAME        STATUS   VOLUME      CAPACITY  ACCESS MODES  STORAGECLASS  AGE
nginx-www   Bound    pvc-xxxxx   1Gi       RWO           local-path    161m        

As the output shows, the PVC is bound through a storage class named local-path. Note that with the local-path-provisioner the CAPACITY value is not enforced: the volume is simply a directory on the node's disk, so the usable space depends on the node's actual capacity rather than the requested size.
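Because the volumeClaimTemplates in the example does not set storageClassName, the cluster's default storage class is used. If local-path is not already the default in your cluster, you can mark it as such using the standard Kubernetes annotation (requires cluster-admin permissions):

```shell
# Mark the local-path StorageClass as the cluster default
kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```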

The local-path-provisioner automatically generates a PersistentVolume (PV) on your behalf, so there is no need to create one manually. To inspect the PVs, use the following command:

$ kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM      STORAGECLASS   REASON   AGE
pvc-xxxxx   1Gi        RWO            Delete           Bound    nginx-www  local-path              160m        

As shown above, the PV has been claimed by the nginx-www PVC, confirming that the PV and PVC are bound.

$ kubectl get pods -o wide
NAME      READY   STATUS   RESTARTS  AGE     IP            NODE            NOMINATED NODE   READINESS GATES
nginx-0   1/1     Running  0         161m    10.42.0.237   192.168.1.15    <none>           <none>        

As shown above, the nginx-0 pod was scheduled on the worker node with IP address 192.168.1.15, so the storage volume is created on that node. Let's now explore the contents of the volume. (In this cluster the local-path-provisioner creates volumes under /var/volumes/; note that the project's stock default is /opt/local-path-provisioner, and the location is configurable.)

$ ssh [email protected]
BTL-0081:~ #
BTL-0081:~ # cd /var/volumes/pvc-xxxxx/
BTL-0081:~ # ls -lth
total 8
drwxrwxrwx 2 10001 10001 4096 Aug  8 13:57 ./
drwxr-xr-x 8 root  root  4096 Aug 10 08:03 ../        

As shown above, a directory has been created under /var/volumes/. Its name matches the generated PV (pvc-xxxxx), which the provisioner derives from the PVC.
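The volume location can be changed through the provisioner's ConfigMap (named local-path-config in the local-path-storage namespace in the stock manifest). A sketch, assuming the stock deployment, that moves volumes to /var/volumes as used in this article:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/var/volumes"]
        }
      ]
    }
```

New volumes pick up the changed path after the provisioner reloads its configuration; existing volumes stay where they were created.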

How does it work?

When you create a PVC, the following steps will occur:

(Diagram: Local Path Provisioner workflow)

  • The local-path-provisioner continuously watches for newly created PVCs.
  • A volume (a plain directory) is created on the worker node where the pod is scheduled (under /var/volumes/ in this setup; the location can be customized).
  • The local-path-provisioner then generates a PV for the volume.
  • The volume is attached to the PV.
  • The PV is bound to the corresponding PVC.
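The steps above can be sketched as a simplified, in-memory simulation. This is not the provisioner's actual code (which runs as a Kubernetes controller in Go); it only illustrates the PVC → host directory → PV → binding sequence:

```python
# Simplified, in-memory sketch of the provisioning steps above.
# NOT the real provisioner: names and structures here are illustrative.
import os
import tempfile


def provision(pvc_name: str, node_root: str) -> dict:
    """Simulate what the local-path-provisioner does for one new PVC."""
    # 1. A new PVC is observed (here, passed in directly).
    # 2. Create a host-path directory for the volume on the chosen node.
    volume_path = os.path.join(node_root, f"pvc-{pvc_name}")
    os.makedirs(volume_path, exist_ok=True)

    # 3. + 4. Generate a PV object pointing at that directory.
    pv = {
        "name": f"pvc-{pvc_name}",
        "hostPath": volume_path,
        "status": "Available",
        "claim": None,
    }

    # 5. Bind the PV to the PVC.
    pv["claim"] = pvc_name
    pv["status"] = "Bound"
    return pv


if __name__ == "__main__":
    root = tempfile.mkdtemp()
    pv = provision("nginx-www", root)
    print(pv["status"], pv["claim"])  # Bound nginx-www
```

The real controller additionally waits for the pod to be scheduled (volumeBindingMode: WaitForFirstConsumer) so it knows which node should host the directory.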
