How to Integrate External Storage with Your Kubernetes Cluster – A Step-by-Step Guide
As organizations scale their Kubernetes environments, ensuring persistent and scalable storage becomes a crucial challenge. Kubernetes offers a flexible storage model, allowing you to integrate external storage solutions such as SAN, NAS, NFS, iSCSI, and cloud storage.
But how do you integrate an external storage system with your existing Kubernetes cluster?
In this guide, we’ll walk through the step-by-step process of setting up external storage for your Kubernetes workloads.
Step 1: Identify the Right Storage Solution
Before integration, determine the best storage type based on your use case:
- Network File System (NFS) – Shared file storage for multiple pods.
- iSCSI (Internet Small Computer System Interface) – Block storage for high-performance needs.
- Cloud Storage (EBS, Azure Disks, GCP Persistent Disks) – Managed storage from cloud providers.
- Storage Area Network (SAN) – Enterprise-grade block storage for large deployments.
- CephFS/GlusterFS – Distributed storage solutions for high availability.
Scenario-based decision: match your workload to one of the options above. Once you decide on the storage type, follow the relevant integration steps.
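It can also help to check what storage capabilities the cluster already exposes before you pick. These are standard kubectl queries; any classes or drivers listed will be specific to your cluster:
kubectl get storageclass   # StorageClasses already defined in the cluster
kubectl get csidrivers     # CSI drivers installed, if any
kubectl get pv             # PersistentVolumes that already exist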
Step 2: Install Storage Dependencies on Kubernetes Nodes
To interact with external storage, install the necessary utilities on all Kubernetes nodes.
Install Storage Utilities (Run on Each Worker Node)
For NFS storage, install the NFS client utilities:
sudo apt update && sudo apt install nfs-common -y # Ubuntu/Debian
sudo yum install nfs-utils -y # RHEL/CentOS
For iSCSI storage, install the iSCSI initiator utilities:
sudo apt update && sudo apt install open-iscsi -y # Ubuntu/Debian
sudo yum install iscsi-initiator-utils -y # RHEL/CentOS
Ensure the iSCSI initiator service is running on each node:
systemctl enable --now iscsid
(For NFS, the nfs-server service runs on the storage server itself; Kubernetes worker nodes only need the client utilities installed above.)
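Before moving on, you can run a quick sanity check on each worker node. This is a minimal sketch; the binaries come from the packages installed above:
systemctl status iscsid --no-pager     # iSCSI initiator daemon should be active
cat /etc/iscsi/initiatorname.iscsi     # shows the node's initiator IQN
mount.nfs -V                           # confirms the NFS mount helper is installed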
Step 3: Connect Kubernetes Nodes to External Storage
Once dependencies are installed, the Kubernetes nodes need to connect to the external storage.
For iSCSI-Based Storage (Run on Each Worker Node):
1. Discover available iSCSI targets:
iscsiadm -m discovery -t sendtargets -p <STORAGE_IP_ADDRESS>
2. Log in to the iSCSI target:
iscsiadm -m node -T <TARGET_NAME> -p <STORAGE_IP_ADDRESS> --login
3. Verify the connected block device:
lsblk
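Optionally, if you want the node to re-establish the iSCSI session automatically after a reboot (a common practice, even though kubelet also logs in on demand for iSCSI-backed volumes), you can mark the target for automatic startup:
iscsiadm -m node -T <TARGET_NAME> -p <STORAGE_IP_ADDRESS> --op update -n node.startup -v automatic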
For NFS-Based Storage (Run on Each Worker Node):
1. Test mounting the NFS share manually:
mkdir -p /mnt/external-storage
mount -t nfs <STORAGE_IP>:/exported/path /mnt/external-storage
2. Verify that the mount is successful:
df -h | grep external-storage
Once verified, unmount it before integrating it with Kubernetes:
umount /mnt/external-storage
Step 4: Define a Persistent Volume (PV) in Kubernetes
A Persistent Volume (PV) allows Kubernetes workloads to access external storage.
Apply all of the YAML manifests in this and the following steps from the master node, or from any machine that has kubectl access to the cluster.
For iSCSI storage, save the following as pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: external-iscsi-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: <STORAGE_IP>
    iqn: <TARGET_IQN>
    lun: 0
    fsType: ext4
For NFS storage, use this pv.yaml instead:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: external-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /exported/path
    server: <STORAGE_IP>
Apply the PV configuration:
kubectl apply -f pv.yaml
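You can then confirm the volume is registered and available for claims (the name below matches whichever manifest you applied):
kubectl get pv                        # STATUS should be Available until a PVC binds it
kubectl describe pv external-nfs-pv   # or external-iscsi-pv, for full details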
Step 5: Create a Persistent Volume Claim (PVC)
A Persistent Volume Claim (PVC) allows Kubernetes workloads to request storage dynamically.
Create a pvc.yaml file as below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: external-storage-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
Apply the PVC configuration:
kubectl apply -f pvc.yaml
Check if the PVC is bound to the PV:
kubectl get pvc
The PVC should be in the Bound state. (For binding, the PV must offer every access mode the claim requests, so request ReadWriteMany here if you are binding to the NFS PV above.)
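If the claim binds to the wrong volume, or a default StorageClass in the cluster starts provisioning a new volume instead, you can pin the claim to the static PV explicitly. This is a sketch of the same claim using the standard spec.volumeName and an empty storageClassName, reusing the names from the manifests above:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: external-storage-pvc
spec:
  accessModes:
    - ReadWriteMany            # must be offered by the target PV (the NFS PV above)
  storageClassName: ""         # disables dynamic provisioning for this claim
  volumeName: external-nfs-pv  # bind to this specific PV
  resources:
    requests:
      storage: 50Gi
Because these binding fields cannot be changed on an existing PVC, you would delete and recreate the claim with this spec.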
Step 6: Use the PVC in a Kubernetes Pod
Now, let’s create a pod that uses external storage via the PVC. Create pod.yaml file as below:
apiVersion: v1
kind: Pod
metadata:
  name: external-storage-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: "/mnt/data"
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: external-storage-pvc
Deploy the Pod:
kubectl apply -f pod.yaml
Verify that the storage is mounted inside the container:
kubectl exec -it external-storage-pod -- df -h
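As a quick end-to-end check, you can also write a test file through the pod and confirm it lands on the external storage (the file name is just an example; anything under the mount path works):
kubectl exec -it external-storage-pod -- sh -c 'echo "hello from k8s" > /mnt/data/test.txt'
kubectl exec -it external-storage-pod -- ls -l /mnt/data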
Step 7: Automate Storage Provisioning with StorageClass
Instead of manually creating PVs, you can enable dynamic provisioning with a StorageClass.
For iSCSI storage, note that the in-tree kubernetes.io/iscsi plugin does not perform dynamic provisioning, so the StorageClass must point at a provisioner that does – typically the CSI driver supplied by your storage vendor. The manifest below is a template: the provisioner name and the parameters block must match what your driver documents, and the fields shown are placeholders rather than a universal API.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iscsi-storage
provisioner: <YOUR_ISCSI_CSI_DRIVER>   # the driver name published by your storage vendor
parameters:
  # driver-specific settings; many drivers take the target portal and IQN here
  targetPortal: <STORAGE_IP>
  iqn: <TARGET_IQN>
  lun: "0"
  fsType: ext4
Then, reference it in the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-external-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: iscsi-storage
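After applying the class and the claim (for example as storage-class.yaml and dynamic-pvc.yaml, the file names used in the summary below), you can watch the claim get provisioned and bound:
kubectl apply -f storage-class.yaml -f dynamic-pvc.yaml
kubectl get storageclass iscsi-storage
kubectl get pvc dynamic-external-pvc -w   # with some drivers the claim stays Pending until a pod uses it (WaitForFirstConsumer)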
Step 8: Troubleshooting & Debugging
Check whether the PVC is bound to the PV:
kubectl get pvc
Check iSCSI/NFS connectivity:
iscsiadm -m session # For iSCSI
showmount -e <STORAGE_IP> # For NFS
Debug storage inside the Pod:
kubectl exec -it external-storage-pod -- df -h
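If a claim stays Pending or a pod fails to start, the describe output and recent events usually point to the cause (wrong IQN, unreachable portal, export permissions, and so on):
kubectl describe pvc external-storage-pvc
kubectl describe pod external-storage-pod
kubectl get events --sort-by=.metadata.creationTimestamp | tail -20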
Summary – Where to Run What?
1. Install iSCSI/NFS utilities – Run these commands on each worker node to ensure they can communicate with the external storage.
2. Connect to iSCSI/NFS storage – Run these commands on each worker node to discover and mount the storage before integrating it with Kubernetes.
3. Apply pv.yaml, pvc.yaml, and pod.yaml – Run these kubectl apply commands on the Kubernetes master node or from any machine that has kubectl access to the cluster.
4. Apply storage-class.yaml and dynamic-pvc.yaml (optional for dynamic provisioning) – Run these on the master node or kubectl machine to enable automated storage provisioning.
5. Verify mounts and storage inside the pod – Use kubectl exec from the master node or any system with kubectl access to check that the storage is properly mounted inside the pod.
By following these steps, you can successfully integrate external storage into your Kubernetes cluster and ensure persistent storage for your applications.
Key Takeaways:
- Choose the right storage type based on workload needs.
- Install necessary dependencies on worker nodes.
- Define PVs & PVCs to enable storage access.
- Use StorageClass for dynamic provisioning.
- Monitor and troubleshoot storage integration.
What’s your biggest challenge with Kubernetes storage? Drop your thoughts in the comments!
#Kubernetes #Storage #DevOps #CloudNative #PersistentStorage #K8s