Advanced Scenarios for Volume Access Modes & Reclaim Policies in Kubernetes
Bavithran M
Senior Cloud & DevOps Engineer | AWS & Azure Certified | Kubernetes & Automation Advocate | Training | Mentoring | Uplifting Many IT Professionals
To provide deeper insights into Access Modes (RWO, RWX, ROX) and Reclaim Policies (Retain, Delete, Recycle), let's explore three advanced real-world scenarios that illustrate how these configurations can impact Kubernetes storage management.
🔹 Scenario 1: Migrating a Database with Persistent Data (Retain Policy for Disaster Recovery)
📌 Overview
Imagine you are running a PostgreSQL database in Kubernetes with an EBS volume. If the pod crashes, you want the data to persist, and even if the PVC is deleted, the PV should be retained for later recovery.
📌 Use Case:
✅ Ensuring critical databases retain data even if a PVC is accidentally deleted.
✅ Manually reattaching a Persistent Volume (PV) to a new PVC.
Step 1: Create a Storage Class for AWS EBS
Create database-storageclass.yaml:
vi database-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: database-storage
provisioner: ebs.csi.aws.com   # gp3 requires the AWS EBS CSI driver; the legacy in-tree kubernetes.io/aws-ebs provisioner does not support gp3
parameters:
  type: gp3
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Retain
allowVolumeExpansion: true
Apply it:
kubectl apply -f database-storageclass.yaml
✅ Now, Kubernetes will provision AWS EBS storage but retain it even if the PVC is deleted.
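To confirm the policy actually took effect, you can query the StorageClass directly:
kubectl get storageclass database-storage -o jsonpath='{.reclaimPolicy}{"\n"}'
# Expected output: Retain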
Step 2: Deploy PostgreSQL with a Persistent Volume Claim
Create postgres-pvc.yaml:
vi postgres-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: database-storage
Create postgres-deployment.yaml:
vi postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16   # pin a major version instead of latest
          env:
            - name: POSTGRES_DB
              value: "mydatabase"
            - name: POSTGRES_USER
              value: "admin"
            - name: POSTGRES_PASSWORD
              value: "password"   # in production, source this from a Secret
            - name: PGDATA
              value: "/var/lib/postgresql/data/pgdata"   # subdirectory avoids initdb failing on the volume's lost+found
          volumeMounts:
            - mountPath: "/var/lib/postgresql/data"
              name: postgres-storage
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
Apply both:
kubectl apply -f postgres-pvc.yaml
kubectl apply -f postgres-deployment.yaml
✅ Now, PostgreSQL is running with a Persistent Volume using the Retain policy.
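Before testing persistence, it helps to seed some data you can look for later. A quick sketch using psql inside the pod (the official postgres image trusts local socket connections by default; the demo table is just an example):
kubectl exec deploy/postgres -- psql -U admin -d mydatabase -c "CREATE TABLE demo (id int); INSERT INTO demo VALUES (42);"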
Step 3: Simulating Data Persistence After PVC Deletion
Delete the PVC (But Retain the PV)
Scale the deployment down first; a PVC that is still mounted by a pod stays in Terminating until the pod is gone:
kubectl scale deployment postgres --replicas=0
kubectl delete pvc postgres-pvc
Check if the Persistent Volume still exists:
kubectl get pv
Expected Output:
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS
pv-xyz   10Gi       RWO            Retain           Released
✅ The PV is still there, and its status is Released.
Step 4: Reattach the Persistent Volume to a New PVC
Edit the PV and remove the spec.claimRef stanza so the volume returns from Released to Available:
kubectl edit pv pv-xyz
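If you prefer a non-interactive approach, the same claimRef removal works as a JSON patch (pv-xyz stands in for your actual PV name):
kubectl patch pv pv-xyz --type=json -p '[{"op": "remove", "path": "/spec/claimRef"}]'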
Then create a new PVC that binds to the existing PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  storageClassName: database-storage
  volumeName: pv-xyz   # pin the claim to the retained PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Apply:
kubectl apply -f postgres-pvc.yaml
✅ The Persistent Volume is now reattached to a new PVC, ensuring disaster recovery.
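To verify, scale the database back up and check that both the binding and the seeded data survived (pv-xyz and the demo table are the example names used above):
kubectl scale deployment postgres --replicas=1
kubectl get pv pv-xyz   # STATUS should be Bound again
kubectl exec deploy/postgres -- psql -U admin -d mydatabase -c "SELECT * FROM demo;"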
🔹 Scenario 2: Enforcing ReadOnlyMany (ROX) for Backup Storage
📌 Overview
You need centralized storage that multiple applications can mount read-only, for example to share common configuration files.
📌 Use Case:
✅ Multiple applications read a common configuration but must not be able to modify it.
Step 1: Create an NFS PV with ReadOnlyMany (ROX) Mode
Create nfs-pv-rox.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-config-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage   # must match the PVC below, or the claim will never bind
  nfs:
    path: "/config"
    server: "<NFS_SERVER_IP>"
Apply it:
kubectl apply -f nfs-pv-rox.yaml
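Before creating the claim, you can sanity-check that the export is reachable from any machine with NFS client utilities installed:
showmount -e <NFS_SERVER_IP>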
Step 2: Create a PVC That Uses the ROX PV
Create nfs-pvc-rox.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-config-pvc
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-storage
Apply it:
kubectl apply -f nfs-pvc-rox.yaml
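Check that the claim actually bound to the PV (this only happens when the storageClassName values match, as noted in the PV above):
kubectl get pvc shared-config-pvc
# STATUS should be Bound, with VOLUME shared-config-pv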
Step 3: Mount the ReadOnly Volume in Multiple Pods
Create app-pods.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-config-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: config-app
  template:
    metadata:
      labels:
        app: config-app
    spec:
      containers:
        - name: config-reader
          image: busybox
          command: [ "sh", "-c", "cat /config/app.conf; sleep 3600" ]
          volumeMounts:
            - mountPath: "/config"
              name: shared-config
              readOnly: true
      volumes:
        - name: shared-config
          persistentVolumeClaim:
            claimName: shared-config-pvc
Apply:
kubectl apply -f app-pods.yaml
✅ Multiple pods can now read the same configuration but cannot modify it!
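A quick way to prove the mount really is read-only is to attempt a write and watch it fail:
kubectl exec deploy/shared-config-app -- sh -c 'touch /config/test.txt'
# Expected: touch: /config/test.txt: Read-only file system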
🔹 Scenario 3: Automatically Recycling Storage (Recycle Policy - Deprecated but Demonstrated)
📌 Overview
Some workloads generate temporary data that should be erased before the volume is reused. The Recycle policy performs a basic scrub (rm -rf on the volume's contents) and then marks the PV Available again. It is deprecated and only the NFS and hostPath plugins support it, so prefer dynamic provisioning with Delete for new designs, but it is still worth seeing in action.
📌 Use Case:
✅ Ensuring that old data is erased before a new PVC can use the PV.
Step 1: Create a PV with the Recycle Policy
Create recycle-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: temp-storage
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/mnt/recycle"
Apply it:
kubectl apply -f recycle-pv.yaml
Step 2: Bind a PVC to the PV and Test Recycling
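First, bind a claim to the volume. Create recycle-pvc.yaml; storageClassName is left empty so the claim binds statically to temp-storage instead of triggering dynamic provisioning:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: temp-storage-pvc
spec:
  storageClassName: ""   # force static binding to the pre-created PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Apply it:
kubectl apply -f recycle-pvc.yaml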
Now delete the PVC to trigger recycling:
kubectl delete pvc temp-storage-pvc
Check the PV status:
kubectl get pv
Expected Output:
NAME           STATUS      RECLAIM POLICY
temp-storage   Available   Recycle
✅ The storage is automatically wiped and ready for reuse!
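To watch the lifecycle in real time, stream the PV's status while deleting the claim; it should move from Bound through Released to Available once the scrub finishes:
kubectl get pv temp-storage -w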
🚀 Key Takeaways
✅ Retain keeps the PV and its data after the PVC is deleted; clear spec.claimRef to rebind it, which makes it the right choice for databases and disaster recovery.
✅ ReadOnlyMany lets many pods across nodes mount the same volume read-only; make sure the PV and PVC agree on storageClassName so static binding works.
✅ Recycle is deprecated; it scrubs and reuses NFS/hostPath volumes, but new designs should rely on dynamic provisioning with Delete or Retain.
💬 Let’s Discuss!
What Access Modes & Reclaim Policies do you use in production? Have you encountered storage-related challenges in Kubernetes? Let’s discuss in the comments!
Follow Bavithran M for more DevOps, Kubernetes, and cloud-native insights.
Found this useful? Share it with your network!