Kubernetes - ConfigMaps and Secrets
ConfigMaps store non-sensitive configuration data, while Secrets hold sensitive information such as passwords, access keys, and tokens.
To understand the difference in practice, we will implement both for a DEV and a STAGE environment.
To install a kind cluster, see this article: https://www.dhirubhai.net/pulse/kubernetes-install-kind-create-multi-node-cluster-kiran-kulkarni-2ya8c/
Step 1: Create the namespaces
kubectl create namespace dev
kubectl create namespace stage
Step 2: Create an AWS S3 bucket for testing.
Dev Environment:
Step 1: Create aws-config-dev.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-config
  namespace: dev
data:
  aws-s3-script.sh: |
    #!/bin/sh
    echo "Running in DEV environment"
    echo "Listing S3 buckets in region: $AWS_REGION"
    aws s3 ls --debug  # Enable debug mode for development
    echo "DEV Buckets:"
    aws s3 ls  # List dev buckets
Step 2: Create aws-secret-dev.yaml
Note: All values under data in a Secret (passwords, keys, etc.) must be base64-encoded. The placeholders in the manifest stand for those encoded values.
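To produce the encoded values, the standard `base64` utility is enough. A minimal sketch, using the `us-east-1` region value from the example comment below (your real keys would be encoded the same way):

```shell
# Encode a value for the Secret's data section (-n avoids encoding a trailing newline)
echo -n 'us-east-1' | base64
# prints: dXMtZWFzdC0x

# Decode it back to verify the round trip
echo -n 'dXMtZWFzdC0x' | base64 -d
# prints: us-east-1
```

Alternatively, `kubectl create secret generic aws-secret -n dev --from-literal=aws-region=us-east-1 ...` encodes the values automatically, so you never handle base64 yourself.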
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
  namespace: dev
type: Opaque
data:
  aws-access-key-id: <BASE64_ENCODED_DEV_ACCESS_KEY_ID>  # Replace with actual base64 value
  aws-secret-access-key: <BASE64_ENCODED_DEV_SECRET_ACCESS_KEY>  # Replace with actual base64 value
  aws-region: <BASE64_ENCODED_DEV_REGION>  # e.g., us-east-1
Step 3: Create the AWS CLI Pod manifest aws-deploy-dev.yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-s3-pod
  namespace: dev
spec:
  containers:
    - name: aws-cli
      image: amazon/aws-cli
      command: ["/bin/sh", "-c", "/scripts/aws-s3-script.sh"]
      env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-secret
              key: aws-access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-secret
              key: aws-secret-access-key
        - name: AWS_REGION
          valueFrom:
            secretKeyRef:
              name: aws-secret
              key: aws-region
      volumeMounts:
        - name: script-volume
          mountPath: "/scripts"
  volumes:
    - name: script-volume
      configMap:
        name: aws-config
        defaultMode: 0744
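The `defaultMode: 0744` setting gives the mounted script rwxr--r-- permissions, so the owner can execute it directly; without an execute bit, the pod's `sh -c "/scripts/aws-s3-script.sh"` command would fail. A local sketch of the same permission bits, using a temporary file (the `/tmp/scripts-demo` path and echo text are illustrative, not from the cluster):

```shell
# Simulate the mounted script's permissions locally
mkdir -p /tmp/scripts-demo
printf '#!/bin/sh\necho "hello from script"\n' > /tmp/scripts-demo/aws-s3-script.sh
chmod 0744 /tmp/scripts-demo/aws-s3-script.sh  # same bits as defaultMode: 0744

# The file is executable by its owner, so it can be run directly,
# just as the container runs /scripts/aws-s3-script.sh
[ -x /tmp/scripts-demo/aws-s3-script.sh ] && /tmp/scripts-demo/aws-s3-script.sh
# prints: hello from script
```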
Step 4: Apply aws-secret-dev.yaml, aws-config-dev.yaml & aws-deploy-dev.yaml
kubectl apply -f aws-secret-dev.yaml
kubectl apply -f aws-config-dev.yaml
kubectl apply -f aws-deploy-dev.yaml
Step 5: Check the logs of the pod
kubectl logs aws-s3-pod -n dev
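If the pod has not produced any logs yet, a few standard kubectl commands help pinpoint why; a troubleshooting sketch, assuming the dev resources above were applied to your cluster:

```shell
# Check whether the pod completed, is still pulling the image, or is crashing
kubectl get pod aws-s3-pod -n dev

# Inspect events (image pull errors, missing Secret/ConfigMap keys, etc.)
kubectl describe pod aws-s3-pod -n dev

# Confirm the Secret and ConfigMap the pod references exist in the namespace
kubectl get secret aws-secret -n dev
kubectl get configmap aws-config -n dev
```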
We can follow the same steps for the STAGE environment.
Stage Environment:
Step 1: Create aws-config-stage.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-config
  namespace: stage
data:
  aws-s3-script.sh: |
    #!/bin/sh
    echo "Running in STAGE environment"
    echo "Listing S3 buckets in region: $AWS_REGION"
    aws s3 ls
    echo "STAGE Buckets:"
    aws s3 ls  # List stage buckets
Step 2: Create aws-secret-stage.yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
  namespace: stage
type: Opaque
data:
  aws-access-key-id: <BASE64_ENCODED_STAGE_ACCESS_KEY_ID>  # Replace with actual base64 value
  aws-secret-access-key: <BASE64_ENCODED_STAGE_SECRET_ACCESS_KEY>  # Replace with actual base64 value
  aws-region: <BASE64_ENCODED_STAGE_REGION>  # e.g., us-west-2
Step 3: Create the AWS CLI Pod manifest aws-deploy-stage.yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-s3-stage
  namespace: stage
spec:
  containers:
    - name: aws-cli
      image: amazon/aws-cli
      command: ["/bin/sh", "-c", "/scripts/aws-s3-script.sh"]
      env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-secret
              key: aws-access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-secret
              key: aws-secret-access-key
        - name: AWS_REGION
          valueFrom:
            secretKeyRef:
              name: aws-secret
              key: aws-region
      volumeMounts:
        - name: script-volume
          mountPath: "/scripts"
  volumes:
    - name: script-volume
      configMap:
        name: aws-config
        defaultMode: 0744
By decoupling configuration from application code, ConfigMaps and Secrets enable greater flexibility, scalability, and security: the same container image runs unchanged in every environment, only the per-namespace ConfigMap and Secret differ, and sensitive values stay out of both the image and the pod manifest.