Integrating Prometheus & Grafana on K8s While Keeping Their Data Persistent
You may have wondered how to keep the data of your monitoring system, with Prometheus and Grafana running as pods on Kubernetes, persistent, so that even if you have to start from scratch for any reason, you can continue from the same state where you left off. Let's have a brief overview of what we are going to achieve in this article. Our main goals are :-
- Deploy Prometheus and Grafana as pods on top of Kubernetes
- Make their data persistent, including the configuration files
- Expose both Prometheus and Grafana to the outside world
To keep the article short and simple, we'll create a single yml file for the whole Prometheus configuration, and the same goes for Grafana. So, here is the code for our prometheus.yml file :-
apiVersion: v1
kind: ConfigMap
metadata:
  name: prom-config
  labels:
    app: prometheus
  namespace: default
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']
      - job_name: 'apache'
        static_configs:
          - targets: ['104.211.161.190:9117']
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prom-pvc
  labels:
    app: prometheus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prom-deploy
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      volumes:
        - name: prom-pvc
          persistentVolumeClaim:
            claimName: prom-pvc
        - name: prom-config
          configMap:
            name: prom-config
      containers:
        - name: prom-pod
          image: prom/prometheus
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prom-pvc
              mountPath: /etc/prometheus
            - name: prom-config
              mountPath: /etc/prometheus/prometheus.yml
              subPath: prometheus.yml
---
apiVersion: v1
kind: Service
metadata:
  name: prom-svc
  labels:
    app: prometheus
spec:
  selector:
    app: prometheus
  type: LoadBalancer
  ports:
    - port: 9090
      targetPort: 9090
We have started with the creation of a ConfigMap. This resource lets us write configuration for the conf files of any program, like MySQL, Apache etc. Here we have written the scrape configuration for the systems and servers we want to monitor, and because this data lives in the ConfigMap rather than inside the pod, the scrape jobs are configured automatically whenever a new pod is created. We are going to monitor our own system running Prometheus and an Apache web server running on a different system.
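If you want a quick sanity check that the scrape configuration landed correctly, you can print the ConfigMap back out after creating it:
kubectl get configmap prom-config -o yaml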
The next resource we created is the PersistentVolumeClaim. It lets us permanently store the data of whichever directory we mount it on, so any new pod that mounts the same PVC on the same directory automatically sees the same data.
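Before relying on the claim, it is worth checking that it has actually been bound to a volume. This article assumes a cluster with a default StorageClass (as on Azure), where binding happens automatically:
kubectl get pvc prom-pvc
# STATUS should show "Bound"; "Pending" usually means no default StorageClass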
Next on our list comes the Deployment. This resource keeps an eye on the pods, monitors them through a ReplicaSet, maintains the number of replicas we desire, and automatically restarts any pod that goes down. We have mounted our PVC on the directory /etc/prometheus so that any data stored in this directory becomes persistent, and we have mounted our ConfigMap on prometheus.yml, which is the configuration file for Prometheus. Since our prometheus.yml lives in that same directory, we could have relied on the PVC mount alone, but there may be cases where the configuration file sits in a different directory and only that one file needs to be made persistent, so we have shown both ways.
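To double-check that both mounts ended up where we expect, we can list the directory inside the running container (kubectl exec accepts a deployment name, so there is no need to look up the pod first):
kubectl exec deploy/prom-deploy -- ls -l /etc/prometheus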
The Concept of subPath, How It Differs from mountPath and Why It Is Needed :-
mountPath in K8s lets us mount a volume or ConfigMap onto any directory we want, but mounting over a directory hides all the existing files that are not part of the ConfigMap or volume, and those files may be required for the program to work smoothly. That is where subPath comes in: it projects only the single file mentioned in the ConfigMap onto the corresponding path, making that file persistent while leaving the rest of the directory untouched.
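A minimal sketch of the difference, using the volume from our manifest:

# Without subPath: the whole /etc/prometheus directory is replaced by the
# ConfigMap's contents, and any other files in it disappear from the container.
volumeMounts:
  - name: prom-config
    mountPath: /etc/prometheus

# With subPath: only prometheus.yml is projected into the directory;
# everything else under /etc/prometheus stays visible.
volumeMounts:
  - name: prom-config
    mountPath: /etc/prometheus/prometheus.yml
    subPath: prometheus.yml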
The next and last resource created from our yml file is the Service. It decides whether our pods or deployment are exposed to the outside world or not. I have used the LoadBalancer service type because my cluster is running on Azure, as my local machine can't handle it. If you are running on a local machine, you can use the NodePort type instead, adding a nodePort field alongside targetPort. We have to expose it to the outside world so that we can access it from any network whenever we want. Prometheus runs on port 9090, which is why we have exposed that port.
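For reference, a NodePort variant of the same service might look like this (the value 30090 is just an example; it must fall inside the cluster's NodePort range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: prom-svc
  labels:
    app: prometheus
spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30090   # example value; omit it to let Kubernetes pick one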
kubectl create -f prometheus.yml
Run the above command to create all the Prometheus resources discussed above.
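Optionally, you can wait for the deployment to come up before moving on:
kubectl rollout status deployment/prom-deploy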
Now, we are going to do the same for Grafana. The code for grafana.yml is shown below :-
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: graf-pvc
  labels:
    app: grafana
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graf-deploy
  labels:
    app: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        runAsUser: 472
        fsGroup: 472
      volumes:
        - name: graf-pvc
          persistentVolumeClaim:
            claimName: graf-pvc
      containers:
        - name: graf-pod
          image: grafana/grafana
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: graf-pvc
              mountPath: /var/lib/grafana
---
apiVersion: v1
kind: Service
metadata:
  name: graf-svc
  labels:
    app: grafana
spec:
  selector:
    app: grafana
  type: LoadBalancer
  ports:
    - port: 3000
      targetPort: 3000
Here we don't need any pre-configuration, so there is no ConfigMap for Grafana; we are going to do everything from the graphical user interface. Otherwise we have followed the same process: a PVC, a Deployment and a Service.
We have mounted the PVC on the directory /var/lib/grafana to make everything inside this directory persistent.
In the Deployment, we have specified a user id because the default user in the image doesn't have permission to read and write inside the directory where we have mounted the PVC, and without that the volume wouldn't be of any use to us.
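The relevant snippet from the manifest above, with the reasoning spelled out (472 is the uid the official Grafana image runs as):

securityContext:
  runAsUser: 472   # uid of the grafana user inside the official image
  fsGroup: 472     # makes the mounted volume group-owned by 472, so Grafana can write to it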
For the service, we have exposed port 3000, which is the default port of Grafana.
kubectl create -f grafana.yml
Run the above command to create all the Grafana resources discussed above.
kubectl get pods
Run the above command to see all the pods created by our yml files.
kubectl get svc
Run the above command to get the external URLs and the port numbers assigned by the Kubernetes services, where we will be able to access our Prometheus and Grafana.
Now, let's access our Prometheus and Grafana in our browser using the URLs we got.
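If you prefer the command line, Prometheus exposes a simple health endpoint you can hit first (replace <EXTERNAL-IP> with the address shown by kubectl get svc; it is a placeholder, not a real value):
curl http://<EXTERNAL-IP>:9090/-/healthy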
Log in with the default username admin and password admin, and proceed to change the default password. Now let's monitor the CPU load of our Apache web server, as sketched below.
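In Grafana, add Prometheus as a data source (pointing at the prom-svc URL from kubectl get svc) and create a panel for it. Assuming the standard apache_exporter metric names, a query for the CPU load reported by Apache's mod_status could look like this:
apache_cpuload{job="apache"}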
So, finally we have achieved all 3 goals discussed at the start of this article, and I hope you have learnt many new things along the way. I have also attached the link to the GitHub repository below, which contains both the files discussed above in case you need them. Do share the article with your friends and colleagues if you liked it. Also, if you feel there is scope for improvement, feedback and suggestions are always welcome; you can comment below on the article.
Link to the Github Repo - https://github.com/dheeth/k8s-prometheus-grafana