Making Data Persistent with Prometheus and Grafana
Hello connections!!!
This is Milind Verma, and I have written this article to show how you can integrate Prometheus and Grafana on top of Kubernetes and make the Prometheus data permanent.
First we will install Prometheus by building our own image with a Dockerfile, based on the official Prometheus release.
So here our task is:
- Integrate Prometheus and Grafana in the following way:
- Deploy them as pods on top of Kubernetes by creating resources such as Deployments, ReplicaSets, Pods, and Services.
- Make their data permanent.
- Expose both Prometheus and Grafana.
So let's start our task.
So let's first build the Prometheus image. For this we write the following Dockerfile:
```dockerfile
FROM centos:latest
RUN yum install wget -y
RUN wget https://github.com/prometheus/prometheus/releases/download/v2.19.0/prometheus-2.19.0.linux-amd64.tar.gz
RUN tar -xzf prometheus-2.19.0.linux-amd64.tar.gz
# Directory that will hold the TSDB data (mount a volume here to persist metrics)
RUN mkdir -p /metrics
CMD [ "./prometheus-2.19.0.linux-amd64/prometheus", "--config.file=./prometheus-2.19.0.linux-amd64/prometheus.yml", "--storage.tsdb.path=/metrics" ]
```
We will upload the image to Docker Hub, from where Kubernetes can pull it. Build and tag the image first (e.g. `docker build -t prometheus:v1 .`), then push it:
```shell
docker tag prometheus:v1 milindverma/prometheus:v1
docker push milindverma/prometheus:v1
```
The image is now available on Docker Hub.
Similarly, we create a Dockerfile for Grafana:
```dockerfile
FROM centos:latest
RUN yum install wget -y
RUN wget https://dl.grafana.com/oss/release/grafana-7.0.3-1.x86_64.rpm
RUN yum install grafana-7.0.3-1.x86_64.rpm -y
# Run from the Grafana home directory so the server finds its default config
WORKDIR /usr/share/grafana
CMD [ "/usr/sbin/grafana-server" ]
```
We upload this image to Docker Hub as well, so that Kubernetes can pull it. Build (`docker build -t grafana:v1 .`), tag, and push:
```shell
docker tag grafana:v1 milindverma/grafana:v1
docker push milindverma/grafana:v1
```
Now that both images are pushed to Docker Hub, we start Minikube.
```
C:\WINDOWS\system32>minikube start
* minikube v1.9.2 on Microsoft Windows 10 Home Single Language 10.0.18363 Build 18363
* Using the virtualbox driver based on existing profile
* Starting control plane node m01 in cluster minikube
* Restarting existing virtualbox VM for "minikube" ...
* Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
! This VM is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
* Enabling addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
```
Now we will configure the prometheus.yml file to scrape the target node. Normally, if our pods or our Deployment get deleted, we would have to configure prometheus.yml all over again, and doing this again and again is not good practice. So we will launch Prometheus in such a way that we don't have to reconfigure prometheus.yml even if the pods or the Deployment get deleted.
Here I am using Kubernetes installed with the help of Minikube on top of Windows.
Now we create the prometheus.yml file:
```yaml
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'prom1'
    static_configs:
      - targets: ['192.168.99.103:9100']
```
To make this Prometheus configuration permanent across pod restarts and redeployments, we create a ConfigMap from that file:
kubectl create configmap prometheus-configmap --from-file prometheus.yml
With the ConfigMap created, we now create the Deployment for Prometheus:
kubectl create -f prom.yml
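The article does not show prom.yml itself, so here is a minimal sketch of what it could contain. The labels, mount paths, and the PersistentVolumeClaim are assumptions, not taken from the original; the ConfigMap name matches the one created above, the Deployment name matches the expose command below, and `/metrics` matches the directory created in the Dockerfile.

```yaml
# Sketch of a possible prom.yml (labels, paths, and PVC are assumptions).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: milindverma/prometheus:v1
          command: ["/prometheus-2.19.0.linux-amd64/prometheus"]
          args:
            # Read the config from the ConfigMap mount, store TSDB data on the PVC
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/metrics"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config
              mountPath: /etc/prometheus
            - name: prometheus-data
              mountPath: /metrics
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-configmap
        - name: prometheus-data
          persistentVolumeClaim:
            claimName: prometheus-pvc
```

With a layout like this, deleting and recreating the pod keeps both the scrape configuration (from the ConfigMap) and the collected metrics (on the PersistentVolumeClaim).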
Now expose the Deployment.
kubectl expose deployment prometheus-deployment --port=9090 --type=NodePort
Now go to the browser using the Minikube IP (`minikube ip`) and the exposed NodePort.
After this, the pod uses the prometheus.yml from the ConfigMap in place of the image's default file, since the ConfigMap is attached to the Deployment.
Now we launch Grafana.
For this we have made a grafana.yml file, in which we have written the code for the Deployment:
kubectl create -f grafana.yml
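The grafana.yml file is not shown in the article either; a sketch along these lines would work. The labels, storage size, and PVC are assumptions; the Deployment name matches the expose command below, and `/var/lib/grafana` is Grafana's default data directory, so mounting a volume there keeps dashboards across pod restarts.

```yaml
# Sketch of a possible grafana.yml (labels, storage size, and PVC are assumptions).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: milindverma/grafana:v1
          ports:
            - containerPort: 3000
          volumeMounts:
            # Grafana's database (dashboards, users, data sources) lives here
            - name: grafana-data
              mountPath: /var/lib/grafana
      volumes:
        - name: grafana-data
          persistentVolumeClaim:
            claimName: grafana-pvc
```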
Now expose the grafana deployment.
kubectl expose deployment grafana --port=3000 --type=NodePort
Now open the Grafana web UI by going to the browser and typing the Minikube IP and the exposed port.
Log in with username `admin` and password `admin`.
Since the Prometheus configuration now lives in a ConfigMap, we can launch Prometheus anytime, anywhere, and it will pick up the target configuration without having to set it up again.