Deploying Prometheus and Grafana on top of Kubernetes
Naitik Shah
Hello readers, this is my DevOps Task 5, and the problem statement is:
Integrate Prometheus and Grafana in the following way:
- Deploy them as pods on top of Kubernetes by creating resources: Deployment, ReplicaSet, Pods, and Services.
- Make their data persistent.
- Expose both of them to the outside world.
Let us first get an idea of what Prometheus is:
Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.
Features:
- a multi-dimensional data model (timeseries defined by metric name and set of key/value dimensions)
- a flexible query language to leverage this dimensionality
- no dependency on distributed storage; single server nodes are autonomous
- timeseries collection happens via a pull model over HTTP
- pushing timeseries is supported via an intermediary gateway
- targets are discovered via service discovery or static configuration
- multiple modes of graphing and dashboarding support
- support for hierarchical and horizontal federation
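To make the pull model concrete: Prometheus periodically performs an HTTP GET against each target's /metrics endpoint, which returns plain-text samples in the Prometheus exposition format. An illustrative snippet (metric names and values are made up for the example):

```text
# HELP http_requests_total Total number of HTTP requests handled
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="200"} 3

# HELP process_resident_memory_bytes Resident memory size in bytes
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 2.1757952e+07
```

Each line is one timeseries sample: a metric name, an optional set of key/value labels (the "dimensions" mentioned above), and the current value.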
And now let's get a rough idea of what Grafana is:
The open-source platform for monitoring and observability.
Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture:
- Visualize: Fast and flexible client-side graphs with a multitude of options. Panel plugins offer many different ways to visualize metrics and logs.
- Dynamic Dashboards: Create dynamic & reusable dashboards with template variables that appear as dropdowns at the top of the dashboard.
- Explore Metrics: Explore your data through ad-hoc queries and dynamic drilldown. Split view and compare different time ranges, queries and data sources side by side.
- Explore Logs: Experience the magic of switching from metrics to logs with preserved label filters. Quickly search through all your logs or stream them live.
- Alerting: Visually define alert rules for your most important metrics. Grafana will continuously evaluate them and send notifications to systems like Slack, PagerDuty, VictorOps, and OpsGenie.
- Mixed Data Sources: Mix different data sources in the same graph! You can specify a data source on a per-query basis. This works for even custom data sources.
So, the main problem statement solving starts from here:
Step 1: In this step, we will create a PVC (Persistent Volume Claim) for Prometheus. Here is the YML code:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-prometheus
  labels:
    name: pvcprometheus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
Step 2: In this step, we will create a service to expose Prometheus using NodePort. Note that a nodePort value must fall inside the cluster's NodePort range (30000-32767 by default), so I am using 30090 here:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus-svc
  labels:
    app: prometheus-service
spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
    - nodePort: 30090
      port: 9090
      targetPort: 9090
      name: prometheus-port
```
Step 3: Now we will create a deployment for Prometheus. I will be using an image that is readily available on DockerHub. However, if you want to customize it to cater to your needs, you may create your own image and push it to DockerHub. I have shown how to create your own image in my previous articles, so you may refer to them.
The YML code for creating the deployment. Two details matter here: the pod template must carry the app: prometheus label so that the service's selector can find the pods, and the official prom/prometheus image stores its data under /prometheus, so that is where the PVC is mounted:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      tier: monitor
  template:
    metadata:
      labels:
        app: prometheus
        tier: monitor
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-volume
              mountPath: /prometheus
      volumes:
        - name: prometheus-volume
          persistentVolumeClaim:
            claimName: pvc-prometheus
```
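The prom/prometheus image ships with a default prometheus.yml that only scrapes Prometheus itself, which is enough for this task. If you later want Prometheus to scrape other targets, one common approach (a sketch, not required here; the ConfigMap name and target list are my own assumptions) is to keep the scrape config in a ConfigMap and mount it over /etc/prometheus/prometheus.yml in the container:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      # Prometheus scraping its own /metrics endpoint
      - job_name: prometheus
        static_configs:
          - targets: ["localhost:9090"]
```

You would then reference the ConfigMap as a volume in the deployment above, alongside the PVC.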
We will be following the same above steps for Grafana:
Step 4: PVC (Persistent Volume Claim) for Grafana:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-grafana
  labels:
    name: pvcgrafana
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
Step 5: Creating a service to expose Grafana using NodePort (again, the nodePort must be inside the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana-svc
  labels:
    app: grafana-service
spec:
  selector:
    app: grafana
  type: NodePort
  ports:
    - nodePort: 30030
      port: 3000
      targetPort: 3000
      name: port-grafana
```
Step 6: Creating a deployment for Grafana; here too I will use a readily available image from DockerHub. As with Prometheus, the pod template carries the app: grafana label so the service's selector matches the pods, and the PVC is mounted at /var/lib/grafana, which is where Grafana keeps its database and dashboards:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  labels:
    app: grafana
    tier: deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
      tier: monitoring
  template:
    metadata:
      labels:
        app: grafana
        tier: monitoring
    spec:
      containers:
        - name: grafana-container
          image: grafana/grafana:latest
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: grafana-volume
              mountPath: /var/lib/grafana
      volumes:
        - name: grafana-volume
          persistentVolumeClaim:
            claimName: pvc-grafana
```
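Once both are running, Grafana still needs to be pointed at Prometheus as a data source. You can do this by hand in the Grafana UI (Configuration → Data Sources), or declaratively with a provisioning file placed under /etc/grafana/provisioning/datasources/ in the container. A minimal sketch, assuming the prometheus-svc service from Step 2 (the in-cluster DNS name resolves because both pods run in the same namespace):

```yaml
# Grafana data source provisioning file (apiVersion: 1 is Grafana's
# provisioning schema version, not a Kubernetes apiVersion)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-svc:9090
    isDefault: true
```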
Step 7: So, we have Grafana and Prometheus ready. Now we will create a Kustomization file that lists all the resource files, so the whole stack can be applied with a single command:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - pvcprometheus.yml
  - pvcgrafana.yml
  - prometheusdeploy.yml
  - grafanadeploy.yml
  - prometheusservice.yml
  - grafanaservice.yml
```
Head over to the folder where you have saved your kustomization file in a terminal window/command prompt and run:

```shell
kubectl apply -k .
```
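Before opening anything in the browser, it is worth verifying that everything came up (the exact output will vary with your cluster):

```shell
# List the deployments, pods, services and PVCs created by the apply
kubectl get deployments,pods,svc,pvc

# The PVCs should report STATUS "Bound" and the pods "Running";
# if a PVC stays "Pending", check that your cluster has a default StorageClass
kubectl get pvc pvc-prometheus pvc-grafana
```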
After running this, Prometheus and Grafana are deployed on Kubernetes with their PVCs attached. Both pods are exposed to the outside world through the NodePort services. Because Kubernetes automatically restarts crashed pods, and the data lives on persistent volumes rather than inside the containers, a pod failure causes no data loss.
All you need to do now is open Prometheus and Grafana in your browser using your Minikube IP and the nodePort values from the two services, and start working!
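On Minikube, you don't even have to assemble the URLs by hand:

```shell
# Print the node IP to combine with the services' nodePort values
minikube ip

# Or let minikube print the full URLs for each NodePort service directly
minikube service prometheus-svc --url
minikube service grafana-svc --url
```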
So, that marks our task completion, any suggestions are welcome, see you later chaps!