Making data Persistent with Prometheus and Grafana

Hello connections!!!
This is Milind Verma, and I have written this article to show you how you can integrate Prometheus and Grafana on top of Kubernetes and make their data persistent.

This article mainly covers the integration of Grafana and Prometheus with Kubernetes, and how we can make the Prometheus data permanent.

First we will install Prometheus by building our own image with a Dockerfile, based on the official Prometheus release.

So here our task is:

  1. Integrate Prometheus and Grafana and perform it in the following way:
  • Deploy them as pods on top of Kubernetes by creating resources such as Deployments, ReplicaSets, Pods and Services.
  • Make their data permanent.
  • Both Prometheus and Grafana should be exposed.

So let's start our task.

First let's make the image. For this we will write the following Dockerfile:

FROM centos:latest
# wget is needed to download the Prometheus release archive
RUN yum install wget -y
# Download and extract Prometheus v2.19.0
RUN wget https://github.com/prometheus/prometheus/releases/download/v2.19.0/prometheus-2.19.0.linux-amd64.tar.gz
RUN tar -xzf prometheus-2.19.0.linux-amd64.tar.gz
# Directory intended for metric data
RUN mkdir -p /metrics
# Start the Prometheus server
CMD [ "./prometheus-2.19.0.linux-amd64/prometheus" ]
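The image has to be built from this Dockerfile before it can be tagged and pushed. A minimal sketch, assuming the Dockerfile sits in the current directory and using the tag prometheus:v1 that the push commands below expect:

```shell
# Build the Prometheus image from the Dockerfile in the current directory,
# tagging it prometheus:v1 to match the tag/push commands that follow
docker build -t prometheus:v1 .
```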

We will upload the image to Docker Hub, from where Kubernetes can pull it.

For uploading use:

docker tag prometheus:v1 milindverma/prometheus:v1
docker push milindverma/prometheus:v1

After that the image will be pushed to Docker Hub.


Similarly, we will create a Dockerfile for Grafana:

FROM centos:latest
RUN yum install wget -y
# Download and install the Grafana 7.0.3 RPM
RUN wget https://dl.grafana.com/oss/release/grafana-7.0.3-1.x86_64.rpm
RUN yum install grafana-7.0.3-1.x86_64.rpm -y
# grafana-server expects to be run from its home path
WORKDIR /usr/share/grafana
# Note: grafana-server has no "start"/"enable" subcommands (those are
# service-manager verbs), so we run the server binary directly
CMD [ "/usr/sbin/grafana-server" ]
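This image also needs to be built before it can be tagged and pushed; a minimal sketch, assuming the Grafana Dockerfile is in the current directory:

```shell
# Build the Grafana image, tagging it grafana:v1 to match
# the tag/push commands that follow
docker build -t grafana:v1 .
```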

We will upload this image to Docker Hub as well, from where Kubernetes can pull it.

For uploading use:

docker tag grafana:v1 milindverma/grafana:v1
docker push milindverma/grafana:v1


Now that both images are pushed to Docker Hub, we will start Minikube.

C:\WINDOWS\system32>minikube start
* minikube v1.9.2 on Microsoft Windows 10 Home Single Language 10.0.18363 Build 18363
* Using the virtualbox driver based on existing profile
* Starting control plane node m01 in cluster minikube
* Restarting existing virtualbox VM for "minikube" ...
* Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
! This VM is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
* Enabling addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"


Now we will configure the prometheus.yml file to get the data of the target node. But if our pods or our Deployment get deleted, we would have to configure the prometheus.yml file again, and doing this again and again is not good practice. So we will launch Prometheus in such a way that we don't have to reconfigure prometheus.yml even if the pods or the Deployment get deleted.

Here I will be using Kubernetes installed with the help of Minikube on top of Windows.

Now we will create the prometheus.yml file:

global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'prom1'
    static_configs:
      - targets: ['192.168.99.103:9100']

To make the Prometheus configuration permanent, we create a ConfigMap from that file:

kubectl create configmap prometheus-configmap --from-file prometheus.yml

With the ConfigMap created, we now create the Deployment of Prometheus:

kubectl create -f prom.yml
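The prom.yml manifest itself is not shown in the article; below is a minimal sketch of what such a Deployment could look like. The name prometheus-deployment (matching the expose command below), the image milindverma/prometheus:v1, and the ConfigMap prometheus-configmap come from the steps above; the PersistentVolumeClaim prometheus-pvc, the labels, and the mount paths are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: milindverma/prometheus:v1
        ports:
        - containerPort: 9090
        volumeMounts:
        # Mount the ConfigMap so it replaces the prometheus.yml the
        # binary reads; the path must match where Prometheus looks for
        # its config (default --config.file=prometheus.yml)
        - name: prometheus-config
          mountPath: /prometheus.yml
          subPath: prometheus.yml
        # Persistent storage for the time-series data
        - name: prometheus-data
          mountPath: /metrics
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-configmap
      - name: prometheus-data
        persistentVolumeClaim:
          claimName: prometheus-pvc
```

For Prometheus to actually store its data under /metrics, the container command would also need the flag --storage.tsdb.path=/metrics, since by default it writes to ./data.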


Now expose the Deployment.

kubectl expose deployment prometheus-deployment --port=9090 --type=NodePort


Now go to the browser using the Minikube IP and the exposed port number.
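One way to find that IP and port, assuming the NodePort service created by the expose command above:

```shell
# IP address of the Minikube VM
minikube ip
# Shows the NodePort mapped to container port 9090
kubectl get svc prometheus-deployment
```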


After this, the prometheus.yml inside the pod is replaced by our own prometheus.yml, since we have attached the ConfigMap that was created from it.

Now launch Grafana.

For this we have made the grafana.yml file, in which we have written the code for the Deployment:

kubectl create -f grafana.yml
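The grafana.yml file itself is not shown either; a minimal sketch of what the Deployment could contain, assuming the name grafana (matching the expose command below), the image milindverma/grafana:v1 pushed earlier, and illustrative labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: milindverma/grafana:v1
        ports:
        - containerPort: 3000
```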


Now expose the Grafana deployment.

kubectl expose deployment grafana --port=3000 --type=NodePort


Now open the Grafana web UI by going to the browser and typing the IP and the exposed port.


Enter the username admin and the password admin, then log in.

Now we can save our Prometheus configuration anywhere and launch it anytime, anywhere, and still get the data of our targets.

