Metrics Collection and Monitoring Using Prometheus and Grafana

What are Metrics?

Metrics represent the raw measurements of resource usage or behavior that can be observed and collected throughout your systems. These might be low-level usage summaries provided by the operating system, or they can be higher-level types of data tied to the specific functionality or work of a component, like requests served per second or membership in a pool of web servers. 

Why Do We Collect Them?

Metrics are useful because they provide insight into the behavior and health of your systems, especially when analyzed in aggregate. They represent the raw material used by your monitoring system to build a holistic view of your environment, automate responses to changes, and alert human beings when required. Metrics are the basic values used to understand historic trends, correlate diverse factors, and measure changes in your performance, consumption, or error rates.

What is Monitoring?

While metrics represent the data in your system, monitoring is the process of collecting, aggregating, and analyzing those values to improve awareness of your components’ characteristics and behavior. The data from various parts of your environment are collected into a monitoring system that is responsible for storage, aggregation, visualization, and initiating automated responses when the values meet specific requirements.

What is Prometheus?

Prometheus is a free software application used for event monitoring and alerting. It records real-time metrics in a time series database built using an HTTP pull model, with flexible queries and real-time alerting.

Prometheus data is stored in the form of metrics, with each metric having a name that is used for referencing and querying it. Each metric can be drilled down by an arbitrary number of key=value pairs (labels).

Prometheus collects data in the form of time series. The time series are built through a pull model: the Prometheus server queries a list of data sources (sometimes called exporters) at a specific polling frequency. Each data source serves the current values of its metrics at the endpoint queried by Prometheus.
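
For example, a data source might expose a counter like this in the Prometheus text format (the metric name and label values here are purely illustrative):

http_requests_total{method="post", code="200"} 1027

Here http_requests_total is the metric name, and method and code are labels that let you drill down into the data when querying.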

What is Grafana?

Grafana is a multi-platform, open-source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources. It is extensible through a plug-in system, and end users can create complex monitoring dashboards using interactive query builders.

Task Description:

Integrate Prometheus and Grafana in the following way:

1. Deploy them as pods on top of Kubernetes by creating resources such as Deployments, ReplicaSets, Pods, and Services.

2. Make their data persistent.

3. Expose both of them to the outside world.

TASK REQUIREMENTS:

  1. Minikube and the kubectl command should be configured.
  2. Basic knowledge of the concepts of Kubernetes, Prometheus, and Grafana.

Let us start the work now:

Step 1: Deployment for Prometheus

In this step we create a Deployment for the Prometheus pod, and we also expose it to the outside world using a Kubernetes LoadBalancer service.

Then we create a Persistent Volume Claim (PVC) to store the data of the Prometheus pod persistently. In case our pod goes down, we will not lose the data.

We mount the PV at the /etc/prometheus/data folder because Prometheus stores its data in this folder.

Here we are using the vimal13/prometheus:latest image, available on the Docker registry.

The YAML file I used for creating the deployment is below:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-svc
  labels:
    env: prometheus
spec:
  ports:
    - port: 9090
  selector:
    env: prometheus
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheuspvclaim
  labels:
    env: prometheus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1 
kind: Deployment
metadata:
  name: prometheus-deployment
  labels:
    env: prometheus
spec:
  selector:
    matchLabels:
      env: prometheus
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        env: prometheus
    spec:
      containers:
      - image: vimal13/prometheus:latest
        name: prometheus
        ports:
        - containerPort: 9090
          name: prometheus
        volumeMounts:
        - name: prometheus-persistent-storage
          mountPath: /etc/prometheus/data
      volumes:
      - name: prometheus-persistent-storage
        persistentVolumeClaim:
          claimName: prometheuspvclaim

Run the command below to create the deployment:

kubectl create -f filename.yaml
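
Once the resources are created, you can check that everything is up (the resource names below match the YAML above):

kubectl get deployment prometheus-deployment
kubectl get svc prometheus-svc
kubectl get pvc prometheuspvclaim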

Step 2: Deployment for Grafana

In this file, we first create a Service for Grafana of type LoadBalancer to expose our pod to the outside world.

Then we create a Persistent Volume Claim (PVC) to store the data of the Grafana pod persistently. In case our pod goes down, we will not lose the data.

Finally, we create a Deployment for the Grafana pod and mount the volume at the /var/lib/grafana folder because Grafana stores its data in this folder.

apiVersion: v1
kind: Service
metadata:
  name: grafana-svc
  labels:
    env: grafana
spec:
  ports:
    - port: 3000
  selector:
    env: grafana
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafanapvc
  labels:
    env: grafana
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1 
kind: Deployment
metadata:
  name: grafana-deployment
  labels:
    env: grafana
spec:
  selector:
    matchLabels:
      env: grafana
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        env: grafana
    spec:
      containers:
      - image: vimal13/grafana:latest
        name: grafana
        ports:
        - containerPort: 3000
          name: grafana
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafanapvc

Run the command below to create the deployment:

kubectl create -f filename.yaml

Now we can see the deployments in Kubernetes.
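
To list everything created for both stacks in one go, a command like the following can be used:

kubectl get deployments,pods,svc,pvc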


Now you can access Prometheus and Grafana on ports 32500 and 30491 respectively (the NodePorts assigned to the two services in my case).

Remember to use the Minikube IP to access the dashboards.
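
If the assigned NodePorts differ in your setup, Minikube can print the reachable URLs directly (assuming the service names from the YAML above):

minikube ip                              # IP address of the Minikube node
minikube service prometheus-svc --url    # URL for the Prometheus service
minikube service grafana-svc --url       # URL for the Grafana service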

Step 3: Configuring the target node to be monitored by Prometheus

If you want to monitor a system, you first need to run the Node Exporter program on that system. Prometheus will then contact the Node Exporter and pull the metrics from that system.

In my case, I want to monitor an RHEL 8 system. For this, I need to download the node_exporter program for it.

The node_exporter program can be downloaded from the official Prometheus downloads page (https://prometheus.io/download/).

tar -zxf node_exporter-0.18.1.linux-amd64.tar.gz   # extract the archive

cd node_exporter-0.18.1.linux-amd64                # enter the extracted folder

./node_exporter                                    # start the node exporter


After this, we can see the metrics data in the browser using the Red Hat system's IP address and port 9100, because by default the node exporter listens on port 9100.
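
A quick way to confirm the exporter is serving metrics from the command line (the IP address below is a placeholder for your target node):

curl http://192.168.1.10:9100/metrics | head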


Now you need to add the target node's IP address to the prometheus.yml file inside the Prometheus pod in Kubernetes.


You have to add the IP address of the target node under the scrape_configs section, under a job_name entry; you can add as many target nodes as you need. A sketch of such an entry is shown below.
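
A minimal scrape_configs entry might look like this (the job name and target IP are placeholders for your own values):

scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['192.168.1.10:9100']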

Now run this command inside the Prometheus container to reload the configuration without changing the process ID (it sends SIGHUP to PID 1, the Prometheus process):

# kill -HUP 1

Now, on the target node, first stop the node_exporter program using Ctrl+C, then run it again with the following command:

# nohup ./node_exporter &

This command keeps the program running in the background and sends all its output to the nohup.out file.

You can view the contents of this file as well:

# cat nohup.out

Before any target node is added, the Prometheus web UI shows no scrape target for it; once you add a target node, it appears in the web UI's target list.

The default username and password for Grafana are admin and admin; you can change them later.


Now we can add our data source; in our case it is Prometheus.


Now just add the URL from which Grafana will access the Prometheus data.
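
Which URL to use depends on where Grafana reaches Prometheus from; for example (these values are assumptions based on the setup above):

http://<minikube-ip>:32500      # NodePort URL, reachable from outside the cluster
http://prometheus-svc:9090      # in-cluster Service name, since both pods run in the same cluster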


Then just save this.

Now we have to create a dashboard, or we can import a pre-created dashboard.
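
For instance, community dashboards for node_exporter can be imported from Grafana.com by entering their dashboard ID in Grafana's Import screen; the Node Exporter Full dashboard is a commonly used one.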


All the setup is done and the dashboard is ready.


Finally Done.

I have done this task with my friend Atharva Patil.

Suggestions are always welcome.

Thank you for reading.


