Deploying Prometheus and Grafana on top of Kubernetes

Hello readers, this is my DevOps Task 5. The problem statement is:

Integrate Prometheus and Grafana in the following way:

  1. Deploy them as pods on top of Kubernetes by creating Deployment, ReplicaSet, Pod, and Service resources.
  2. Make their data persistent.
  3. Expose both of them to the outside world.

Let us first get an idea of what Prometheus is:

Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.

Features:

  • a multi-dimensional data model (timeseries defined by metric name and set of key/value dimensions)
  • a flexible query language to leverage this dimensionality
  • no dependency on distributed storage; single server nodes are autonomous
  • timeseries collection happens via a pull model over HTTP
  • pushing timeseries is supported via an intermediary gateway
  • targets are discovered via service discovery or static configuration
  • multiple modes of graphing and dashboarding support
  • support for hierarchical and horizontal federation

And now let's get a rough idea of what Grafana is:

The open-source platform for monitoring and observability.

Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture:

  • Visualize: Fast and flexible client-side graphs with a multitude of options. Panel plugins provide many different ways to visualize metrics and logs.
  • Dynamic Dashboards: Create dynamic & reusable dashboards with template variables that appear as dropdowns at the top of the dashboard.
  • Explore Metrics: Explore your data through ad-hoc queries and dynamic drilldown. Split view and compare different time ranges, queries and data sources side by side.
  • Explore Logs: Experience the magic of switching from metrics to logs with preserved label filters. Quickly search through all your logs or stream them live.
  • Alerting: Visually define alert rules for your most important metrics. Grafana will continuously evaluate them and send notifications to systems like Slack, PagerDuty, VictorOps, and OpsGenie.
  • Mixed Data Sources: Mix different data sources in the same graph! You can specify a data source on a per-query basis. This works for even custom data sources.

So, the actual solving of the problem statement starts from here:

Step 1: In this step we will create a PVC (PersistentVolumeClaim) for Prometheus. Here is the YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-prometheus
  labels:
    name: pvcprometheus
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
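To sanity-check the claim before moving on, you can apply and inspect it like this (the file name is an assumption, chosen to match the kustomization list used later):

```shell
# Apply the PVC manifest (hypothetical file name)
kubectl apply -f pvcprometheus.yml

# STATUS should read Bound; on StorageClasses that use
# WaitForFirstConsumer binding it stays Pending until the
# first pod using the claim is scheduled
kubectl get pvc pvc-prometheus
```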

Step 2: In this step, we will create a Service to expose Prometheus using NodePort:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-svc
  labels:
    app: prometheus-service
spec:
  selector:
    tier: monitor      # must match the pod labels set by the Deployment below
  type: NodePort
  ports:
  - nodePort: 30090    # must fall in the cluster's NodePort range (default 30000-32767)
    port: 9090
    targetPort: 9090
    name: prometheus-port
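A Service only routes traffic if its selector matches the pod labels, so it is worth checking the endpoints after applying (the file name is an assumption, matching the kustomization list used later):

```shell
kubectl apply -f prometheusservice.yml

# Confirm the Service exists and note its assigned NodePort
kubectl get svc prometheus-svc

# An empty ENDPOINTS column here means the selector does not
# match any running pod's labels
kubectl get endpoints prometheus-svc
```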

Step 3: Now we will create a Deployment for Prometheus. I will be using an image that is readily available on DockerHub. However, if you want to customize it to cater to your needs, you can create your own image and push it to DockerHub. I have shown how to create your own image in my previous articles, so you may refer to them.

The YML code for creating deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: monitor
  template:
    metadata:
      labels:
        tier: monitor
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-volume
          mountPath: /prometheus   # default TSDB path of the prom/prometheus image
      volumes:
      - name: prometheus-volume
        persistentVolumeClaim:
          claimName: pvc-prometheus
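After applying the Deployment, you can verify the rollout and hit Prometheus' built-in health endpoint without going through the NodePort (file name assumed, as before):

```shell
kubectl apply -f prometheusdeploy.yml

# Wait for the rollout to finish and confirm the pod is Running
kubectl rollout status deployment/prometheus-deployment
kubectl get pods -l tier=monitor

# Quick sanity check: port-forward and query the health endpoint;
# it replies "Prometheus is Healthy." when the server is up
kubectl port-forward deployment/prometheus-deployment 9090:9090 &
curl -s http://localhost:9090/-/healthy
```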

We will follow the same steps for Grafana:

Step 4: PVC(Persistent Volume Claim) for Grafana:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-grafana
  labels:
    name: pvcgrafana
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Step 5: Creating a service to expose Grafana using NodePort:

apiVersion: v1
kind: Service
metadata:
  name: grafana-svc
  labels:
    app: grafana-service
spec:
  selector:
    tier: monitoring   # must match the pod labels set by the Grafana Deployment
  type: NodePort
  ports:
  - nodePort: 30030    # must fall in the cluster's NodePort range (default 30000-32767)
    port: 3000
    targetPort: 3000
    name: port-grafana

Step 6: Creating a Deployment for Grafana. Here too I will be using a readily available image from DockerHub.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  labels:
    app: grafana
    tier: deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: monitoring
  template:
    metadata:
      labels:
        tier: monitoring
    spec:
      containers:
      - name: grafana-container
        image: grafana/grafana:latest
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: grafana-volume
          mountPath: /var/lib/grafana   # Grafana's default data directory
      volumes:
      - name: grafana-volume
        persistentVolumeClaim:
          claimName: pvc-grafana
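The same verification works for Grafana, which exposes a health endpoint of its own (file name assumed):

```shell
kubectl apply -f grafanadeploy.yml

kubectl rollout status deployment/grafana
kubectl get pods -l tier=monitoring

# Grafana answers on port 3000; /api/health reports database status.
# The default web login is admin/admin, and you are prompted to
# change it on first login.
kubectl port-forward deployment/grafana 3000:3000 &
curl -s http://localhost:3000/api/health
```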

Step 7: So, we have Grafana and Prometheus ready. Now we will create a Kustomization file that lists all the files to be applied, run it once with kubectl, and that's all that needs to be done.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- pvcprometheus.yml
- pvcgrafana.yml
- prometheusdeploy.yml
- grafanadeploy.yml
- prometheusservice.yml
- grafanaservice.yml

In your terminal window/command prompt, head over to the folder where you saved the file (name it kustomization.yml, since that is one of the file names kubectl -k looks for) and run:

 kubectl apply -k .
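Once the apply finishes, one command is enough to confirm that every object from the kustomization came up:

```shell
# List the claims, services, deployments, and pods created above;
# PVCs should be Bound and pods should be Running
kubectl get pvc,svc,deploy,pods
```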

After applying this kustomization, everything is in place: Prometheus and Grafana are deployed on Kubernetes with their data stored on PVCs, so a pod crash loses no data, and Kubernetes automatically replaces any crashed pod, which is one of its biggest advantages. Both pods are exposed to the outside world through their NodePort Services.

All you need to do now is open Prometheus and Grafana in your browser using your Minikube IP and the nodePort values from the Services, and start working!
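A small sketch of how to build those URLs without looking the ports up by hand, reading the assigned nodePorts straight from the Services:

```shell
# Grab the Minikube IP and the NodePort assigned to each Service,
# then print the URLs to open in the browser
IP=$(minikube ip)
PROM_PORT=$(kubectl get svc prometheus-svc -o jsonpath='{.spec.ports[0].nodePort}')
GRAF_PORT=$(kubectl get svc grafana-svc -o jsonpath='{.spec.ports[0].nodePort}')
echo "Prometheus: http://$IP:$PROM_PORT"
echo "Grafana:    http://$IP:$GRAF_PORT"
```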


So, that marks our task completion, any suggestions are welcome, see you later chaps!



