Simplify Kubernetes Monitoring: Kube-prometheus-stack Made Easy with Glasskube
What do we, as developers and engineers, value most above all else? The answer is simple: our time.
Tools that deliver value in the shortest amount of time have the highest chance of user adoption; it's as simple as that.
What else do most engineers value? Beautiful and data-rich dashboards.
Prometheus and Grafana are open-source, community-backed solutions with stellar reputations. They bring immense value by fetching and storing metrics while enabling the creation of dashboards that are not only useful but also easy on the eyes.
The uncomfortable truth is that anyone who has ever set up Prometheus alongside Grafana as their environment's monitoring stack from scratch has probably felt the frustration of not getting value especially quickly. Metric exporter configuration, dashboard widget customization, and deciding what to monitor and alert on in the first place all take time.
That's why Kube-Prometheus-Stack was created. It installs a collection of Kubernetes manifests, Grafana dashboards, and Prometheus rules, providing an easy-to-operate, end-to-end Kubernetes cluster monitoring solution with Prometheus using the Prometheus Operator.
This sounds like good news, and it is, but the stack is bundled in a Helm chart, and just the values.yaml file has over 4000 lines. Configuring and maintaining the Helm chart isn’t necessarily straightforward or “fun.”
With so many configuration options, we must be getting something good, right? Well, yes: by deploying kube-prometheus-stack we get all of this right out of the box:
The stack's architecture spans four layers: a top layer, a visualization and alerting layer, an exporters layer, and the Kubernetes cluster itself.
Luckily, Glasskube now supports the Kube-Prometheus-Stack. Package configuration, lifecycle management, and installation can be done in record time.
In this blog post, we will explore the steps to configure and install the Kube-Prometheus-Stack using Glasskube, wasting no unnecessary time wrestling with never-ending values files and getting you working dashboards and alerts quicker than ever before.
Requirements:
Before we begin
For us at Glasskube, crafting great content is as important as building great software. If this is the first time you've heard of us, we are working to build the next-generation package manager for Kubernetes.
If you like our content and want to support us on this mission, we'd appreciate it if you could give us a star ⭐ on GitHub.
Create a cluster
Install Minikube then run:
minikube start
Check your installation by running:
minikube status
Desired output:
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
Install Glasskube
If you have already installed Glasskube, you can skip this step. If not, Glasskube can easily be installed by following your distribution's specific instructions.
macOS:
brew install glasskube/tap/glasskube
Linux (Ubuntu/Debian)
curl -LO https://releases.dl.glasskube.dev/glasskube_v0.4.0_amd64.deb
sudo dpkg -i glasskube_v0.4.0_amd64.deb
More installation guides here
After installing Glasskube on your local machine, make sure to install the necessary components in your Kubernetes cluster by running glasskube bootstrap. For more information, check out our bootstrap guide.
Once Glasskube has been installed, access the UI with:
glasskube serve
Navigate to http://localhost:8580/ to access it.
Kube-prometheus-stack installation
Installation can be done via the CLI, the UI, or even through a YAML package definition file. Since we will customize the deployment, we will use the UI for this example.
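For reference, the declarative route might look something like the sketch below. This is an assumption-laden example: the `packages.glasskube.dev/v1alpha1` API group, the `Package` kind, and the `packageInfo` field names may differ between Glasskube versions, and the version string is purely illustrative — check the Glasskube documentation for your release before applying anything.

```yaml
# Hypothetical Glasskube package definition sketch — verify the exact
# schema against your Glasskube version before applying.
apiVersion: packages.glasskube.dev/v1alpha1
kind: Package
metadata:
  name: kube-prometheus-stack
spec:
  packageInfo:
    name: kube-prometheus-stack
    version: v58.1.0+1   # example version string, an assumption
```

Such a file could then be applied with `kubectl apply -f package.yaml`, which is convenient when the monitoring stack should live alongside the rest of your GitOps-managed manifests.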
Package Customization
Glasskube offers a series of customizations that can be tweaked and adjusted from the CLI or GUI, saving you from having to render and configure the values.yaml file directly.
Let’s take them one by one.
Enable Alertmanager
We want Alertmanager enabled so we can leverage the metrics Prometheus exposes to create helpful alerts.
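As a taste of what this unlocks, a custom alert can be declared with a standard Prometheus Operator `PrometheusRule` resource. The namespace and the `release` label below are assumptions — the label must match whatever your Prometheus spec's `ruleSelector` expects, so check that before applying:

```yaml
# Minimal custom alert sketch; the namespace and the release label
# are assumptions and must match your Prometheus ruleSelector.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: demo-alerts
  namespace: kube-prometheus-stack
  labels:
    release: kube-prometheus-stack
spec:
  groups:
    - name: demo
      rules:
        - alert: HighNodeMemoryUsage
          # fires when less than 10% of node memory is available
          expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.9
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Node memory usage above 90% for 10 minutes"
```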
Grafana Domain
We will leave this empty for this demo since we would need to deploy an ingress controller to our cluster to handle the ingress object associated with the Grafana service. We could use Ingress-nginx or Caddy-ingress which are also supported by Glasskube for this.
Glasskube will automatically port-forward the Grafana pod so we can access the dashboard via the Open button.
Node Exporter host network
Let’s also enable this to export node-level metrics such as memory and CPU usage.
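Under the hood, this option roughly corresponds to a flag in the chart's node exporter subchart. The key path below is an assumption based on the upstream kube-prometheus-stack values layout:

```yaml
# values.yaml fragment (key path is an assumption; check the chart docs)
prometheus-node-exporter:
  hostNetwork: true
```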
Prometheus retention
This sets how long we want to persist the collected metrics, expressed as a duration such as a number of days.
Prometheus storage size
The amount of persistent storage we expect the package will need to request for storing metrics.
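For comparison, these two options roughly map to the following fragment of the upstream chart's values.yaml. The key paths follow the kube-prometheus-stack conventions, but the figures are purely illustrative:

```yaml
# values.yaml fragment (figures are illustrative, not recommendations)
prometheus:
  prometheusSpec:
    retention: 10d            # keep metrics for ten days
    storageSpec:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 10Gi   # persistent volume request for the TSDB
```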
Parameter input methods
Glasskube allows for various methods of parameter input:
By choosing to inject data via Kubernetes Secrets, ConfigMaps, and package configuration, we can maintain simplicity without compromising security.
Here is an example of how we would reference a specific ConfigMap we have already created and deployed to our cluster.
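In a declarative package definition, such a reference could look something like the sketch below. The `valueFrom`/`configMapRef` field names are assumptions based on Glasskube's configuration model, and the ConfigMap name and key are hypothetical:

```yaml
# Hypothetical values fragment; field names may vary across
# Glasskube versions, and the referenced ConfigMap is made up.
values:
  grafanaDomain:
    valueFrom:
      configMapRef:
        name: monitoring-config    # hypothetical ConfigMap
        namespace: default
        key: grafana-domain
```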
If you're using Kube-prometheus-stack and considering Glasskube for package lifecycle management but need support for specific key parameter customizations, please open an issue on GitHub with your use case. We'll do our best to expand the parameter list accordingly.
Install via Glasskube
Once the configuration section is complete, install kube-prometheus-stack.
Upon installation, you can see that the kube-prometheus-stack namespace has been created and a series of pods have been deployed, including the Grafana dashboard, the Prometheus Operator, and the kube-state-metrics pods.
Access the dashboards
In next week's blog post, we will access the dashboard via a custom dedicated Grafana URL.
Hit the Open button, or if you want to access Grafana on a different port, you can simply port-forward the pod, which maps the exposed Grafana port to a port on your localhost. I've arbitrarily chosen to port-forward to localhost port 52222 since it's available.
kubectl port-forward -n kube-prometheus-stack POD_NAME 52222:3000
Head over to http://localhost:52222/ and you will be greeted by the Grafana login page. To find your credentials, which are stored in a Kubernetes secret generated as part of the deployed stack, run:
kubectl get secret -n kube-prometheus-stack kube-prometheus-stack-kube-prometheus-stack-grafana -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
Which will output something like:
admin-password: <password>
admin-user: admin
ldap-toml:
Upon access you will be greeted by a long list of powerful pre-configured Grafana dashboards, which are already showing local cluster metrics:
Easily access CPU usage information
Here is a segment of the nifty CoreDNS dashboard that also comes preconfigured
Alerting
We already get many useful alerts created for us right out of the box.
In this snippet you can see that some of the preconfigured alerts are already firing:
If you want to be notified via email, PagerDuty, or any other supported third-party service, you just need to add your preferred contact points and then set them as destinations inside custom notification policies.
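On the Alertmanager side, for example, an email destination boils down to a receiver plus a routing rule. A minimal sketch, where the addresses and SMTP host are placeholders you would replace with your own:

```yaml
# Alertmanager configuration sketch; all addresses and the SMTP
# smarthost are placeholders, not working values.
route:
  receiver: "default"
  routes:
    - receiver: "email-oncall"
      matchers:
        - severity = "critical"   # route only critical alerts to email
receivers:
  - name: "default"
  - name: "email-oncall"
    email_configs:
      - to: "oncall@example.com"
        from: "alertmanager@example.com"
        smarthost: "smtp.example.com:587"
```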
The Kube-prometheus-stack offers tremendous "out-of-the-box" value for Kubernetes cluster monitoring, eliminating the need to start from scratch. It bundles essential components for metrics exposure, extraction, alerting, and visualization, helping you establish a robust monitoring posture from the get-go. With official support from Glasskube, managing and updating a comprehensive, best practice-compliant monitoring stack has never been easier.
If you like our content and want to support us on this mission, we'd appreciate it if you could give us a star ⭐ on GitHub.