Red Hat OpenShift Logging using Loki and Red Hat OpenShift Data Foundation (ODF) on IBM Z and IBM® LinuxONE


Red Hat OpenShift is a powerful container platform that offers a broad set of capabilities and technologies. One of these capabilities is a logging subsystem that aggregates all the logs from your OpenShift Container Platform cluster, such as node system audit logs, application container logs, and infrastructure logs, and keeps that data persistent for your cluster deployment. The logging subsystem collects these logs from throughout your cluster and stores them in a default log store.

From the official Red Hat OpenShift documentation, the logging subsystem aggregates the following types of logs:

  • application - Container logs generated by user applications running in the cluster, except infrastructure container applications.
  • infrastructure - Logs generated by infrastructure components running in the cluster and OpenShift Container Platform nodes, such as journal logs. Infrastructure components are pods that run in the openshift*, kube*, or default projects.
  • audit - Logs generated by auditd, the node audit system, which are stored in the /var/log/audit/audit.log file, and the audit logs from the Kubernetes apiserver and the OpenShift apiserver.

If you want to send the audit logs to an external log store, you must use the Log Forwarding API as described in Forward audit logs to the log store.
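For reference, the same API can also route the audit log type to the logging subsystem's internal default log store. A minimal sketch of such a ClusterLogForwarder follows; the pipeline name is illustrative, and you should adapt it to your environment.

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    # Route the audit log type to the default log store managed by the logging subsystem
    - name: audit-to-default
      inputRefs:
        - audit
      outputRefs:
        - default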


Why Loki?

Loki refers to the log store as either the individual component or an external store.

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system currently offered as an alternative to Elasticsearch as a log store for the logging subsystem. Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion, and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly.

LokiStack refers to the logging subsystem component that combines Loki with a web proxy integrated with OpenShift Container Platform authentication. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy.

The following guide provides the instructions to configure the Red Hat OpenShift Logging Operator to use Loki and store all data in S3 buckets provided by Red Hat OpenShift Data Foundation.

This guide was created based on an existing blog that covers multiple ways to configure Loki.

These instructions assume the following:

  • Red Hat OpenShift is already installed.
  • Red Hat OpenShift Data Foundation/IBM Storage Fusion Data Foundation is already installed.

Install the Loki Operator

Operators -> OperatorHub

Filter by Loki


Select the Loki Operator tile


Click the Install button

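If you prefer the command line, an equivalent install can be sketched with a Namespace, an OperatorGroup, and a Subscription similar to the following. The namespace and channel shown here (openshift-operators-redhat, stable) are the typical defaults for the Loki Operator, but verify them against the OperatorHub entry for your cluster version, then apply the file with oc apply -f.

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: stable        # verify the current channel in OperatorHub
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace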

Install the OpenShift Logging Operator

Operators -> OperatorHub

Filter by Red Hat OpenShift Logging


Select the Red Hat OpenShift Logging tile


Click the Install button


Select the Enable Operator recommended cluster monitoring on this Namespace option (if available)

Click the Install button
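Once both operators are installed, you can verify from the command line that their ClusterServiceVersions report Succeeded. The namespaces below assume the default install targets for each operator.

# oc get csv -n openshift-logging
# oc get csv -n openshift-operators-redhat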

Create an S3 Object Bucket Claim

Storage -> Object Bucket Claims


Click the Create ObjectBucketClaim button


Provide an ObjectBucketClaim name (any name you would like to identify your bucket claim)

Select openshift-storage.noobaa.io as the Storage Class

Select noobaa-default-bucket-class as the BucketClass

Click the Create button

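For reference, the same claim can be expressed declaratively. This sketch assumes the claim is created in the openshift-logging project with an illustrative name (loki-bucket); the bucket class is passed through additionalConfig.

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: loki-bucket            # illustrative claim name
  namespace: openshift-logging
spec:
  generateBucketName: loki-bucket
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    bucketclass: noobaa-default-bucket-class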

Click the Reveal values option to see the Object Bucket Claim data.

To create a secret to access the S3 bucket, you will need the following values (a command-line sketch for reading them follows this list):

  • Endpoint
  • Bucket Name
  • Access Key
  • Secret Key

To get the external route for the S3 endpoint:

# oc get route -n openshift-storage s3 -o jsonpath={.spec.host}
Example: s3-openshift-storage.apps.ocp-55000600nv-55og.cloud.techzone.ibm.com
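Besides the Reveal values option in the console, the bucket claim also produces a ConfigMap and a Secret with the same name as the claim, from which these values can be read on the command line. The claim name (loki-bucket) and namespace below are the ones assumed in the earlier sketch; the ConfigMap typically exposes keys such as BUCKET_NAME and BUCKET_HOST, and the Secret holds the access credentials.

# oc get cm loki-bucket -n openshift-logging -o jsonpath='{.data.BUCKET_NAME}'
# oc extract secret/loki-bucket -n openshift-logging --to=-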


Workloads -> Secrets


Click the Create button


Complete the Secret name and all the keys and values.
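As an alternative to the form, the secret can be created from the command line. The key names shown here (bucketnames, endpoint, access_key_id, access_key_secret) follow the convention the Loki Operator documents for S3 storage secrets; the secret name is illustrative and the placeholder values come from the bucket claim above.

# oc create secret generic logging-loki-s3 \
    -n openshift-logging \
    --from-literal=bucketnames="<bucket name>" \
    --from-literal=endpoint="https://<s3 endpoint>" \
    --from-literal=access_key_id="<access key>" \
    --from-literal=access_key_secret="<secret key>"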


Create a Config Map containing the Ingress controller certificate chain

You need to perform the steps in this section if your OpenShift cluster's Ingress controller uses a self-signed certificate (or a certificate signed by a root CA that is not a trusted root).

Otherwise, you can skip this section.

You can use the following command to capture the Ingress controller's certificate chain and create a Config Map:

# oc -n openshift-logging create cm <cmname> --from-literal=service-ca.crt="$(oc -n openshift-ingress extract secret/router-certs-default --keys=tls.crt --to=-)"        

Where:

  • <cmname> is a name you choose for your Config Map


Verify the contents of the Config Map

You can run the following oc CLI command to retrieve the contents of the Config Map:

oc -n openshift-logging get configmap <cmname> -o yaml

Where <cmname> is the name of the Config Map you created earlier.

The results should look similar to the following YAML.


apiVersion: v1
kind: ConfigMap
metadata:
  name: <cmname>
  namespace: openshift-logging
data:
  service-ca.crt: |-
    -----BEGIN CERTIFICATE-----
    MIIDaTCCAlGgAwIBAgIIXgwi5jbfg1YwDQYJKoZIhvcNAQELBQAwJjEkMCIGA1UE
    AwwbaW5ncmVzcy1vcGVyYXRvckAxNjkwOTUyODAxMB4XDTIzMDgwMjA1MDY0MVoX
    DTI1MDgwMTA1MDY0MlowJDEiMCAGA1UEAwwZKi5hcHBzLm9jcDEwLmludGVybmFs
    /b7C/XIjy6y/g+sBni7vzXdO21/l6TROBao1s/yyQfmQB1pEUw2nNw1a3sJzaOaZ
    vbZ7hpz2YuUFn83+3vrOws8g4LXhqqMK0J74tmLAUKYx+CyRbBgGQGa/LBLEqC3M
    FBbuBQUOTTAqdVnXQQ==
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    MIIDDDCCAfSgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtpbmdy
    ZXNzLW9wZXJhdG9yQDE2OTA5NTI4MDEwHhcNMjMwODAyMDUwNjQwWhcNMjUwODAx
    MDUwNjQxWjAmMSQwIgYDVQQDDBtpbmdyZXNzLW9wZXJhdG9yQDE2OTA5NTI4MDEw
    ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCYluDnzrSCNayxu3/6aScO
    uPN8FNgO3O15VYnQzBl0NpTGxAY9kIt1mrOy1nKh8dWoyZnsKfWYqwrZFGz413jv
    2Z9qAxq5xhFe+Vxub+BVyKD/xMQbeRrAh2iv0cgIf3Dc38MGgFbmcrmoB+iEqym5
    DU6ZG+d+YBD1BIW+yRtV8Q==
    -----END CERTIFICATE-----

Create the LokiStack from the OpenShift web console

Operators -> Installed Operators -> Loki Operator -> LokiStack -> Create LokiStack


Provide a Name and LokiStack Size:


Select Object Storage

Select the correct Object Storage Secret Name

Select the Object Storage Secret Type

Select the Storage Class Name


(Optional) If you wish, define a Global Limit for data retention


Click Create

Wait for the Status to change from Pending to Ready (it might take 1-2 minutes)

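For reference, the selections above correspond to a LokiStack resource similar to the following sketch. The name matches the lokistack-sample referenced later in the ClusterLogging resource; the secret name, CA Config Map name, size, and storage class are placeholders or assumptions to adapt to your cluster.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: lokistack-sample
  namespace: openshift-logging
spec:
  size: 1x.extra-small                 # pick a size that matches your cluster capacity
  storage:
    secret:
      name: logging-loki-s3            # the S3 secret created earlier
      type: s3
    tls:
      caName: <cmname>                 # only needed if you created the CA Config Map
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed ODF block storage class
  tenants:
    mode: openshift-logging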

Configure OpenShift Logging

Select Operators -> Installed Operators -> Red Hat OpenShift Logging -> Cluster Logging


Inside Cluster Logging, click the Create ClusterLogging button

Select the YAML view

Copy and paste the following code, replacing any existing YAML definitions:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    type: vector
  logStore:
    lokistack:
      name: lokistack-sample
    type: lokistack
  managementState: Managed

Click the Create button


After a few minutes the OpenShift web console will present you an option to refresh the web console (if not, just refresh the browser window).

This will add a new option called Logs, under Targets, to your OpenShift Observe menu (on the left of the screen).
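Before opening the log view, you can optionally confirm from the command line that the collector and LokiStack pods are running; the namespace below is the standard openshift-logging install target.

# oc get pods -n openshift-logging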

Visualize the Logs

Observe -> Logs

Select the Logs option


This concludes the guide, but the learning is far from over. Feel free to go over the official Red Hat OpenShift documentation on OpenShift Logging to understand the many other options that can be used with this setup.

Co-author: Pat Fruth

