Using Velero to Back Up a Multi-Master ELK Cluster with Istio Service Mesh in a Multi-Master Kubernetes Cluster and Restore It to Another Cluster

In this document we take a backup of a multi-master ELK cluster with Istio service mesh running on a multi-master Kubernetes cluster and restore it to a single-master Kubernetes cluster.


Velero supports many object store providers. Because I need to test this in an on-premises environment, I will use Ceph with the RADOS Gateway.

Check the full list of supported object stores in the Velero documentation.

Create an S3 user:

[root@hamed-ceph1 ~]# sudo radosgw-admin user create --subuser=velero-new:s3 --display-name="Velero Kubernetes Backup" --key-type=s3 --access=full
{
    "user_id": "velero-new",
    "display_name": "Velero Kubernetes Backup",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [
        {
            "id": "velero-new:s3",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "velero-new:s3",
            "access_key": "OG6E1P86NEGI1YJCGMV4",
            "secret_key": "eZTa1SWvsYfFEgF9zIYrYE6f5p2nYlBJWYtv2zZ8"
        },
        {
            "user": "velero-new:s3",
            "access_key": "VGOEY27ETHT3B6A98RTP",
            "secret_key": "lXcafCtyOwjdB7eqL1NFvjABf3v07b6BUmpDCDk5"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Install s3cmd to create a bucket from the CLI:

[root@ceph-mon1 ~]# yum install s3cmd        

Configure s3cmd to use my RADOS Gateway endpoint:

[root@ceph-mon1 ~]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key []: OG6E1P86NEGI1YJCGMV4
Secret Key []: eZTa1SWvsYfFEgF9zIYrYE6f5p2nYlBJWYtv2zZ8
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [radosgw.local.lab:80]:

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.radosgw.local.lab]: 192.168.163.140:80   (IP of the Ceph master)

Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]:

On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: OG6E1P86NEGI1YJCGMV4
  Secret Key: eZTa1SWvsYfFEgF9zIYrYE6f5p2nYlBJWYtv2zZ8
  Default Region: US
  S3 Endpoint: 192.168.163.140:80
  DNS-style bucket+hostname:port template for accessing a bucket: 192.168.163.140:80
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
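If you prefer to skip the interactive wizard, the same settings can be written directly to /root/.s3cfg. A minimal sketch, assuming the RGW endpoint and the keys created above (adjust the host values to your own environment):

[default]
access_key = OG6E1P86NEGI1YJCGMV4
secret_key = eZTa1SWvsYfFEgF9zIYrYE6f5p2nYlBJWYtv2zZ8
host_base = 192.168.163.140:80
host_bucket = 192.168.163.140:80
use_https = False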


Then create a bucket for Velero:

[root@ceph-mon1 ~]# s3cmd mb s3://velero        
Bucket 's3://velero/' created        

Then install Velero.

Download the CLI:

$ wget https://github.com/vmware-tanzu/velero/releases/download/v1.13.1/velero-v1.13.1-linux-amd64.tar.gz
$ tar -xzf velero-v1.13.1-linux-amd64.tar.gz
$ sudo cp velero-v1.13.1-linux-amd64/velero /usr/local/sbin

Create a file with s3 credentials:

$ vi credentials-velero

[default]

aws_access_key_id = OG6E1P86NEGI1YJCGMV4

aws_secret_access_key = eZTa1SWvsYfFEgF9zIYrYE6f5p2nYlBJWYtv2zZ8        

We use these S3 credentials to access the velero bucket through the AWS S3 plugin:

velero install \
--provider aws \
--use-node-agent \
--plugins velero/velero-plugin-for-aws:v1.2.1 \
--bucket velero \
--secret-file /root/velero-v1.13.1-linux-amd64/credentials-velero \
--use-volume-snapshots=false \
--uploader-type=restic \
--default-volumes-to-fs-backup \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=https://192.168.163.140        
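Before creating a backup it is worth confirming that the backup storage location is reachable; if the bucket or credentials are wrong, its phase will not be Available. This is the standard Velero CLI check:

velero backup-location get

The default location should show the aws provider, the velero bucket, and the Available phase.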

After the install, check the Velero pods:

[root@nps-k8s-master1 ~]# kubectl get pod -n velero        

NAME                     READY   STATUS    RESTARTS   AGE
node-agent-4bllb         1/1     Running   0          41m
node-agent-9zsjj         1/1     Running   0          41m
node-agent-jpbtg         1/1     Running   0          41m
node-agent-lzfh8         1/1     Running   0          41m
node-agent-pgsgm         1/1     Running   0          41m
node-agent-qfvn6         1/1     Running   0          41m
velero-c9c64c544-d7cc6   1/1     Running   0          41m

Since we are backing up the volumes as well, to improve performance and prevent errors during the backup we increase the resources and the number of replicas of the Velero deployment with the following command:

kubectl -n velero edit deployment velero        

In this manifest, we change and save the following values:

replicas: 4

resources:
  limits:
    cpu: "2"
    memory: 2048Mi
  requests:
    cpu: "2"
    memory: 2048Mi
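The same change can also be applied non-interactively; a minimal sketch using standard kubectl commands, mirroring the values above:

kubectl -n velero scale deployment velero --replicas=4
kubectl -n velero set resources deployment velero --requests=cpu=2,memory=2048Mi --limits=cpu=2,memory=2048Mi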


[root@nps-k8s-master1 ~]# kubectl get pod -n velero

NAME                      READY   STATUS    RESTARTS   AGE
node-agent-4bllb          1/1     Running   0          49m
node-agent-9zsjj          1/1     Running   0          49m
node-agent-jpbtg          1/1     Running   0          49m
node-agent-lzfh8          1/1     Running   0          49m
node-agent-pgsgm          1/1     Running   0          49m
node-agent-qfvn6          1/1     Running   0          49m
velero-7789fbd47c-2xkzt   1/1     Running   0          13s
velero-7789fbd47c-5fqzt   1/1     Running   0          17s
velero-7789fbd47c-7tvwf   1/1     Running   0          17s
velero-7789fbd47c-vcs4g   1/1     Running   0          14s

Next, we will create a backup and restore it.

Here we make a backup of the logging namespace.

[root@nps-k8s-master1 ~]# velero backup create logging44 --include-namespaces logging
[root@nps-k8s-master1 ~]# velero backup describe logging44

Name:         logging44
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/resource-timeout=10m0s
              velero.io/source-cluster-k8s-gitversion=v1.28.2
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=28

Phase:  Completed

Namespaces:
  Included:  logging
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>
Or label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto
Snapshot Move Data:          false
Data Mover:                  velero

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2024-04-30 09:52:37 +0330 +0330
Completed:  2024-04-30 10:02:29 +0330 +0330

Expiration:  2024-05-30 09:52:37 +0330 +0330

Total items to be backed up:  233
Items backed up:              233

Backup Volumes:
  Velero-Native Snapshots: <none included>
  CSI Snapshots: <none included>
  Pod Volume Backups - restic (specify --details for more information):
    Completed:  101

HooksAttempted:  0
HooksFailed:     0

To see the details of the prepared backup, we append --details to the command:

[root@nps-k8s-master1 ~]# velero backup describe logging44 --details

Name:         logging44
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/resource-timeout=10m0s
              velero.io/source-cluster-k8s-gitversion=v1.28.2
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=28

Phase:  Completed

Namespaces:
  Included:  logging
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>
Or label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto
Snapshot Move Data:          false
Data Mover:                  velero

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2024-04-30 09:52:37 +0330 +0330
Completed:  2024-04-30 10:02:29 +0330 +0330

Expiration:  2024-05-30 09:52:37 +0330 +0330

Total items to be backed up:  233
Items backed up:              233

Resource List:
  apiextensions.k8s.io/v1/CustomResourceDefinition:
    - beats.beat.k8s.elastic.co
    - elasticsearches.elasticsearch.k8s.elastic.co
    - kibanas.kibana.k8s.elastic.co
    - logstashes.logstash.k8s.elastic.co
  apps/v1/ControllerRevision:
    - logging/elasticsearch-es-cold-5b7dc97db6
    - logging/elasticsearch-es-data-7d6848ddf
    - logging/elasticsearch-es-hot-db7887699
    - logging/elasticsearch-es-masters-67bb9c9c74
    - logging/elasticsearch-es-ml-6b45db7c4b
    - logging/elasticsearch-es-warm-6c7dff44c5
    - logging/filebeat-beat-filebeat-db496c7bb
    - logging/logstash-ls-567f8787cd
  apps/v1/DaemonSet:
    - logging/filebeat-beat-filebeat
  apps/v1/Deployment:
    - logging/kibana-kb
  apps/v1/ReplicaSet:
    - logging/kibana-kb-67c5b67777
    - logging/kibana-kb-7dbf9fd4c5
  apps/v1/StatefulSet:
    - logging/elasticsearch-es-cold
    - logging/elasticsearch-es-data
    - logging/elasticsearch-es-hot
    - logging/elasticsearch-es-masters
    - logging/elasticsearch-es-ml
    - logging/elasticsearch-es-warm
    - logging/logstash-ls
  beat.k8s.elastic.co/v1beta1/Beat:
    - logging/filebeat
  coordination.k8s.io/v1/Lease:
    - logging/elastic-operator-leader
  discovery.k8s.io/v1/EndpointSlice:
    - logging/elastic-webhook-server-rsl92
    - logging/elasticsearch-es-cold-wkpd9
    - logging/elasticsearch-es-data-gmv97
    - logging/elasticsearch-es-hot-rwrm5
    - logging/elasticsearch-es-http-2gfr4
    - logging/elasticsearch-es-internal-http-64kx8
    - logging/elasticsearch-es-masters-xk5x9
    - logging/elasticsearch-es-ml-l2jlh
    - logging/elasticsearch-es-transport-cwvs6
    - logging/elasticsearch-es-warm-52ztk
    - logging/kibana-kb-http-pmbz7
    - logging/logstash-ls-api-pqkfl
    - logging/logstash-ls-beats-pmzqn
  elasticsearch.k8s.elastic.co/v1/Elasticsearch:
    - logging/elasticsearch
  kibana.k8s.elastic.co/v1/Kibana:
    - logging/kibana
  logstash.k8s.elastic.co/v1alpha1/Logstash:
    - logging/logstash
  policy/v1/PodDisruptionBudget:
    - logging/elasticsearch-es-default
  rbac.authorization.k8s.io/v1/ClusterRole:
    - filebeat
  rbac.authorization.k8s.io/v1/ClusterRoleBinding:
    - filebeat
  v1/ConfigMap:
    - logging/elastic-licensing
    - logging/elastic-operator
    - logging/elastic-operator-uuid
    - logging/elasticsearch-es-scripts
    - logging/elasticsearch-es-unicast-hosts
    - logging/istio-ca-root-cert
    - logging/kube-root-ca.crt
  v1/Endpoints:
    - logging/elastic-webhook-server
    - logging/elasticsearch-es-cold
    - logging/elasticsearch-es-data
    - logging/elasticsearch-es-hot
    - logging/elasticsearch-es-http
    - logging/elasticsearch-es-internal-http
    - logging/elasticsearch-es-masters
    - logging/elasticsearch-es-ml
    - logging/elasticsearch-es-transport
    - logging/elasticsearch-es-warm
    - logging/kibana-kb-http
    - logging/logstash-ls-api
    - logging/logstash-ls-beats
  v1/Event:
    - logging/elasticsearch-es-cold-0.17ca1fa865a4eef3
    - logging/elasticsearch-es-cold-0.17ca200a609e87d6
    - logging/elasticsearch-es-cold-1.17ca1fa867823660
    - logging/elasticsearch-es-cold-1.17ca200a5f3b79ff
    - logging/elasticsearch-es-cold-1.17ca206c64ae0c76
    - logging/elasticsearch-es-cold-2.17ca1fa863c53c2d
    - logging/elasticsearch-es-cold-2.17ca200a648618d4
    - logging/elasticsearch-es-cold-2.17ca20ce6d8929e1
    - logging/elasticsearch-es-cold-2.17ca21f47290945e
    - logging/elasticsearch-es-cold-2.17ca24a2938a380c
    - logging/elasticsearch-es-data-0.17ca1fa871f2e89a
    - logging/elasticsearch-es-data-0.17ca200a5e0817d8
    - logging/elasticsearch-es-data-0.17ca206c64af137f
    - logging/elasticsearch-es-data-0.17ca20ce4fc0f399
    - logging/elasticsearch-es-data-1.17ca1fa86fbb50d3
    - logging/elasticsearch-es-data-1.17ca200a7229cab8
    - logging/elasticsearch-es-data-1.17caa263cce5e159
    - logging/elasticsearch-es-data-1.17caf842c398501e
    - logging/elasticsearch-es-data-2.17ca1fa87306ccb0
    - logging/elasticsearch-es-data-2.17ca200a5d6a2247
    - logging/elasticsearch-es-data-2.17ca47d69b7b0544
    - logging/elasticsearch-es-hot-0.17ca1fa85ea7d12e
    - logging/elasticsearch-es-hot-0.17ca200a31db1b1f
    - logging/elasticsearch-es-hot-0.17ca21f3ec95def0
    - logging/elasticsearch-es-hot-0.17ca24a17c041c1a
    - logging/elasticsearch-es-hot-1.17ca1fa85eca4d0a
    - logging/elasticsearch-es-hot-1.17ca200a5a867b4c
    - logging/elasticsearch-es-hot-2.17ca1fa83f8c4df6
    - logging/elasticsearch-es-hot-2.17ca200a269f0670
    - logging/elasticsearch-es-hot-2.17ca22b7b43aaaf5
    - logging/elasticsearch-es-hot-2.17ca2874acc3e4a5
    - logging/elasticsearch-es-masters-0.17ca1ea77efe7aae
    - logging/elasticsearch-es-masters-0.17ca1f6b93b0bed7
    - logging/elasticsearch-es-masters-0.17ca233ee5af35aa
    - logging/elasticsearch-es-masters-0.17ca23a0d514fca3
    - logging/elasticsearch-es-masters-1.17ca1fa8847e8304
    - logging/elasticsearch-es-masters-1.17ca200a660f96cc
    - logging/elasticsearch-es-masters-1.17ca20ce6bcad5fe
    - logging/elasticsearch-es-masters-1.17ca21f4715f8e6e
    - logging/elasticsearch-es-masters-2.17ca1fa876d04216
    - logging/elasticsearch-es-masters-2.17ca200a58c79669
    - logging/elasticsearch-es-ml-0.17ca1fa8890a5f90
    - logging/elasticsearch-es-ml-0.17ca200a64ad7036
    - logging/elasticsearch-es-warm-0.17ca1fa869b0be46
    - logging/elasticsearch-es-warm-0.17ca200a654d348a
    - logging/elasticsearch-es-warm-0.17ca21f470c6ce24
    - logging/elasticsearch-es-warm-0.17ca268cb8667803
    - logging/elasticsearch-es-warm-1.17ca1fa86d78e2e8
    - logging/elasticsearch-es-warm-1.17ca200a4f504a47
    - logging/elasticsearch-es-warm-1.17ca21300b9408e5
    - logging/elasticsearch-es-warm-2.17ca1fa86a8fea50
    - logging/elasticsearch-es-warm-2.17ca200a5a8a2c69
    - logging/elasticsearch-es-warm-2.17ca31a655f655ee
    - logging/elasticsearch-es-warm-2.17caa263cce61c17
    - logging/elasticsearch-es-warm-2.17caf842f1107cb9
    - logging/elasticsearch.17caa277386481a8
    - logging/filebeat-beat-filebeat-82m68.17caa1453dac476a
    - logging/logstash-ls-0.17ca20259d5708e0
  v1/Namespace:
    - logging
  v1/PersistentVolume:
    - pvc-11bb4ef5-cacb-421b-ac00-6b653d5ad1df
    - pvc-455671fa-1c6c-4bba-9011-39e2f7fb802f
    - pvc-46433f8b-29ef-4710-8d50-6caacdf97b54
    - pvc-46ff7179-8660-4407-aa3d-109862a76ddc
    - pvc-4aa93903-a5fc-431a-9153-9d321ec67563
    - pvc-69497e07-fdaa-4527-9cd2-7960b73aaab9
    - pvc-6b33a8f1-cad5-49ed-ab48-ad07a2266570
    - pvc-724bbc13-0b1c-4a72-bf75-9fc77f766a25
    - pvc-788b82d4-d22c-4b9e-9181-8ec92ac061eb
    - pvc-86598b25-b2ef-4627-8ad7-7b3204f65391
    - pvc-9749b07d-c1d6-4508-9c4f-de763573760d
    - pvc-b1bd15c8-e175-4673-987c-e7fe39db394d
    - pvc-d30452cf-03b0-4322-bd0a-f62ccce19f55
    - pvc-d430872c-951d-4956-ae8f-66a5e41acf0b
    - pvc-f31c218f-5e75-41d7-9bf1-81a495bb16fd
    - pvc-f5335d44-7291-4727-8a25-fbff835acd8c
    - pvc-ff89d146-34c0-4ca3-8e85-9b03fc5e7dd8
  v1/PersistentVolumeClaim:
    - logging/elasticsearch-data-elasticsearch-es-cold-0
    - logging/elasticsearch-data-elasticsearch-es-cold-1
    - logging/elasticsearch-data-elasticsearch-es-cold-2
    - logging/elasticsearch-data-elasticsearch-es-data-0
    - logging/elasticsearch-data-elasticsearch-es-data-1
    - logging/elasticsearch-data-elasticsearch-es-data-2
    - logging/elasticsearch-data-elasticsearch-es-hot-0
    - logging/elasticsearch-data-elasticsearch-es-hot-1
    - logging/elasticsearch-data-elasticsearch-es-hot-2
    - logging/elasticsearch-data-elasticsearch-es-masters-0
    - logging/elasticsearch-data-elasticsearch-es-masters-1
    - logging/elasticsearch-data-elasticsearch-es-masters-2
    - logging/elasticsearch-data-elasticsearch-es-ml-0
    - logging/elasticsearch-data-elasticsearch-es-warm-0
    - logging/elasticsearch-data-elasticsearch-es-warm-1
    - logging/elasticsearch-data-elasticsearch-es-warm-2
    - logging/logstash-data-logstash-ls-0
  v1/Pod:
    - logging/elasticsearch-es-cold-0
    - logging/elasticsearch-es-cold-1
    - logging/elasticsearch-es-cold-2
    - logging/elasticsearch-es-data-0
    - logging/elasticsearch-es-data-1
    - logging/elasticsearch-es-data-2
    - logging/elasticsearch-es-hot-0
    - logging/elasticsearch-es-hot-1
    - logging/elasticsearch-es-hot-2
    - logging/elasticsearch-es-masters-0
    - logging/elasticsearch-es-masters-1
    - logging/elasticsearch-es-masters-2
    - logging/elasticsearch-es-ml-0
    - logging/elasticsearch-es-warm-0
    - logging/elasticsearch-es-warm-1
    - logging/elasticsearch-es-warm-2
    - logging/filebeat-beat-filebeat-4vbfq
    - logging/filebeat-beat-filebeat-6nmql
    - logging/filebeat-beat-filebeat-82m68
    - logging/filebeat-beat-filebeat-8g2n4
    - logging/filebeat-beat-filebeat-fwght
    - logging/filebeat-beat-filebeat-g5w8b
    - logging/kibana-kb-7dbf9fd4c5-4pqrt
    - logging/logstash-ls-0
  v1/Secret:
    - logging/elastic-webhook-server-cert
    - logging/elasticsearch-es-cold-es-config
    - logging/elasticsearch-es-cold-es-transport-certs
    - logging/elasticsearch-es-data-es-config
    - logging/elasticsearch-es-data-es-transport-certs
    - logging/elasticsearch-es-elastic-user
    - logging/elasticsearch-es-file-settings
    - logging/elasticsearch-es-hot-es-config
    - logging/elasticsearch-es-hot-es-transport-certs
    - logging/elasticsearch-es-http-ca-internal
    - logging/elasticsearch-es-http-certs-internal
    - logging/elasticsearch-es-http-certs-public
    - logging/elasticsearch-es-internal-users
    - logging/elasticsearch-es-masters-es-config
    - logging/elasticsearch-es-masters-es-transport-certs
    - logging/elasticsearch-es-ml-es-config
    - logging/elasticsearch-es-ml-es-transport-certs
    - logging/elasticsearch-es-remote-ca
    - logging/elasticsearch-es-transport-ca-internal
    - logging/elasticsearch-es-transport-certs-public
    - logging/elasticsearch-es-warm-es-config
    - logging/elasticsearch-es-warm-es-transport-certs
    - logging/elasticsearch-es-xpack-file-realm
    - logging/filebeat-beat-filebeat-config
    - logging/kibana-kb-config
    - logging/kibana-kb-es-ca
    - logging/kibana-kb-http-ca-internal
    - logging/kibana-kb-http-certs-internal
    - logging/kibana-kb-http-certs-public
    - logging/kibana-kibana-user
    - logging/logging-kibana-kibana-user
    - logging/logging-logstash-logging-elasticsearch-logstash-user
    - logging/logstash-logging-elasticsearch-logstash-user
    - logging/logstash-logstash-es-logging-elasticsearch-ca
    - logging/logstash-ls-config
    - logging/logstash-ls-pipeline
  v1/Service:
    - logging/elastic-webhook-server
    - logging/elasticsearch-es-cold
    - logging/elasticsearch-es-data
    - logging/elasticsearch-es-hot
    - logging/elasticsearch-es-http
    - logging/elasticsearch-es-internal-http
    - logging/elasticsearch-es-masters
    - logging/elasticsearch-es-ml
    - logging/elasticsearch-es-transport
    - logging/elasticsearch-es-warm
    - logging/kibana-kb-http
    - logging/logstash-ls-api
    - logging/logstash-ls-beats
  v1/ServiceAccount:
    - logging/default
    - logging/elastic-operator
    - logging/filebeat

Backup Volumes:
  Velero-Native Snapshots: <none included>
  CSI Snapshots: <none included>
  Pod Volume Backups - restic:
    Completed:
      logging/elasticsearch-es-cold-0: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-cold-1: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-cold-2: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-data-0: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-data-1: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-data-2: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-hot-0: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-hot-1: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-hot-2: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-masters-0: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-masters-1: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-masters-2: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-ml-0: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-warm-0: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-warm-1: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/elasticsearch-es-warm-2: elastic-internal-elasticsearch-bin-local, elastic-internal-elasticsearch-config-local, elastic-internal-elasticsearch-plugins-local, elasticsearch-data, elasticsearch-logs, tmp-volume
      logging/kibana-kb-7dbf9fd4c5-4pqrt: elastic-internal-kibana-config-local, kibana-data
      logging/logstash-ls-0: config, logstash-data, logstash-logs

HooksAttempted:  0
HooksFailed:     0

After the operation finishes, we verify that all components were backed up properly and, in the Pod Volume Backups - restic section, that all PVCs were backed up. If one or more PVCs show an error, you can create another backup or take a separate backup of the affected PVC. Before that, we check the reason the backup failed with the following command:

[root@nps-k8s-master1 ~]# velero backup logs logging44        
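Each volume copied by restic is also tracked as a PodVolumeBackup object in the velero namespace, so the per-volume status can be inspected directly. A sketch, assuming the backup name logging44 and the velero.io/backup-name label that Velero sets on these objects (the describe target is a hypothetical failed object name):

kubectl -n velero get podvolumebackups -l velero.io/backup-name=logging44
kubectl -n velero describe podvolumebackup <name-of-failed-podvolumebackup>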


With the following command, we can see the list of prepared backups:

[root@nps-k8s-master1 ~]# velero get backup

NAME        STATUS      ERRORS   WARNINGS   CREATED                           EXPIRES   STORAGE LOCATION   SELECTOR
logging44   Completed   0        0          2024-04-15 11:18:56 +0330 +0330   29d       default            <none>

To restore the prepared backup on the destination cluster, we install Velero there following the same installation steps described above. After installation, the following command should show the list of backups available in the bucket:

[root@nps-k8s-master1 ~]# velero get backup

NAME        STATUS      ERRORS   WARNINGS   CREATED                           EXPIRES   STORAGE LOCATION   SELECTOR
logging44   Completed   0        0          2024-04-15 11:18:56 +0330 +0330   29d       default            <none>

Since the Istio service mesh is used to manage the services in the cluster, we must transfer Istio before restoring the logging namespace.

[root@nps-k8s-master1 ~]# velero backup create istio-system44 --include-namespaces istio-system        
[root@nps-k8s-master1 ~]# velero backup describe istio-system44 --details        

Name:         istio-system44
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/resource-timeout=10m0s
              velero.io/source-cluster-k8s-gitversion=v1.28.2
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=28

Phase:  Completed

Namespaces:
  Included:  istio-system
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>
Or label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto
Snapshot Move Data:          false
Data Mover:                  velero

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2024-04-30 11:20:33 +0330 +0330
Completed:  2024-04-30 11:26:17 +0330 +0330

Expiration:  2024-05-30 11:20:33 +0330 +0330

Total items to be backed up:  142
Items backed up:              142

Resource List:
  apiextensions.k8s.io/v1/CustomResourceDefinition:
    - gateways.networking.istio.io
    - istiooperators.install.istio.io
    - peerauthentications.security.istio.io
    - telemetries.telemetry.istio.io
    - virtualservices.networking.istio.io
  apps/v1/ControllerRevision:
    - istio-system/istio-cni-node-7c4476d68b
  apps/v1/DaemonSet:
    - istio-system/istio-cni-node
  apps/v1/Deployment:
    - istio-system/egressgateway
    - istio-system/flagger
    - istio-system/grafana
    - istio-system/ingressgateway
    - istio-system/istiod
    - istio-system/jaeger
    - istio-system/kiali
    - istio-system/prometheus
  apps/v1/ReplicaSet:
    - istio-system/egressgateway-754cb7f4f7
    - istio-system/flagger-675cf7b84f
    - istio-system/flagger-6788cb5548
    - istio-system/flagger-7bf57cc4dc
    - istio-system/flagger-86d57c84c5
    - istio-system/grafana-b8bbdc84d
    - istio-system/ingressgateway-66c86c649b
    - istio-system/ingressgateway-69c8bb8484
    - istio-system/ingressgateway-6cf57c888f
    - istio-system/ingressgateway-744d4754c
    - istio-system/istiod-5d95465974
    - istio-system/jaeger-7d7d59b9d
    - istio-system/kiali-545878ddbb
    - istio-system/prometheus-db8b4588f
  autoscaling/v2/HorizontalPodAutoscaler:
    - istio-system/egressgateway
    - istio-system/ingressgateway
    - istio-system/istiod
  coordination.k8s.io/v1/Lease:
    - istio-system/istio-gateway-deployment-default
  discovery.k8s.io/v1/EndpointSlice:
    - istio-system/egressgateway-2txff
    - istio-system/grafana-hdc5k
    - istio-system/ingressgateway-5d4lq
    - istio-system/istiod-f4p9h
    - istio-system/jaeger-collector-lf7d6
    - istio-system/kiali-6q6ss
    - istio-system/prometheus-jxsph
    - istio-system/tracing-mkfgc
    - istio-system/zipkin-rt9xm
  install.istio.io/v1alpha1/IstioOperator:
    - istio-system/installed-state-control-plane
    - istio-system/installed-state-egress-gateway
    - istio-system/installed-state-ingress-gateway
  networking.istio.io/v1beta1/Gateway:
    - istio-system/eis-gateway
    - istio-system/keycloak-gateway
  networking.istio.io/v1beta1/VirtualService:
    - istio-system/eis-vs-from-gw
    - istio-system/keycloak-vs-from-gw
  policy/v1/PodDisruptionBudget:
    - istio-system/egressgateway
    - istio-system/ingressgateway
    - istio-system/istiod
  rbac.authorization.k8s.io/v1/ClusterRole:
    - flagger
    - istio-cni
    - istio-cni-repair-role
    - istio-reader-clusterrole-istio-system
    - istiod-clusterrole-istio-system
    - istiod-gateway-controller-istio-system
    - kiali
  rbac.authorization.k8s.io/v1/ClusterRoleBinding:
    - flagger
    - istio-cni
    - istio-cni-repair-rolebinding
    - istio-reader-clusterrole-istio-system
    - istiod-clusterrole-istio-system
    - istiod-gateway-controller-istio-system
    - kiali
  rbac.authorization.k8s.io/v1/Role:
    - istio-system/egressgateway-sds
    - istio-system/ingressgateway-sds
    - istio-system/istiod
    - istio-system/kiali-controlplane
  rbac.authorization.k8s.io/v1/RoleBinding:
    - istio-system/egressgateway-sds
    - istio-system/ingressgateway-sds
    - istio-system/istiod
    - istio-system/kiali-controlplane
  scheduling.k8s.io/v1/PriorityClass:
    - system-node-critical
  security.istio.io/v1beta1/PeerAuthentication:
    - istio-system/default
  telemetry.istio.io/v1alpha1/Telemetry:
    - istio-system/ingress-gateway
    - istio-system/mesh-default
  v1/ConfigMap:
    - istio-system/grafana
    - istio-system/istio
    - istio-system/istio-ca-root-cert
    - istio-system/istio-cni-config
    - istio-system/istio-gateway-status-leader
    - istio-system/istio-grafana-dashboards
    - istio-system/istio-leader
    - istio-system/istio-namespace-controller-election
    - istio-system/istio-services-grafana-dashboards
    - istio-system/istio-sidecar-injector
    - istio-system/kiali
    - istio-system/kube-root-ca.crt
    - istio-system/prometheus
  v1/Endpoints:
    - istio-system/egressgateway
    - istio-system/grafana
    - istio-system/ingressgateway
    - istio-system/istiod
    - istio-system/jaeger-collector
    - istio-system/kiali
    - istio-system/prometheus
    - istio-system/tracing
    - istio-system/zipkin
  v1/Event:
    - istio-system/istio-cni-node-bhldx.17c8ec84126a4077
  v1/Namespace:
    - istio-system
  v1/Pod:
    - istio-system/egressgateway-754cb7f4f7-ns9mk
    - istio-system/flagger-86d57c84c5-5rlc9
    - istio-system/grafana-b8bbdc84d-db92f
    - istio-system/ingressgateway-744d4754c-j7g5z
    - istio-system/istio-cni-node-2wpp6
    - istio-system/istio-cni-node-68jcc
    - istio-system/istio-cni-node-8lf6s
    - istio-system/istio-cni-node-bhldx
    - istio-system/istio-cni-node-cb9tq
    - istio-system/istio-cni-node-fp996
    - istio-system/istio-cni-node-js87b
    - istio-system/istio-cni-node-ng2rb
    - istio-system/istio-cni-node-pk24q
    - istio-system/istiod-5d95465974-jzpgf
    - istio-system/jaeger-7d7d59b9d-h5rrl
    - istio-system/kiali-545878ddbb-j6hpb
    - istio-system/prometheus-db8b4588f-v9kgk
  v1/Secret:
    - istio-system/eis-credential
    - istio-system/istio-ca-secret
    - istio-system/sh.helm.release.v1.flagger.v1
    - istio-system/webapp-credential
  v1/Service:
    - istio-system/egressgateway
    - istio-system/grafana
    - istio-system/ingressgateway
    - istio-system/istiod
    - istio-system/jaeger-collector
    - istio-system/kiali
    - istio-system/prometheus
    - istio-system/tracing
    - istio-system/zipkin
  v1/ServiceAccount:
    - istio-system/default
    - istio-system/egressgateway-service-account
    - istio-system/flagger
    - istio-system/grafana
    - istio-system/ingressgateway-service-account
    - istio-system/istio-cni
    - istio-system/istio-reader-service-account
    - istio-system/istiod
    - istio-system/kiali
    - istio-system/prometheus

Backup Volumes:
  Velero-Native Snapshots: <none included>
  CSI Snapshots: <none included>
  Pod Volume Backups - restic:
    Completed:
      istio-system/egressgateway-754cb7f4f7-ns9mk: credential-socket, istio-data, istio-envoy, workload-certs, workload-socket
      istio-system/grafana-b8bbdc84d-db92f: storage
      istio-system/ingressgateway-744d4754c-j7g5z: credential-socket, istio-data, istio-envoy, workload-certs, workload-socket
      istio-system/istiod-5d95465974-jzpgf: local-certs
      istio-system/jaeger-7d7d59b9d-h5rrl: data
      istio-system/prometheus-db8b4588f-v9kgk: storage-volume

HooksAttempted:  0
HooksFailed:     0

Since Istio also relies on CRDs, we back up all CRDs separately:

[root@nps-k8s-master1 ~]# velero backup create crds --include-resources crds        
[root@nps-k8s-master1 ~]# velero backup describe crds        

Name:         crds
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/resource-timeout=10m0s
              velero.io/source-cluster-k8s-gitversion=v1.28.2
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=28

Phase:  Completed

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        crds
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>
Or label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto
Snapshot Move Data:          false
Data Mover:                  velero

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2024-05-01 13:49:01 +0330 +0330
Completed:  2024-05-01 13:49:06 +0330 +0330

Expiration:  2024-05-31 13:49:01 +0330 +0330

Total items to be backed up:  84
Items backed up:              84

Backup Volumes:
  Velero-Native Snapshots: <none included>
  CSI Snapshots: <none included>
  Pod Volume Backups: <none included>

HooksAttempted:  0
HooksFailed:     0

On the destination cluster, we can now see the list of backups:

[root@nps-k8s-master1 ~]# velero get backup

NAME             STATUS      ERRORS   WARNINGS   CREATED                           EXPIRES   STORAGE LOCATION   SELECTOR
logging44        Completed   0        0          2024-04-15 11:18:56 +0330 +0330   29d       default            <none>
istio-system44   Completed   0        0          2024-04-30 11:20:33 +0330 +0330   24d       default            <none>
crds             Completed   0        0          2024-05-01 13:49:01 +0330 +0330   25d       default            <none>

In production environments, it is recommended to set the backup storage location to read-only mode before attempting a restore, so that backups cannot be modified by scheduled tasks during the recovery. For this purpose, we use the following command:

kubectl patch backupstoragelocation default --namespace velero --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'        
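You can confirm that the patch was applied with a quick jsonpath query; it should print ReadOnly:

kubectl -n velero get backupstoragelocation default -o jsonpath='{.spec.accessMode}'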

First, we restore the CRDs:

velero restore create --from-backup crds --existing-resource-policy=update        


Then we restore istio-system:

velero restore create --from-backup istio-system44        
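Restores run asynchronously, so before moving on we can watch their status with the standard restore commands; the describe output with --details also lists any resources that were skipped or failed (the restore name placeholder below is whatever name Velero generated for your restore):

velero restore get
velero restore describe <restore-name> --details
velero restore logs <restore-name>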

After the restore completes, we make sure that all of Istio's pods are Running:

[root@nps-k8s-hamed-master1 velero-helm]# kubectl get po -n istio-system        

NAME                             READY   STATUS    RESTARTS   AGE
egressgateway-754cb7f4f7-ns9mk   1/1     Running   0          4d3h
flagger-86d57c84c5-tc2nk         1/1     Running   0          4d2h
grafana-b8bbdc84d-db92f          1/1     Running   0          4d3h
ingressgateway-744d4754c-j7g5z   1/1     Running   0          4d3h
istio-cni-node-2wpp6             1/1     Running   0          4d3h
istio-cni-node-6g4t2             1/1     Running   0          4d3h
istio-cni-node-8lf6s             1/1     Running   0          4d3h
istio-cni-node-fsdk7             1/1     Running   0          4d3h
istio-cni-node-js87b             1/1     Running   0          4d3h
istio-cni-node-xcmmr             1/1     Running   0          4d3h
istiod-5d95465974-jzpgf          1/1     Running   0          4d3h
jaeger-7d7d59b9d-h5rrl           1/1     Running   0          4d3h
kiali-545878ddbb-dt5dz           1/1     Running   0          4d2h
prometheus-db8b4588f-v9kgk       2/2     Running   0          4d3h

Since the logging namespace contains a number of PVs and PVCs, we first create StorageClasses on the destination cluster with the same names as on the source cluster. Note that the StorageClass type matters (rbd, cephfs, etc.); a sketch of such a StorageClass is shown after the listing below.

kubectl get sc        

NAME                 PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cephfs-hdd           cephfs.csi.ceph.com   Retain          Immediate           true                   4d3h
rook-cephfs-retain   cephfs.csi.ceph.com   Retain          Immediate           true                   4d3h
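A minimal sketch of such a StorageClass, assuming the Ceph CSI (cephfs) driver shown in the listing above; the parameters (clusterID, fsName, secret names) are placeholders and must match your own Ceph/Rook deployment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs-retain            # same name as on the source cluster
provisioner: cephfs.csi.ceph.com
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
parameters:
  clusterID: rook-ceph                # placeholder, set to your Ceph cluster ID
  fsName: myfs                        # placeholder, set to your CephFS filesystem name
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner   # placeholder
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph                # placeholder
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node           # placeholder
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph                 # placeholder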

Then we restore the Logging Namespace:

velero restore create --from-backup logging44 --existing-resource-policy=update        

After that, we make sure that all the pods are Running:

k get po -n logging

NAME                           READY   STATUS    RESTARTS      AGE
elasticsearch-es-cold-0        1/1     Running   0             4d
elasticsearch-es-cold-1        1/1     Running   0             4d
elasticsearch-es-cold-2        1/1     Running   0             4d
elasticsearch-es-data-0        1/1     Running   0             4d
elasticsearch-es-data-1        1/1     Running   0             4d
elasticsearch-es-data-2        1/1     Running   0             4d
elasticsearch-es-hot-0         1/1     Running   0             4d
elasticsearch-es-hot-1         1/1     Running   0             4d
elasticsearch-es-hot-2         1/1     Running   0             4d
elasticsearch-es-masters-0     1/1     Running   0             4d
elasticsearch-es-masters-1     1/1     Running   0             4d
elasticsearch-es-masters-2     1/1     Running   0             4d
elasticsearch-es-ml-0          1/1     Running   0             4d
elasticsearch-es-warm-0        1/1     Running   0             4d
elasticsearch-es-warm-1        1/1     Running   0             4d
elasticsearch-es-warm-2        1/1     Running   0             4d
filebeat-beat-filebeat-9bx68   1/1     Running   0             4d
filebeat-beat-filebeat-m9q9g   1/1     Running   0             4d
filebeat-beat-filebeat-mmwsh   1/1     Running   0             4d
filebeat-beat-filebeat-p7vv5   1/1     Running   0             4d
filebeat-beat-filebeat-pgpqr   1/1     Running   0             4d
kibana-kb-7dbf9fd4c5-7g28n     1/1     Running   88 (8h ago)   4d
logstash-ls-0                  1/1     Running   0             4d

For this namespace, since the Elasticsearch cluster runs with 3 master nodes, after the restore completes we must extract the IPs of the master pods and replace them in the corresponding ConfigMap (a scripted sketch of the same change follows the ConfigMap below).

k get po -n logging -o wide        

NAME                           READY   STATUS    RESTARTS      AGE   IP                NODE                    NOMINATED NODE   READINESS GATES
elasticsearch-es-cold-0        1/1     Running   0             4d    10.244.161.208    nps-k8s-hamed-worker1   <none>           <none>
elasticsearch-es-cold-1        1/1     Running   0             4d    10.244.235.148    nps-k8s-ceph3           <none>           <none>
elasticsearch-es-cold-2        1/1     Running   0             4d    10.244.98.148     nps-k8s-ceph2           <none>           <none>
elasticsearch-es-data-0        1/1     Running   0             4d    10.244.148.104    nps-k8s-hamed-worker2   <none>           <none>
elasticsearch-es-data-1        1/1     Running   0             4d    10.244.166.216    nps-k8s-ceph1           <none>           <none>
elasticsearch-es-data-2        1/1     Running   0             4d    10.244.161.206    nps-k8s-hamed-worker1   <none>           <none>
elasticsearch-es-hot-0         1/1     Running   0             4d    10.244.98.150     nps-k8s-ceph2           <none>           <none>
elasticsearch-es-hot-1         1/1     Running   0             4d    10.244.161.207    nps-k8s-hamed-worker1   <none>           <none>
elasticsearch-es-hot-2         1/1     Running   0             4d    10.244.148.103    nps-k8s-hamed-worker2   <none>           <none>
elasticsearch-es-masters-0     1/1     Running   0             4d    10.244.235.146    nps-k8s-ceph3           <none>           <none>
elasticsearch-es-masters-1     1/1     Running   0             4d    10.244.98.149     nps-k8s-ceph2           <none>           <none>
elasticsearch-es-masters-2     1/1     Running   0             4d    10.244.166.215    nps-k8s-ceph1           <none>           <none>
elasticsearch-es-ml-0          1/1     Running   0             4d    10.244.235.147    nps-k8s-ceph3           <none>           <none>
elasticsearch-es-warm-0        1/1     Running   0             4d    10.244.166.214    nps-k8s-ceph1           <none>           <none>
elasticsearch-es-warm-1        1/1     Running   0             4d    10.244.148.105    nps-k8s-hamed-worker2   <none>           <none>
elasticsearch-es-warm-2        1/1     Running   0             4d    10.244.98.151     nps-k8s-ceph2           <none>           <none>
filebeat-beat-filebeat-9bx68   1/1     Running   0             4d    192.168.163.226   nps-k8s-ceph1           <none>           <none>
filebeat-beat-filebeat-m9q9g   1/1     Running   0             4d    192.168.160.89    nps-k8s-hamed-worker2   <none>           <none>
filebeat-beat-filebeat-mmwsh   1/1     Running   0             4d    192.168.165.30    nps-k8s-hamed-worker1   <none>           <none>
filebeat-beat-filebeat-p7vv5   1/1     Running   0             4d    192.168.163.248   nps-k8s-ceph3           <none>           <none>
filebeat-beat-filebeat-pgpqr   1/1     Running   0             4d    192.168.163.231   nps-k8s-ceph2           <none>           <none>
kibana-kb-7dbf9fd4c5-7g28n     1/1     Running   88 (8h ago)   4d    10.244.161.210    nps-k8s-hamed-worker1   <none>           <none>
logstash-ls-0                  1/1     Running   0             4d    10.244.166.213    nps-k8s-ceph1           <none>           <none>


k get configmaps -n logging        

NAME                             DATA   AGE
elastic-licensing                5      4d2h
elastic-operator                 1      4d2h
elastic-operator-uuid            1      4d2h
elasticsearch-es-scripts         5      4d2h
elasticsearch-es-unicast-hosts   1      4d2h
istio-ca-root-cert               1      4d2h
kube-root-ca.crt                 1      4d2h


k edit configmaps elasticsearch-es-unicast-hosts -n logging        

apiVersion: v1
data:
  unicast_hosts.txt: |-
    10.244.235.146:9300
    10.244.98.149:9300
    10.244.166.215:9300
kind: ConfigMap
metadata:
  creationTimestamp: "2024-05-01T10:25:35Z"
  labels:
    common.k8s.elastic.co/type: elasticsearch
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch
    velero.io/backup-name: logging44
    velero.io/restore-name: logging44-20240501135525
  name: elasticsearch-es-unicast-hosts
  namespace: logging
  resourceVersion: "1223013"
  uid: e8455bd5-0b61-4528-b026-be3eb263740f
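The scripted sketch mentioned above, assuming the master pods keep the elasticsearch-es-masters-* naming shown in the listing (verify the collected IPs before applying; the ECK operator normally manages this ConfigMap and may reconcile it again):

# Collect the current IPs of the Elasticsearch master pods on the destination cluster
MASTERS=$(kubectl -n logging get pods -o wide --no-headers | awk '/elasticsearch-es-masters/ {print $6":9300"}')

# Rebuild unicast_hosts.txt and apply it back to the ConfigMap
echo "$MASTERS" > unicast_hosts.txt
kubectl -n logging create configmap elasticsearch-es-unicast-hosts --from-file=unicast_hosts.txt --dry-run=client -o yaml | kubectl apply -f -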

After verifying that the restore is correct, we set the backup storage location back to ReadWrite:

kubectl patch backupstoragelocation default --namespace velero --type merge --patch '{"spec":{"accessMode":"ReadWrite"}}'        

Special cases:

If restoring one or more PVs, PVCs, or any other resource fails, we use the following method to restore them again:

On the destination cluster, we delete the affected PV, PVC, and the pod that is stuck in Pending and has never run. If they remain in Terminating state after deletion, we use the following commands to clear their finalizers:

kubectl patch pvc pvc_name -n namespacename -p '{"metadata":{"finalizers":null}}'
kubectl patch pv pv_name -p '{"metadata":{"finalizers":null}}'

With the following command, we restore the PVs and PVCs again:

velero restore create logging-data0 --from-backup logging44 --existing-resource-policy=update --include-resources persistentvolumeclaims,persistentvolumes --restore-volumes=true        

With the following commands, we make sure they are restored correctly and reach Bound status:

k get pvc -n logging
k get pv

After that, all pods should be in Running state.

With the following command, you can see which PVC each pod is connected to:

kubectl get pods --all-namespaces -o=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName:.spec.volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'        

Note: Velero's file-system backup only backs up PVs and PVCs that are mounted by a pod; orphaned volumes that are not attached to any pod are not included.

Note: The TTL of backups created by Velero is 720 hours (30 days) by default. This value can be changed at backup time with the --ttl flag. After the TTL expires, the backup is automatically deleted from the storage.
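For example, a backup that should only be kept for one day could be created like this (hypothetical backup name):

velero backup create logging-short --include-namespaces logging --ttl 24h0m0s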

Note: To delete unneeded backups, we use the following command. Keep in mind that deleting the Backup object with kubectl only removes it from the cluster; to also remove the backup data from object storage, prefer velero backup delete:

kubectl delete -n velero backup.velero.io/logging5

Note: Velero cannot run more than one backup at a time and does not support parallel backups. While a backup is in the InProgress state, you cannot start another backup task. To cancel an InProgress backup and run another task, execute these commands:

kubectl delete backup <inprogress backup name> -n velero
kubectl rollout restart daemonset/node-agent -n velero
kubectl rollout restart deployment -n velero        

Note: To remove the Velero tool, we use the following commands. Deleting Velero does not delete the backups from the storage; as soon as Velero is installed again, the old backups will be visible:

kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero        

Note: With the powerful Velero tool, you can back up all cluster components. Since there is far more to cover than fits in this document, refer to the Velero website for more information:

https://velero.io/

