Pgbackrest and Minio, the perfect match

Hi there,

Today I want to present pgbackrest with backups on S3-compatible storage, in my case Minio.

My test bed is four VirtualBox machines running OEL 7.9:

sandbox1 : Postgres primary

sandbox2 : Postgres standby

sandbox3 : Minio server

sandbox4 : used to restore

Postgres version is 13.4

For primary/standby management repmgr is used.

1 - Minio installation

The installation itself is described on Minio website.

So no need to repeat it here; I will just show my configuration.

It is installed under /opt/minio with minio:minio as user and group.

To have versioned buckets, I need a minimum of 4 devices configured on the server machine (versioning requires an erasure-coded backend).

/dev/sd[d-g]1, 40 GB each, are dedicated to that.

They are mounted under /minio/data[1-4].

Do not forget to change ownership of /minio/data[1-4] to minio:minio.
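As a side note on why four devices: with four drives Minio runs in erasure-coded mode, which is what enables bucket versioning. A rough capacity model, as an illustration only (simplified Python; Minio's real shard layout and default parity may differ):

```python
# Simplified model of erasure-coded capacity (illustrative only).
# With N drives and P parity shards, usable capacity is (N - P) / N of raw,
# and the set tolerates the loss of up to P drives.

def usable_capacity_gb(drives: int, drive_size_gb: int, parity: int) -> float:
    """Raw capacity scaled by the data/total shard ratio."""
    if parity >= drives:
        raise ValueError("parity must be smaller than the drive count")
    return drives * drive_size_gb * (drives - parity) / drives

# 4 x 40 GB drives with 2 parity shards (a common default for 4 drives):
print(usable_capacity_gb(4, 40, 2))  # 80.0 GB usable, survives 2 failed drives
```

So with my 4 x 40 GB drives, roughly half of the raw 160 GB ends up usable, in exchange for tolerating drive failures.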

Once completed, we can create a minio.conf under the home directory of minio.

My minio.conf looks like this:

MINIO_VOLUMES="/minio/data1 /minio/data2 /minio/data3 /minio/data4"

MINIO_ROOT_USER=admin

MINIO_ROOT_PASSWORD=xxxx

MINIO_OPTS="--address 192.168.11.104:9000 --console-address 192.168.11.104:9090 --compat"
        

In MINIO_OPTS, I set the address the API listens on and the address for the console.

A systemd unit is created to start Minio automatically:

[Unit]

Description=Minio

Documentation=https://docs.minio.io

Wants=network-online.target

After=network-online.target

AssertFileIsExecutable=/opt/minio/bin/minio


[Service]

AmbientCapabilities=CAP_NET_BIND_SERVICE

WorkingDirectory=/opt/minio

User=minio

Group=minio

PermissionsStartOnly=true

EnvironmentFile=/opt/minio/minio.conf

ExecStartPre=/bin/bash -c "[ -n \"${MINIO_VOLUMES}\" ] || echo \"Variable MINIO_VOLUMES not set in /opt/minio/minio.conf\""

ExecStart=/opt/minio/bin/minio server $MINIO_OPTS $MINIO_VOLUMES

StandardOutput=journal

StandardError=inherit

# Specifies the maximum file descriptor number that can be opened by this process

LimitNOFILE=65536

# Disable timeout logic and wait until process is stopped

TimeoutStopSec=0

# SIGTERM signal is used to stop Minio

KillSignal=SIGTERM

SendSIGKILL=no

SuccessExitStatus=0




[Install]

WantedBy=multi-user.target
        

Before running Minio, we need to generate self-signed certificates so that pgbackrest can interact with Minio over TLS.

Minio has a dedicated tool called certgen-linux-amd64 that you can download.

It generates the certificates with this command:

/tmp/certgen-linux-amd64 -ca -host 192.168.11.104        

where 192.168.11.104 is the Minio server.

Copy the generated certificates to the certs directory of the Minio installation:

cp public.crt private.key /opt/minio/.minio/certs        

That's it; we can now start Minio if it is not already running.

On sandbox3, it is running :

[root@sandbox3 ~]# systemctl status minio

● minio.service - Minio

   Loaded: loaded (/etc/systemd/system/minio.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-12-03 08:06:32 CET; 1min 54s ago
     Docs: https://docs.minio.io
  Process: 6667 ExecStartPre=/bin/bash -c [ -n "${MINIO_VOLUMES}" ] || echo "Variable MINIO_VOLUMES not set in /opt/minio/minio.conf" (code=exited, status=0/SUCCESS)
 Main PID: 6670 (minio)
    Tasks: 15
   CGroup: /system.slice/minio.service
           └─6670 /opt/minio/bin/minio server --address 192.168.11.104:9000 --console-address 192.168.11.104:9090 --compat /minio/data1 /minio/da...




Dec 03 08:06:32 sandbox3 systemd[1]: Starting Minio...

Dec 03 08:06:32 sandbox3 systemd[1]: Started Minio.

Dec 03 08:06:34 sandbox3 minio[6670]: Verifying if 1 bucket is consistent across drives...

Dec 03 08:06:34 sandbox3 minio[6670]: Automatically configured API requests per node based on available memory on the system: 101

Dec 03 08:06:34 sandbox3 minio[6670]: Status:         4 Online, 0 Offline.

Dec 03 08:06:34 sandbox3 minio[6670]: API: https://192.168.11.104:9000

Dec 03 08:06:34 sandbox3 minio[6670]: Console: https://192.168.11.104:9090

Dec 03 08:06:34 sandbox3 minio[6670]: Documentation: https://docs.min.io
        

We are now ready to create our bucket dedicated to Postgres backups.

2 - Bucket creation

To create a bucket, log in to the console and enter the name and password of the user that you defined as admin.


Then choose Buckets and Create Bucket.


Because I'm using multiple devices, I'm able to enable versioning on the bucket (useful if you want to retrieve deleted backup sets and recatalog them with pgbackrest).

3 - User creation

To create a user dedicated to Postgres backups, choose Users, then enter the user name, the password, and the privileges to grant.

The user name will later be referenced as the S3 key (repo1-s3-key) and the password as the S3 key secret (repo1-s3-key-secret).


Once completed, we are ready to install and configure pgbackrest.

4 - pgbackrest installation and configuration

pgbackrest is installed as a package on sandbox1, sandbox2, sandbox3 and sandbox4.

We need to have SSH user equivalency between these machines for user postgres.

bash-4.2$ ssh sandbox2

Last login: Thu Dec  2 13:23:27 2021




-bash-4.2$ ssh sandbox1

Last login: Thu Dec  2 13:17:26 2021
        

Once we are sure that connectivity is ok between all hosts, we can configure pgbackrest.

I chose to have a server dedicated to backups (sandbox3), meaning that the backups will be started from there.

pgbackrest is smart enough to know which instance is the primary; even if we switch over to the standby with repmgr for any reason, the backup will be taken from the new primary (because that is how I chose to configure it, but backups can be taken on a standby too).

The clients (sandbox1,sandbox2 or sandbox4) will be used only for restore.

pgbackrest is configured through /etc/pgbackrest.conf.

Here is my configuration on sandbox3 :


[global]

repo1-type=s3
repo1-s3-endpoint=sandbox3
repo1-storage-port=9000
repo1-s3-uri-style=path
repo1-path=/pgbackup
repo1-s3-key=pgbackrest
repo1-s3-key-secret=xxxxxxxxx
repo1-s3-bucket=pgbackrest
repo1-s3-region=eu-west-1
repo1-s3-verify-ssl=n
repo1-retention-full=7
repo1-retention-full-type=time
process-max=4
log-level-console=info
log-level-file=debug
start-fast=y

[mycluster]
pg1-path=/pgdata
pg1-host=sandbox1
pg2-path=/pgdata
pg2-host=sandbox2
        

This file says that, for backups, I'm using S3-compatible storage: the endpoint is sandbox3 listening on port 9000, my bucket is pgbackrest, the objects will be stored under /pgbackrest/pgbackup, and certificates are not verified since they are self-signed.

To access the S3 API, I'm using path-style URIs rather than virtual-hosted ones.
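The difference between the two URI styles is just where the bucket name goes; a small illustration (hypothetical helper, not part of pgbackrest):

```python
# Illustration of S3 "path" vs virtual-hosted ("host") URI styles.
# repo1-s3-uri-style=path puts the bucket in the URL path, which is
# what a Minio server addressed by IP or plain hostname expects.

def s3_url(endpoint: str, port: int, bucket: str, key: str, style: str = "path") -> str:
    if style == "path":
        # path style: https://endpoint:port/bucket/key
        return f"https://{endpoint}:{port}/{bucket}/{key}"
    # virtual-hosted style: https://bucket.endpoint:port/key
    return f"https://{bucket}.{endpoint}:{port}/{key}"

print(s3_url("sandbox3", 9000, "pgbackrest", "pgbackup/backup.info"))
# -> https://sandbox3:9000/pgbackrest/pgbackup/backup.info
print(s3_url("sandbox3", 9000, "pgbackrest", "pgbackup/backup.info", style="host"))
# -> https://pgbackrest.sandbox3:9000/pgbackup/backup.info
```

Virtual hosting would require DNS (and a certificate) for pgbackrest.sandbox3, which is why path style is the natural choice here.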

Retention is set to 7 days.
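What time-based full retention means can be sketched as follows (a simplified Python model; pgbackrest's real expire logic also tracks dependent diff/incr backups and WAL):

```python
from datetime import datetime, timedelta

# Simplified sketch of time-based full-backup retention: keep every full
# newer than the window, plus the newest full older than the window,
# since it is still needed to restore to any point inside the window.

def fulls_to_keep(fulls: list, now: datetime, days: int) -> list:
    cutoff = now - timedelta(days=days)
    keep = sorted(f for f in fulls if f >= cutoff)
    older = sorted(f for f in fulls if f < cutoff)
    if older:
        keep.insert(0, older[-1])  # newest full outside the window still covers it
    return keep

fulls = [datetime(2021, 11, 10), datetime(2021, 11, 20),
         datetime(2021, 11, 28), datetime(2021, 12, 2)]
# With a 7-day window on 2021-12-03, only the 2021-11-10 full can expire:
print(fulls_to_keep(fulls, datetime(2021, 12, 3), 7))
```

This is also why the logs later show "time-based archive retention not met": nothing is expired until the window can be satisfied without the oldest backups.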

My Postgres mycluster is composed of sandbox1 and sandbox2 and files are located under /pgdata.

We have to create the same file on sandbox1,sandbox2 and sandbox4.

The global section is the same, but the mycluster section contains only the path for the current machine.


[global]

repo1-type=s3
repo1-s3-endpoint=sandbox3
repo1-storage-port=9000
repo1-s3-uri-style=path
repo1-path=/pgbackup
repo1-s3-key=pgbackrest
repo1-s3-key-secret=xxxxxxxx
repo1-s3-bucket=pgbackrest
repo1-s3-region=eu-west-1
#repo1-s3-verify-tls=n
repo1-s3-verify-ssl=n
repo1-retention-full=7
repo1-retention-full-type=time
process-max=4
log-level-console=info
log-level-file=debug
start-fast=y

[mycluster]
pg1-path=/pgdata
        

That's it, we can now create our stanza (the stanza is pgbackrest's name for a cluster's backup configuration and the directory where its files are stored).

5 - Stanza creation

(on sandbox3)

It is as simple as:

/usr/bin/pgbackrest --stanza=mycluster stanza-create        

If the configuration is OK, you should see something similar to:

...

P00   INFO: stanza-create command end: completed successfully
        

To get information about the stanza:

/usr/bin/pgbackrest --stanza=mycluster info

...

        wal archive min/max (13): none present
        

Using the s3cmd or mc tools, we can see that the bucket now contains some "folders" plus the files pgbackrest needs.

We are now ready to configure Postgres to archive WALs to this bucket.

6 - Postgres archive_command

(on sandbox1 and sandbox2)

To tell Postgres to use pgbackrest and save WALs to this bucket, add or change these parameters in postgresql.conf:

archive_mode = on # enables archiving; off, on, or always

archive_command = '/usr/bin/pgbackrest --stanza=mycluster archive-push %p' # command to use to archive a logfile segment

archive_timeout = 300 # force a logfile segment switch after this
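Postgres expands the placeholders in archive_command before running it: %p becomes the path of the WAL segment (relative to the data directory) and %f its file name. A small illustration of that substitution (hypothetical helper, not Postgres code):

```python
# Illustration of how Postgres expands archive_command placeholders:
# %p -> path of the WAL segment to archive, %f -> its file name, %% -> %.

def expand_archive_command(template: str, wal_path: str, wal_name: str) -> str:
    out = template.replace("%%", "\x00")  # protect literal percent signs
    out = out.replace("%p", wal_path).replace("%f", wal_name)
    return out.replace("\x00", "%")

cmd = expand_archive_command(
    "/usr/bin/pgbackrest --stanza=mycluster archive-push %p",
    "pg_wal/000000320000000100000093",
    "000000320000000100000093",
)
print(cmd)
# /usr/bin/pgbackrest --stanza=mycluster archive-push pg_wal/000000320000000100000093
```

So pgbackrest receives the WAL path as its argument and takes care of pushing the segment to the S3 repository.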
        

Once modified, restart Postgres and monitor the Postgres logfile; you will see entries like the following (only on the primary sandbox1, since with archive_mode=on the standby does not archive WALs):

2021-11-17 15:43:57.871 P00   INFO: archive-push command begin 2.36: [pg_wal/000000320000000100000093] --exec-id=9695-1bee34c0 --log-level-console=info --log-level-file=debug --pg1-path=/pgdata --process-max=4 --repo1-path=/pgbackup --repo1-s3-bucket=pgbackrest --repo1-s3-endpoint=minio.local --repo1-s3-key=<redacted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=eu-west-1 --repo1-storage-host=sandbox3 --no-repo1-storage-verify-tls --repo1-type=s3 --stanza=mycluster
2021-11-17 15:43:58.008 P00   INFO: pushed WAL file '000000320000000100000093' to the archive
2021-11-17 15:43:58.008 P00   INFO: archive-push command end: completed successfully (137ms)

        

Good, it's working as expected; the WALs are now stored in our versioned bucket.

7 - Backup

(on sandbox3)

To be sure that everything is working, we can use the check command.

/usr/bin/pgbackrest --stanza=mycluster check

2021-12-03 08:48:40.251 P00   INFO: check command begin 2.36: --exec-id=9801-9008da54 --log-level-console=info --log-level-file=debug --pg1-host=sandbox1 --pg2-host=sandbox2 --pg1-path=/pgdata --pg2-path=/pgdata --repo1-path=/pgbackup --repo1-s3-bucket=pgbackrest --repo1-s3-endpoint=sandbox3 --repo1-s3-key=<redacted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=eu-west-1 --repo1-s3-uri-style=path --repo1-storage-port=9000 --no-repo1-storage-verify-tls --repo1-type=s3 --stanza=mycluster
2021-12-03 08:48:41.682 P00   INFO: check repo1 (standby)
2021-12-03 08:48:41.695 P00   INFO: switch wal not performed because this is a standby
2021-12-03 08:48:41.698 P00   INFO: check repo1 configuration (primary)
2021-12-03 08:48:41.909 P00   INFO: check repo1 archive for WAL (primary)
2021-12-03 08:48:42.013 P00   INFO: WAL segment 0000003200000001000000CB successfully archived to '/pgbackup/archive/mycluster/13-1/0000003200000001/0000003200000001000000CB-29708af554a12b6d6efd8fa9bb159c9f04731eb3.gz' on repo1
2021-12-03 08:48:42.220 P00   INFO: check command end: completed successfully (1969ms)
        

pgbackrest knows which instance is the primary and performs the WAL switch on the correct one.

Let's do a full backup:

/usr/bin/pgbackrest --stanza=mycluster --type=full backup

2021-12-03 09:09:32.529 P00   INFO: backup command begin 2.36: --exec-id=11222-0af99089 --log-level-console=info --log-level-file=debug --pg1-host=sandbox1 --pg2-host=sandbox2 --pg1-path=/pgdata --pg2-path=/pgdata --process-max=4 --repo1-path=/pgbackup --repo1-retention-full=7 --repo1-retention-full-type=time --repo1-s3-bucket=pgbackrest --repo1-s3-endpoint=sandbox3 --repo1-s3-key=<redacted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=eu-west-1 --repo1-s3-uri-style=path --repo1-storage-port=9000 --no-repo1-storage-verify-tls --repo1-type=s3 --stanza=mycluster --start-fast --type=full
2021-12-03 09:09:34.145 P00   INFO: execute non-exclusive pg_start_backup(): backup begins after the requested immediate checkpoint completes
2021-12-03 09:09:34.553 P00   INFO: backup start archive = 0000003200000001000000D1, lsn = 1/D1000028
2021-12-03 09:09:44.061 P00   INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2021-12-03 09:09:44.263 P00   INFO: backup stop archive = 0000003200000001000000D1, lsn = 1/D1000138
2021-12-03 09:09:44.273 P00   INFO: check archive for segment(s) 0000003200000001000000D1:0000003200000001000000D1
2021-12-03 09:09:44.410 P00   INFO: new backup label = 20211203-090934F
2021-12-03 09:09:44.505 P00   INFO: full backup size = 191.3MB, file total = 2444
2021-12-03 09:09:44.505 P00   INFO: backup command end: completed successfully (11976ms)
2021-12-03 09:09:44.505 P00   INFO: expire command begin 2.36: --exec-id=11222-0af99089 --log-level-console=info --log-level-file=debug --repo1-path=/pgbackup --repo1-retention-full=7 --repo1-retention-full-type=time --repo1-s3-bucket=pgbackrest --repo1-s3-endpoint=sandbox3 --repo1-s3-key=<redacted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=eu-west-1 --repo1-s3-uri-style=path --repo1-storage-port=9000 --no-repo1-storage-verify-tls --repo1-type=s3 --stanza=mycluster
2021-12-03 09:09:44.526 P00   INFO: repo1: time-based archive retention not met - archive logs will not be expired
2021-12-03 09:09:44.627 P00   INFO: expire command end: completed successfully (122ms)
        

We can do an incremental one too:

/usr/bin/pgbackrest --stanza=mycluster --type=incr backup

2021-12-03 09:10:44.438 P00   INFO: backup command begin 2.36: --exec-id=11330-030beb84 --log-level-console=info --log-level-file=debug --pg1-host=sandbox1 --pg2-host=sandbox2 --pg1-path=/pgdata --pg2-path=/pgdata --process-max=4 --repo1-path=/pgbackup --repo1-retention-full=7 --repo1-retention-full-type=time --repo1-s3-bucket=pgbackrest --repo1-s3-endpoint=sandbox3 --repo1-s3-key=<redacted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=eu-west-1 --repo1-s3-uri-style=path --repo1-storage-port=9000 --no-repo1-storage-verify-tls --repo1-type=s3 --stanza=mycluster --start-fast --type=incr
2021-12-03 09:10:46.005 P00   INFO: last backup label = 20211203-090934F, version = 2.36
2021-12-03 09:10:46.005 P00   INFO: execute non-exclusive pg_start_backup(): backup begins after the requested immediate checkpoint completes
2021-12-03 09:10:46.415 P00   INFO: backup start archive = 0000003200000001000000D3, lsn = 1/D3000028
2021-12-03 09:10:48.187 P00   INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2021-12-03 09:10:48.390 P00   INFO: backup stop archive = 0000003200000001000000D3, lsn = 1/D3000138
2021-12-03 09:10:48.393 P00   INFO: check archive for segment(s) 0000003200000001000000D3:0000003200000001000000D3
2021-12-03 09:10:48.535 P00   INFO: new backup label = 20211203-090934F_20211203-091045I
2021-12-03 09:10:48.628 P00   INFO: incr backup size = 119.4KB, file total = 2444
2021-12-03 09:10:48.628 P00   INFO: backup command end: completed successfully (4191ms)
2021-12-03 09:10:48.628 P00   INFO: expire command begin 2.36: --exec-id=11330-030beb84 --log-level-console=info --log-level-file=debug --repo1-path=/pgbackup --repo1-retention-full=7 --repo1-retention-full-type=time --repo1-s3-bucket=pgbackrest --repo1-s3-endpoint=sandbox3 --repo1-s3-key=<redacted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=eu-west-1 --repo1-s3-uri-style=path --repo1-storage-port=9000 --no-repo1-storage-verify-tls --repo1-type=s3 --stanza=mycluster
2021-12-03 09:10:48.642 P00   INFO: repo1: time-based archive retention not met - archive logs will not be expired
2021-12-03 09:10:48.744 P00   INFO: expire command end: completed successfully (116ms)
        

Retrieve information about the backups:

/usr/bin/pgbackrest --stanza=mycluster info

stanza: mycluster
    status: ok
    cipher: none

    db (current)
        wal archive min/max (13): 0000003200000001000000B6/0000003200000001000000D3

        full backup: 20211202-160618F
            timestamp start/stop: 2021-12-02 16:06:18 / 2021-12-02 16:06:30
            wal start/stop: 0000003200000001000000B7 / 0000003200000001000000B7
            database size: 191.3MB, database backup size: 191.3MB
            repo1: backup set size: 24.8MB, backup size: 24.8MB

        full backup: 20211202-160829F
            timestamp start/stop: 2021-12-02 16:08:29 / 2021-12-02 16:08:39
            wal start/stop: 0000003200000001000000B9 / 0000003200000001000000B9
            database size: 191.3MB, database backup size: 191.3MB
            repo1: backup set size: 24.8MB, backup size: 24.8MB

        full backup: 20211203-090934F
            timestamp start/stop: 2021-12-03 09:09:34 / 2021-12-03 09:09:44
            wal start/stop: 0000003200000001000000D1 / 0000003200000001000000D1
            database size: 191.3MB, database backup size: 191.3MB
            repo1: backup set size: 24.8MB, backup size: 24.8MB

        incr backup: 20211203-090934F_20211203-091045I
            timestamp start/stop: 2021-12-03 09:10:45 / 2021-12-03 09:10:48
            wal start/stop: 0000003200000001000000D3 / 0000003200000001000000D3
            database size: 191.3MB, database backup size: 119.7KB
            repo1: backup set size: 24.8MB, backup size: 15.6KB
            backup reference list: 20211203-090934F
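The backup labels in this output encode lineage: a full backup is named with its timestamp plus an F suffix, and a differential or incremental appends its own timestamp with a D or I suffix after the full it depends on. A small parser as an illustration (not pgbackrest code):

```python
import re

# Parse pgbackrest-style backup labels: full = "YYYYMMDD-HHMMSSF",
# diff/incr = "<full label>_YYYYMMDD-HHMMSS{D|I}".

LABEL = re.compile(r"^(\d{8}-\d{6})F(?:_(\d{8}-\d{6})([DI]))?$")

def parse_label(label: str) -> dict:
    m = LABEL.match(label)
    if not m:
        raise ValueError(f"not a pgbackrest backup label: {label}")
    full, ts, kind = m.groups()
    if kind is None:
        return {"type": "full", "timestamp": full, "depends_on": None}
    return {
        "type": "incr" if kind == "I" else "diff",
        "timestamp": ts,
        "depends_on": full + "F",
    }

print(parse_label("20211203-090934F"))
print(parse_label("20211203-090934F_20211203-091045I"))
```

This is why the incremental above lists 20211203-090934F in its backup reference list: its label carries the full it was built on.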

        

Retrieve information about a specific backup:

/usr/bin/pgbackrest --stanza=mycluster --set=20211202-160829F info

stanza: mycluster
    status: ok
    cipher: none

    db (current)
        wal archive min/max (13): 0000003200000001000000B6/0000003200000001000000D3

        full backup: 20211202-160829F
            timestamp start/stop: 2021-12-02 16:08:29 / 2021-12-02 16:08:39
            wal start/stop: 0000003200000001000000B9 / 0000003200000001000000B9
            database size: 191.3MB, database backup size: 191.3MB
            repo1: backup set size: 24.8MB, backup size: 24.8MB
            database list: cave (16450), penelope (32902), postgres (14175), repmgr (16385), test (41094), ulysse (32907)

        

8 - Restore

We can do a full restore or a PITR restore with a very simple command.

/usr/bin/pgbackrest --stanza=mycluster restore --archive-mode=off --link-all        

This restores up to the last archived WAL, disables WAL archiving (useful if restoring on another machine), and restores the filesystem links.

/usr/bin/pgbackrest --stanza=mycluster restore --type=time --target="2021/09/24 15:00" --archive-mode=off        

This is a point-in-time recovery; archiving is disabled to avoid polluting the backup repository if the restore happens on another machine.

We can recover through timeline too.

/usr/bin/pgbackrest --stanza=mycluster --archive-mode=off --type=time --target="2021-11-15 11:30:00" --target-timeline=31 restore        

Every time you restore on another machine, check postgresql.conf and postgresql.auto.conf, as they contain host-specific settings (replication, SSL configuration, etc.) that are not useful on your target machine.
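For a point-in-time restore, pgbackrest starts from a backup that finished before the recovery target and replays WAL from there. The selection can be sketched like this (simplified Python model, not pgbackrest's implementation):

```python
from datetime import datetime

# Sketch of picking a backup set for PITR: the newest backup whose stop
# time is at or before the recovery target (WAL replay covers the rest).

def pick_backup_for_pitr(backups: dict, target: datetime) -> str:
    candidates = {label: stop for label, stop in backups.items() if stop <= target}
    if not candidates:
        raise ValueError("no backup finished before the recovery target")
    return max(candidates, key=candidates.get)

backups = {
    "20211202-160618F": datetime(2021, 12, 2, 16, 6, 30),
    "20211202-160829F": datetime(2021, 12, 3, 16, 8, 39).replace(day=2),
    "20211203-090934F": datetime(2021, 12, 3, 9, 9, 44),
}
print(pick_backup_for_pitr(backups, datetime(2021, 12, 3, 8, 0)))
# -> 20211202-160829F
```

If no backup predates the target, the restore cannot reach that point, which is one more reason to keep older fulls around.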

9 - Why use a versioned bucket with pgbackrest?

With a versioned bucket, we are able to retrieve backups deleted by mistake.

And pgbackrest is able to recatalog the recovered backup with the right parameters.

Let's experiment a bit :

/usr/bin/pgbackrest --stanza=mycluster info

stanza: mycluster
    status: ok
    cipher: none

    db (current)
        wal archive min/max (13): 0000003200000001000000B6/0000003200000001000000D5

        full backup: 20211202-160618F
            timestamp start/stop: 2021-12-02 16:06:18 / 2021-12-02 16:06:30
            wal start/stop: 0000003200000001000000B7 / 0000003200000001000000B7
            database size: 191.3MB, database backup size: 191.3MB
            repo1: backup set size: 24.8MB, backup size: 24.8MB

        full backup: 20211202-160829F
            timestamp start/stop: 2021-12-02 16:08:29 / 2021-12-02 16:08:39
            wal start/stop: 0000003200000001000000B9 / 0000003200000001000000B9
            database size: 191.3MB, database backup size: 191.3MB
            repo1: backup set size: 24.8MB, backup size: 24.8MB

        full backup: 20211203-090934F
            timestamp start/stop: 2021-12-03 09:09:34 / 2021-12-03 09:09:44
            wal start/stop: 0000003200000001000000D1 / 0000003200000001000000D1
            database size: 191.3MB, database backup size: 191.3MB
            repo1: backup set size: 24.8MB, backup size: 24.8MB

        incr backup: 20211203-090934F_20211203-091045I
            timestamp start/stop: 2021-12-03 09:10:45 / 2021-12-03 09:10:48
            wal start/stop: 0000003200000001000000D3 / 0000003200000001000000D3
            database size: 191.3MB, database backup size: 119.7KB
            repo1: backup set size: 24.8MB, backup size: 15.6KB
            backup reference list: 20211203-090934F

        

Imagine we made a mistake and deleted the backup set 20211202-160829F:


/usr/bin/pgbackrest --stanza=mycluster --set=20211202-160829F expire

2021-12-03 09:27:08.711 P00   INFO: expire command begin 2.36: --exec-id=12419-9dadbdea --log-level-console=info --log-level-file=debug --repo1-path=/pgbackup --repo1-retention-full=7 --repo1-retention-full-type=time --repo1-s3-bucket=pgbackrest --repo1-s3-endpoint=sandbox3 --repo1-s3-key=<redacted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=eu-west-1 --repo1-s3-uri-style=path --repo1-storage-port=9000 --no-repo1-storage-verify-tls --repo1-type=s3 --set=20211202-160829F --stanza=mycluster
2021-12-03 09:27:08.741 P00   INFO: repo1: expire adhoc backup 20211202-160829F
2021-12-03 09:27:08.750 P00   INFO: repo1: remove expired backup 20211202-160829F
2021-12-03 09:27:12.028 P00   INFO: repo1: time-based archive retention not met - archive logs will not be expired
2021-12-03 09:27:12.028 P00   INFO: expire command end: completed successfully (3318ms)
        

pgbackrest is no longer aware of this backup.

/usr/bin/pgbackrest --stanza=mycluster --set=20211202-160829F info

stanza: mycluster
    status: error (requested backup not found)
    cipher: none        

But I want to retrieve it and recatalog it.

I'm using mc for this purpose (mc is the Minio client).

First, I confirm that I have a versioned bucket:

./mc stat --insecure minio/pgbackrest

Name      : pgbackrest/
Size      : 0 B
Type      : folder
Metadata  :
  Versioning: Enabled
  Location: eu-west-1
  Policy: none
        

Then I check the versions of the objects involved in the set I deleted:

./mc ls --insecure --recursive --versions minio/pgbackrest/pgbackup/backup/mycluster/20211202-160829F


[2021-12-03 09:27:08 CET]     0B 5d127b4a-34a8-43fe-81d4-7cf49843eb06 v2 DEL backup.manifest
[2021-12-02 16:08:39 CET] 294KiB 7b3c964a-b731-4376-a2ce-39d3a9ba305c v1 PUT backup.manifest
[2021-12-03 09:27:08 CET]     0B ffdc07de-3e5c-47f8-8521-f3ffe910abf0 v4 DEL backup.manifest.copy
[2021-12-02 16:08:39 CET] 294KiB 66b949de-f677-4f57-b373-283fff3eda9d v3 PUT backup.manifest.copy
[2021-12-02 16:08:39 CET] 294KiB 56759762-2913-4b30-aeb8-f7d2e71aec00 v2 PUT backup.manifest.copy
[2021-12-02 16:08:30 CET] 148KiB afe2a23a-3baa-4880-b459-bf17970164f2 v1 PUT backup.manifest.copy
[2021-12-03 09:27:08 CET]     0B f9af0319-a19c-4017-bea8-846ac73222c5 v2 DEL pg_data/PG_VERSION.gz
[2021-12-02 16:08:37 CET]    23B d01ad97b-053d-45e0-80fb-2b980ca29651 v1 PUT pg_data/PG_VERSION.gz
[2021-12-03 09:27:08 CET]     0B 559bcd4f-88a2-4977-8634-c9615947913e v2 DEL pg_data/backup_label.gz
...
[2021-12-02 16:08:37 CET]   235B b95ec63b-3266-4aba-8843-a1e350194546 v1 PUT pg_data/postgresql.auto.conf.gz
[2021-12-03 09:27:11 CET]     0B 436f8168-5558-41d1-a710-ec3a367f4d2c v2 DEL pg_data/postgresql.conf.gz
[2021-12-02 16:08:33 CET] 8.3KiB 05bd8ca7-411b-4d84-872a-2b438899d4e4 v1 PUT pg_data/postgresql.conf.gz
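The listing shows a DEL entry stacked on top of each PUT: in a versioned bucket, deleting an object only writes a delete marker, and mc undo removes that marker so the previous version becomes current again. A toy model of that behavior (illustrative Python, not mc's implementation):

```python
# Toy model of a versioned object store: DELETE adds a delete marker,
# and "undo" (like `mc undo`) pops the latest delete marker so the
# previous version becomes current again.

class VersionedObject:
    def __init__(self):
        self.versions = []  # newest last; entries are ("PUT", data) or ("DEL", None)

    def put(self, data):
        self.versions.append(("PUT", data))

    def delete(self):
        self.versions.append(("DEL", None))

    def undo_delete(self):
        if self.versions and self.versions[-1][0] == "DEL":
            self.versions.pop()

    def get(self):
        if not self.versions or self.versions[-1][0] == "DEL":
            return None  # object looks deleted
        return self.versions[-1][1]

obj = VersionedObject()
obj.put("backup.manifest v1")
obj.delete()
print(obj.get())   # None -> the backup looks gone
obj.undo_delete()
print(obj.get())   # backup.manifest v1 -> restored
```

No data was ever removed by the expire; only markers were added, which is what makes the recovery below possible.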


        

I will now revert the DEL operation.

./mc undo --insecure --recursive --force minio/pgbackrest/pgbackup/backup/mycluster/20211202-160829F/

✓ Last delete of `backup.manifest` is reverted.
✓ Last delete of `backup.manifest.copy` is reverted.
✓ Last delete of `pg_data/PG_VERSION.gz` is reverted.
✓ Last delete of `pg_data/backup_label.gz` is reverted.
✓ Last delete of `pg_data/base/1/112.gz` is reverted.
✓ Last delete of `pg_data/base/1/113.gz` is reverted.
✓ Last delete of `pg_data/base/1/1247.gz` is reverted.
✓ Last delete of `pg_data/base/1/1247_fsm.gz` is reverted.
✓ Last delete of `pg_data/base/1/1247_vm.gz` is reverted.
✓ Last delete of `pg_data/base/1/1249.gz` is reverted.
✓ Last delete of `pg_data/base/1/1249_fsm.gz` is reverted.
✓ Last delete of `pg_data/base/1/1249_vm.gz` is reverted.
...
✓ Last delete of `pg_data/pg_xact/0000.gz` is reverted.
✓ Last delete of `pg_data/postgresql-repmgr.conf.gz` is reverted.
✓ Last delete of `pg_data/postgresql.auto.conf.gz` is reverted.
✓ Last delete of `pg_data/postgresql.conf.gz` is reverted.

        

OK, so now the backup set is back on the "filesystem".

Let's check if pgbackrest is able to see it again.

/usr/bin/pgbackrest --stanza=mycluster --repo1-retention-full-type=time --repo1-retention-full=7 expire

2021-12-03 09:43:19.879 P00   INFO: expire command begin 2.36: --exec-id=13683-aae20455 --log-level-console=info --log-level-file=debug --repo1-path=/pgbackup --repo1-retention-full=7 --repo1-retention-full-type=time --repo1-s3-bucket=pgbackrest --repo1-s3-endpoint=sandbox3 --repo1-s3-key=<redacted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=eu-west-1 --repo1-s3-uri-style=path --repo1-storage-port=9000 --no-repo1-storage-verify-tls --repo1-type=s3 --stanza=mycluster
WARN: backup '20211202-160829F' found in repository added to backup.info
2021-12-03 09:43:19.954 P00   INFO: repo1: time-based archive retention not met - archive logs will not be expired
2021-12-03 09:43:19.954 P00   INFO: expire command end: completed successfully (76ms)
        

Bingo!

/usr/bin/pgbackrest --stanza=mycluster --set=20211202-160829F info

stanza: mycluster
    status: ok
    cipher: none

    db (current)
        wal archive min/max (13): 0000003200000001000000B6/0000003200000001000000DB

        full backup: 20211202-160829F
            timestamp start/stop: 2021-12-02 16:08:29 / 2021-12-02 16:08:39
            wal start/stop: 0000003200000001000000B9 / 0000003200000001000000B9
            database size: 191.3MB, database backup size: 191.3MB
            repo1: backup set size: 24.8MB, backup size: 24.8MB
            database list: cave (16450), penelope (32902), postgres (14175), repmgr (16385), test (41094), ulysse (32907)

        

So with S3 versioned buckets, we are able to recover from mistakes.

Policies can be applied too, to restrict access to specific buckets for specific users, and Minio also offers bucket lifecycle management.

That's the next experiment for my POC.

Hope it helps !
