Upgrading ReportPortal with backup/restore of Postgres and binary data
Gaurav Singh
Senior Staff SDET @ CRED, ex Meta, ex Gojek | Engineering leader | Test engineering | Engineering productivity | 13+ years in driving Quality, automation, and leading teams.
In this blog, learn how to back up the ReportPortal Postgres database and binary data, make upgrades in the docker-compose.yml file, and then restore the database and binary data safely without losing launches and dashboards.
Why?
Once you have a running instance of ReportPortal with one or more teams using it successfully, it becomes important to keep patching it with upgrades so that you get the benefit of bug fixes and new features.
But if not done carefully, you risk losing data (launches, logs, and dashboards) and having to set up integrations all over again.
Trust me, you do not want to be the one who loses data for the company, especially when people depend on previous analysis, widgets, and dashboards for their day-to-day operations.
In this blog, I’ll provide clear steps for the full workflow: back up the database and binary data, upgrade the instance, and restore everything.
This blog is inspired by the guides published by the ReportPortal team.
I intend to explain the nuances for someone new to Docker, Postgres, and MinIO. If you are curious, you can use those guides as additional resources.
Let’s go!
Back up Postgres
ReportPortal keeps most of its data in a Postgres instance, including launches, filters, widgets, and dashboards. To back up Postgres, we can execute the pg_dump command inside the Docker container. This generates an SQL file with all the information needed to restore the data after the upgrade.
DB_USER=rpuser
DB_NAME=reportportal
DB_PASSWORD=rppass
DB_CONTAINER=postgres
docker exec -e PGPASSWORD=$DB_PASSWORD $DB_CONTAINER pg_dump -U $DB_USER -d $DB_NAME > reportportal_docker_db_backup.sql
Explanation: docker exec runs pg_dump inside the postgres container, passing the database password via the PGPASSWORD environment variable. pg_dump connects as rpuser to the reportportal database and the shell redirect writes the full dump to reportportal_docker_db_backup.sql on the host.
Tip: This command may take a while if the database has lots of data, so be patient and let it complete. Do not exit early, or you may end up with an incomplete backup. Once it has completed, it is a good idea to make a copy of the file by executing something like this:
cp reportportal_docker_db_backup.sql YYYY_MM_DD_reportportal_docker_db_backup.sql
This will help avoid surprises if you somehow corrupt or delete the primary backup.
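Before moving on, it is worth a quick sanity check that the dump is not empty or truncated. A minimal sketch (the file name comes from the command above; a plain-format pg_dump normally ends with a "database dump complete" comment):
# Check the dump exists and has a reasonable size
ls -lh reportportal_docker_db_backup.sql
# The last few lines should include the pg_dump completion marker
tail -n 5 reportportal_docker_db_backup.sql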
Back up binary data
Next, we take a backup of the binary data. ReportPortal uses MinIO as the object store to hold additional data, similar to how a file system would.
VOLUME_NAME=reportportal_storage
docker run --rm -v "$VOLUME_NAME":/data -v "$(pwd)":/backup busybox tar -zcvf /backup/reportportal_storage_backup.tar.gz /data
Explanation: We spin up a throwaway busybox container, mount the reportportal_storage named volume at /data and the current directory at /backup, and create a gzipped tar archive of the volume contents as reportportal_storage_backup.tar.gz in the current directory. The --rm flag removes the helper container once the archive is written.
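If you want to double-check the archive before tearing anything down, you can list its contents without extracting it; a quick sketch:
# Peek at the first entries of the archive to confirm the volume data was captured
tar -tzf reportportal_storage_backup.tar.gz | head -n 20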
Remove all containers
We will now use Docker Compose to bring down all existing containers. If any container is still running when you try to upgrade it, you must stop and remove it manually (using docker stop <container_name> && docker rm <container_name>).
Doing this one by one can be tedious, as there can be dependencies between containers. Docker Compose simplifies the process for us:
docker compose -p reportportal down
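To confirm the stack is really gone before you start editing, a couple of quick checks (the second assumes your container names contain "reportportal"; adjust the filter to match yours):
# Should print nothing (or only headers) once the project is down
docker compose -p reportportal ps
# Optional: look for stray containers that were started outside Compose
docker ps -a --filter "name=reportportal"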
Remove Postgres volume
This step is optional: run it only if you want a clean slate in your database. It removes the existing Postgres volume.
docker volume rm reportportal_postgres
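Volume names can carry the Compose project prefix depending on how the stack was created, so it is safer to confirm the exact name before deleting anything; a sketch:
# List volumes and confirm the exact name before removing
docker volume ls | grep -i reportportal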
Make changes in your docker-compose.yml file
Now that everything has been torn down, you can change the docker-compose.yml file.
For example, below are a few use cases where this may be relevant.
Upgrade versions
You may want to pull the latest version of the ReportPortal compose file:
curl -LO https://raw.githubusercontent.com/reportportal/reportportal/master/docker-compose.yml
Expose Postgres db
You may want to expose Postgres running in the container to the host machine so that engineers can connect to it and make use of the stored data for their own purposes.
    ## Uncomment to expose Database
    ports:
      - "5432:5432"
For context, here is the full postgres service definition with the ports section uncommented:
  ## PostgreSQL as the main database for ReportPortal
  postgres:
    image: bitnami/postgresql:16.4.0-debian-12-r7
    container_name: *db_host
    logging:
      <<: *logging
    shm_size: '512m'
    environment:
      POSTGRES_USER: *db_user
      POSTGRES_PASSWORD: *db_password
      POSTGRES_DB: *db_name
      POSTGRESQL_CHECKPOINT_COMPLETION_TARGET: 0.9
      WORK_MEM: 96M
      WAL_WRITER_DELAY: 20ms
      SYNCHRONOUS_COMMIT: off
      WAL_BUFFERS: 32MB
      MIN_WAL_SIZE: 2GB
      MAX_WAL_SIZE: 4GB
    volumes:
      - postgres:/bitnami/postgresql
    ## Uncomment to expose Database
    ports:
      - "5432:5432"
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -d $$POSTGRES_DB -U $$POSTGRES_USER" ]
      interval: 10s
      timeout: 120s
      retries: 10
    networks:
      - reportportal
    restart: always
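Once the stack is back up later, you can confirm the port is actually published; a quick check, assuming the service is named postgres as in the snippet above:
# Prints the host address/port mapped to the container's 5432
docker compose -p reportportal port postgres 5432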
Add OpenSearch dashboards
You may want to add the capability to also serve OpenSearch Dashboards, so that you can add custom search/visualization capabilities on top of what ReportPortal provides out of the box.
  ## OpenSearch for search and analytical capabilities
  opensearch:
    image: opensearchproject/opensearch:2.16.0
    container_name: *opensearch_host
    logging:
      <<: *logging
    environment:
      discovery.type: single-node
      plugins.security.disabled: "true"
      bootstrap.memory_lock: "true"
      OPENSEARCH_JAVA_OPTS: -Xms512m -Xmx512m
      DISABLE_INSTALL_DEMO_CONFIG: "true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ## Uncomment the following lines to expose OpenSearch on ports 9200 and 9600
    ports:
      - "9200:9200" # OpenSearch HTTP API
      - "9600:9600" # OpenSearch Performance Analyzer
    volumes:
      - opensearch:/usr/share/opensearch/data
    healthcheck:
      ## Plain http, since the security plugin (which provides TLS) is disabled above
      test: [ "CMD", "curl", "-s", "-f", "http://0.0.0.0:9200/_cat/health" ]
    networks:
      - reportportal
    restart: always

  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.16.0
    container_name: opensearch-dashboards
    ports:
      - "5601:5601" # Dashboard UI
    environment:
      OPENSEARCH_HOSTS: http://opensearch:9200 # plain http because security is disabled on the opensearch node
      DISABLE_SECURITY_DASHBOARDS_PLUGIN: "true" # Add this line to disable security
      OPENSEARCH_ALLOW_INSECURE: "true" # Add this line to allow insecure connections
    depends_on:
      - opensearch
    networks:
      - reportportal
    restart: always
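Whatever edits you make, it is worth validating the file before bringing anything back up; a quick check, run from the directory containing docker-compose.yml:
# Validates the compose file (anchors, aliases, indentation) without starting anything
docker compose -p reportportal config --quiet && echo "docker-compose.yml is valid"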
Start only Postgres
After your changes are done, we’ll first bring up only the database:
docker compose -p reportportal up -d postgres
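Give the database a moment to come up before restoring. Two simple ways to watch for readiness, assuming the container is named postgres as in the compose file:
# Poll the healthcheck status until it reports "healthy"
docker inspect --format '{{.State.Health.Status}}' postgres
# Or follow the logs until Postgres says it is ready to accept connections
docker logs -f postgres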
Restore Postgres
Next, let’s restore the data using psql:
DB_USER=rpuser
DB_PASSWORD=rppass
DB_NAME=reportportal
DB_CONTAINER=postgres
docker exec -i -e PGPASSWORD=$DB_PASSWORD $DB_CONTAINER psql -U $DB_USER -d $DB_NAME < reportportal_docker_db_backup.sql > upgrade_db.log 2>&1
Explanation: The -i flag keeps stdin open so the SQL dump can be piped from the host into the container. psql connects as rpuser to the reportportal database and replays the dump, and all output (including any errors) is written to upgrade_db.log.
To verify the data is indeed restored, check the contents of the upgrade_db.log file.
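A successful restore produces a long stream of SET/CREATE/COPY output, so it is easier to filter the log for problems than to read it end to end; a sketch:
# Surface anything that went wrong during the restore
grep -iE "error|fatal" upgrade_db.log
# Rough signal of how much work was done
wc -l upgrade_db.log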
You can also connect to the Postgres server inside the container directly.
Connect to the Docker container:
docker exec -it postgres bash
Open psql:
psql -U rpuser -d reportportal
When prompted for the password, enter the Postgres password from the docker-compose.yml file, or the new one if you have reset it.
Check that the below tables have values:
SELECT * FROM test_item limit 20;
SELECT * FROM test_item_results limit 20;
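You can also run quick row counts straight from the host via docker exec. The table names below (launch, dashboard) come from the ReportPortal schema as I have seen it, but treat this as a sketch since schemas can shift between versions:
# Non-zero counts mean historical data made it back
docker exec -e PGPASSWORD=rppass postgres \
  psql -U rpuser -d reportportal \
  -c "SELECT count(*) FROM launch;" \
  -c "SELECT count(*) FROM dashboard;"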
Start all other services
Next, we should bring up all the other containers
docker compose -p reportportal up -d
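Once Compose returns, confirm that every service is actually running (and healthy where a healthcheck is defined):
# All services should show as running/healthy
docker compose -p reportportal ps
# Tail recent logs if anything looks off
docker compose -p reportportal logs --tail=50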
Restore binary data
Finally, let’s restore the binary data as well:
VOLUME_NAME=reportportal_storage
docker run --rm -v $VOLUME_NAME:/data -v $(pwd):/backup busybox tar -xzvf /backup/reportportal_storage_backup.tar.gz -C /
We follow a similar process as the backup step to restore the gzipped tar archive back into the MinIO volume.
Explanation: A throwaway busybox container mounts the reportportal_storage volume at /data and the current directory at /backup, then extracts the archive at the root (-C /) so the files land back in /data, which is the mounted volume.
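To confirm the files really landed back in the volume, you can peek inside it with another throwaway container; a minimal sketch:
# List the top-level contents of the restored volume
docker run --rm -v reportportal_storage:/data busybox ls -la /data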
And you are done!
Congratulations! You have successfully taken a backup, upgraded the instance, and restored it all back.
To verify, log in to ReportPortal at http://localhost:8080/ui/ and check that your launches, filters, and dashboards are all available.
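If you prefer a scripted check, the ReportPortal API can confirm that launches survived the upgrade. A sketch, assuming your own project name and an API token generated from your user profile (both are placeholders below):
# Should return a JSON page of launches if the restore worked
curl -s -H "Authorization: Bearer <your_api_token>" \
  "http://localhost:8080/api/v1/<your_project>/launch?page.size=5"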
Connect to Postgres from the host machine
To verify that you can connect to Postgres running inside the Docker container, you will need the Postgres client on your local machine. On macOS, you can install it using Homebrew:
brew install postgresql@16
# Postgresql
export PATH="/opt/homebrew/opt/postgresql@16/bin:$PATH"
export LDFLAGS="-L/opt/homebrew/opt/postgresql@16/lib"
export CPPFLAGS="-I/opt/homebrew/opt/postgresql@16/include"
Then start the Postgres service (optional if you only need the psql client to connect):
brew services start postgresql@16
You can connect to Postgres from any client (like DBeaver) using the details below; replace the IP with your own host machine’s address:
jdbc:postgresql://192.168.29.131:5432/reportportal
User name: rpuser
Password: rppass
Or, from the host machine, you can also connect to the PostgreSQL instance using psql to ensure the port is accessible:
psql -h localhost -p 5432 -U rpuser -d reportportal
OpenSearch dashboards
To test access to OpenSearch, you can hit http://<host_ip>:9200 (plain HTTP, since the security plugin is disabled in this setup).
This should print details about the OpenSearch cluster:
{
  "name": "0fdbaba7ac12",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "R0DM_scOSRuB0kba7Q6rGw",
  "version": {
    "distribution": "opensearch",
    "number": "2.16.0",
    "build_type": "tar",
    "build_hash": "f84a26e76807ea67a69822c37b1a1d89e7177d9b",
    "build_date": "2024-08-06T20:32:34.547531562Z",
    "build_snapshot": false,
    "lucene_version": "9.11.1",
    "minimum_wire_compatibility_version": "7.10.0",
    "minimum_index_compatibility_version": "7.0.0"
  },
  "tagline": "The OpenSearch Project: https://opensearch.org/"
}
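You can also ask for cluster health, which is handier to script against than the banner above:
# "status" should be green (or yellow on a single node) for a healthy cluster
curl -s "http://<host_ip>:9200/_cluster/health?pretty"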
Finally, you can open OpenSearch Dashboards at http://<host_ip>:5601.
Summary
Let’s recap the workflow we covered today: back up the Postgres database and the MinIO binary data, bring the containers down, update docker-compose.yml, start Postgres and restore the database dump, bring up the remaining services, restore the binary data, and verify your launches, filters, and dashboards in the UI.
Thanks for the time you spent reading this. If you found this post helpful, please subscribe to the newsletter and follow my YouTube channel (@automationhacks) for more insights into software testing and automation. Until next time, Happy Testing and Learning! | Substack | YouTube | Blog | LinkedIn | X | BlueSky
Originally published at newsletter.automationhacks.io on December 9, 2024