In our CNPG series, we have mentioned that backups are crucial for every database system in case of disaster. PostgreSQL offers powerful backup and recovery capabilities, including the ability to restore a database cluster to a specific moment in time. This is extremely useful when recovering from user errors such as accidental data deletion, wrong updates, dropped tables, or even dropped databases.
We will now walk through how to perform Point-in-Time Recovery (PITR) with CloudNativePG (CNPG), using the Barman Cloud plugin for backups.
First, we need to simulate a typical disaster. In our CNPG cluster instance:
```sql
postgres=# create database pitr_test;
CREATE DATABASE
postgres=# \c pitr_test
You are now connected to database "pitr_test" as user "postgres".
```
Create a sample table and load some data:
```sql
CREATE TABLE test_data (
    id SERIAL PRIMARY KEY,
    name TEXT,
    age INT,
    created_at TIMESTAMP DEFAULT NOW()
);

INSERT INTO test_data (name, age, created_at)
SELECT
    'User_' || gs,
    (random() * 60 + 18)::INT,
    NOW() - (random() * interval '365 days')
FROM generate_series(1, 10000) gs;
```
Update some rows to check later:
```sql
UPDATE test_data
SET age = 1
WHERE id <= 15;
```
Note the current Write-Ahead Log (WAL) position and timestamp to recover exactly to this point with PITR:
```sql
SELECT pg_current_wal_lsn();
 pg_current_wal_lsn
--------------------
 0/85BDBB0

SELECT now();
             now
------------------------------
 2025-09-19 13:39:18.89106+00
```
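Besides noting the LSN and timestamp, you can also mark this moment with a named restore point, which PITR can later target by name instead of by time. A minimal sketch, not part of the original walkthrough; the label `before_drop` is our own choice:

```sql
-- Record a named restore point in the WAL stream at the current location.
-- It returns the LSN at which the restore point was created; recovery can
-- later target this label instead of a timestamp or LSN.
SELECT pg_create_restore_point('before_drop');
```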
Now, simulate a disaster:
```sql
DROP TABLE test_data;
```
At this point, the table is gone, and the only way to get it back as of the moment we noted is PITR.
We already have a backup and WAL archiving in place (as covered in the CNPG Backup with Barman Cloud blog). For point-in-time recovery, WAL files are essential, since they contain all transactions made after the last base backup.
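Before relying on the archive, it is worth confirming that WAL archiving is actually healthy. A quick check using PostgreSQL's built-in `pg_stat_archiver` view (generic PostgreSQL, not CNPG-specific):

```sql
-- archived_count should keep growing, and failed_count should stay at 0
-- (last_failed_wal would name the segment that could not be archived).
SELECT archived_count,
       last_archived_wal,
       last_archived_time,
       failed_count
FROM pg_stat_archiver;
```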
We can confirm WAL switching and the WAL filename:
```sql
SELECT pg_switch_wal();
 pg_switch_wal
---------------
 0/85C0B00

SELECT pg_walfile_name(pg_current_wal_lsn());
     pg_walfile_name
--------------------------
 000000010000000000000009
```
To recover the cluster to a point just before the DROP TABLE, we’ll define a new cluster manifest with bootstrap.recovery:
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example-backup-pitr
spec:
  instances: 3
  storage:
    storageClass: csi-hostpath-sc
    size: 3Gi
  resources:
    requests:
      memory: "32Mi"
      cpu: "50m"
    limits:
      memory: "512Mi"
      cpu: "100m"
  bootstrap:
    recovery:
      source: cybertec-pitr
      recoveryTarget:
        targetTime: "2025-09-19 13:39:18.89106+00"
  externalClusters:
    - name: cybertec-pitr
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: minio-store
          serverName: cluster-example-backup
```
In this manifest, a few things are worth emphasizing: `bootstrap.recovery` tells CNPG to bootstrap the new cluster from an existing backup instead of `initdb`; `recoveryTarget.targetTime` is set to the timestamp we noted just before the `DROP TABLE`; and the `externalClusters` entry points the Barman Cloud plugin at the object store (`barmanObjectName`) and at the backups of the original cluster (`serverName`).
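`targetTime` is only one of the recovery targets CNPG supports; `recoveryTarget` also accepts, among others, an LSN or a named restore point. A sketch of the alternatives under that assumption, reusing the LSN we noted earlier (the restore-point name `before_drop` is hypothetical):

```yaml
  bootstrap:
    recovery:
      source: cybertec-pitr
      recoveryTarget:
        # Recover up to a specific WAL location instead of a timestamp:
        targetLSN: "0/85BDBB0"
        # ...or up to a restore point created with pg_create_restore_point():
        # targetName: "before_drop"
```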
Apply the manifest:
```shell
kubectl apply -f cluster-pitr.yaml
```
CNPG will fetch the base backup and WAL archives from the Barman Cloud object store, then replay transactions up to the specified timestamp, stopping just before the `DROP TABLE`.
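While the recovery runs, you can watch its progress. A sketch assuming the `cnpg` kubectl plugin is installed (not part of the original walkthrough):

```shell
# Overall cluster status, including the recovery bootstrap phase
kubectl cnpg status cluster-example-backup-pitr

# Events for the new cluster, useful if the bootstrap gets stuck
kubectl get events --field-selector involvedObject.name=cluster-example-backup-pitr
```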
Once the cluster is running:
```shell
kubectl get pods
```
Connect to the restored cluster:
```shell
kubectl exec -it cluster-example-backup-pitr-1 -- psql -U postgres -d pitr_test
```
Now verify the data:
```sql
\d
        List of relations
 Schema |   Name    | Type  |  Owner
--------+-----------+-------+----------
 public | test_data | table | postgres

SELECT count(*) FROM test_data;
 count
-------
 10000

SELECT count(*) FROM test_data WHERE age = 1;
 count
-------
    15
```
The table is back with all rows intact, successfully recovered to the point in time just before it was dropped.
Point-in-Time Recovery (PITR) is a powerful feature that lets you rewind your PostgreSQL database to an exact moment, rescuing you from accidental mistakes like dropped tables or bad updates and deletes.
With CloudNativePG and Barman Cloud, PITR becomes a declarative, Kubernetes-native process. By simply adjusting the cluster manifest, we can restore our data to the specific LSN, timeline, or timestamp we need.