[Borgbackup] Experience with compressed database dumps?

Tomasz Melcer liori at exroot.org
Sat Sep 18 06:15:14 EDT 2021


On 14.09.2021 10:54, Thorsten Schöning wrote:
> I wonder if it's better to store uncompressed data and let BORG handle
> de-duplication and compression or if it doesn't matter too much
> anyway? Does anyone have experience/numbers/... already for that
> use-case?

I'm doing mostly what you describe for a SQL Server database, as part of 
a process that periodically migrates production data to a staging 
database. I suspect the results would be similar for PostgreSQL/MySQL. 
My workflow is: dump the databases with `bcp` onto a compressed btrfs 
filesystem, then run borgbackup over the dump directory. Roughly: ~40k 
tables and 700 GB of database files → 300 GB of raw dumped data → 100 GB 
of btrfs-compressed data → 500 MB to 5 GB of borgbackup lzma-compressed 
increments per run. Some actual numbers from the last run, as reported 
by `borg info`:

Duration: 9 hours 15 minutes 55.04 seconds

                    Original size    Compressed size    Deduplicated size
This archive:           362.90 GB            5.65 GB              1.82 GB
All archives:             4.57 TB           69.77 GB             18.47 GB

Definitely good enough for my purposes, though as you can see, it takes 
borg several hours to process this data. For comparison, just dumping 
the data with `bcp` takes ~2.5 hours over a 200 Mbps connection. In my 
case the procedure is not time-sensitive, so that's fine.
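
In case the exact commands are useful, here is a rough sketch of the 
procedure. Server, database, table names and paths below are placeholders 
(the real script loops over all the tables), so treat it as an outline 
rather than my actual setup:

  # Dump directory lives on a btrfs filesystem mounted with compression,
  # e.g. -o compress=zstd. Paths are placeholders.
  DUMPDIR=/mnt/dumps
  REPO=/mnt/backup/borg-repo

  # 1. Dump a table with bcp (-T = trusted connection, -c = character
  #    format); in practice this runs for every table.
  bcp MyDb.dbo.SomeTable out "$DUMPDIR/MyDb/SomeTable.bcp" -S myserver -T -c

  # 2. Let borg deduplicate and lzma-compress the dump directory.
  borg create --stats --compression lzma,6 \
      "$REPO::dump-{now:%Y-%m-%d}" "$DUMPDIR"

  # 3. Check sizes afterwards (this is where the numbers above come from).
  borg info "$REPO" --last 1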


-- 
Tomasz Melcer

