[Borgbackup] How to keep backup times short after a complete rebuild of the source filesystem?
Heinz Werner Kramski-Grote
kramski at hoernle-marbach.de
Mon Jan 6 06:11:03 EST 2020
I have a data pool of approx. 3.6 TB which I back up daily to a remote system via ssh. The runtimes are ok for the daily differences, but because of its size, I did the initial backup locally on my LAN.
Due to a failed disk, I had to copy all data from a degraded RAID to a new array of new disks (thereby moving from BTRFS to mdadm/lvm/EXT4, but that's another story).
As a result, all ctimes have now changed to the date of the copy event, like in this example:
$ stat smm01.txt
File: smm01.txt
Size: 715 Blocks: 8 IO Block: 4096 regular file
Device: fd00h/64768d Inode: 115409634 Links: 1
Access: (0744/-rwxr--r--) Uid: ( 1000/ kramski) Gid: ( 1000/ kramski)
Access: 2020-01-04 21:58:36.555772941 +0100
Modify: 1999-10-31 18:50:24.000000000 +0100
Change: 2020-01-04 21:58:36.555772941 +0100
Birth: 2020-01-04 21:58:36.555772941 +0100
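This seems to be expected behaviour: a copy creates a new inode, so the kernel sets ctime (and birth time) to the copy time, while cp -p/-a can carry the mtime over. A minimal demonstration on Linux (the /tmp filenames are just made up for illustration):

```shell
# Create a file with an old mtime, then copy it preserving timestamps.
# cp -p carries the mtime over, but ctime is always set by the kernel
# to the time of the copy -- it cannot be preserved from userspace.
touch -d '1999-10-31 18:50:24' /tmp/smm01_orig.txt
cp -p /tmp/smm01_orig.txt /tmp/smm01_copy.txt
stat -c 'mtime: %y%nctime: %z' /tmp/smm01_copy.txt
```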
According to https://borgbackup.readthedocs.io/en/stable/usage/create.html, it's the ctime (Change) which is used for identifying unmodified files.
Should I move to "--files-cache=mtime,size,inode" (Modify) to avoid long initial backup times when I resume my daily backups over ssh?
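I.e., I would resume with something like the following (the repo URL and source path are placeholders for my actual setup):

```shell
# Use mtime instead of ctime in the files cache, so files whose
# mtime/size/inode are unchanged are skipped despite the new ctimes.
borg create --files-cache=mtime,size,inode \
    ssh://user@backuphost/path/to/repo::'{hostname}-{now}' \
    /data
```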
Regards,
Heinz