[Borgbackup] Stat of looots of files

Andrea Gelmini andrea.gelmini at gmail.com
Thu Mar 11 07:23:15 EST 2021


Dear developers,
   thanks a lot for your work on Borg!

   I would like to ask for some advice about my setup.

   I have a repository (a typical file share) of ~65 TB spread across
~45 million files.

   Borg works perfectly!

   My worries are about the weekly backup. Stat'ing all the files
takes days. Reading the new/changed files is very fast, of course
(more than 200 MB/s), but traversing the whole tree can take longer
than the Friday evening -> Sunday evening window.

   So, is it possible to parallelize the scan phase (the filesystem
traversal)?
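
   To illustrate what I mean by a parallel scan, here is a rough
sketch of the idea (just an illustration of the concept, not Borg's
code; the dataset path and the thread count are made up):

    # Sketch only: walk the tree breadth-first and stat() each
    # directory's entries in a thread pool, so the metadata reads overlap.
    import os
    from concurrent.futures import ThreadPoolExecutor

    def scan_dir(path):
        subdirs = []
        with os.scandir(path) as it:
            for entry in it:
                try:
                    entry.stat(follow_symlinks=False)  # the expensive part with ~45M files
                    if entry.is_dir(follow_symlinks=False):
                        subdirs.append(entry.path)
                except OSError:
                    pass
        return subdirs

    def parallel_walk(root, workers=8):
        pending = [root]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            while pending:
                pending = [d for sub in pool.map(scan_dir, pending) for d in sub]

    parallel_walk("/tank/share", workers=8)  # placeholder path, not my real one

   The point is just that the stat() calls overlap instead of running
strictly one after the other.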

   I found references in old threads and GitHub tickets, but I could
not tell whether they address my case.

   So far I have tried to speed up ZFS with caching drives and to
move the Borg cache to tmpfs (I know the risk; I take care of it in
case of a reboot), but there was no significant improvement.
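
   In case it helps, this is roughly how I wired up the tmpfs
experiment, via a small Python wrapper around the weekly run (the
paths and the archive name below are placeholders, not my real ones;
the tmpfs itself is mounted separately beforehand):

    # Sketch of the weekly run with the Borg cache redirected to tmpfs.
    import os
    import subprocess

    os.environ["BORG_CACHE_DIR"] = "/mnt/borg-cache"  # placeholder tmpfs mount point
    subprocess.run(
        ["borg", "create", "--stats",
         "/backup/repo::share-{now}",  # placeholder repository and archive name
         "/tank/share"],               # placeholder path to the ~65 TB share
        check=True,
    )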

   A quick trial with Restic seems to fix this, thanks to its
parallel scan. Sorry, I have not completed the benchmarks yet; they
take weeks. But maybe I am on the wrong path and can avoid wasting
time and resources.


Thanks a lot again (really),
Andrea

