[Borgbackup] Problems with big files

Marian Beermann public at enkore.de
Wed Jul 18 08:36:59 EDT 2018


On 18.07.2018 14:28, Marvin Gaube wrote:
> Hello,
> I started using borgbackup to do backups from my home server to a backup
> server elsewhere (standard SSH). Currently I'm stuck in the first run
> with very strange behavior:
> I have a big file, around 100 GB. Unfortunately, the connection is
> interrupted twice a day. Between the interruptions, borg usually gets
> around 20-40 GB transferred. This has been running for two weeks now,
> and I never get past this big file - it is not touched at all.
> In theory, it should have taken 2-3 days to transfer that file.
> 
> My idea is that, for reasons I don't know, borg restarts transferring
> this file from scratch instead of reusing the chunks already
> transferred. But, as far as I understood the documentation, it should
> reuse those chunks.
> 
> Does anyone have an idea how I could solve, or at least further debug,
> this problem?
> 
> borg is at version 1.0.9 from the Debian repo on both machines.
> Command is:
> REPOSITORY="ssh://user@host/backupdir"
> borg create -v --stats --progress --checkpoint-interval 300 \
>     $REPOSITORY::'{now:%Y-%m-%d_%H:%M}'                     \
>     /path/to/huge/directory
> 
> I set --checkpoint-interval to 300 hoping it would solve the problem,
> but it didn't.

borg 1.0 does not do checkpoints within files: only files that were
transferred completely end up in a checkpoint archive, so after every
interruption the 100 GB file starts over from the beginning.
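Checkpointing within files was added in borg 1.1: partially transferred
files are saved as partial (.borg_part) items inside the checkpoint
archives, so a restarted run can build on chunks already in the
repository. A minimal sketch of the upgrade path, assuming borg >= 1.1
is available on both machines (1.0.9 from the Debian repo suggests
stretch, where newer packages are typically in backports - an
assumption worth verifying); the repository URL and paths are the
placeholders from your mail:

# Verify both ends run borg >= 1.1 (1.0.9 lacks in-file checkpoints).
borg --version

# Same command as before; with borg >= 1.1 the 300 s checkpoint
# interval also saves the partially read big file as partial items.
REPOSITORY="ssh://user@host/backupdir"
borg create -v --stats --progress --checkpoint-interval 300 \
    $REPOSITORY::'{now:%Y-%m-%d_%H:%M}'                     \
    /path/to/huge/directory

# After an interruption, the intermediate state shows up as
# "<archive>.checkpoint" archives:
borg list $REPOSITORY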

> Thanks!
> Marvin Gaube


