[Borgbackup] Lock error

Steve Schow steve at bstage.com
Mon May 16 17:30:50 EDT 2016


It doesn’t really have much to do with the locking stuff…it has to do with the fact that Borg is doing compression, encryption, and other CPU-oriented tasks…there may be some local disk I/O involved in that “crunching” as well…  meanwhile it’s sending the results of that crunching over SSH to a remote borg that is sitting there waiting for it.

I suspect borg is not multi-threaded for this…so quite literally, while it’s sending stuff over the net it’s not doing any other crunching at the same time, and vice versa, which equates to a LOT of wait time…which means the hardware is not being fully utilized.

But even if it does fork threads to handle these different tasks, if one stage is slower than the other, the faster one will still block and wait on it.
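
As a toy illustration of why that matters (this is not borg’s actual code, just simulated timings), compare doing the crunching and the sending strictly one after the other versus overlapping them:

    # Toy illustration only -- not borg's code. Simulates a "crunch then
    # send" pipeline to show why overlapping CPU work and network I/O
    # saves wall time.
    import threading
    import time

    CHUNKS = 4
    CRUNCH_TIME = 0.5   # pretend seconds of CPU work per chunk (made up)
    SEND_TIME = 0.5     # pretend seconds of network time per chunk (made up)

    def sequential():
        # One thread: crunch a chunk, then wait for the send, repeat.
        start = time.monotonic()
        for _ in range(CHUNKS):
            time.sleep(CRUNCH_TIME)  # crunching: CPU busy, network idle
            time.sleep(SEND_TIME)    # sending: network busy, CPU idle
        return time.monotonic() - start

    def overlapped():
        # Crunch the next chunk while the previous one is still sending.
        start = time.monotonic()
        sender = None
        for _ in range(CHUNKS):
            time.sleep(CRUNCH_TIME)          # crunch the current chunk
            if sender:
                sender.join()                # block only if the last send lags
            sender = threading.Thread(target=time.sleep, args=(SEND_TIME,))
            sender.start()                   # send in the background
        sender.join()
        return time.monotonic() - start

    print("sequential: %.1fs" % sequential())  # ~4.0s: stage times add up
    print("overlapped: %.1fs" % overlapped())  # ~2.5s: stages mostly overlap

With equal stage times, overlapping roughly halves the wall time; if one stage dominates, the total approaches the time of the slower stage, which is the blocking described above.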

With more than one instance running at a time, we get the same result that multiple threads would give us…perhaps a little better, because the remote side is also writing to two different repos, so the two instances hardly block each other at all.  Each one individually may still have waiting going on, but the other instances can take advantage of that idle time to get some hardware time, and so forth.

I hit 100% CPU util with about 4 concurrent instances of borg running on my little Linux NAS, while the borg serve on my mac is only hitting about 50% util handling the server side of it.  More instances than that and upload speed actually started to decrease.  4 instances is giving me triple the overall speed.  It’s not clear to me whether encryption happens on the local side or the serve side (note, I would prefer it on the serve side FWIW).
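
For anyone who wants to reproduce the experiment, here is a minimal sketch (the repo URLs and source paths below are made-up placeholders) that launches several borg create runs against separate repos at once:

    # Minimal sketch: run several "borg create" jobs against separate
    # repos concurrently. Repo URLs and source paths are hypothetical.
    import subprocess

    JOBS = [
        ("ssh://user@nas/backup/repo1", "/data/part1"),
        ("ssh://user@nas/backup/repo2", "/data/part2"),
    ]

    procs = []
    for repo, src in JOBS:
        # {hostname} and {now} are standard borg archive-name placeholders.
        procs.append(subprocess.Popen(
            ["borg", "create", "%s::{hostname}-{now}" % repo, src]))

    # Wait for every run and report any failures.
    for p in procs:
        if p.wait() != 0:
            print("backup failed:", p.args)

The key point is that each instance gets its own repo and its own slice of the source tree, so neither the repo locks nor the remote writes collide.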






On May 16, 2016, at 2:12 PM, Adrian Klaver <adrian.klaver at aklaver.com> wrote:

> On 05/16/2016 11:09 AM, Steve Schow wrote:
>> So as an experiment I am trying two runs of borg at the same time to two different repos in parallel, and one nice benefit is that I’m getting double the upload speed this way…  Running several smaller repos in parallel could substantially decrease the amount of time it takes to do the backup
> 
> It would seem to come down to this:
> 
> http://borgbackup.readthedocs.io/en/stable/internals.html
> 
> There are a lot of moving parts involved in populating a repo with an archive, especially in the initial load. Spreading the workload across multiple repos helps, as you have seen.
> 
>> 
>> On May 16, 2016, at 3:16 AM, public at enkore.de wrote:
>> 
>>> 
>>> About data set size: 2 TB isn't that much, but if it contains many kinds
>>> of data (e.g. an operating system, documents and pictures) it may make
>>> sense to split that into multiple archives (not repos), just to have a
>>> better overview of the backups.
>> 
> 
> 
> -- 
> Adrian Klaver
> adrian.klaver at aklaver.com


