From tmhikaru at gmail.com Fri Jul 1 00:10:11 2016
From: tmhikaru at gmail.com (tmhikaru at gmail.com)
Date: Thu, 30 Jun 2016 21:10:11 -0700
Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup
In-Reply-To: <0da0c987-4dd1-a1df-7ccc-bfb56f89ff95@enkore.de>
References: <20160619202507.GA2020@raspberrypi> <0da0c987-4dd1-a1df-7ccc-bfb56f89ff95@enkore.de>
Message-ID: <20160701041011.GA9178@raspberrypi>

On Thu, Jun 30, 2016 at 09:08:06AM +0200, Marian Beermann wrote:
> You can try enabling faulthandler. Set environment variable (export)
> PYTHONFAULTHANDLER to something, say, foobar. When it gets stuck you can
> send SIGABRT and should get a proper stack trace of where it got stuck.

Will do. Oddly enough, I may be on to something here: after changing the
order in which machines access the repo, it started working again. Could be
yet another fluke, so I'll do quite a few tests before I am satisfied.

Thank you for the suggestion,
Tim McGrath
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 465 bytes
Desc: Digital signature
URL: 

From liori at exroot.org Wed Jul 6 09:53:42 2016
From: liori at exroot.org (Tomasz Melcer)
Date: Wed, 6 Jul 2016 15:53:42 +0200
Subject: [Borgbackup] What to do if file is lost?
Message-ID: <11e9bb03-29ca-6e31-649b-8b642c4d7f0c@exroot.org>

Hi,

My storage for backups developed a bad block (found using `btrfs
scrub`). I made a copy of everything, except for one file that seems to
have been hit by the bad block (an index.<some number> file). The original
medium still seems to work fine - I mean, if not for `btrfs scrub`, I
wouldn't know about the problem.

Given that the archive is pretty large now, I wonder whether there is
any way to regenerate this file safely? At this point I wouldn't mind
running a full backup from scratch, but regenerating the file would save
a lot of time.

-- 
Tomasz Melcer

From public at enkore.de Wed Jul 6 09:56:15 2016
From: public at enkore.de (Marian Beermann)
Date: Wed, 6 Jul 2016 15:56:15 +0200
Subject: [Borgbackup] What to do if file is lost?
In-Reply-To: <11e9bb03-29ca-6e31-649b-8b642c4d7f0c@exroot.org>
References: <11e9bb03-29ca-6e31-649b-8b642c4d7f0c@exroot.org>
Message-ID: 

On 06.07.2016 15:53, Tomasz Melcer wrote:
> Hi,
>
> My storage for backups developed a bad block (found using `btrfs
> scrub`). I made a copy of everything, except for one file that seems to
> have been hit by the bad block (an index.<some number> file). The original
> medium still seems to work fine - I mean, if not for `btrfs scrub`, I
> wouldn't know about the problem.
>
> Given that the archive is pretty large now, I wonder whether there is
> any way to regenerate this file safely? At this point I wouldn't mind
> running a full backup from scratch, but regenerating the file would save
> a lot of time.
>

index.N and hints.N are expendable. Regenerating takes a while (size of
repository divided by average sequential read speed).

Cheers, Marian

From liori at exroot.org Wed Jul 6 11:02:34 2016
From: liori at exroot.org (Tomasz Melcer)
Date: Wed, 6 Jul 2016 17:02:34 +0200
Subject: [Borgbackup] What to do if file is lost?
In-Reply-To: 
References: <11e9bb03-29ca-6e31-649b-8b642c4d7f0c@exroot.org>
Message-ID: 

On 06.07.2016 15:56, Marian Beermann wrote:
> On 06.07.2016 15:53, Tomasz Melcer wrote:
>> My storage for backups developed a bad block (found using `btrfs
>> scrub`). I made a copy of everything, except for one file that seems to
>> have been hit by the bad block (an index.<some number> file).
>> The original medium still seems to work fine - I mean, if not for
>> `btrfs scrub`, I wouldn't know about the problem.
>>
>> [...]
>
> index.N and hints.N are expendable. Regenerating takes a while (size of
> repository divided by average sequential read speed).

Ah, thank you. Out of curiosity, what about other files? What will
happen if I lose one of the data files (will I just lose the chunks
stored in that file?), or the config file?

-- 
Tomasz Melcer

From public at enkore.de Wed Jul 6 11:44:59 2016
From: public at enkore.de (Marian Beermann)
Date: Wed, 6 Jul 2016 17:44:59 +0200
Subject: [Borgbackup] What to do if file is lost?
In-Reply-To: 
References: <11e9bb03-29ca-6e31-649b-8b642c4d7f0c@exroot.org>
Message-ID: <920bbb1b-7b49-0507-400d-7217f84facdb@enkore.de>

On 06.07.2016 17:02, Tomasz Melcer wrote:
> On 06.07.2016 15:56, Marian Beermann wrote:
>> On 06.07.2016 15:53, Tomasz Melcer wrote:
>>> My storage for backups developed a bad block (found using `btrfs
>>> scrub`). I made a copy of everything, except for one file that seems to
>>> have been hit by the bad block (an index.<some number> file). The original
>>> medium still seems to work fine - I mean, if not for `btrfs scrub`, I
>>> wouldn't know about the problem.
>>>
>>> [...]
>>
>> index.N and hints.N are expendable. Regenerating takes a while (size of
>> repository divided by average sequential read speed).
>
> Ah, thank you. Out of curiosity, what about other files? What will
> happen if I lose one of the data files (will I just lose the chunks
> stored in that file?), or the config file?
>

The config file is "public knowledge" - except if you use repokey, in
which case you should make backups of it (as mentioned in the docs,
http://borgbackup.readthedocs.io/en/stable/quickstart.html#encrypted-repos ),
since it then contains the key material.

The data files are another story. There's no forward error correction in
Borg itself, so errors can be detected, but only some minor errors can be
corrected. "borg check --repair" will replace corrupted data chunks with
runs of zeroes of the same length, while reporting where it did that.
Corrupted commit tags can take some more data with them to nirvana.

Cheers, Marian

From liori at exroot.org Wed Jul 6 11:57:44 2016
From: liori at exroot.org (Tomasz Melcer)
Date: Wed, 6 Jul 2016 17:57:44 +0200
Subject: [Borgbackup] What to do if file is lost?
In-Reply-To: <920bbb1b-7b49-0507-400d-7217f84facdb@enkore.de>
References: <11e9bb03-29ca-6e31-649b-8b642c4d7f0c@exroot.org> <920bbb1b-7b49-0507-400d-7217f84facdb@enkore.de>
Message-ID: <6e174fe5-2ab8-a68b-8624-49f6a85f0b54@exroot.org>

On 06.07.2016 17:44, Marian Beermann wrote:
> The data files are another story. There's no forward error correction in
> Borg itself, so errors can be detected, but only some minor errors can be
> corrected. "borg check --repair" will replace corrupted data chunks with
> runs of zeroes of the same length, while reporting where it did that.
> Corrupted commit tags can take some more data with them to nirvana.

So, please correct me if I'm wrong:

Let's say there was a bad block inside one of the data files. After
recovery I can just run `borg check --repair`, and while some of the
data will be lost, other chunks will still be there. Therefore, I can
fully recover all past backups except for the chunk(s) hit by the bad
blocks.

One more question. If the next backup happens to contain the same
chunk, will it be added to the repository, filling in the missing part
for older backups?
In some scenarios I find it likely that a possible bad block could just
hit a chunk of one of the files that are still available on the live
system.

-- 
Tomasz Melcer

From public at enkore.de Wed Jul 6 12:08:41 2016
From: public at enkore.de (Marian Beermann)
Date: Wed, 6 Jul 2016 18:08:41 +0200
Subject: [Borgbackup] What to do if file is lost?
In-Reply-To: <6e174fe5-2ab8-a68b-8624-49f6a85f0b54@exroot.org>
References: <11e9bb03-29ca-6e31-649b-8b642c4d7f0c@exroot.org> <920bbb1b-7b49-0507-400d-7217f84facdb@enkore.de> <6e174fe5-2ab8-a68b-8624-49f6a85f0b54@exroot.org>
Message-ID: <3c2336d5-5cf8-2ddb-1574-7e95263156db@enkore.de>

On 06.07.2016 17:57, Tomasz Melcer wrote:
> On 06.07.2016 17:44, Marian Beermann wrote:
>> The data files are another story. There's no forward error correction in
>> Borg itself, so errors can be detected, but only some minor errors can be
>> corrected. "borg check --repair" will replace corrupted data chunks with
>> runs of zeroes of the same length, while reporting where it did that.
>> Corrupted commit tags can take some more data with them to nirvana.
>
> So, please correct me if I'm wrong:
>
> Let's say there was a bad block inside one of the data files. After
> recovery I can just run `borg check --repair`, and while some of the
> data will be lost, other chunks will still be there. Therefore, I can
> fully recover all past backups except for the chunk(s) hit by the bad
> blocks.

Well, it always depends on which block exactly is hit, and how. You may
lose a lot of FS structure through one unlucky bad block, equating to tons
of data loss, or maybe just a data block somewhere. If it's only in the
data, then check --repair has an easy job and it'd really be only that
block; if it also hits structural metadata, more chunks in the same file
may be lost. If it hits a commit, more may be lost. If it hits the one
chunk you're interested in right now, you have a problem - and so on.
If it hits the metadata of an archive, the archive may lose some files
upon repair or may be un-repairable, but no data *per se* would be lost.

But in principle, yes; and from a pure statistics view, the likelihood of
data itself being affected instead of metadata is high for a single block.

> One more question. If the next backup happens to contain the same
> chunk, will it be added to the repository, filling in the missing part
> for older backups? In some scenarios I find it likely that a possible bad
> block could just hit a chunk of one of the files that are still
> available on the live system.
>

Yes, but the old backup archive will still have a run of zeroes in it.

To explain why, here is a rough sketch of how data storage works in Borg
(also explained in more detail in the internals docs).

A file item has a chunk ID list, which lists the chunks containing the
file data in order (plus some metadata). If check finds that one of these
is gone, then it creates a *new* chunk - the same size as the corrupted
one, made from zeroes - stores it, and *edits* the chunk ID list of the
affected file to refer to the zeroes chunk instead.

If you do a new backup after that, then the corrupted chunk simply won't
be in the repository any more and will be newly stored by the backup. But
since creating a new backup doesn't touch old archives, the old archive
will still have a run of zeroes there.

Arguably this could be improved (e.g. when repairing a file, edit the
chunk ID list, but also store how it was edited - when the affected
chunks are stored back in the repository these files may be healed
"retrospectively" then.)
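To make that repair step concrete, here is a minimal Python sketch of the
idea. The data structures are simplified stand-ins of my own, not Borg's
actual code:

    import hashlib

    def repair_file_item(item, chunk_store, report):
        """Re-point a file item's chunk ID list at same-size all-zero
        chunks wherever the referenced chunk is missing.

        item:        {'path': str, 'chunks': [(chunk_id, size), ...]}
        chunk_store: dict mapping chunk_id -> chunk data
        report:      list collecting (path, size) of replaced chunks
        """
        new_refs = []
        for chunk_id, size in item['chunks']:
            if chunk_id in chunk_store:
                new_refs.append((chunk_id, size))        # chunk intact, keep it
            else:
                zeroes = b'\x00' * size                  # same-length run of zeroes
                zero_id = hashlib.sha256(zeroes).digest()
                chunk_store.setdefault(zero_id, zeroes)  # store the replacement
                new_refs.append((zero_id, size))         # point the item at it
                report.append((item['path'], size))      # say where it happened
        item['chunks'] = new_refs                        # original IDs are forgotten

The last line is exactly the limitation above: after repair, the item no
longer remembers the original chunk ID, so a later backup that re-adds the
missing chunk does not heal the old archive by itself.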
Cheers, Marian

From tve at voneicken.com Wed Jul 6 12:48:44 2016
From: tve at voneicken.com (Thorsten von Eicken)
Date: Wed, 6 Jul 2016 16:48:44 +0000
Subject: [Borgbackup] What to do if file is lost?
In-Reply-To: <3c2336d5-5cf8-2ddb-1574-7e95263156db@enkore.de>
References: <11e9bb03-29ca-6e31-649b-8b642c4d7f0c@exroot.org> <920bbb1b-7b49-0507-400d-7217f84facdb@enkore.de> <6e174fe5-2ab8-a68b-8624-49f6a85f0b54@exroot.org> <3c2336d5-5cf8-2ddb-1574-7e95263156db@enkore.de>
Message-ID: <01000155c11c9619-49c2835f-0978-4d43-b1a1-fc0d3df725d7-000000@email.amazonses.com>

On 7/6/2016 9:08 AM, Marian Beermann wrote:
>
> Arguably this could be improved (e.g. when repairing a file, edit the
> chunk ID list, but also store how it was edited - when the affected
> chunks are stored back in the repository these files may be healed
> "retrospectively" then.)
Thanks for the detailed explanation! What strikes me as potentially
missing is some way to flag broken files. More so than retrospectively
repairing them. I.e., if I restore an old directory tree, I would hope to
get errors and a list of zeroed files so I can hunt for them in newer
archives. Maybe this is happening, but it's not obvious from your
description.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tw at waldmann-edv.de Thu Jul 7 08:43:03 2016
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 7 Jul 2016 14:43:03 +0200
Subject: [Borgbackup] What to do if file is lost?
In-Reply-To: <01000155c11c9619-49c2835f-0978-4d43-b1a1-fc0d3df725d7-000000@email.amazonses.com>
References: <11e9bb03-29ca-6e31-649b-8b642c4d7f0c@exroot.org> <920bbb1b-7b49-0507-400d-7217f84facdb@enkore.de> <6e174fe5-2ab8-a68b-8624-49f6a85f0b54@exroot.org> <3c2336d5-5cf8-2ddb-1574-7e95263156db@enkore.de> <01000155c11c9619-49c2835f-0978-4d43-b1a1-fc0d3df725d7-000000@email.amazonses.com>
Message-ID: <577E4E57.5000601@waldmann-edv.de>

> Thanks for the detailed explanation! What strikes me as potentially
> missing is some way to flag broken files.

I committed a change relating to this yesterday to 1.0-maint (soon
1.0.4). See my last comments there for details:

https://github.com/borgbackup/borg/issues/148

> More so than retrospectively
> repairing them. I.e., if I restore an old directory tree, I would hope to
> get errors and a list of zeroed files so I can hunt for them in newer
> archives.

IIRC you'd see the list of files with missing chunks when you run
borg check --repair.

After "borg heal" is implemented, you could react to that by first
creating a fresh backup (of everything, or at least of the files that are
supposed to still have the same chunks) and then running borg heal on the
problematic archives.

-- 

GPG ID: FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From tw at waldmann-edv.de Thu Jul 7 12:57:03 2016
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 7 Jul 2016 18:57:03 +0200
Subject: [Borgbackup] borgbackup 1.0.4 released!
Message-ID: <577E89DF.4000103@waldmann-edv.de>

https://github.com/borgbackup/borg/releases/tag/1.0.4

Critical fixes, please upgrade ASAP.
Please read the changelog before upgrading: https://github.com/borgbackup/borg/blob/1.0.4/docs/changes.rst Cheers, Thomas -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Thu Jul 7 18:00:16 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 8 Jul 2016 00:00:16 +0200 Subject: [Borgbackup] borgbackup 1.0.5 released Message-ID: <577ED0F0.1020304@waldmann-edv.de> https://github.com/borgbackup/borg/releases/tag/1.0.5 Critical fixes, please upgrade ASAP. Also fixes the FUSE xattr regression from 1.0.4. Please read the changelog before upgrading: https://github.com/borgbackup/borg/blob/1.0.5/docs/changes.rst Cheers, Thomas -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From sitaramc at gmail.com Thu Jul 7 20:17:32 2016 From: sitaramc at gmail.com (Sitaram Chamarty) Date: Fri, 8 Jul 2016 05:47:32 +0530 Subject: [Borgbackup] borgbackup 1.0.5 released In-Reply-To: <577ED0F0.1020304@waldmann-edv.de> References: <577ED0F0.1020304@waldmann-edv.de> Message-ID: On 07/08/2016 03:30 AM, Thomas Waldmann wrote: > https://github.com/borgbackup/borg/releases/tag/1.0.5 > > Critical fixes, please upgrade ASAP. Also fixes the FUSE xattr > regression from 1.0.4. > > Please read the changelog before upgrading: > > https://github.com/borgbackup/borg/blob/1.0.5/docs/changes.rst I notice that the changelog entry for this says: The best check that everything is ok is to run a dry-run extraction: borg extract -v --dry-run REPO::ARCHIVE Would it be very difficult to add an option to produce sha1 or sha2 checksums of the extracted data, even if it is dry-running? I'm afraid I don't know enough about the internals to be able to guess if it's at all doable or not. regards sitaram From public at enkore.de Fri Jul 8 05:41:44 2016 From: public at enkore.de (Marian Beermann) Date: Fri, 8 Jul 2016 11:41:44 +0200 Subject: [Borgbackup] borgbackup 1.0.5 released In-Reply-To: References: <577ED0F0.1020304@waldmann-edv.de> Message-ID: On 08.07.2016 02:17, Sitaram Chamarty wrote: > On 07/08/2016 03:30 AM, Thomas Waldmann wrote: >> https://github.com/borgbackup/borg/releases/tag/1.0.5 >> >> Critical fixes, please upgrade ASAP. Also fixes the FUSE xattr >> regression from 1.0.4. >> >> Please read the changelog before upgrading: >> >> https://github.com/borgbackup/borg/blob/1.0.5/docs/changes.rst > > I notice that the changelog entry for this says: > > The best check that everything is ok is to run a dry-run extraction: > borg extract -v --dry-run REPO::ARCHIVE > > Would it be very difficult to add an option to produce sha1 or sha2 > checksums of the extracted data, even if it is dry-running? > > I'm afraid I don't know enough about the internals to be able to guess > if it's at all doable or not. > > regards > sitaram > That's scheduled for 1.1, alongside "borg check --verify-data" which basically automates "borg extract --dry-run". Cheers, Marian From sitaramc at gmail.com Fri Jul 8 06:41:15 2016 From: sitaramc at gmail.com (Sitaram Chamarty) Date: Fri, 8 Jul 2016 16:11:15 +0530 Subject: [Borgbackup] borgbackup 1.0.5 released In-Reply-To: References: <577ED0F0.1020304@waldmann-edv.de> Message-ID: <6d5a6a1f-4752-35de-9eb1-a5968717da68@gmail.com> On 07/08/2016 03:11 PM, Marian Beermann wrote: > On 08.07.2016 02:17, Sitaram Chamarty wrote: >> On 07/08/2016 03:30 AM, Thomas Waldmann wrote: >>> https://github.com/borgbackup/borg/releases/tag/1.0.5 >>> >>> Critical fixes, please upgrade ASAP. Also fixes the FUSE xattr >>> regression from 1.0.4. 
>>>
>>> Please read the changelog before upgrading:
>>>
>>> https://github.com/borgbackup/borg/blob/1.0.5/docs/changes.rst
>>
>> I notice that the changelog entry for this says:
>>
>> The best check that everything is ok is to run a dry-run extraction:
>> borg extract -v --dry-run REPO::ARCHIVE
>>
>> Would it be very difficult to add an option to produce sha1 or sha2
>> checksums of the extracted data, even if it is dry-running?
>>
>> I'm afraid I don't know enough about the internals to be able to guess
>> if it's at all doable or not.
>>
>> regards
>> sitaram
>>
>
> That's scheduled for 1.1, alongside "borg check --verify-data" which
> basically automates "borg extract --dry-run".

Great, thanks! Good to know...

regards
sitaram

From sitaramc at gmail.com Fri Jul 8 09:29:24 2016
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Fri, 8 Jul 2016 18:59:24 +0530
Subject: [Borgbackup] 1.0.5 binary release fails to write to nas mounted repo
Message-ID: 

Hi,

One of my backup scripts writes directly to a NAS drive mounted via CIFS.
I moved from the 1.0.0 binary release to the 1.0.5 binary release, and it
no longer works.

I'm attaching a file containing a full test from scratch; I'd be happy to
do more testing as needed, or open an issue if that is preferable.

(I did look to see if someone had already opened an issue for this
sometime between 1.0.0 and now, but I couldn't find any, offhand, by
searching for one of these strings: CIFS, NFS, NAS).

regards
sitaram
-------------- next part --------------
NOTE: /root/nas was mounted by a command like:
    mount -t cifs -o username=alice //1.2.3.4/some_volume_name nas

All commands run as root in this test.

----

# rm -rf foo nas/foo
# borg init -e none foo
# borg create -v --list foo::1 .bashrc .bash_history
A .bashrc
A .bash_history
# borg create -v --list foo::2 .bashrc .bash_history
U .bashrc
A .bash_history
# mv foo nas/
mv: preserving times for 'nas/foo/data/0': Permission denied
mv: failed to preserve ownership for 'nas/foo/data/0': Permission denied
mv: preserving permissions for 'nas/foo/data/0': Permission denied
mv: preserving times for 'nas/foo/data': Permission denied
mv: failed to preserve ownership for 'nas/foo/data': Permission denied
mv: preserving permissions for 'nas/foo/data': Permission denied
mv: preserving times for 'nas/foo': Permission denied
mv: failed to preserve ownership for 'nas/foo': Permission denied
mv: preserving permissions for 'nas/foo': Permission denied
# borg create -v --list nas/foo::3 .bashrc .bash_history
Warning: The repository at location /root/nas/foo was previously located at /root/foo
Do you want to continue? [yN] y
U .bashrc
A .bash_history
Exception ignored in: 
Traceback (most recent call last):
  File "borg/repository.py", line 72, in __del__
  File "borg/repository.py", line 189, in close
  File "borg/repository.py", line 532, in close
  File "borg/repository.py", line 752, in close_segment
ValueError: flush of closed file
Local Exception.
Traceback (most recent call last):
  File "borg/archiver.py", line 81, in wrapper
  File "borg/archiver.py", line 247, in do_create
  File "borg/archiver.py", line 221, in create_inner
  File "borg/archive.py", line 317, in save
  File "borg/repository.py", line 197, in commit
  File "borg/repository.py", line 746, in write_commit
  File "borg/repository.py", line 759, in close_segment
  File "borg/platform.py", line 9, in sync_dir
OSError: [Errno 22] Invalid argument

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "borg/archiver.py", line 1601, in main
  File "borg/archiver.py", line 1538, in run
  File "borg/archiver.py", line 81, in wrapper
  File "borg/repository.py", line 94, in __exit__
  File "borg/repository.py", line 189, in close
  File "borg/repository.py", line 532, in close
  File "borg/repository.py", line 752, in close_segment
ValueError: flush of closed file

Platform: Linux sita-wd.atc.tcs.com 4.1.8-100.fc21.x86_64 #1 SMP Tue Sep 22 12:13:06 UTC 2015 x86_64 x86_64
Linux: Fedora 21 Twenty One
Borg: 1.0.5 Python: CPython 3.5.2
PID: 22639 CWD: /root
sys.argv: ['borg', 'create', '-v', '--list', 'nas/foo::3', '.bashrc', '.bash_history']
SSH_ORIGINAL_COMMAND: None

rc: 2

From tmhikaru at gmail.com Fri Jul 8 18:21:55 2016
From: tmhikaru at gmail.com (tmhikaru at gmail.com)
Date: Fri, 8 Jul 2016 15:21:55 -0700
Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup
In-Reply-To: <20160701041011.GA9178@raspberrypi>
References: <20160619202507.GA2020@raspberrypi> <0da0c987-4dd1-a1df-7ccc-bfb56f89ff95@enkore.de> <20160701041011.GA9178@raspberrypi>
Message-ID: <20160708222155.GB29792@raspberrypi>

On Thu, Jun 30, 2016 at 09:10:11PM -0700, tm at raspberrypi wrote:
> On Thu, Jun 30, 2016 at 09:08:06AM +0200, Marian Beermann wrote:
> > You can try enabling faulthandler. Set environment variable (export)
> > PYTHONFAULTHANDLER to something, say, foobar. When it gets stuck you can
> > send SIGABRT and should get a proper stack trace of where it got stuck.
> Will do. Oddly enough, I may be on to something here: after changing the
> order in which machines access the repo, it started working again. Could be
> yet another fluke, so I'll do quite a few tests before I am satisfied.

After making this change, I got through a full week of fully working full
backups with nothing going wrong. Data was being added and pruned with
every cycle - each started with ~23 archives in the repo, which was pruned
to ~20 before the remote work would begin on the RPI to sync. Everything
was working perfectly and I was having no trouble at all. Yesterday I
kicked off the fifth backup and went away for the day, assuming that when
I'd come home it'd be done and I could write to you about the workaround
I'd found, despite it not making any sense. Instead, I came home that
night, more than 10 hours later, only to find out it had gotten stuck
almost instantly, *again*, while merging chunks into the master index as
it processed the locally cached archive data. I killed it, break-lock'd,
and ran it again the same way, but with debug instead of info output and
with this python variable set. Without doing things like blowing away the
local cache, I have seen this cause the program to get stuck in the exact
same way every time until now. I was hoping I could get a useful trace to
see what it was actually trying to do, rather than continuing to make
educated and uneducated guesses.
Maddeningly, it worked 100% perfectly and didn't get stuck at all, even
processing in seconds the very archive data it had gotten stuck on for
~10hrs.

I give up. I cannot make this program work reliably the way I am trying to
use it, or even diagnose what the actual problem is with such hit or miss
behavior. If I ever have to use xattrs on a low cpu/ram system, I may wind
up doing something like streaming tar data over the network to the server
running borg. Hopefully by the time I do need such a thing, sshfs may have
evolved to support xattrs. Certainly, borg runs very poorly as a client on
a low cpu&ram machine that has to access a remote repo that holds a lot of
data and is modified by other machines. I cannot recommend trying to
emulate my test setup; it just doesn't work reliably.

I was using borg in a manner that is not suggested, and I have both read,
and had helpful people here tell me, not to use it in this way. I
apologize for being difficult. Thank you all for trying to help.

Tim McGrath
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 465 bytes
Desc: Digital signature
URL: 

From public at enkore.de Fri Jul 8 19:20:34 2016
From: public at enkore.de (Marian Beermann)
Date: Sat, 9 Jul 2016 01:20:34 +0200
Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup
In-Reply-To: <20160708222155.GB29792@raspberrypi>
References: <20160619202507.GA2020@raspberrypi> <0da0c987-4dd1-a1df-7ccc-bfb56f89ff95@enkore.de> <20160701041011.GA9178@raspberrypi> <20160708222155.GB29792@raspberrypi>
Message-ID: 

Thanks a lot for trying to debug this. I've been looking at the code
involved in index merging, and may have found a bug there that could play
out in low-memory situations (= failing to acquire memory). I'm not
saying "that's the bug", but I can't rule it out.

I'll be preparing a patch. Maybe it helps.

Cheers, Marian

On 09.07.2016 00:21, tmhikaru at gmail.com wrote:
> On Thu, Jun 30, 2016 at 09:10:11PM -0700, tm at raspberrypi wrote:
>> On Thu, Jun 30, 2016 at 09:08:06AM +0200, Marian Beermann wrote:
>>> You can try enabling faulthandler. Set environment variable (export)
>>> PYTHONFAULTHANDLER to something, say, foobar. When it gets stuck you can
>>> send SIGABRT and should get a proper stack trace of where it got stuck.
>> Will do. Oddly enough, I may be on to something here: after changing the
>> order in which machines access the repo, it started working again. Could be
>> yet another fluke, so I'll do quite a few tests before I am satisfied.
>
> After making this change, I got through a full week of fully working full
> backups with nothing going wrong. Data was being added and pruned with
> every cycle - each started with ~23 archives in the repo, which was pruned
> to ~20 before the remote work would begin on the RPI to sync. Everything
> was working perfectly and I was having no trouble at all. Yesterday I
> kicked off the fifth backup and went away for the day, assuming that when
> I'd come home it'd be done and I could write to you about the workaround
> I'd found, despite it not making any sense. Instead, I came home that
> night, more than 10 hours later, only to find out it had gotten stuck
> almost instantly, *again*, while merging chunks into the master index as
> it processed the locally cached archive data. I killed it, break-lock'd,
> and ran it again the same way, but with debug instead of info output and
> with this python variable set.
> Without doing things like blowing away the
> local cache, I have seen this cause the program to get stuck in the exact
> same way every time until now. I was hoping I could get a useful trace to
> see what it was actually trying to do, rather than continuing to make
> educated and uneducated guesses.
>
> Maddeningly, it worked 100% perfectly and didn't get stuck at all, even
> processing in seconds the very archive data it had gotten stuck on for
> ~10hrs.
>
> I give up. I cannot make this program work reliably the way I am trying to
> use it, or even diagnose what the actual problem is with such hit or miss
> behavior. If I ever have to use xattrs on a low cpu/ram system, I may wind
> up doing something like streaming tar data over the network to the server
> running borg. Hopefully by the time I do need such a thing, sshfs may have
> evolved to support xattrs. Certainly, borg runs very poorly as a client on
> a low cpu&ram machine that has to access a remote repo that holds a lot of
> data and is modified by other machines. I cannot recommend trying to
> emulate my test setup; it just doesn't work reliably.
>
> I was using borg in a manner that is not suggested, and I have both read,
> and had helpful people here tell me, not to use it in this way. I
> apologize for being difficult. Thank you all for trying to help.
>
> Tim McGrath
>
>
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

From tw at waldmann-edv.de Sun Jul 10 13:28:33 2016
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sun, 10 Jul 2016 19:28:33 +0200
Subject: [Borgbackup] upcoming 1.0.6rc1 - please give it a practical test
Message-ID: <578285C1.5050607@waldmann-edv.de>

Preparing #borgbackup 1.0.6rc1 right now; it will be released after the
test/build finishes.

It would be helpful if you practically test this, so anything not
discovered by unit tests can be fixed.

The final 1.0.6 release is scheduled for 2016-07-12, so be quick. :)

Especially tests on misc. (network or non-network) filesystems would be
useful: smbfs, nfs, sshfs, ...

1.0.4 and 1.0.5 suffered from small, but for some applications
show-stopping issues; let's try to get 1.0.6 right.

-- 

GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From public at enkore.de Sun Jul 10 13:33:46 2016
From: public at enkore.de (Marian Beermann)
Date: Sun, 10 Jul 2016 19:33:46 +0200
Subject: [Borgbackup] upcoming 1.0.6rc1 - please give it a practical test
In-Reply-To: <578285C1.5050607@waldmann-edv.de>
References: <578285C1.5050607@waldmann-edv.de>
Message-ID: <9c287ed0-3595-fd30-b43e-d2f4a827bfab@enkore.de>

On 10.07.2016 19:28, Thomas Waldmann wrote:
> Preparing #borgbackup 1.0.6rc1 right now; it will be released after the
> test/build finishes.
>
> It would be helpful if you practically test this, so anything not
> discovered by unit tests can be fixed.
>
> The final 1.0.6 release is scheduled for 2016-07-12, so be quick. :)
>
> Especially tests on misc. (network or non-network) filesystems would be
> useful: smbfs, nfs, sshfs, ...
>
> 1.0.4 and 1.0.5 suffered from small, but for some applications
> show-stopping issues; let's try to get 1.0.6 right.
>

To add to what Thomas said, there is also an open ticket (with a bounty)
about more thorough and automatic testing on different file systems. See
https://github.com/borgbackup/borg/issues/1289 for details.
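To give an idea of the kind of harness that ticket asks for, here is a
rough sketch; the mount points and the init/create/check cycle are my
assumptions, not an existing test setup:

    import os
    import subprocess
    import tempfile

    # Filesystems to exercise; adjust to whatever is mounted locally.
    MOUNTS = ['/mnt/nfs', '/mnt/smbfs', '/mnt/sshfs', '/tmp']

    def smoke_test(mount):
        """Run a minimal borg init/create/check cycle on one filesystem."""
        repo = os.path.join(mount, 'borg-smoke-repo')  # assumes a fresh path
        subprocess.check_call(['borg', 'init', '-e', 'none', repo])
        with tempfile.NamedTemporaryFile() as src:
            src.write(os.urandom(1 << 20))             # 1 MiB of random data
            src.flush()
            subprocess.check_call(['borg', 'create', repo + '::smoke', src.name])
        subprocess.check_call(['borg', 'check', repo])

    for mount in MOUNTS:
        try:
            smoke_test(mount)
            print(mount, 'OK')
        except (subprocess.CalledProcessError, OSError) as exc:
            print(mount, 'FAILED:', exc)

Something along these lines, run against smbfs/nfs/sshfs mounts, might
have caught the fsync(dirfd) problems of 1.0.4/1.0.5 mechanically.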
Cheers, Marian

From imperator at jedimail.de Sun Jul 10 13:43:12 2016
From: imperator at jedimail.de (Sascha Ternes)
Date: Sun, 10 Jul 2016 19:43:12 +0200
Subject: [Borgbackup] upcoming 1.0.6rc1 - please give it a practical test
In-Reply-To: <9c287ed0-3595-fd30-b43e-d2f4a827bfab@enkore.de>
References: <578285C1.5050607@waldmann-edv.de> <9c287ed0-3595-fd30-b43e-d2f4a827bfab@enkore.de>
Message-ID: <57828930.3020500@jedimail.de>

Hey there!

I use borg with daily backups written to an online storage that is
mounted via davfs2 (WebDAV). I have used 1.0.4 and 1.0.5 since the day
they were released. No problems so far; I ran borg check too, without a
problem.

If it helps, I would test your RC (even if WebDAV/davfs is not
mentioned in the bounty ;).

Sascha

On 10.07.2016 19:33, Marian Beermann wrote:
> On 10.07.2016 19:28, Thomas Waldmann wrote:
>> Preparing #borgbackup 1.0.6rc1 right now; it will be released after the
>> test/build finishes.
>>
>> It would be helpful if you practically test this, so anything not
>> discovered by unit tests can be fixed.
>>
>> The final 1.0.6 release is scheduled for 2016-07-12, so be quick. :)
>>
>> Especially tests on misc. (network or non-network) filesystems would be
>> useful: smbfs, nfs, sshfs, ...
>>
>> 1.0.4 and 1.0.5 suffered from small, but for some applications
>> show-stopping issues; let's try to get 1.0.6 right.
>>
>
> To add to what Thomas said, there is also an open ticket (with a bounty)
> about more thorough and automatic testing on different file systems. See
> https://github.com/borgbackup/borg/issues/1289 for details.
>
> Cheers, Marian
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

From tw at waldmann-edv.de Sun Jul 10 14:49:38 2016
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sun, 10 Jul 2016 20:49:38 +0200
Subject: [Borgbackup] upcoming 1.0.6rc1 - please give it a practical test
In-Reply-To: <57828930.3020500@jedimail.de>
References: <578285C1.5050607@waldmann-edv.de> <9c287ed0-3595-fd30-b43e-d2f4a827bfab@enkore.de> <57828930.3020500@jedimail.de>
Message-ID: <578298C2.9000802@waldmann-edv.de>

> I use borg with daily backups written to an online storage that is
> mounted via davfs2 (WebDAV).

Testing that would be useful, yes.

> I have used 1.0.4 and 1.0.5 since the day
> they were released. No problems so far; I ran borg check too, without a
> problem.

OK, then davfs2 supports (or at least does not error on) fsync(dirfd).

> If it helps, I would test your RC (even if WebDAV/davfs is not
> mentioned in the bounty ;).

The bounty is only for automating such stuff. :)

-- 

GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From felix.schwarz at oss.schwarz.eu Mon Jul 11 03:59:17 2016
From: felix.schwarz at oss.schwarz.eu (Felix Schwarz)
Date: Mon, 11 Jul 2016 09:59:17 +0200
Subject: [Borgbackup] compatible versions: local borg vs. remote borg?
Message-ID: <8a1ef385-07d4-bf92-b413-228e1f14016c@oss.schwarz.eu>

Hi,

I'm wondering if there is any documentation about version incompatibilities
when using remote repositories (i.e. borg is also installed on the remote
machine)?

For example the server might still have borg 1.0.3 but my client uses 1.0.5
(or the other way round). I hope that mismatch should not cause any trouble.
May I assume that any (known) incompatibilities will be mentioned on the
changes page (prominently)?

Also, is there any code in borg to detect version incompatibilities?
My hope would be that worst case the backup would stop running (instead
of silent repo corruption).

Felix

From adrian.klaver at aklaver.com Mon Jul 11 10:32:16 2016
From: adrian.klaver at aklaver.com (Adrian Klaver)
Date: Mon, 11 Jul 2016 07:32:16 -0700
Subject: [Borgbackup] compatible versions: local borg vs. remote borg?
In-Reply-To: <8a1ef385-07d4-bf92-b413-228e1f14016c@oss.schwarz.eu>
References: <8a1ef385-07d4-bf92-b413-228e1f14016c@oss.schwarz.eu>
Message-ID: <812ea2f6-2770-844b-0b87-c872303decdb@aklaver.com>

On 07/11/2016 12:59 AM, Felix Schwarz wrote:
> Hi,
>
> I'm wondering if there is any documentation about version incompatibilities
> when using remote repositories (i.e. borg is also installed on the remote
> machine)?
>
> For example the server might still have borg 1.0.3 but my client uses 1.0.5
> (or the other way round). I hope that mismatch should not cause any trouble.
> May I assume that any (known) incompatibilities will be mentioned on the
> changes page (prominently)?

http://borgbackup.readthedocs.io/en/stable/changes.html#version-1-0-0

http://borgbackup.readthedocs.io/en/stable/usage.html#borg-upgrade

"Upgrade an existing Borg repository. This currently supports converting
an Attic repository to Borg and also helps with converting Borg 0.xx to
1.0."

>
> Also, is there any code in borg to detect version incompatibilities? My hope
> would be that worst case the backup would stop running (instead
> of silent repo corruption).

The config file in a repo has a version number. Someone else will have
to elaborate on how that is used.

>
> Felix

-- 
Adrian Klaver
adrian.klaver at aklaver.com

From public at enkore.de Mon Jul 11 11:01:09 2016
From: public at enkore.de (public at enkore.de)
Date: Mon, 11 Jul 2016 17:01:09 +0200
Subject: [Borgbackup] compatible versions: local borg vs. remote borg?
In-Reply-To: <8a1ef385-07d4-bf92-b413-228e1f14016c@oss.schwarz.eu>
References: <8a1ef385-07d4-bf92-b413-228e1f14016c@oss.schwarz.eu>
Message-ID: <355ed27c-4014-72d0-f815-178ead7a87ef@enkore.de>

Point (bug fix) releases have to stay compatible with their parent
version, i.e. all 1.0.x versions have to be compatible (RPC, repository
format, flags and behaviour, unless otherwise documented) with each
other; everything else is a regression. Because 1.0 has been deployed by
some service providers, we will generally try very hard to maintain
RPC/remote compatibility with 1.0 in future releases.

If a version breaks compatibility in some aspect, we will document it in
the changelog. Interoperability changes may be backported to older
releases as well (which we actually already did in 1.0.4 for future 1.1).

Cheers, Marian

On 11/07/16 09:59, Felix Schwarz wrote:
> Hi,
>
> I'm wondering if there is any documentation about version incompatibilities
> when using remote repositories (i.e. borg is also installed on the remote
> machine)?
>
> For example the server might still have borg 1.0.3 but my client uses 1.0.5
> (or the other way round). I hope that mismatch should not cause any trouble.
> May I assume that any (known) incompatibilities will be mentioned on the
> changes page (prominently)?
>
> Also, is there any code in borg to detect version incompatibilities? My hope
> would be that worst case the backup would stop running (instead
> of silent repo corruption).
>
> Felix
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

From lists.borg at pjw.xsmail.com Mon Jul 11 11:14:07 2016
From: lists.borg at pjw.xsmail.com (pjw)
Date: Mon, 11 Jul 2016 09:14:07 -0600
Subject: [Borgbackup] compatible versions: local borg vs. remote borg?
Message-ID: <1468250047.684835.662922209.44461204@webmail.messagingengine.com>

On Mon, Jul 11, 2016, at 01:59 AM, Felix Schwarz wrote:
> I'm wondering if there is any documentation about version incompatibilities
> when using remote repositories (i.e. borg is also installed on the remote
> machine)?
>
> For example the server might still have borg 1.0.3 but my client uses 1.0.5
> (or the other way round). I hope that mismatch should not cause any trouble.
> May I assume that any (known) incompatibilities will be mentioned on the
> changes page (prominently)?
>
> Also, is there any code in borg to detect version incompatibilities? My hope
> would be that worst case the backup would stop running (instead
> of silent repo corruption).

[parts of the below have been addressed]

In my case the remote server (backup provider) defaults to borg v0.29.
They have the more recent release at /usr/local/bin/borg1, currently
1.0.3. They seem disinclined to upgrade the default, possibly because..

If/when they upgrade the default server to 1.0.x, what might the
consequences be for accessing (extract/check) existing archives created
by 0.29? Will all existing repos require borg upgrade? Why?

For their users unaware that the remote (default) version is not current,
how might using the pre-1.0 server impact their archives created with
local client 1.x? Would such a mismatch be evident? (Felix's questions,
older server)

How can I confirm that the BORG_REMOTE_PATH variable (implemented in
v1.0.4) does in fact call the remote /usr/local/bin/borg1? borg -V will
return the local borg version; used elsewhere in the command line, -V
returns an unrecognized-argument error. All access to the provider's
server is over ssh.

-pjw

From adrian.klaver at aklaver.com Mon Jul 11 11:55:00 2016
From: adrian.klaver at aklaver.com (Adrian Klaver)
Date: Mon, 11 Jul 2016 08:55:00 -0700
Subject: [Borgbackup] compatible versions: local borg vs. remote borg?
In-Reply-To: <1468250047.684835.662922209.44461204@webmail.messagingengine.com>
References: <1468250047.684835.662922209.44461204@webmail.messagingengine.com>
Message-ID: <8b6d9aa1-df23-8c17-ff95-d2e62d61e5e5@aklaver.com>

On 07/11/2016 08:14 AM, pjw wrote:
> On Mon, Jul 11, 2016, at 01:59 AM, Felix Schwarz wrote:
>> I'm wondering if there is any documentation about version incompatibilities
>> when using remote repositories (i.e. borg is also installed on the remote
>> machine)?
>>
>> For example the server might still have borg 1.0.3 but my client uses 1.0.5
>> (or the other way round). I hope that mismatch should not cause any trouble.
>> May I assume that any (known) incompatibilities will be mentioned on the
>> changes page (prominently)?
>>
>> Also, is there any code in borg to detect version incompatibilities? My hope
>> would be that worst case the backup would stop running (instead
>> of silent repo corruption).
>
> [parts of the below have been addressed]
>
> In my case the remote server (backup provider) defaults to borg v0.29.
> They have the more recent release at /usr/local/bin/borg1, currently
> 1.0.3. They seem disinclined to upgrade the default, possibly because..
>
> If/when they upgrade the default server to 1.0.x, what might the
> consequences be for accessing (extract/check) existing archives created
> by 0.29? Will all existing repos require borg upgrade? Why?

When I went from 0.30 to 1.0 the only thing I had to do was this:

https://github.com/borgbackup/borg/blob/1.0.0/docs/changes.rst

"moved keyfile keys from ~/.borg/keys to ~/.config/borg/keys, you can
either move them manually or run "borg upgrade <REPO>""

which I did manually instead of using borg upgrade.

The potentially bigger issue would be, from the above link: "change the
builtin default for --chunker-params". This mainly involves disk space
issues; I would read that section.

> For their users unaware that the remote (default) version is not current,
> how might using the pre-1.0 server impact their archives created with
> local client 1.x? Would such a mismatch be evident? (Felix's questions,
> older server)
>
> How can I confirm that the BORG_REMOTE_PATH variable (implemented in
> v1.0.4) does in fact call the remote /usr/local/bin/borg1? borg -V will
> return the local borg version; used elsewhere in the command line, -V
> returns an unrecognized-argument error. All access to the provider's
> server is over ssh.

I see no magic in BORG_REMOTE_PATH, it is just a variable you set to a
remote version of Borg that you have determined by some other method.
Basically it just fills in the --remote-path argument.

> -pjw
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

-- 
Adrian Klaver
adrian.klaver at aklaver.com

From tw at waldmann-edv.de Tue Jul 12 18:10:10 2016
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Wed, 13 Jul 2016 00:10:10 +0200
Subject: [Borgbackup] borgbackup 1.0.6 released!
Message-ID: <57856AC2.7090007@waldmann-edv.de>

https://github.com/borgbackup/borg/releases/tag/1.0.6

Critical fixes, please upgrade ASAP if you use < 1.0.4.

Also fixes some less critical, but still annoying, issues of 1.0.4 and 1.0.5.

Please read the changelog before upgrading:

https://github.com/borgbackup/borg/blob/1.0.6/docs/changes.rst

Cheers, Thomas

-- 

GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From sitaramc at gmail.com Tue Jul 12 20:42:52 2016
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Wed, 13 Jul 2016 06:12:52 +0530
Subject: [Borgbackup] borgbackup 1.0.6 released
In-Reply-To: <57856AC2.7090007@waldmann-edv.de>
References: <57856AC2.7090007@waldmann-edv.de>
Message-ID: 

On 07/13/2016 03:40 AM, Thomas Waldmann wrote:
> https://github.com/borgbackup/borg/releases/tag/1.0.6
>
> Critical fixes, please upgrade ASAP if you use < 1.0.4.
>
> Also fixes some less critical, but still annoying, issues of 1.0.4 and 1.0.5.
>
> Please read the changelog before upgrading:
>
> https://github.com/borgbackup/borg/blob/1.0.6/docs/changes.rst

Thank you! This fixed my CIFS problem!

regards
sitaram

From gmatht at gmail.com Fri Jul 22 10:15:02 2016
From: gmatht at gmail.com (John McCabe-Dansted)
Date: Fri, 22 Jul 2016 22:15:02 +0800
Subject: [Borgbackup] Partclone, Borgbackup and Random Access?
Message-ID: 

I wrote a script to back up the used sectors of a device directly to a
compressed image with random access:
https://github.com/gmatht/joshell/blob/master/scripts/backup2vm
These images provide easy per-file access and can be booted in qemu.

It doesn't seem to be possible to do this with partclone and borgbackup;
the closest I can see is to run something like:

    partclone ... | borg create repo::mypart -
which isn't bootable, but would be compressed, deduplicated, and
shouldn't use too much space if I also back up files from the partition
to the same borg repo.

Has anyone done anything smarter than this?

-- 
John C. McCabe-Dansted
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From public at enkore.de Fri Jul 22 10:23:37 2016
From: public at enkore.de (Marian Beermann)
Date: Fri, 22 Jul 2016 16:23:37 +0200
Subject: [Borgbackup] Partclone, Borgbackup and Random Access?
In-Reply-To: 
References: 
Message-ID: 

You should be able to boot it when FUSE-mounting the repo/archive.

Cheers, Marian

On 22.07.2016 16:15, John McCabe-Dansted wrote:
> I wrote a script to back up the used sectors of a device directly to a
> compressed image with random access:
> https://github.com/gmatht/joshell/blob/master/scripts/backup2vm
> These images provide easy per-file access and can be booted in qemu.
>
> It doesn't seem to be possible to do this with partclone and borgbackup;
> the closest I can see is to run something like:
>
>     partclone ... | borg create repo::mypart -
>
> which isn't bootable, but would be compressed, deduplicated, and
> shouldn't use too much space if I also back up files from the partition
> to the same borg repo.
>
> Has anyone done anything smarter than this?
>
>
> --
> John C. McCabe-Dansted
>
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

From gmatht at gmail.com Sat Jul 23 23:03:19 2016
From: gmatht at gmail.com (John McCabe-Dansted)
Date: Sun, 24 Jul 2016 11:03:19 +0800
Subject: [Borgbackup] Partclone, Borgbackup and Random Access?
In-Reply-To: 
References: 
Message-ID: 

I haven't found anything capable of mounting partclone images; I presume
borgbackup can't either? The way the backup2vm script works is to use
partclone to copy from the raw device to a sparse file inside a btrfs
filesystem that itself lives on a sparse file.

On Fri, Jul 22, 2016 at 10:23 PM, Marian Beermann wrote:

> You should be able to boot it when FUSE-mounting the repo/archive.
>
> Cheers, Marian
>
> On 22.07.2016 16:15, John McCabe-Dansted wrote:
> > I wrote a script to back up the used sectors of a device directly to a
> > compressed image with random access:
> > https://github.com/gmatht/joshell/blob/master/scripts/backup2vm
> > These images provide easy per-file access and can be booted in qemu.
> >
> > It doesn't seem to be possible to do this with partclone and borgbackup;
> > the closest I can see is to run something like:
> >
> >     partclone ... | borg create repo::mypart -
> >
> > which isn't bootable, but would be compressed, deduplicated, and
> > shouldn't use too much space if I also back up files from the partition
> > to the same borg repo.
> >
> > Has anyone done anything smarter than this?
> >
> >
> > --
> > John C. McCabe-Dansted
> >
> >
> > _______________________________________________
> > Borgbackup mailing list
> > Borgbackup at python.org
> > https://mail.python.org/mailman/listinfo/borgbackup
> >
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

-- 
John C. McCabe-Dansted
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From tw at waldmann-edv.de Fri Aug 5 17:27:30 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 5 Aug 2016 23:27:30 +0200 Subject: [Borgbackup] borgbackup 1.0.7rc1 Message-ID: <65eb69ba-b9ad-80f9-f689-457430f5d860@waldmann-edv.de> Released borgbackup 1.0.7rc1 right now. It would be helpful if you practically test this, so anything not discovered by unit tests can be fixed. The final 1.0.7 release is scheduled for 2016-08-12, so be quick. Especially tests on misc. (network or non-network) filesystems would be useful. smbfs, nfs, sshfs, ... Also, test the locking - it hopefully won't deadlock that easily any more. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From adrian.klaver at aklaver.com Fri Aug 5 18:27:43 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Fri, 5 Aug 2016 15:27:43 -0700 Subject: [Borgbackup] borgbackup 1.0.7rc1 In-Reply-To: <65eb69ba-b9ad-80f9-f689-457430f5d860@waldmann-edv.de> References: <65eb69ba-b9ad-80f9-f689-457430f5d860@waldmann-edv.de> Message-ID: On 08/05/2016 02:27 PM, Thomas Waldmann wrote: > Released borgbackup 1.0.7rc1 right now. Is there a ChangeLog somewhere for this? > > It would be helpful if you practically test this, so anything not > discovered by unit tests can be fixed. > > The final 1.0.7 release is scheduled for 2016-08-12, so be quick. > > Especially tests on misc. (network or non-network) filesystems would be > useful. smbfs, nfs, sshfs, ... > > Also, test the locking - it hopefully won't deadlock that easily any more. > -- Adrian Klaver adrian.klaver at aklaver.com From public at enkore.de Fri Aug 5 18:49:28 2016 From: public at enkore.de (Marian Beermann) Date: Sat, 6 Aug 2016 00:49:28 +0200 Subject: [Borgbackup] borgbackup 1.0.7rc1 In-Reply-To: References: <65eb69ba-b9ad-80f9-f689-457430f5d860@waldmann-edv.de> Message-ID: <70143e36-b223-1bcb-fbed-1a6c1559d087@enkore.de> On 06.08.2016 00:27, Adrian Klaver wrote: > On 08/05/2016 02:27 PM, Thomas Waldmann wrote: >> Released borgbackup 1.0.7rc1 right now. > > Is there a ChangeLog somewhere for this? Not yet on read the docs, changelog can be found here: https://github.com/borgbackup/borg/blob/1.0.7rc1/docs/changes.rst#version-107rc1-2016-08-05 Cheers, Marian From adrian.klaver at aklaver.com Fri Aug 5 19:43:10 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Fri, 5 Aug 2016 16:43:10 -0700 Subject: [Borgbackup] borgbackup 1.0.7rc1 In-Reply-To: <70143e36-b223-1bcb-fbed-1a6c1559d087@enkore.de> References: <65eb69ba-b9ad-80f9-f689-457430f5d860@waldmann-edv.de> <70143e36-b223-1bcb-fbed-1a6c1559d087@enkore.de> Message-ID: <24de55c5-eeb1-4c90-a3db-cd8459ec4eac@aklaver.com> On 08/05/2016 03:49 PM, Marian Beermann wrote: > On 06.08.2016 00:27, Adrian Klaver wrote: >> On 08/05/2016 02:27 PM, Thomas Waldmann wrote: >>> Released borgbackup 1.0.7rc1 right now. >> >> Is there a ChangeLog somewhere for this? > > Not yet on read the docs, changelog can be found here: > > https://github.com/borgbackup/borg/blob/1.0.7rc1/docs/changes.rst#version-107rc1-2016-08-05 Thanks. > > > Cheers, Marian > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From gait at ATComputing.nl Mon Aug 8 10:11:49 2016 From: gait at ATComputing.nl (Gerrit A. 
Smit)
Date: Mon, 8 Aug 2016 16:11:49 +0200
Subject: [Borgbackup] 550GB rsync-tree will not dump
Message-ID: <70bfbbc7-e9d6-b2ed-6d4e-5ffd60ef0698@ATComputing.nl>

Hello,

I love Borg, but ... I'm trying to dump 550GB of data contained in a
directory tree made by rsnapshot, which means I have 4 hourly, 7 daily,
4 weekly and some monthly directories, with many files occurring just
once but referenced many times by hard links to them. This means, for
example, that 'du -s' on those directories takes quite some time.

I tried this using lz4 or no compression, to no avail.

Any ideas what's happening?
What can I do to investigate further?

Gerrit

Local Exception.
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 1609, in main
    exit_code = archiver.run(args)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 1546, in run
    return args.func(args)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 81, in wrapper
    return method(self, args, repository=repository, **kwargs)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 247, in do_create
    create_inner(archive, cache)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 219, in create_inner
    read_special=args.read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
    read_special=read_special, dry_run=dry_run)
  File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 275, in _process
    status = archive.process_file(path, st, cache, self.ignore_inode)
  File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line 638, in process_file
    self.add_item(item)
  File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line 281, in add_item
    self.write_checkpoint()
  File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line 285, in write_checkpoint
    self.save(self.checkpoint_name)
  File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line 314, in save
    self.cache.add_chunk(self.id, data, self.stats)
  File "/usr/local/lib/python3.4/site-packages/borg/cache.py", line 363, in add_chunk
    data = self.key.encrypt(data)
  File "/usr/local/lib/python3.4/site-packages/borg/key.py", line 136, in encrypt
    data = self.compressor.compress(data)
  File "borg/compress.pyx", line 186, in borg.compress.Compressor.compress (borg/compress.c:3973)
  File "borg/compress.pyx", line 92, in borg.compress.LZ4.compress (borg/compress.c:2004)
Exception: lz4 compress failed
Platform: FreeBSD sanger 10.3-RELEASE-p2 FreeBSD 10.3-RELEASE-p2 #0: Wed May 4 06:03:51 UTC 2016 root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 amd64
Borg: 1.0.6 Python: CPython 3.4.5
PID: 29182 CWD: /usr/home/sysman
sys.argv: ['/usr/local/bin/borg', 'create', '--debug', '--compression=lz4', '--exclude', '*/borg/rsnapshot*/?hourly.*', '--progress', '--stats', '--verbose', '::data_backup@_data_backup_.zfs_snapshot_borg', '/data/backup/.zfs/snapshot/borg']
SSH_ORIGINAL_COMMAND: None

From adrian.klaver at aklaver.com Mon Aug 8 10:41:26 2016
From: adrian.klaver at aklaver.com (Adrian Klaver)
Date: Mon, 8 Aug 2016 07:41:26 -0700
Subject: [Borgbackup] 550GB rsync-tree will not dump
In-Reply-To: <70bfbbc7-e9d6-b2ed-6d4e-5ffd60ef0698@ATComputing.nl>
References: <70bfbbc7-e9d6-b2ed-6d4e-5ffd60ef0698@ATComputing.nl>
Message-ID: 

On 08/08/2016 07:11 AM, Gerrit A. Smit wrote:
> Hello,
>
> I love Borg, but ... I'm trying to dump 550GB of data contained in a
> directory tree made by rsnapshot, which means I have 4 hourly, 7 daily,
> 4 weekly and some monthly directories, with many files occurring just
> once but referenced many times by hard links to them. This means, for
> example, that 'du -s' on those directories takes quite some time.
>
> I tried this using lz4 or no compression, to no avail.
>
> Any ideas what's happening?
> What can I do to investigate further?

What is the error when you use no compression?

>
> Gerrit
>
> Local Exception.
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 1609, in main
>     exit_code = archiver.run(args)
>   File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 1546, in run
>     return args.func(args)
>   File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 81, in wrapper
>     return method(self, args, repository=repository, **kwargs)
>   File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 247, in do_create
>     create_inner(archive, cache)
>   File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 219, in create_inner
>     read_special=args.read_special, dry_run=dry_run)
>   File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
>     read_special=read_special, dry_run=dry_run)
>   File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
>     read_special=read_special, dry_run=dry_run)
>   File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process
>     read_special=read_special, dry_run=dry_run)
"/usr/local/lib/python3.4/site-packages/borg/archiver.py", line > 301, in _process > read_special=read_special, dry_run=dry_run) > File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line > 301, in _process > read_special=read_special, dry_run=dry_run) > File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line > 301, in _process > read_special=read_special, dry_run=dry_run) > File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line > 275, in _process > status = archive.process_file(path, st, cache, self.ignore_inode) > File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line > 638, in process_file > self.add_item(item) > File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line > 281, in add_item > self.write_checkpoint() > File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line > 285, in write_checkpoint > self.save(self.checkpoint_name) > File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line > 314, in save > self.cache.add_chunk(self.id, data, self.stats) > File "/usr/local/lib/python3.4/site-packages/borg/cache.py", line 363, > in add_chunk data = self.key.encrypt(data) > File "/usr/local/lib/python3.4/site-packages/borg/key.py", line 136, > in encrypt > data = self.compressor.compress(data) > File "borg/compress.pyx", line 186, in > borg.compress.Compressor.compress (borg/compress.c:3973) > File "borg/compress.pyx", line 92, in borg.compress.LZ4.compress > (borg/compress.c:2004) > Exception: lz4 compress failed > Platform: FreeBSD sanger 10.3-RELEASE-p2 FreeBSD 10.3-RELEASE-p2 #0: Wed > May 4 06:03:51 UTC 2016 > root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 amd64 > Borg: 1.0.6 Python: CPython 3.4.5 > PID: 29182 CWD: /usr/home/sysman > sys.argv: ['/usr/local/bin/borg', 'create', '--debug', > '--compression=lz4', '--exclude', '*/borg/rsnapshot*/?hourly.*', > '--progress', '--stats', '--verbose', > '::data_backup at _data_backup_.zfs_snapshot_borg', > '/data/backup/.zfs/snapshot/borg'] > SSH_ORIGINAL_COMMAND: None > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -- Adrian Klaver adrian.klaver at aklaver.com From tw at waldmann-edv.de Mon Aug 8 19:53:22 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 9 Aug 2016 01:53:22 +0200 Subject: [Borgbackup] 550GB rsync-tree will not dump In-Reply-To: <70bfbbc7-e9d6-b2ed-6d4e-5ffd60ef0698@ATComputing.nl> References: <70bfbbc7-e9d6-b2ed-6d4e-5ffd60ef0698@ATComputing.nl> Message-ID: Hi Gerrit, thanks for reporting this and congrats, it looks like you triggered the discovery of at least 3 issues. > directories with many files > occurring just once but referenced many times by hard links to them. The hardlinks do not matter (would happen in the same way if you had a lot of 0 byte files), but I assume you have a huge count of filesystem items due to this. Could you count them (files + directories), e.g. using find . | wc -l in the toplevel dir of the rsync backup dir? > I tried this using lz4 or no compression to no avail. As it was already asked by Adrian: we also need the error msg and traceback for the "no compression" case. > Exception: lz4 compress failed > Any ideas what's happening? Yes, I analyzed it: https://github.com/borgbackup/borg/issues/1453 The root cause for this is very likely that: https://github.com/borgbackup/borg/issues/1452 If you give us the uncompressed traceback, I guess we could verify that. 
Also, we noticed that: https://github.com/borgbackup/borg/issues/1451 We'll try to fix or relax some of the issues ASAP. BTW, if you transform a rsync-hardlink-method backup into a borg backup, instead of just backing up the whole thing into 1 repo, you could create 1 archive per rsync-snapshot. There is even an option to set the timestamp of the archive to some past date, if you like. This would likely already solve the issue you encountered, because the per-archive item (files/dirs) count will be much less then. If you could keep your rsync backup until 1.0.7 (rc2?), it would be cool if you could check by then if our fix works for you. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From gait at ATComputing.nl Tue Aug 9 04:06:45 2016 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Tue, 9 Aug 2016 10:06:45 +0200 Subject: [Borgbackup] 550GB rsync-tree will not dump In-Reply-To: References: <70bfbbc7-e9d6-b2ed-6d4e-5ffd60ef0698@ATComputing.nl> Message-ID: <7868412d-c5e7-11d8-3b17-767e22282018@ATComputing.nl> Op 08-08-16 om 16:41 schreef Adrian Klaver: > What is the error when you use no compression? When not using compression I get a Data integrity error and a unusable repository: 553.53 GB O 553.75 GB C 319.15 GB D 4585967 N data/backup/.zfs/snapshot/borg/rsnapshot/1daily.0/hosts_ssh/... Data integrity error Traceback (most recent call last): File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 1609, in main exit_code = archiver.run(args) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 1546, in run return args.func(args) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 81, in wrapper return method(self, args, repository=repository, **kwargs) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 247, in do_create create_inner(archive, cache) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 219, in create_inner read_special=args.read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process 
read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 301, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 275, in _process status = archive.process_file(path, st, cache, self.ignore_inode) File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line 638, in process_file self.add_item(item) File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line 281, in add_item self.write_checkpoint() File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line 285, in write_checkpoint self.save(self.checkpoint_name) File "/usr/local/lib/python3.4/site-packages/borg/archive.py", line 317, in save self.repository.commit() File "/usr/local/lib/python3.4/site-packages/borg/repository.py", line 199, in commit self.compact_segments(save_space=save_space) File "/usr/local/lib/python3.4/site-packages/borg/repository.py", line 287, in compact_segments for tag, key, offset, data in self.io.iter_objects(segment, include_data=True): File "/usr/local/lib/python3.4/site-packages/borg/repository.py", line 648, in iter_objects (TAG_PUT, TAG_DELETE, TAG_COMMIT)) File "/usr/local/lib/python3.4/site-packages/borg/repository.py", line 704, in _read segment, offset)) borg.helpers.IntegrityError: Invalid segment entry size [segment 48441, offset 4210392] Platform: FreeBSD sanger 10.3-RELEASE-p2 FreeBSD 10.3-RELEASE-p2 #0: Wed May 4 06:03:51 UTC 2016 root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 amd64 Borg: 1.0.6 Python: CPython 3.4.5 PID: 44736 CWD: /usr/home/sysman sys.argv: ['/usr/local/bin/borg', 'create', '--debug', '--exclude', '*/borg/rsnapshot*/?monthly.*', '--progress', '--stats', '--verbose', '::data_backup at _data_backup_.zfs_snapshot_borg', '/data/backup/.zfs/snapshot/borg'] SSH_ORIGINAL_COMMAND: None zfsborg_create :: list archive(s) using builtin fallback logging configuration Replaying segments 0% Replaying segments 5% Replaying segments 10% Replaying segments 15% Replaying segments 20% Replaying segments 25% Replaying segments 30% Replaying segments 35% Replaying segments 40% Replaying segments 45% Replaying segments 50% Replaying segments 55% Replaying segments 60% Replaying segments 65% Replaying segments 70% Replaying segments 75% Replaying segments 80% Replaying segments 85% Replaying segments 90% Replaying segments 95% Data integrity error Traceback (most recent call last): File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 1609, in main exit_code = archiver.run(args) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 1546, in run return args.func(args) File "/usr/local/lib/python3.4/site-packages/borg/archiver.py", line 75, in wrapper kwargs['manifest'], kwargs['key'] = Manifest.load(repository) File "/usr/local/lib/python3.4/site-packages/borg/helpers.py", line 106, in load cdata = repository.get(cls.MANIFEST_ID) File "/usr/local/lib/python3.4/site-packages/borg/repository.py", line 458, in get self.index = self.open_index(self.get_transaction_id()) File "/usr/local/lib/python3.4/site-packages/borg/repository.py", line 160, in get_transaction_id self.replay_segments(replay_from, segments_transaction_id) File 
"/usr/local/lib/python3.4/site-packages/borg/repository.py", line 322, in replay_segments self._update_index(segment, objects) File "/usr/local/lib/python3.4/site-packages/borg/repository.py", line 331, in _update_index for tag, key, offset in objects: File "/usr/local/lib/python3.4/site-packages/borg/repository.py", line 648, in iter_objects (TAG_PUT, TAG_DELETE, TAG_COMMIT)) File "/usr/local/lib/python3.4/site-packages/borg/repository.py", line 704, in _read segment, offset)) borg.helpers.IntegrityError: Invalid segment entry size [segment 48570, offset 316616] Platform: FreeBSD sanger 10.3-RELEASE-p2 FreeBSD 10.3-RELEASE-p2 #0: Wed May 4 06:03:51 UTC 2016 root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 amd64 Borg: 1.0.6 Python: CPython 3.4.5 PID: 58353 CWD: /usr/home/sysman sys.argv: ['/usr/local/bin/borg', 'list', '--debug', '::'] SSH_ORIGINAL_COMMAND: None From tw at waldmann-edv.de Tue Aug 9 09:59:24 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 9 Aug 2016 15:59:24 +0200 Subject: [Borgbackup] 550GB rsync-tree will not dump In-Reply-To: <7868412d-c5e7-11d8-3b17-767e22282018@ATComputing.nl> References: <70bfbbc7-e9d6-b2ed-6d4e-5ffd60ef0698@ATComputing.nl> <7868412d-c5e7-11d8-3b17-767e22282018@ATComputing.nl> Message-ID: >> What is the error when you use no compression? > 553.53 GB O 553.75 GB C 319.15 GB D 4585967 N OK, so 550GB data and 4.5M files. > File "/usr/local/lib/python3.4/site-packages/borg/repository.py", line > 704, in _read > segment, offset)) > borg.helpers.IntegrityError: Invalid segment entry size [segment 48441, > offset 4210392] It has written a huge object and now rejects to read it, see: https://github.com/borgbackup/borg/issues/1451 What we are wondering about: You don't have that many files, it is just 4.5 million files. Also they are not that big as the total size is just 550GB. Also, you said you have a lot of hardlinks (which borg stores as chunkless item that keeps a reference to a "master item" with the chunks list - but it will store ACLs and xattrs for the hardlink item separately). So why are your metadata items that big? Do you have a lot of ACLs or xattrs? See there: https://github.com/borgbackup/borg/issues/1452 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From gait at ATComputing.nl Wed Aug 10 04:50:05 2016 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 10 Aug 2016 10:50:05 +0200 Subject: [Borgbackup] 550GB rsync-tree will not dump In-Reply-To: References: Message-ID: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> [sorry for the possible repeat, previous message had wrong sender address] Hi everyone, > thanks for reporting this and congrats, it looks like you triggered the > discovery of at least 3 issues. Makes my day! (somehow) > Could you count them (files + directories), e.g. using find . | wc -l in > the toplevel dir of the rsync backup dir? # time sh -c 'find .|wc -l' 100479074 real 60m19.987s user 1m54.702s sys 8m29.685s > If you could keep your rsync backup until 1.0.7 (rc2?), it would be cool > if you could check by then if our fix works for you. Can't wait! Gerrit -- Kind regards, AT COMPUTING Gerrit A. Smit Beheer Technische Infrastructuur AT Computing Telefoon: +31 24 352 72 22 D? one-stop-Linux-shop Telefoon cursussecretariaat: +31 24 352 72 72 Fax: +31 24 352 72 92 Kerkenbos 12-38 TI at ATComputing.nl 6546 BE Nijmegen www.atcomputing.nl Nieuw bij AT Computing: onze Linux Reference Card nu ook als gratis app! 
From gait at ATComputing.nl Wed Aug 10 06:53:09 2016 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 10 Aug 2016 12:53:09 +0200 Subject: [Borgbackup] 550GB rsync-tree will not dump In-Reply-To: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> References: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> Message-ID: On 10-08-16 at 10:50, Gerrit A. Smit wrote: >> If you could keep your rsync backup until 1.0.7 (rc2?), it would be cool >> if you could check by then if our fix works for you. > > Can't wait! O, sorry, I *can* wait. Gerrit From tw at waldmann-edv.de Wed Aug 10 08:03:19 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 10 Aug 2016 14:03:19 +0200 Subject: [Borgbackup] 550GB rsync-tree will not dump In-Reply-To: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> References: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> Message-ID: <52031d6e-059d-8a58-e470-8d7ae4b3d266@waldmann-edv.de> > # time sh -c 'find .|wc -l' > 100479074 Oh, wow. 100 million files. O.O There is still the open question why the file metadata is so big. Do you have a lot of big ACLs or xattrs (extended attributes) in there? -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From gait at ATComputing.nl Wed Aug 10 10:02:43 2016 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 10 Aug 2016 16:02:43 +0200 Subject: [Borgbackup] 550GB rsync-tree will not dump In-Reply-To: <52031d6e-059d-8a58-e470-8d7ae4b3d266@waldmann-edv.de> References: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> <52031d6e-059d-8a58-e470-8d7ae4b3d266@waldmann-edv.de> Message-ID: Hi Thomas, On 10-08-16 at 14:03, Thomas Waldmann wrote: >> # time sh -c 'find .|wc -l' >> 100479074 > > Oh, wow. 100 million files. O.O I just did what you told me to do, so: this just means so many leaves. Number of inodes is about 16,036,802, with many of them shared by 18 leaves (4 x hourly, 7 x daily, 4 x weekly, 3 x monthly). > There is still the open question why the file metadata is so big. > Do you have a lot of big ACLs or xattrs (extended attributes) in there? No. xattr does occur, but that's with files made at install time and on just a few hosts. I think rsnapshot will stay in use, as I tend to use Borg for off-site dumps. Possibly with rsync.net, as they have a special offer for Borg users. Gerrit From tw at waldmann-edv.de Fri Aug 12 23:55:23 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 13 Aug 2016 05:55:23 +0200 Subject: [Borgbackup] borgbackup 1.0.7rc2 Message-ID: <744affb1-f019-ee8a-9a6e-a048657e314d@waldmann-edv.de> Released borgbackup 1.0.7rc2 right now. https://github.com/borgbackup/borg/releases/tag/1.0.7rc2 https://github.com/borgbackup/borg/blob/1.0.7rc2/docs/changes.rst#version-107rc2-2016-08-13 It would be helpful if you test this in practice, so anything not discovered by unit tests can be fixed. The final 1.0.7 release is scheduled for 2016-08-19, so be quick. The following tests would be useful: - tests on misc. (network or non-network) filesystems: smbfs, nfs, sshfs, ... - locking - it hopefully won't deadlock that easily any more. - xattrs - esp. race conditions on live filesystems - running borg check -v - backing up a huge number of files into 1 archive, like 10 .. 100 million - env var overrides - running against borg servers with <= 1.0.6 or 1.0.7rc2 - running with lz4 compression - running with different backup sets on same machines - working with the FUSE mount, Linux and OS X - working with output connected to a (breaking) pipe, like borg ... | less (and then pressing q) - trying the borg versions management practically - trying borg init --append-only -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Fri Aug 12 23:58:11 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 13 Aug 2016 05:58:11 +0200 Subject: [Borgbackup] 550GB rsync-tree will not dump In-Reply-To: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> References: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> Message-ID: <3d2c365f-1f7d-9949-9a11-3a2d5c903dfa@waldmann-edv.de> >> If you could keep your rsync backup until 1.0.7 (rc2?), it would be cool >> if you could check by then if our fix works for you. > > Can't wait! No need to wait any longer, 1.0.7rc2 is out. And if your metadata isn't too big, it may even be capable of backing up your 100 million files. Can you test? -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From gait at atcomputing.nl Mon Aug 15 15:41:45 2016 From: gait at atcomputing.nl (Gerrit A. Smit) Date: Mon, 15 Aug 2016 21:41:45 +0200 Subject: [Borgbackup] 643GB rsync-tree will dump In-Reply-To: <3d2c365f-1f7d-9949-9a11-3a2d5c903dfa@waldmann-edv.de> References: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> <3d2c365f-1f7d-9949-9a11-3a2d5c903dfa@waldmann-edv.de> Message-ID: <9ac5f6fc-1518-4cc0-faf6-6dd8ae48853e@atcomputing.nl> (Mind The Subject) Thomas Waldmann wrote on 13-08-16 at 05:58: >>> If you could keep your rsync backup until 1.0.7 (rc2?), it would be cool >>> if you could check by then if our fix works for you. >> Can't wait! > No need to wait any longer, 1.0.7rc2 is out. Sorry, I used another version ... > > And if your metadata isn't too big, it may even be capable of backing up > your 100 million files. > > Can you test? > All OK!!! Thanks! ------------------------------------------------------------------------------ Archive name: data_backup at _data_backup_.zfs_snapshot_borg Archive fingerprint: f221548ef6df7d800ee42eb330043c5efd8483cb8b7a58ae2df9c32738fc0a03 Time (start): Mon, 2016-08-15 12:49:46 Time (end): Mon, 2016-08-15 21:30:21 Duration: 8 hours 40 minutes 34.45 seconds Number of files: 4933652 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 643.45 GB 455.47 GB 258.68 GB All archives: 643.45 GB 455.47 GB 258.68 GB Unique chunks Total chunks Chunk index: 2663000 5042489 ------------------------------------------------------------------------------ zfsborg_create :: list archive(s) using builtin fallback logging configuration 29 self tests completed in 0.25 seconds data_backup at _data_backup_.zfs_snapshot_borg Mon, 2016-08-15 12:49:46 [f221548ef6df7d800ee42eb330043c5efd8483cb8b7a58ae2df9c32738fc0a03] $ borg --version borg 1.1.dev326+ng6e9debb Gerrit From gait at atcomputing.nl Mon Aug 15 17:03:53 2016 From: gait at atcomputing.nl (Gerrit A.
Smit) Date: Mon, 15 Aug 2016 23:03:53 +0200 Subject: [Borgbackup] 643GB rsync-tree will dump In-Reply-To: <9ac5f6fc-1518-4cc0-faf6-6dd8ae48853e@atcomputing.nl> References: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> <3d2c365f-1f7d-9949-9a11-3a2d5c903dfa@waldmann-edv.de> <9ac5f6fc-1518-4cc0-faf6-6dd8ae48853e@atcomputing.nl> Message-ID: > > ------------------------------------------------------------------------------ > > Archive name: data_backup at _data_backup_.zfs_snapshot_borg > Archive fingerprint: > f221548ef6df7d800ee42eb330043c5efd8483cb8b7a58ae2df9c32738fc0a03 > Time (start): Mon, 2016-08-15 12:49:46 > Time (end): Mon, 2016-08-15 21:30:21 > Duration: 8 hours 40 minutes 34.45 seconds > Number of files: 4933652 > ------------------------------------------------------------------------------ > > Original size Compressed size > Deduplicated size > This archive: 643.45 GB 455.47 GB > 258.68 GB > All archives: 643.45 GB 455.47 GB > 258.68 GB > Unique chunks Total chunks > Chunk index: 2663000 5042489 > ------------------------------------------------------------------------------ > Is there a way to get this info without creating a new archive? Gerrit From adrian.klaver at aklaver.com Mon Aug 15 18:39:23 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Mon, 15 Aug 2016 15:39:23 -0700 Subject: [Borgbackup] 643GB rsync-tree will dump In-Reply-To: References: <81acac55-06c4-16ae-4ef1-2d027ab11dc6@ATComputing.nl> <3d2c365f-1f7d-9949-9a11-3a2d5c903dfa@waldmann-edv.de> <9ac5f6fc-1518-4cc0-faf6-6dd8ae48853e@atcomputing.nl> Message-ID: <411e7739-e9bc-bd7f-330a-04ebe7dbb275@aklaver.com> On 08/15/2016 02:03 PM, Gerrit A. Smit wrote: >> >> ------------------------------------------------------------------------------ >> >> Archive name: data_backup at _data_backup_.zfs_snapshot_borg >> Archive fingerprint: >> f221548ef6df7d800ee42eb330043c5efd8483cb8b7a58ae2df9c32738fc0a03 >> Time (start): Mon, 2016-08-15 12:49:46 >> Time (end): Mon, 2016-08-15 21:30:21 >> Duration: 8 hours 40 minutes 34.45 seconds >> Number of files: 4933652 >> ------------------------------------------------------------------------------ >> >> Original size Compressed size >> Deduplicated size >> This archive: 643.45 GB 455.47 GB >> 258.68 GB >> All archives: 643.45 GB 455.47 GB >> 258.68 GB >> Unique chunks Total chunks >> Chunk index: 2663000 5042489 >> ------------------------------------------------------------------------------ >> > Is there a way to get this info without creating a new archive?
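The borg info command prints these statistics for an existing archive, without creating a new one. For example: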
aklaver at arkansas:~$ borg_new info -v /mnt/backup/arkansas_borg/cascade_cuts/::production_081516_1205 Name: production_081516_1205 Fingerprint: 404f38998b266270f34daa4e6686d1e5c9c2f7421e6d2709c09563ae3d4b2948 Hostname: mayhem3 Username: aklaver Time (start): Mon, 2016-08-15 12:05:11 Time (end): Mon, 2016-08-15 12:05:16 Command line: borg_new create --stats -v --compression lzma --remote-path /home/aklaver/bin/borg_new arkansas:/mnt/backup/arkansas_borg/********::production_081516_1205 /var/bak/******* Number of files: 626 Original size Compressed size Deduplicated size This archive: 453.26 MB 200.66 MB 462.75 kB All archives: 33.42 GB 25.19 GB 4.47 GB Unique chunks Total chunks Chunk index: 16391 164017 > > Gerrit > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -- Adrian Klaver adrian.klaver at aklaver.com From mfseeker at gmail.com Wed Aug 17 14:09:55 2016 From: mfseeker at gmail.com (Stan Armstrong) Date: Wed, 17 Aug 2016 15:09:55 -0300 Subject: [Borgbackup] Failsafe? Message-ID: <57B4A873.3070608@gmail.com> Back in January I had established a backup system consisting of a single repository and several archives that served to backup my startup SSD and my main data HD, both locally and to my wife's backup drive. I used several simple scripts and crontab to manage the timings and persistence for various of these backups. The scripts also run tests and lists and keep the results in a borg log directory. I tested the setup for a few months and was quite happy with it. Then two things happened. First my main data drive began to fail and had to be sent to data drive heaven. I was able to manually copy some of my data from it to another drive before I retired the failing drive. Then I found that borg had stopped making backups some months ago due to lock files appearing. I had neglected to check from time to time in recent months to be sure the backups were still being made. I had hoped that once my backup system was in place, I could forget about it. Live and learn. Does anyone have a (semi)automatic way of checking for lock files? I plan on writing a script that would periodically check for lock files and, if no borg was running, would use break-lock. Have any of you done this already? There must be a one-liner that would perform the required lock testing on all of my archives. I'm lazy enough not to want to struggle to reinvent the wheel. PS In trying to locate the most recent copies of some of the files and directories that perished with the failed drive, I used "borg mount" for the first time today. What a great time saver. From public at enkore.de Wed Aug 17 15:10:44 2016 From: public at enkore.de (public at enkore.de) Date: Wed, 17 Aug 2016 21:10:44 +0200 Subject: [Borgbackup] Failsafe? In-Reply-To: <57B4A873.3070608@gmail.com> References: <57B4A873.3070608@gmail.com> Message-ID: <1e4373a5-1838-4d1f-1cc9-849551297f16@enkore.de> In regards to backup monitoring I would recommend to send mails after every succeeded *and* failed backup. If they stop coming you know something's up (ditto if they report failures). In regards to locking, there have been some improvements in most releases of the 1.0 series and there are efforts to improve it for 1.1 (eg. https://github.com/borgbackup/borg/pull/1246 ). Scripting it is possible, making that safe may not be so simple, even if it is assured externally (cron?) that tasks won't overlap. Ie. 
we're aware that this is a pain point and try to do better. Cheers, Marian On 17/08/16 20:09, Stan Armstrong wrote: > Back in January I had established a backup system consisting of a single > repository and several archives that served to backup my startup SSD and > my main data HD, both locally and to my wife's backup drive. I used > several simple scripts and crontab to manage the timings and persistence > for various of these backups. The scripts also run tests and lists and > keep the results in a borg log directory. I tested the setup for a few > months and was quite happy with it. > > Then two things happened. First my main data drive began to fail and had > to be sent to data drive heaven. I was able to manually copy some of my > data from it to another drive before I retired the failing drive. Then I > found that borg had stopped making backups some months ago due to lock > files appearing. I had neglected to check from time to time in recent > months to be sure the backups were still being made. > > I had hoped that once my backup system was in place, I could forget > about it. Live and learn. Does anyone have a (semi)automatic way of > checking for lock files? I plan on writing a script that would > periodically check for lock files and, if no borg was running, would use > break-lock. Have any of you done this already? There must be a one-liner > that would perform the required lock testing on all of my archives. I'm > lazy enough not to want to struggle to reinvent the wheel. > > PS In trying to locate the most recent copies of some of the files and > directories that perished with the failed drive, I used "borg mount" for > the first time today. What a great time saver. > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From tw at waldmann-edv.de Thu Aug 18 19:33:24 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 19 Aug 2016 01:33:24 +0200 Subject: [Borgbackup] security fix: borgbackup 1.0.7 binaries released Message-ID: https://github.com/borgbackup/borg/releases/tag/1.0.7bin Critical security fix and some bug fixes, please upgrade ASAP. More details: see URL above. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Fri Aug 19 17:56:23 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 19 Aug 2016 23:56:23 +0200 Subject: [Borgbackup] security fix: borgbackup 1.0.7 released Message-ID: <5e7aa0f6-2b54-4dcd-f711-7a9a478dccf2@waldmann-edv.de> https://github.com/borgbackup/borg/releases/tag/1.0.7 Critical security fix and some bug fixes, please upgrade ASAP. More details: see URL above. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From hpj at urpla.net Sat Aug 20 06:40:07 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Sat, 20 Aug 2016 12:40:07 +0200 Subject: [Borgbackup] Verify integrity Message-ID: <5578283.kgNl5c9pCQ@xrated> Hi assimilated companions, in an attempt to verify the integrity of all my backups with 1.0.6, I did: for a in $(borg list /backup/borg | cut -d' ' -f1 | sort -n); do echo $a; borg extract /backup/borg::$a --dry-run; done IOW, I forgot to supply -v on extraction (again). Can I assume, that borg would have generated some error messages, if some backups were broken? 
Thanks, Pete From public at enkore.de Sat Aug 20 07:26:53 2016 From: public at enkore.de (Marian Beermann) Date: Sat, 20 Aug 2016 13:26:53 +0200 Subject: [Borgbackup] Verify integrity In-Reply-To: <5578283.kgNl5c9pCQ@xrated> References: <5578283.kgNl5c9pCQ@xrated> Message-ID: Warnings and errors are always printed unless you use --error (suppressing warnings) or --critical (suppressing errors). Cheers, Marian http://borgbackup.readthedocs.io/en/stable/usage.html#type-of-log-output On 20.08.2016 12:40, Hans-Peter Jansen wrote: > Hi assimilated companions, > > in an attempt to verify the integrity of all my backups with 1.0.6, I did: > > for a in $(borg list /backup/borg | cut -d' ' -f1 | sort -n); do echo $a; borg > extract /backup/borg::$a --dry-run; done > > IOW, I forgot to supply -v on extraction (again). > > Can I assume, that borg would have generated some error messages, if some > backups were broken? > > Thanks, > Pete > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From wtraylor at areyouthinking.org Thu Aug 25 06:16:32 2016 From: wtraylor at areyouthinking.org (Walker Traylor) Date: Thu, 25 Aug 2016 17:16:32 +0700 Subject: [Borgbackup] Determining which keys I am using Message-ID: Hello borg devs, A while ago, I created a borg repo with an existing keyfile in .config/borg and a passphrase set in my BORG_PASSPHRASE environment variable. I am surprised now to find three keyfiles in my .config/borg directory which must have been created while testing different initializations and specifying keyfile encryption. I am trying to figure out which one I am using so I can back it up and delete the others. I removed them all and one by one made extractions using each key and found that they all work, also without any key at all. Obviously this means the keys aren't being used and only the passphrase is being used. I will need to reinitialize the repo again and ensure it is using the key. But first I am trying to understand this behavior. When does borg use the existing keyfile in the directory, and when does it create another? How can I verify a repository after I create it, to be sure what form of encryption it is using? I am using borg 1.0.3 (though it was initialized on an earlier version) on both ends. My client is Mac 10.11.16. Thank you, Walker From tw at waldmann-edv.de Thu Aug 25 06:50:50 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 25 Aug 2016 12:50:50 +0200 Subject: [Borgbackup] Determining which keys I am using In-Reply-To: References: Message-ID: Hi Walker, it would be good if you could reproduce the problem with the current borg release 1.0.7. Also check that you look at the right locations in the file system, e.g. if borg runs as root, the key(s) will be in ~root/.config/borg/keys/ (not in your user's home dir). Did you ever use borg < 1.0 or attic in -e passphrase mode? If so, did you use borg migrate-to-repokey when switching to borg >= 1.0? Also, please make sure the borg code is really the version you think it is. In the same way as you usually invoke borg create, please invoke borg --version.
Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Thu Aug 25 07:13:59 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 25 Aug 2016 13:13:59 +0200 Subject: [Borgbackup] Determining which keys I am using In-Reply-To: References: Message-ID: Also, if the encryption is done in repokey mode, look into repo/config to find your passphrase-protected key. -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wtraylor at areyouthinking.org Thu Aug 25 07:17:21 2016 From: wtraylor at areyouthinking.org (Walker Traylor) Date: Thu, 25 Aug 2016 18:17:21 +0700 Subject: [Borgbackup] Determining which keys I am using In-Reply-To: References: Message-ID: I found that and noticed it didn't match any of my client keys, then realized it was passphrase-encrypted. If I decrypt that key with the passphrase it should match the local key, right? What command can I use to decrypt that repo key? Thanks, Walker > On Aug 25, 2016, at 6:13 PM, Thomas Waldmann wrote: > > Also, if the encryption is done in repokey mode, look into repo/config to find your passphrase-protected key. > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity._______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -------------- next part -------------- An HTML attachment was scrubbed... URL: From wtraylor at areyouthinking.org Thu Aug 25 07:28:22 2016 From: wtraylor at areyouthinking.org (Walker Traylor) Date: Thu, 25 Aug 2016 18:28:22 +0700 Subject: [Borgbackup] Determining which keys I am using In-Reply-To: References: Message-ID: <48B4CFBF-3971-41E3-8FAD-61603036ECD9@areyouthinking.org> > On Aug 25, 2016, at 5:50 PM, Thomas Waldmann wrote: > > Hi Walker, > > it would be good if you could reproduce the problem with the current > borg release 1.0.7. I will try to do this when I can make time. For now here is your other info: > > Also check that you look at the right locations in the file system, e.g. > if borg runs as root, the key(s) will be in ~root/.config/borg/keys/ > (not in your user's home dir). Good idea, but I am sure it runs as user wtraylor. > > Did you ever use borg < 1.0 or attic in -e passphrase mode? > Yes, initially. I deleted everything locally and in the remote repo. > If so, did you use borg migrate-to-repokey when switching to borg >= 1.0? > No. Is this documented somewhere on https://borgbackup.readthedocs.io? > Also, please make sure the borg code is really the version you think it > is. In the same way as you usually invoke borg create, please invoke > borg --version. > I normally use the borg wrapper "borgmatic." To be sure I found this in lsof while borgmatic is running: /opt/homebrew-cask/Caskroom/borgbackup/1.0.3/borg-darwin64 wtraylor at macbook$ /opt/homebrew-cask/Caskroom/borgbackup/1.0.3/borg-darwin64 --version borg-darwin64 1.0.3 I also noticed this behavior before I upgraded the local borg binary to a 1.0 release and began using the 1.0.x server binary. I initialized the repo using a 0.9 release, perhaps 0.96. I realize this makes it hard to debug and if this isn't enough information I'll try to reproduce on 1.0.7. It was months ago that I initialized the repo with the pre 1.0 binary.
I do remember clearing everything out on the server (including all dotfiles) and client when I decided to change from passphrase to repokey and initialized several times using keyfile. I assumed it was using the same key and didn't check until later to discover it was making new keys (appending .1, .2, etc. to the key). Walker > Cheers, > > Thomas > > -- > > GPG ID: 9F88FB52FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From eric at in3x.io Fri Aug 26 17:26:29 2016 From: eric at in3x.io (Eric S. Johansson) Date: Fri, 26 Aug 2016 17:26:29 -0400 Subject: [Borgbackup] backup speed Message-ID: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io> I'm just getting started with Borg and I've encountered something I am puzzled by. I'm backing up about 2 1/2 TB of data across the net to some cloud-based storage. It's borg client to Borg server (v1.0.7) and the initial part of the transfer was fast (12 Mb per second) and now it's running slow (2 Mb per second). I believe the initial transfer never completed and I've tried a couple times to finish the complete transfer but the backup has run for days transferring one iso image. The repository seems to check OK [1]; what should I be looking at to debug this problem? [1] eric at schist:~$ borg check -v esj at 192.168.73.232:backups/schist Remote: Starting repository check Remote: Completed repository check, no problems found. Starting archive consistency check... Analyzing archive 2016-08-26.checkpoint (11/11) Analyzing archive 2016-08-23.checkpoint (10/11) Analyzing archive 2016-08-22.checkpoint (9/11) Analyzing archive 2016-08-21.checkpoint (8/11) Analyzing archive 2016-08-20.checkpoint (7/11) Analyzing archive 2016-08-18.checkpoint (6/11) Analyzing archive 2016-05-07 (5/11) Analyzing archive 2016-04-13 (4/11) Analyzing archive 2016-04-05 (3/11) Analyzing archive 2016-03-31 (2/11) Analyzing archive 2016-03-27.checkpoint (1/11) Archive consistency check complete, no problems found. From tw at waldmann-edv.de Fri Aug 26 17:55:57 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 26 Aug 2016 23:55:57 +0200 Subject: [Borgbackup] backup speed In-Reply-To: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io> References: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io> Message-ID: <412f5a19-0784-f8ce-30f5-1c3cb4db5a58@waldmann-edv.de> > I'm backing up about 2 1/2 TB of data across the net to some cloud-based > storage. rsync.net or are there others also? How many files (filesystem objects) are we talking about? > It's borg client to Borg server (v1.0.7) and the initial part of > the transfer was fast (12 Mb per second) and now it's running slow (2 > Mb per second). Hard to say if that indicates a problem. Some infos / ideas: - a lot of small files naturally give lower throughput than large files, so throughput varies. - compression might not work for some files, work normally for others and hilariously for sparse files (all those 0 bytes compress really well). - if you start from 0, borg hashindex is small, hash operations are very fast. with growing amounts of chunks, the hash table grows bigger, RAM consumption rises (make sure you have enough RAM - if you run out of RAM and it starts to swap, it gets really slow). - if the index is big, starting a transaction takes more time than for a small index - by default, it does checkpoints every 5 minutes.
if your connection is stable, you can use a longer interval - I have the suspicion that sometimes the hashtable performance gets slow (depends on the data fed into it), https://github.com/borgbackup/borg/issues/536 - a developer is working on a better hashtable implementation and also related performance tests / statistics. > I believe the initial transfer never completed You can use -v --show-rc to log the return code. > and I've tried a couple > times to finish the complete transfer but the backup has run for days > transferring one iso image. How do you know it is that one iso image all the time? > eric at schist:~$ borg check -v esj at 192.168.73.232:backups/schist > Remote: Starting repository check > Remote: Completed repository check, no problems found. > Starting archive consistency check... > Analyzing archive 2016-08-26.checkpoint (11/11) > Analyzing archive 2016-08-23.checkpoint (10/11) > Analyzing archive 2016-08-22.checkpoint (9/11) > Analyzing archive 2016-08-21.checkpoint (8/11) > Analyzing archive 2016-08-20.checkpoint (7/11) > Analyzing archive 2016-08-18.checkpoint (6/11) > Analyzing archive 2016-05-07 (5/11) > Analyzing archive 2016-04-13 (4/11) > Analyzing archive 2016-04-05 (3/11) > Analyzing archive 2016-03-31 (2/11) > Analyzing archive 2016-03-27.checkpoint (1/11) > Archive consistency check complete, no problems found. After(!) you have finished your backup(s) successfully, you can delete the checkpoints. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Fri Aug 26 18:11:43 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 27 Aug 2016 00:11:43 +0200 Subject: [Borgbackup] backup speed In-Reply-To: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io> References: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io> Message-ID: <4abdd857-17af-4f61-c487-01bb54d589d5@waldmann-edv.de> ... and if it is via the internet to some provider: - internet throughput and latency might vary - provider performance might vary -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From deker at deker.co Sat Aug 27 12:09:40 2016 From: deker at deker.co (Rob "Deker" Dekelbaum) Date: Sat, 27 Aug 2016 12:09:40 -0400 Subject: [Borgbackup] backup speed In-Reply-To: <4abdd857-17af-4f61-c487-01bb54d589d5@waldmann-edv.de> References: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io> <4abdd857-17af-4f61-c487-01bb54d589d5@waldmann-edv.de> Message-ID: <2f370ed6-00de-4973-0e0c-e9fcad187146@deker.co> ...and some providers might throttle your connection after an initial burst of speed (I'm looking at you Comcast....) On 08/26/2016 06:11 PM, Thomas Waldmann wrote: > ... and if it is via the internet to some provider: > > - internet throughput and latency might vary > - provider performance might vary > > From tw at waldmann-edv.de Sat Aug 27 22:18:28 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 28 Aug 2016 04:18:28 +0200 Subject: [Borgbackup] borgbackup beta 1.1.0b1 released Message-ID: <961799c1-ba40-52f5-0bbf-84e752acce7a@waldmann-edv.de> https://github.com/borgbackup/borg/releases/tag/1.1.0b1 More details: see URL above. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From eric at in3x.io Tue Aug 30 12:31:20 2016 From: eric at in3x.io (Eric S. 
Johansson) Date: Tue, 30 Aug 2016 12:31:20 -0400 Subject: [Borgbackup] backup speed In-Reply-To: <412f5a19-0784-f8ce-30f5-1c3cb4db5a58@waldmann-edv.de> References: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io> <412f5a19-0784-f8ce-30f5-1c3cb4db5a58@waldmann-edv.de> Message-ID: <58db8b55-7b83-f60b-f242-6157705f906f@in3x.io> On 8/26/2016 5:55 PM, Thomas Waldmann wrote: >> I'm backing up about 2 1/2 TB of data across the net to some cloud-based >> storage. > rsync.net or are there others also? It's a private cloud. A friend of mine has a bunch of storage and we are talking about cross-storing each other's off-site backups. Any excuse to buy a lot more discs. :-) > How many files (filesystem objects) are we talking about? eric at schist:~$ sudo find /np1/pond/home/| wc -l 207828 eric at schist:~$ sudo find /np1/pond/git/| wc -l 3698 eric at schist:~$ sudo find /np1/pond/music/| wc -l 586 >> It's borg client to Borg server (v1.0.7) and the initial part of >> the transfer was fast (12 Mb per second) and now it's running slow (2 >> Mb per second). > Hard to say if that indicates a problem. Some infos / ideas: > > - I have the suspicion that sometimes the hashtable performance gets > slow (depends on the data fed into it), > https://github.com/borgbackup/borg/issues/536 - a developer is working > on a better hashtable implementation and also related performance tests > / statistics. This is one of my suspicions as well. The other is that there is some sort of semaphore/resource block. I'm seeing low CPU and low network utilization. 103.65 GB O 103.23 GB C 98.80 GB D 440 N np1/pond/home/eric/2015-09-13-23-img/sda3.ext4-ptcl-img.gz.bm I started this backup last Friday. Yes, the connection is stable; it's over VPN plus SSH. The same connection running rsync was running in the 130 kB/sec range versus what appears to be in the 20 kB/sec range for borg. > >> I believe the initial transfer never completed > You can use -v --show-rc to log the return code. > >> and I've tried a couple >> times to finish the complete transfer but the backup has run for days >> transferring one iso image. > How do you know it is that one iso image all the time? I'm running borg with this command line in a tmux window: borg create -v --stats --progress -C zlib,6 'me at 192.168.1.2:backups/schist::{now:%Y-%m-%d}' /np1/pond/home /np1/pond/git /np1/pond/music --exclude '*.pyc' and I check in periodically and see lines like this one, indicating that borg is working on this one file for hours at a time. 103.65 GB O 103.23 GB C 98.80 GB D 440 N np1/pond/home/eric/2015-09-13-23-img/sda3.ext4-ptcl-img.gz.bm From public at enkore.de Tue Aug 30 12:48:06 2016 From: public at enkore.de (Marian Beermann) Date: Tue, 30 Aug 2016 18:48:06 +0200 Subject: [Borgbackup] backup speed In-Reply-To: <58db8b55-7b83-f60b-f242-6157705f906f@in3x.io> References: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io> <412f5a19-0784-f8ce-30f5-1c3cb4db5a58@waldmann-edv.de> <58db8b55-7b83-f60b-f242-6157705f906f@in3x.io> Message-ID: <4ac383dc-ea9c-b42c-ed1e-cf44963d6cff@enkore.de> Hi Eric On 30.08.2016 18:31, Eric S. Johansson wrote: >>> It's borg client to Borg server (v1.0.7) and the initial part of >>> the transfer was fast (12 Mb per second) and now it's running slow (2 >>> Mb per second). >> Hard to say if that indicates a problem.
Some infos / ideas: >> >> - I have the suspicion that sometimes the hashtable performance gets >> slow (depends on the data fed into it), >> https://github.com/borgbackup/borg/issues/536 - a developer is working >> on a better hashtable implementation and also related performance tests >> / statistics. > > This is one of my suspicions as well. The other is that there is some sort > of semaphore/resource block. I'm seeing low CPU and low network utilization. If the hash-table performance breaks down it's due to tombstoning (afaik), so I'd expect high CPU load with little throughput, not low CPU and little throughput. >> >>> I believe the initial transfer never completed >> You can use -v --show-rc to log the return code. >> >>> and I've tried a couple >>> times to finish the complete transfer but the backup has run for days >>> transferring one iso image. >> How do you know it is that one iso image all the time? > > I'm running borg with this command line in a tmux window: > > borg create -v --stats --progress -C zlib,6 > 'me at 192.168.1.2:backups/schist::{now:%Y-%m-%d}' /np1/pond/home > /np1/pond/git /np1/pond/music --exclude '*.pyc' > > and I check in periodically and see lines like this one, indicating that > borg is working on this one file for hours at a time. > > 103.65 GB O 103.23 GB C 98.80 GB D 440 N > np1/pond/home/eric/2015-09-13-23-img/sda3.ext4-ptcl-img.gz.bm > What's the latency between the hosts you are using? (Preferably measured inside the VPN.) The current networking code looks relatively latency-sensitive to me. OTOH 100 GB in 440 files means big files, so many large chunks. In that case the latency should not matter as much. Cheers, Marian From eric at in3x.io Tue Aug 30 13:38:06 2016 From: eric at in3x.io (Eric S. Johansson) Date: Tue, 30 Aug 2016 13:38:06 -0400 Subject: [Borgbackup] backup speed In-Reply-To: <4ac383dc-ea9c-b42c-ed1e-cf44963d6cff@enkore.de> References: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io> <412f5a19-0784-f8ce-30f5-1c3cb4db5a58@waldmann-edv.de> <58db8b55-7b83-f60b-f242-6157705f906f@in3x.io> <4ac383dc-ea9c-b42c-ed1e-cf44963d6cff@enkore.de> Message-ID: <78313c52-49a0-09ee-5ccc-d0ba7cced346@in3x.io> On 8/30/2016 12:48 PM, Marian Beermann wrote: > What's the latency between the hosts you are using? (Preferably measured > inside the VPN.) ping times are between 145 and 170 ms > > The current networking code looks relatively latency-sensitive to > me. OTOH 100 GB in 440 files means big files, so many large chunks. > In that case the latency should not matter as much. Well, it is more like 1 TB that I'm trying to back up: eric at schist:~$ sudo du -sh /np1/pond/home/ /np1/pond/git/ /np1/pond/music/ 640G /np1/pond/home/ 28M /np1/pond/git/ 3.6G /np1/pond/music/ From gait at ATComputing.nl Wed Aug 31 05:20:38 2016 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 31 Aug 2016 11:20:38 +0200 Subject: [Borgbackup] Mounting a big repository is very fast Message-ID: <2687bc79-66ab-bf42-4d4e-b3d1644564b5@ATComputing.nl> Hello, Mounting a very big archive takes some time but it works as expected. Using this # BORG_REPO=/blauw/borg/repos/sanger borg mount --verbose :: /data/borg/fuse to mount a very big repository is very fast, but then this happens: zfsborg_ksh # cd /data/borg/fuse # Now I want to list the archives: zfsborg_ksh # ls ls: .: Operation timed out zfsborg_ksh # man ls This last one hangs, probably because man wants to access the current directory which lives in the fuse-fs. Meanwhile, borg is still busy preparing the fuse-fs.
Any thoughts? Gerrit From gait at ATComputing.nl Wed Aug 31 06:20:20 2016 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 31 Aug 2016 12:20:20 +0200 Subject: [Borgbackup] Mounting a big repository is very fast In-Reply-To: <2687bc79-66ab-bf42-4d4e-b3d1644564b5@ATComputing.nl> References: <2687bc79-66ab-bf42-4d4e-b3d1644564b5@ATComputing.nl> Message-ID: <4403ca2f-221c-aba0-e27b-0e311a94f1d0@ATComputing.nl> On 31-08-16 at 11:20, Gerrit A. Smit wrote: > Meanwhile, borg is still busy preparing the fuse-fs. Wouldn't it be nice to get a list of archives before borg starts digging deeper? Gerrit From gait at ATComputing.nl Wed Aug 31 07:48:54 2016 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 31 Aug 2016 13:48:54 +0200 Subject: [Borgbackup] Mounting a big repository is very fast In-Reply-To: <2687bc79-66ab-bf42-4d4e-b3d1644564b5@ATComputing.nl> References: <2687bc79-66ab-bf42-4d4e-b3d1644564b5@ATComputing.nl> Message-ID: <8d6f4446-abb0-5799-8f87-f3c7c864b797@ATComputing.nl> Hmmm ... after waiting some time, I get # ls -l total 0 ls: fts_read: Device not configured # pwd /data/borg/fuse/data_backup at _data_backup_.zfs_snapshot_borg-2016-08-29T11:55:51/data/backup/.zfs/snapshot/borg-2016-08-29T11:55:51 OK, and I see the borg fuse process is dead. # borg --version borg 1.0.7 # uname -a FreeBSD sanger 10.3-RELEASE FreeBSD 10.3-RELEASE #0 r304529: Sat Aug 20 16:57:34 CEST 2016 Gerrit From public at enkore.de Wed Aug 31 07:51:56 2016 From: public at enkore.de (Marian Beermann) Date: Wed, 31 Aug 2016 13:51:56 +0200 Subject: [Borgbackup] Mounting a big repository is very fast In-Reply-To: <2687bc79-66ab-bf42-4d4e-b3d1644564b5@ATComputing.nl> References: <2687bc79-66ab-bf42-4d4e-b3d1644564b5@ATComputing.nl> Message-ID: On 31.08.2016 11:20, Gerrit A. Smit wrote: > Hello, > > > Mounting a very big archive takes some time but it works as expected. > > Using this > > # BORG_REPO=/blauw/borg/repos/sanger borg mount --verbose :: > /data/borg/fuse > > to mount a very big repository is very fast, but then this happens: > > zfsborg_ksh # cd /data/borg/fuse > # Now I want to list the archives: > zfsborg_ksh # ls > ls: .: Operation timed out > zfsborg_ksh # man ls > > This last one hangs, probably because man wants to access the current > directory > which lives in the fuse-fs. > > Meanwhile, borg is still busy preparing the fuse-fs. > > > Any thoughts? Operating system? On Linux with GNU coreutils that ls is instantaneous. Perhaps the ls used does a bit more poking around with the directories, causing Borg to fetch the metadata for each archive. In case you run into any trouble, try the --foreground option to borg mount to see the log output. Cheers, Marian From public at enkore.de Wed Aug 31 07:53:14 2016 From: public at enkore.de (Marian Beermann) Date: Wed, 31 Aug 2016 13:53:14 +0200 Subject: [Borgbackup] Mounting a big repository is very fast In-Reply-To: References: <2687bc79-66ab-bf42-4d4e-b3d1644564b5@ATComputing.nl> Message-ID: <19c7e30f-a315-c013-e889-ab649ac23a4c@enkore.de> On 31.08.2016 13:51, Marian Beermann wrote: > On 31.08.2016 11:20, Gerrit A. Smit wrote: >> Hello, >> >> >> Mounting a very big archive takes some time but it works as expected.
>> >> Using this >> >> # BORG_REPO=/blauw/borg/repos/sanger borg mount --verbose :: >> /data/borg/fuse >> >> to mount a very big repository is very fast, but then this happens: >> >> zfsborg_ksh # cd /data/borg/fuse >> # Now I want to list the archives: >> zfsborg_ksh # ls >> ls: .: Operation timed out >> zfsborg_ksh # man ls >> >> This last one hangs, probably because man wants to access the current >> directory >> which lives in the fuse-fs. >> >> Meanwhile, borg is still busy preparing the fuse-fs. >> >> >> Any thoughts? > > Operating system? On Linux with GNU coreutils that ls is instantaneous. > Perhaps the ls used does a bit more poking around with the directories, > causing Borg to fetch the metadata for each archive. > > In case you run into any trouble, try the --foreground option to borg > mount to see the log output. > > Cheers, Marian > PS: If you're on a different OS, try strace (or equivalent) to see what syscalls ls is using and where it hangs. This would be a great help in debugging it. Cheers, Marian From eric at in3x.io Wed Aug 31 12:02:56 2016 From: eric at in3x.io (Eric S. Johansson) Date: Wed, 31 Aug 2016 12:02:56 -0400 Subject: [Borgbackup] backup speed In-Reply-To: <78313c52-49a0-09ee-5ccc-d0ba7cced346@in3x.io> References: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io> <412f5a19-0784-f8ce-30f5-1c3cb4db5a58@waldmann-edv.de> <58db8b55-7b83-f60b-f242-6157705f906f@in3x.io> <4ac383dc-ea9c-b42c-ed1e-cf44963d6cff@enkore.de> <78313c52-49a0-09ee-5ccc-d0ba7cced346@in3x.io> Message-ID: Any more suggestions on backup speed debugging? 131.20 GB O 130.58 GB C 126.15 GB D 453 N np1/pond/home/eric/2015-09-13-23-img/sda3.ext4-ptcl-img.gz.bz On 8/30/2016 1:38 PM, Eric S. Johansson wrote: > > On 8/30/2016 12:48 PM, Marian Beermann wrote: >> What's the latency between the hosts you are using? (Preferably measured >> inside the VPN.) > ping times are between 145 and 170 ms >> The current networking code looks relatively latency-sensitive to >> me. OTOH 100 GB in 440 files means big files, so many large chunks. >> In that case the latency should not matter as much. > Well, it is more like 1 TB that I'm trying to back up: > > eric at schist:~$ sudo du -sh /np1/pond/home/ /np1/pond/git/ /np1/pond/music/ > 640G /np1/pond/home/ > 28M /np1/pond/git/ > 3.6G /np1/pond/music/ > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From wtraylor at areyouthinking.org Fri Sep 2 13:58:03 2016 From: wtraylor at areyouthinking.org (Walker Traylor) Date: Sat, 3 Sep 2016 00:58:03 +0700 Subject: [Borgbackup] Determining which keys I am using In-Reply-To: <48B4CFBF-3971-41E3-8FAD-61603036ECD9@areyouthinking.org> References: <48B4CFBF-3971-41E3-8FAD-61603036ECD9@areyouthinking.org> Message-ID: <7C7F7C41-ED7C-4339-873D-DE5D3B2BE137@areyouthinking.org> Following up here. I cannot reproduce with borg 1.0.6 or 1.0.7. These must have been artifacts left behind from inits using an old version, before I started using repo keys or knew what I was doing. Thanks for consideration. Walker > On Aug 25, 2016, at 6:28 PM, Walker Traylor wrote: > > >> On Aug 25, 2016, at 5:50 PM, Thomas Waldmann wrote: >> >> Hi Walker, >> >> it would be good if you could reproduce the problem with the current >> borg release 1.0.7. > > I will try to do this when I can make time. For now here is your other info: >> >> Also check that you look at the right locations in the file system, e.g.
>> if borg runs as root, the key(s) will be in ~root/.config/borg/keys/
>> (not in your user's home dir),
>
> Good idea, but I am sure it runs as user wtraylor.
>
>> Did you ever use borg < 1.0 or attic in -e passphrase mode?
>
> Yes, initially. I deleted everything locally and in the remote repo.
>
>> If so, did you use borg migrate-to-repokey when switching to borg >= 1.0?
>
> No. Is this documented somewhere on https://borgbackup.readthedocs.io?
>
>> Also, please make sure the borg code is really the version you think it
>> is. In the same way as you usually invoke borg create, please invoke
>> borg --version.
>
> I normally use the borg wrapper "borgmatic." To be sure, I found this in
> lsof while borgmatic is running:
> /opt/homebrew-cask/Caskroom/borgbackup/1.0.3/borg-darwin64
>
> wtraylor at macbook$ /opt/homebrew-cask/Caskroom/borgbackup/1.0.3/borg-darwin64 --version
> borg-darwin64 1.0.3
>
> I also noticed this behavior before I upgraded the local borg binary to a
> 1.0 release and began using the 1.0.x server binary. I initialized the
> repo using a 0.9 release, perhaps 0.96. I realize this makes it hard to
> debug, and if this isn't enough information I'll try to reproduce on 1.0.7.
>
> It was months ago that I initialized the repo with the pre-1.0 binary. I
> do remember clearing everything out on the server (including all dotfiles)
> and client when I decided to change from passphrase to repokey and
> initialized several times using keyfile. I assumed it was using the same
> key and didn't check until later to discover it was making new keys
> (appending .1, .2, etc. to the key).
>
> Walker
>
>> Cheers,
>>
>> Thomas
>>
>> --
>>
>> GPG ID: 9F88FB52FAF7B393
>> GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
>>
>> _______________________________________________
>> Borgbackup mailing list
>> Borgbackup at python.org
>> https://mail.python.org/mailman/listinfo/borgbackup
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eric at in3x.io Tue Sep 6 12:41:34 2016
From: eric at in3x.io (Eric S. Johansson)
Date: Tue, 6 Sep 2016 12:41:34 -0400
Subject: [Borgbackup] update and possible reason Re: backup speed
In-Reply-To: <58db8b55-7b83-f60b-f242-6157705f906f@in3x.io>
References: <76b29194-44f4-f9d1-ff92-f02ee2fe523a@in3x.io>
 <412f5a19-0784-f8ce-30f5-1c3cb4db5a58@waldmann-edv.de>
 <58db8b55-7b83-f60b-f242-6157705f906f@in3x.io>
Message-ID: 

On 8/30/2016 12:31 PM, Eric S. Johansson wrote:
> It's borg client to Borg server (v1.0.7) and the initial part of
> the transfer was fast (12 Mb per second) and now it's running slow (2
> Mb per second).

Turns out that the problem was borg tunneling through ssh running inside
of OpenVPN: too many network layers. After opening a pinhole in the
firewall and connecting directly to the target server, data transferred
10 times faster.
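A minimal sketch of that change (host name, port and paths are
hypothetical; the point is to aim the repo URL at the directly reachable
address instead of the VPN endpoint):

$ # before: borg reached the server over ssh inside the OpenVPN tunnel
$ export BORG_REPO=ssh://eric@10.8.0.1/mnt/backup/np1
$ # after: ssh straight to a pinholed port on the server, no VPN
$ export BORG_REPO=ssh://eric@backup.example.com:2222/mnt/backup/np1
$ borg create --stats ::np1-2016-09-06 /np1/pond/home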
From nerbrume at free.fr Wed Sep 7 02:58:57 2016
From: nerbrume at free.fr (nerbrume at free.fr)
Date: Wed, 7 Sep 2016 08:58:57 +0200 (CEST)
Subject: [Borgbackup] [Documentation] Remote/ local usage
In-Reply-To: <1463553722.357949886.1473230613002.JavaMail.root@zimbra59-e10.priv.proxad.net>
Message-ID: <1469150971.358017044.1473231537085.JavaMail.root@zimbra59-e10.priv.proxad.net>

Hello,

I've just set up borg between a rather recent desktop machine (local),
which is backed up remotely (over ssh) to a very small box (something like
a raspberry pi 1st gen, but with even less RAM). I didn't find in the
documentation any reference as to which operations are done where (and
what parameters I can act upon to lessen the burden on the slow distant
machine). (btw, the doc is great!)

My main concerns are the CPU and RAM usage during backup (I guess disk
usage would be of interest too, but I didn't find it to be the limiting
factor):
- where is the compression (lz4/gzip/lzma) done?
- where is the encryption (ssh + borg's own encryption) done?
- where is the deduplication done?

But I'm also interested in the same question during other borg actions,
especially borg check.

I've already gathered from my observations that borg mount is done
locally: mounting my backup locally on the small box takes forever.

I feel this info should be in the documentation, but refrained from
submitting a bug. Tell me if I should.

From public at enkore.de Wed Sep 7 06:00:49 2016
From: public at enkore.de (public at enkore.de)
Date: Wed, 7 Sep 2016 12:00:49 +0200
Subject: [Borgbackup] [Documentation] Remote/ local usage
In-Reply-To: <1469150971.358017044.1473231537085.JavaMail.root@zimbra59-e10.priv.proxad.net>
References: <1469150971.358017044.1473231537085.JavaMail.root@zimbra59-e10.priv.proxad.net>
Message-ID: 

Hi,

On 07/09/16 08:58, nerbrume at free.fr wrote:
> Hello,
>
> I've just set up borg between a rather recent desktop machine (local),
> which is backed up remotely (over ssh) to a very small box (something
> like a raspberry pi 1st gen, but with even less RAM). I didn't find in
> the documentation any reference as to which operations are done where
> (and what parameters I can act upon to lessen the burden on the slow
> distant machine). (btw, the doc is great!)
>
> My main concerns are the CPU and RAM usage during backup (I guess disk
> usage would be of interest too, but I didn't find it to be the limiting
> factor):
> - where is the compression (lz4/gzip/lzma) done?
> - where is the encryption (ssh + borg's own encryption) done?
> - where is the deduplication done?

All of the above: client side. This is implied by the docs, but not very
obvious ("Data is encrypted clientside.").

The repository/remote end is a simple key-value database. It needs about
(chunks_count * 40-80 bytes) of memory plus some overhead to work. (For
example, at borg's default ~2 MiB target chunk size, a 1 TB repository
comes to roughly 500,000 chunks, i.e. about 20-40 MB.)

> But I'm also interested in the same question during other borg actions,
> especially borg check.

check does some stuff on the remote end, which can need another
chunks_count * 40 bytes of memory, but is normally IO bound.

> I've already gathered from my observations that borg mount is done
> locally: mounting my backup locally on the small box takes forever.
>
> I feel this info should be in the documentation, but refrained from
> submitting a bug. Tell me if I should.
Please do :)

Cheers, Marian

From tmhikaru at gmail.com Thu Sep 8 01:20:58 2016
From: tmhikaru at gmail.com (tmhikaru at gmail.com)
Date: Wed, 7 Sep 2016 22:20:58 -0700
Subject: [Borgbackup] [Documentation] Remote/ local usage
In-Reply-To: <1469150971.358017044.1473231537085.JavaMail.root@zimbra59-e10.priv.proxad.net>
References: <1463553722.357949886.1473230613002.JavaMail.root@zimbra59-e10.priv.proxad.net>
 <1469150971.358017044.1473231537085.JavaMail.root@zimbra59-e10.priv.proxad.net>
Message-ID: <20160908052058.GA23662@raspberrypi>

On Wed, Sep 07, 2016 at 08:58:57AM +0200, nerbrume at free.fr wrote:
> Hello,
>
> I've just set up borg between a rather recent desktop machine (local),
> which is backed up remotely (over ssh) to a very small box (something
> like a raspberry pi 1st gen, but with even less RAM). I didn't find in
> the documentation any reference as to which operations are done where
> (and what parameters I can act upon to lessen the burden on the slow
> distant machine). (btw, the doc is great!)
>
> My main concerns are the CPU and RAM usage during backup (I guess disk
> usage would be of interest too, but I didn't find it to be the limiting
> factor):
> - where is the compression (lz4/gzip/lzma) done?
> - where is the encryption (ssh + borg's own encryption) done?
> - where is the deduplication done?

Taking it from someone who tried using borg to do backups from an rpiB1
client to a powerful server: these are all operations that are done client
side, and in theory they should work in your described scenario, though
you will get a bit of a speed hit with I/O. ... Just don't try to use it
the other way around; it does not work well trying to run the client on a
weak machine, when it works at all. I never did figure out why it kept
inexplicably hanging forever.

If you have problems with server-side operations taking forever, never
completing, or running out of memory, you may want to try using sshfs
instead of a more typical borg-over-ssh login. Using sshfs will allow you
to access the remote repo via borg as if it were locally present on the
client machine, and will not require borg to be installed or run on the
server. It will require quite a bit more computation out of ssh than
normal, but in practice I found that making the weak machine's files
accessible via sshfs *greatly* sped things up compared to having it run
the program over an ssh login.

As an alternative, you may want to consider rsync if borg does not work
out for you. Although it does not support backing up SELinux xattrs,
doesn't have a way to encrypt files, and doesn't have the deduplication
awesomeness of borg, on a weaker machine it actually works where borg
cannot. You can even have it use hardlinks to do incremental backups,
though this requires semicomplicated scripting. I personally had to revert
to using rsync for my backups because I could not expect borg to work on
the weak client machine - it hardly did me any good to have a backup that
could not be restored. Luckily, in your case it sounds like this won't be
a problem.

Just my two cents,
Tim

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 465 bytes
Desc: Digital signature
URL: 

From sitaramc at gmail.com Thu Sep 8 01:45:59 2016
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Thu, 8 Sep 2016 11:15:59 +0530
Subject: [Borgbackup] [Documentation] Remote/ local usage
In-Reply-To: <20160908052058.GA23662@raspberrypi>
References: <1463553722.357949886.1473230613002.JavaMail.root@zimbra59-e10.priv.proxad.net>
 <1469150971.358017044.1473231537085.JavaMail.root@zimbra59-e10.priv.proxad.net>
 <20160908052058.GA23662@raspberrypi>
Message-ID: 

On 09/08/2016 10:50 AM, tmhikaru at gmail.com wrote:
> On Wed, Sep 07, 2016 at 08:58:57AM +0200, nerbrume at free.fr wrote:
>> Hello,
>>
>> I've just set up borg between a rather recent desktop machine (local),
>> which is backed up remotely (over ssh) to a very small box (something
>> like a raspberry pi 1st gen, but with even less RAM). I didn't find in
>> the documentation any reference as to which operations are done where
>> (and what parameters I can act upon to lessen the burden on the slow
>> distant machine). (btw, the doc is great!)
>>
>> My main concerns are the CPU and RAM usage during backup (I guess disk
>> usage would be of interest too, but I didn't find it to be the limiting
>> factor):
>> - where is the compression (lz4/gzip/lzma) done?
>> - where is the encryption (ssh + borg's own encryption) done?
>> - where is the deduplication done?
>
> Taking it from someone who tried using borg to do backups from an rpiB1
> client to a powerful server: these are all operations that are done
> client side, and in theory they should work in your described scenario,
> though you will get a bit of a speed hit with I/O. ... Just don't try to
> use it the other way around; it does not work well trying to run the
> client on a weak machine, when it works at all. I never did figure out
> why it kept inexplicably hanging forever.
>
> If you have problems with server-side operations taking forever, never
> completing, or running out of memory, you may want to try using sshfs
> instead of a more typical borg-over-ssh login. Using sshfs will allow
> you to access the remote repo via borg as if it were locally present on
> the client machine, and will not require borg to be installed or run on
> the server. It will require quite a bit more computation out of ssh
> than normal, but in practice I found that making the weak machine's
> files accessible via sshfs *greatly* sped things up compared to having
> it run the program over an ssh login.
>
> As an alternative, you may want to consider rsync if borg does not work
> out for you. Although it does not support backing up SELinux xattrs,
> doesn't have a way to encrypt files, and doesn't have the deduplication
> awesomeness of borg, on a weaker machine it actually works where borg
> cannot. You can even have it use hardlinks to do incremental backups,
> though this requires semicomplicated scripting. I personally had to
> revert to using rsync for my backups because I could not expect borg to
> work on the weak client machine - it hardly did me any good to have a
> backup that could not be restored. Luckily, in your case it sounds like
> this won't be a problem.

I combined the two once (different problem but still...). I did a borg
backup locally, then rsync'd that (with "--delete") to the remote server.
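A minimal sketch of that combination (repo path, archive name and remote
host are hypothetical):

$ borg create /mnt/backup/repo::home-2016-09-08 /home
$ rsync -a --delete /mnt/backup/repo/ backup@remote.example.com:/backups/repo/

Since a Borg repository is just a directory of (mostly append-only)
segment files, the second step only has to transfer the segments that
were added or changed since the last run.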
regards
sitaram

From nerbrume at free.fr Thu Sep 8 02:31:12 2016
From: nerbrume at free.fr (nerbrume at free.fr)
Date: Thu, 8 Sep 2016 08:31:12 +0200 (CEST)
Subject: [Borgbackup] [Documentation] Remote/ local usage
In-Reply-To: 
Message-ID: <923778509.362887203.1473316272938.JavaMail.root@zimbra59-e10.priv.proxad.net>

Hi,

>> - where is the compression (lz4/gzip/lzma) done?
>> - where is the encryption (ssh + borg's own encryption) done?
>> - where is the deduplication done?
>
> All of the above: client side. This is implied by the docs, but not very
> obvious ("Data is encrypted clientside.").
>
> The repository/remote end is a simple key-value database. It needs about
> (chunks_count * 40-80 bytes) of memory plus some overhead to work.

Ok, that explains why I saw borg eating 50% of my memory on the remote,
while sshd & kworker were eating most of the CPU. I feared I was limited
by compression; clearly not the case. However, I now see that I should
watch my chunk count to avoid problems in the near future.

>> But I'm also interested in the same question during other borg actions,
>> especially borg check.
>
> check does some stuff on the remote end, which can need another
> chunks_count * 40 bytes of memory, but is normally IO bound.
>
>> I've already gathered from my observations that borg mount is done
>> locally: mounting my backup locally on the small box takes forever.
>>
>> I feel this info should be in the documentation, but refrained from
>> submitting a bug. Tell me if I should.
>
> Please do :)

I'll submit a bug with a draft for a FAQ entry.

Thanks for the fast and clear answer!

From nerbrume at free.fr Thu Sep 8 02:37:43 2016
From: nerbrume at free.fr (nerbrume at free.fr)
Date: Thu, 8 Sep 2016 08:37:43 +0200 (CEST)
Subject: [Borgbackup] [Documentation] Remote/ local usage
In-Reply-To: <20160908052058.GA23662@raspberrypi>
Message-ID: <85204109.362909488.1473316663107.JavaMail.root@zimbra59-e10.priv.proxad.net>

Hello,

> Taking it from someone who tried using borg to do backups from an rpiB1
> client to a powerful server: these are all operations that are done
> client side, and in theory they should work in your described scenario,
> though you will get a bit of a speed hit with I/O. ... Just don't try to
> use it the other way around; it does not work well trying to run the
> client on a weak machine, when it works at all. I never did figure out
> why it kept inexplicably hanging forever.
>
> If you have problems with server-side operations taking forever, never
> completing, or running out of memory, you may want to try using sshfs
> instead of a more typical borg-over-ssh login. Using sshfs will allow
> you to access the remote repo via borg as if it were locally present on
> the client machine, and will not require borg to be installed or run on
> the server. It will require quite a bit more computation out of ssh
> than normal, but in practice I found that making the weak machine's
> files accessible via sshfs *greatly* sped things up compared to having
> it run the program over an ssh login.
>
> As an alternative, you may want to consider rsync if borg does not work
> out for you. Although it does not support backing up SELinux xattrs,
> doesn't have a way to encrypt files, and doesn't have the deduplication
> awesomeness of borg, on a weaker machine it actually works where borg
> cannot. You can even have it use hardlinks to do incremental backups,
> though this requires semicomplicated scripting.
> I personally had to revert to using rsync for my backups because I could
> not expect borg to work on the weak client machine - it hardly did me
> any good to have a backup that could not be restored. Luckily, in your
> case it sounds like this won't be a problem.

I might not have been clear on my setup: the "fast" machine has the data,
and is doing the "borg create ssh://user at slow-machine". I'm currently
probably limited by the slow machine's CPU, but not by borg, as the backup
folder is on a cryptfs-encrypted USB drive.

Before using borg, I was already using rsync, with the same limitation
(I had slightly higher I/O, though).

But thanks for the sshfs trick, it might prove useful at some point!

N.

From dsjstc at gmail.com Fri Sep 9 21:13:14 2016
From: dsjstc at gmail.com (DS Jstc)
Date: Fri, 9 Sep 2016 18:13:14 -0700
Subject: [Borgbackup] Multi-archive searches in Borg Backup?
Message-ID: <2697cccb-5d9d-e370-9f79-9e86f71d7d54@gmail.com>

I'm evaluating several backup systems. I really like what I'm reading
about Borg, but the Usage document doesn't seem to have anything about
multi-archive searches. Here are the primary use cases that concern me:

Case 1: I've deleted a file, but I'm not sure when. I'd like to restore
the last valid version of it, but obviously, I don't know which archive
it's in. Can I search for it?

Case 2: I need a deleted file for which I don't know the full pathname.
I'd like to run a regex search against all paths in all archives in the
repository.

Case 3: I made a critical error in an important document at some time in
the past. I need to see every archive in which a change was recorded so
I can retrieve the last pre-error version.

Can Borg handle any of these use cases?

From public at enkore.de Sat Sep 10 05:40:20 2016
From: public at enkore.de (Marian Beermann)
Date: Sat, 10 Sep 2016 11:40:20 +0200
Subject: [Borgbackup] Multi-archive searches in Borg Backup?
In-Reply-To: <2697cccb-5d9d-e370-9f79-9e86f71d7d54@gmail.com>
References: <2697cccb-5d9d-e370-9f79-9e86f71d7d54@gmail.com>
Message-ID: <45be87a6-f1ee-3f46-cb1e-5a40fe07d444@enkore.de>

Hi

On 10.09.2016 03:13, DS Jstc wrote:
> I'm evaluating several backup systems. I really like what I'm reading
> about Borg, but the Usage document doesn't seem to have anything about
> multi-archive searches. Here are the primary use cases that concern me:
>
> Case 1: I've deleted a file, but I'm not sure when. I'd like to restore
> the last valid version of it, but obviously, I don't know which archive
> it's in. Can I search for it?
>
> Case 2: I need a deleted file for which I don't know the full pathname.
> I'd like to run a regex search against all paths in all archives in the
> repository.

This can be done with the current stable version (1.0) when you mount the
repository via FUSE. You can then use e.g. GNU find or your normal file
manager to search for files just as if they were on a normal disk.

> Case 3: I made a critical error in an important document at some time in
> the past. I need to see every archive in which a change was recorded so
> I can retrieve the last pre-error version.
>
> Can Borg handle any of these use cases?

In 1.1 (currently in beta) there are several tools that make things like
this far easier (including cases 1 and 2):

- FUSE-mounted repositories have a "versions view" where all versions of
  a file are available:

  /path/file.version.238123
  /path/file.version.123902
  /path/otherfile.version.231213
  ...

- borg-diff can easily print the difference between (subsets of) archives.
  It's very fast, and e.g. a manual binary search is feasible.

  $ time borg diff testrepo::linux1 linux2
  +100.4 kB -100.5 kB linux-4.4.2/fs/btrfs/check-integrity.c
  removed       279 B linux-4.4.2/drivers/gpu/Makefile
  3.48user 0.04system 0:03.56elapsed 98%CPU

- borg-list supports the full range of patterns, including regular
  expressions:

  $ borg list testrepo::asdf "re:.*/PKG-INFO"
  -rw-r--r-- mabe mabe 10460 Sat, 2016-08-27 01:54:31 src/borgbackup.egg-info/PKG-INFO
  $

Cheers, Marian

Current beta docs:
borg-diff: http://borgbackup.readthedocs.io/en/1.1.0b1/usage.html#borg-diff
borg-mount: http://borgbackup.readthedocs.io/en/1.1.0b1/usage.html#borg-mount
borg-list: http://borgbackup.readthedocs.io/en/1.1.0b1/usage.html#borg-list
patterns: http://borgbackup.readthedocs.io/en/1.1.0b1/usage.html#borg-help-patterns

From dsjstc at gmail.com Sat Sep 10 12:13:00 2016
From: dsjstc at gmail.com (DS Jstc)
Date: Sat, 10 Sep 2016 09:13:00 -0700
Subject: [Borgbackup] Multi-archive searches in Borg Backup?
In-Reply-To: <2697cccb-5d9d-e370-9f79-9e86f71d7d54@gmail.com>
References: <2697cccb-5d9d-e370-9f79-9e86f71d7d54@gmail.com>
Message-ID: <4c1a13e4-70df-00d0-1081-6fcb789b526e@gmail.com>

Thanks for your detailed response, Marian! 1.1b sounds like where it's
at; I'll give it a whirl.

(Apologies, I've subscribed to the digest and cannot figure out how to
respond directly to you!)

From wtraylor at areyouthinking.org Sun Sep 11 09:37:40 2016
From: wtraylor at areyouthinking.org (Walker Traylor)
Date: Sun, 11 Sep 2016 20:37:40 +0700
Subject: [Borgbackup] repokey mode
Message-ID: <7468F065-795E-4701-9C82-F3CE0EE0CFDF@areyouthinking.org>

Is it possible to convert an existing repository from repokey to keyfile
mode?

I am unable to find that in the documentation, but I seem to remember
encountering information about this somewhere.

- Walker

From adrian.klaver at aklaver.com Sun Sep 11 10:15:45 2016
From: adrian.klaver at aklaver.com (Adrian Klaver)
Date: Sun, 11 Sep 2016 07:15:45 -0700
Subject: [Borgbackup] repokey mode
In-Reply-To: <7468F065-795E-4701-9C82-F3CE0EE0CFDF@areyouthinking.org>
References: <7468F065-795E-4701-9C82-F3CE0EE0CFDF@areyouthinking.org>
Message-ID: 

On 09/11/2016 06:37 AM, Walker Traylor wrote:
> Is it possible to convert an existing repository from repokey to keyfile
> mode?
>
> I am unable to find that in the documentation, but I seem to remember
> encountering information about this somewhere.

All I could find was this:

https://github.com/borgbackup/borg/issues/510

which indicates it is not possible at this time.

>
> - Walker
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup

--
Adrian Klaver
adrian.klaver at aklaver.com

From billy at worldofbilly.com Wed Sep 14 20:00:23 2016
From: billy at worldofbilly.com (Billy Charlton)
Date: Wed, 14 Sep 2016 17:00:23 -0700
Subject: [Borgbackup] Signed (Unofficial) Windows installers
Message-ID: 

I've been building unofficial Windows installers since borg 1.0. Recently
on the installer's GitHub issue [1] it was suggested that I sign them with
GPG. So, I've set that up. I've added a signature for release 1.0.7 and
will sign future builds as well.

I'm new to this though, so I have some questions:
- Do you want my public key? I uploaded it to pgp.mit.edu [2] -- does
  someone want to verify that or something?
- Is there somewhere else I should push these or announce the builds?
Cheers,
Billy

[1] https://github.com/borgbackup/borg/issues/440
[2] http://pgp.mit.edu/pks/lookup?op=get&search=0x40ED1F779784BBF0

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tw at waldmann-edv.de Wed Sep 14 20:19:39 2016
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 15 Sep 2016 02:19:39 +0200
Subject: [Borgbackup] Signed (Unofficial) Windows installers
In-Reply-To: 
References: 
Message-ID: <85d18ca7-1d8f-5380-711f-00c7af0ff440@waldmann-edv.de>

On 09/15/2016 02:00 AM, Billy Charlton wrote:
> I've been building unofficial Windows installers since borg 1.0.
> Recently on the installer's GitHub issue [1] it was suggested that I
> sign them with GPG. So, I've set that up. I've added a signature for
> release 1.0.7 and will sign future builds as well.

Great! :)

>
> I'm new to this though, so I have some questions:
> - Do you want my public key? I uploaded it to pgp.mit.edu

That is enough. It should also be available from other keyservers now.

What you should publish, though, is your full key fingerprint, so people
can make sure they really got the right one:

gpg --fingerprint YOURID

> [2] -- does someone want to verify that or something?

That would be useful, but it usually has to be done in person, verifying
documents (passport, ID) against the person. To get at least some
signatures, maybe attend a gpg keysigning party at a hackerspace or event.

> - Is there somewhere else I should push these or announce the builds?

You could publish them on github in your own repository.

If you sign it with gpg (and users verify your signature), the
distribution channel doesn't matter much, though, as people can make sure
it is stuff from you and it is unmodified as you released it.

--

GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From melkor.lord at gmail.com Thu Sep 29 20:54:21 2016
From: melkor.lord at gmail.com (Melkor Lord)
Date: Fri, 30 Sep 2016 02:54:21 +0200
Subject: [Borgbackup] Scenario: Paranoid situation goes wrong
Message-ID: 

Hi,

I've recently discovered BorgBackup and so far, with my tests, I'm quite
speechless at the overall quality! This is quite amazing, congratulations!
I want to move forward and use it in a serious way, but I have to prepare
for the worst, so here is a possible scenario:

This is purely theoretical, just in case... Here we go:

Let's say I have a dedicated server at some hosting company. The dedicated
server offer comes with a few GB of FTP space for "backups", which is nice.

I use Borg and it does the job in a nice way. After each backup, I use a
script to push the repo(s) to the FTP space available to me.

Let's say I'm really paranoid, so I mirror everything within the repo(s)
but the "config" file, to avoid some indelicate smartass at the hosting
company scanning through the FTP storage to "pick up some interesting
things", one would say. In this situation, there would be no way to
brute-force the backup since the key wouldn't be available. So far, so
good.

Now, sh*t happens! My server gets trashed for some reason and I get a new
one or new disks. Of course, I "forgot" to save the "config" file... I
reach the FTP, mirror back the contents to try a restore... OK, but I miss
the "config" file! How do I generate it back? Of course, I still know the
passphrase :-)

--
Unix _IS_ user friendly, it's just selective about who its friends are.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From melkor.lord at gmail.com Thu Sep 29 21:03:04 2016
From: melkor.lord at gmail.com (Melkor Lord)
Date: Fri, 30 Sep 2016 03:03:04 +0200
Subject: [Borgbackup] borg --list-format bug?
Message-ID: 

Hi,

According to the docs for the "borg list" --list-format option:
- Special "{formatkeys}" exists to list available keys

but I can't get it to work and show all available keys:

borg list --list-format "{formatkeys}" /path/to/repo

lists the repo contents but not the list keys... Am I doing something
wrong?

--
Unix _IS_ user friendly, it's just selective about who its friends are.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tw at waldmann-edv.de Thu Sep 29 21:45:32 2016
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 30 Sep 2016 03:45:32 +0200
Subject: [Borgbackup] borg --list-format bug?
In-Reply-To: 
References: 
Message-ID: <9b326ee1-3184-44eb-c0a0-fe545b25913e@waldmann-edv.de>

> According to the docs for the "borg list" --list-format option:
> - Special "{formatkeys}" exists to list available keys
>
> but I can't get it to work and show all available keys:
>
> borg list --list-format "{formatkeys}" /path/to/repo
>
> lists the repo contents but not the list keys... Am I doing something
> wrong?

If you use borg 1.1.0beta or git master, that is because this format
string was removed. See docs/changes.rst.

--

GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From tw at waldmann-edv.de Thu Sep 29 21:42:19 2016
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 30 Sep 2016 03:42:19 +0200
Subject: [Borgbackup] Scenario: Paranoid situation goes wrong
In-Reply-To: 
References: 
Message-ID: <01602b32-023c-150f-246c-17f2a927d653@waldmann-edv.de>

> I've recently discovered BorgBackup and so far, with my tests, I'm quite
> speechless at the overall quality! This is quite amazing,
> congratulations! I want to move forward and use it in a serious way, but
> I have to prepare for the worst, so here is a possible scenario:
>
> This is purely theoretical, just in case... Here we go:
>
> Let's say I have a dedicated server at some hosting company. The
> dedicated server offer comes with a few GB of FTP space for "backups",
> which is nice.
>
> I use Borg and it does the job in a nice way. After each backup, I use a
> script to push the repo(s) to the FTP space available to me.
>
> Let's say I'm really paranoid, so I mirror everything within the repo(s)
> but the "config" file

The key in the config file is encrypted with a key derived from your
passphrase, so just include it if you use the repokey method (default).

Alternatively, use the keyfile method; then the key will sit on your
local filesystem - and you need to back it up separately.

> Now, sh*t happens! My server gets trashed for some reason and I get a
> new one or new disks. Of course, I "forgot" to save the "config" file...

If you lose the key, you lose your backup.

> OK, but I miss the "config" file! How do I generate it back? Of course,
> I still know the passphrase :-)

The passphrase is only used to decrypt the key. It is not the repo
encryption key itself.
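A minimal sketch of backing up the key material out of band (paths and
host are hypothetical). In repokey mode the key sits in the repo's config
file; in keyfile mode it sits under ~/.config/borg/keys/ on the client;
borg 1.1 also adds a dedicated "borg key export" command:

$ # repokey: save a copy of the repo config somewhere safe
$ scp /path/to/repo/config admin@safehost.example.com:/safe/repo-config.bak
$ # keyfile: save the client-side key file(s)
$ cp ~/.config/borg/keys/* /safe/place/
$ # borg >= 1.1 (beta at the time of this thread):
$ borg key export /path/to/repo /safe/repo.key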
--

GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From sitaramc at gmail.com Thu Sep 29 22:17:50 2016
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Fri, 30 Sep 2016 07:47:50 +0530
Subject: [Borgbackup] Scenario: Paranoid situation goes wrong
In-Reply-To: <01602b32-023c-150f-246c-17f2a927d653@waldmann-edv.de>
References: <01602b32-023c-150f-246c-17f2a927d653@waldmann-edv.de>
Message-ID: 

On 09/30/2016 07:12 AM, Thomas Waldmann wrote:
> The key in the config file is encrypted with a key derived from your
> passphrase, so just include it if you use the repokey method (default).
>
> Alternatively, use the keyfile method; then the key will sit on your
> local filesystem - and you need to back it up separately.

Question I've been meaning to ask: in either case, is there a key
stretching operation involved, to slow down brute forcing?

regards
sitaram

From tw at waldmann-edv.de Thu Sep 29 22:38:22 2016
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 30 Sep 2016 04:38:22 +0200
Subject: [Borgbackup] Scenario: Paranoid situation goes wrong
In-Reply-To: 
References: <01602b32-023c-150f-246c-17f2a927d653@waldmann-edv.de>
Message-ID: <1f0bf5b7-4a32-f282-bd0d-1af6b46f797d@waldmann-edv.de>

> Question I've been meaning to ask: in either case, is there a key
> stretching operation involved, to slow down brute forcing?

See there:

http://borgbackup.readthedocs.io/en/stable/internals.html#key-files

--

GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From sitaramc at gmail.com Thu Sep 29 23:26:51 2016
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Fri, 30 Sep 2016 08:56:51 +0530
Subject: [Borgbackup] Scenario: Paranoid situation goes wrong
In-Reply-To: <1f0bf5b7-4a32-f282-bd0d-1af6b46f797d@waldmann-edv.de>
References: <01602b32-023c-150f-246c-17f2a927d653@waldmann-edv.de>
 <1f0bf5b7-4a32-f282-bd0d-1af6b46f797d@waldmann-edv.de>
Message-ID: <24bc77d5-b5c8-9a11-8371-6a4af6be72a8@gmail.com>

On 09/30/2016 08:08 AM, Thomas Waldmann wrote:
>> Question I've been meaning to ask: in either case, is there a key
>> stretching operation involved, to slow down brute forcing?
>
> See there:
>
> http://borgbackup.readthedocs.io/en/stable/internals.html#key-files

Thanks!

Would it be OK to suggest making the number of rounds customisable by the
user (like the "-a" parameter in ssh-keygen, and maybe other such tools I
don't know of)?

If you're in principle OK with it, I'll open a ticket/issue on github.

regards
sitaram

From tw at waldmann-edv.de Fri Sep 30 05:17:56 2016
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 30 Sep 2016 11:17:56 +0200
Subject: [Borgbackup] Scenario: Paranoid situation goes wrong
In-Reply-To: <24bc77d5-b5c8-9a11-8371-6a4af6be72a8@gmail.com>
References: <01602b32-023c-150f-246c-17f2a927d653@waldmann-edv.de>
 <1f0bf5b7-4a32-f282-bd0d-1af6b46f797d@waldmann-edv.de>
 <24bc77d5-b5c8-9a11-8371-6a4af6be72a8@gmail.com>
Message-ID: <16f9c2aa-b39d-a128-12e6-4562f9700ec5@waldmann-edv.de>

> Would it be OK to suggest making the number of rounds customisable by
> the user (like the "-a" parameter in ssh-keygen, and maybe other such
> tools I don't know of)?
>
> If you're in principle OK with it, I'll open a ticket/issue on github.

There is already a ticket about that; search for pbkdf2 in the issue
tracker.
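For reference, the internals page linked above describes the stretching:
the passphrase goes through PBKDF2-HMAC-SHA256 with a fixed iteration
count (100,000 at the time of writing) and a random salt. A toy
illustration of the primitive, not Borg's actual code (passphrase and
salt here are made up):

$ python3 -c 'import hashlib; print(hashlib.pbkdf2_hmac("sha256", b"my passphrase", b"example-salt-bytes", 100000).hex())'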
--

GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From public at enkore.de Fri Sep 30 16:06:51 2016
From: public at enkore.de (public at enkore.de)
Date: Fri, 30 Sep 2016 22:06:51 +0200
Subject: [Borgbackup] borg --list-format bug?
In-Reply-To: 
References: 
Message-ID: 

In 1.0.x, --list-format only applies to listing files, not archives. The
argument is ignored in the latter case.

Cheers, Marian

On 30/09/16 03:03, Melkor Lord wrote:
> Hi,
>
> According to the docs for the "borg list" --list-format option:
> - Special "{formatkeys}" exists to list available keys
>
> but I can't get it to work and show all available keys:
>
> borg list --list-format "{formatkeys}" /path/to/repo
>
> lists the repo contents but not the list keys... Am I doing something
> wrong?
>
> --
> Unix _IS_ user friendly, it's just selective about who its friends are.
>
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

From melkor.lord at gmail.com Fri Sep 30 19:35:35 2016
From: melkor.lord at gmail.com (Melkor Lord)
Date: Sat, 1 Oct 2016 01:35:35 +0200
Subject: [Borgbackup] Scenario: Paranoid situation goes wrong
In-Reply-To: <01602b32-023c-150f-246c-17f2a927d653@waldmann-edv.de>
References: <01602b32-023c-150f-246c-17f2a927d653@waldmann-edv.de>
Message-ID: 

On Fri, Sep 30, 2016 at 3:42 AM, Thomas Waldmann wrote:

>> Let's say I'm really paranoid, so I mirror everything within the repo(s)
>> but the "config" file
>
> The key in the config file is encrypted with a key derived from your
> passphrase, so just include it if you use the repokey method (default).

Yep, but that defeats the purpose of the scenario, which is not giving a
third party the opportunity to try brute-forcing the password by having
everything available.

> Alternatively, use the keyfile method; then the key will sit on your
> local filesystem - and you need to back it up separately.

I'll use a mix of these solutions. I'll mirror the repo to the FTP space
without the "config" file, and I'll back it up separately to make sure
it's always available even after a big disaster.

>> Now, sh*t happens! My server gets trashed for some reason and I get a
>> new one or new disks. Of course, I "forgot" to save the "config" file...
>
> If you lose the key, you lose your backup.

Which is exactly what I want for prying third-party eyes ;)

>> OK, but I miss the "config" file! How do I generate it back? Of course,
>> I still know the passphrase :-)
>
> The passphrase is only used to decrypt the key. It is not the repo
> encryption key itself.

Ok, got it. Borg is definitely nice.

--
Unix _IS_ user friendly, it's just selective about who its friends are.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 