From aec at osncs.com Fri Apr 1 11:07:47 2016 From: aec at osncs.com (Andre Charette) Date: Fri, 01 Apr 2016 11:07:47 -0400 Subject: [Borgbackup] Disabling progress in script Message-ID: <5ebd845829d5c4687b96aeade8bd0ee5@osncs.com> I'm using v.1.0 and the following command in a cron job script: "borg check --verbose ${REPOSITORY}/${BACKUP_NAME}" Is there a way to disable the progress output and keep the other relevant information? I haven't found a way to do this without modifying the code in "helpers.py". "Checking segments 0.0%Checking segments 0.2%Checking segments 0.4%Checking segments 0.6%Checking segments 0.8%Checking segments 0.9%Checking segments 1.1%..." -- /andre -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Sat Apr 9 17:05:49 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 09 Apr 2016 23:05:49 +0200 Subject: [Borgbackup] 1.0.1 released Message-ID: <5213631C-0E80-4888-8A06-3CC8059722BB@waldmann-edv.de> https://github.com/borgbackup/borg/releases/tag/1.0.1 Bugfix release - please read the changelog before upgrading: https://github.com/borgbackup/borg/blob/1.0.1/docs/changes.rst Cheers, Thomas --- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 -- Sent from my mobile device. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonas at wielicki.name Fri Apr 15 08:06:27 2016 From: jonas at wielicki.name (Jonas Wielicki) Date: Fri, 15 Apr 2016 14:06:27 +0200 Subject: [Borgbackup] Remote sources Message-ID: <5710D943.1080906@wielicki.name> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 Hi all, Is there (non-obvious) support for backing up from remote sources? A simple borg create test::bar somehost:/ did not succeed (No such file or directory: 'somehost:'). If there is no such feature, is there a recommended way to implement "pull" style backups using borg? If not, is it worth having an issue for that, and has anyone an estimate of the complexity? I imagine, given that remote-destination backups exist, it should be possible to do it the other way round without too much effort. I am asking and thinking because I need that feature to start using borg and I am considering putting some time into making it happen.
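(A workaround that is often suggested until native pull support exists: reverse the direction at the filesystem level by exporting the remote source to the backup host, e.g. with sshfs, and letting borg read from the mount. A rough sketch, assuming sshfs is available and that metadata fidelity over sshfs is acceptable for your data:

  mkdir -p /mnt/somehost
  sshfs somehost:/ /mnt/somehost -o ro
  borg create test::bar /mnt/somehost
  fusermount -u /mnt/somehost

Not a real pull mode, but it works with borg as it is today.)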
best regards, jwi -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBCgAGBQJXENlDAAoJEMBiAyWXYliKwOwP/1S+EwENJONnq+9bCGaJxY3u AxdM99rEc1+TsSqWUCagGScknBjUawtVCtN8toPfDq1pPXK5/Sa9MmdqzUYLqKSL oTCKaABU9EgsXgQZgNyk7jOvIrk2TmvBh5CEMOHTa1Y+PfLhclAYvEg3MB8HeWH+ cFCp0n0gt9H27QvVkF2yWOPmm9dHEPO2kw9qZoDaHkrkHsYoOu/BiWsK+T8ib+VU KTCdcRK5zwnNGBnJC75TIyCCtvjAIwIu+iVaDxgBCAEhOWlMfPfpDqoIrxs+idIZ iSIkPggqj1+ngwxaqIn2Z+B5dnnPfyUyIQCF6v/J2kZOLrcCsvfr4JSApkGhRq3i zGKlsU84pzsw6s/4o5jIXnTUJ+GGZ9TOguAEC++CUvzqSXjZaJqswXeEMl4Jx66y Rt3GOSgzN61318LkyDJ+BODy3BBh/sWph7b8m3PsMAqXSnWOgp9v2scC5Ge3WKXF A9Nt0aCI0KREiSs+cz0wzMxEZ3AgcW/AKdl3Xcg4Fkn3cS1hzItKdRjNwhne43d+ /o7VWlAkgpmDwAdTUBfTGgi4UndQ3DaENi0GdGiO2Z9ARrnIPz6Gld50Faz5DZG3 CGISt65qoBC19K65SSmZiRHg4AWZi2DpfoxH5PuHSk82+kFb7t3A5hWM07m4s+RY aFQqCCG5dWp+iGYiyw66 =yfXP -----END PGP SIGNATURE----- From adrian.klaver at aklaver.com Fri Apr 15 09:22:49 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Fri, 15 Apr 2016 06:22:49 -0700 Subject: [Borgbackup] Remote sources In-Reply-To: <5710D943.1080906@wielicki.name> References: <5710D943.1080906@wielicki.name> Message-ID: <5710EB29.9080003@aklaver.com> On 04/15/2016 05:06 AM, Jonas Wielicki wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > Hi all, > > Is there (non-obvious) support for backing up from remote sources? A > simple > > borg create test::bar somehost:/ > > did not succeed (No such file or directory: 'somehost:'). > > If there is no such feature, is there a recommended way to implement > "pull" style backups using borg? See: https://github.com/borgbackup/borg/issues/36 https://github.com/borgbackup/borg/issues/900 > > If not, is it worth having an issue for that, and has anyone an > estimate of the complexity? I imagine, given that remote-destination > backups exist, it should be possible to do it the other way round > without too much effort. I am asking and thinking because I need that > feature to start using borg and I am considering putting some time > into making it happen. > > best regards, > jwi -- Adrian Klaver adrian.klaver at aklaver.com From tw at waldmann-edv.de Sat Apr 16 09:28:36 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 16 Apr 2016 15:28:36 +0200 Subject: [Borgbackup] borgbackup 1.0.2 released Message-ID: <57123E04.3060600@waldmann-edv.de> https://github.com/borgbackup/borg/releases/tag/1.0.2 Some fixes. Must have for users on big-endian architectures (ppc, s390, ...). Please read the changelog before upgrading: https://github.com/borgbackup/borg/blob/1.0.2/docs/changes.rst Cheers, Thomas --- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From william at conveystudio.com Thu Apr 21 01:45:26 2016 From: william at conveystudio.com (William Gogan) Date: Thu, 21 Apr 2016 05:45:26 +0000 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance Message-ID: I'm trying borgbackup out, and so far it's performing really well in almost all tests. The one item where I'm seeing odd performance is for tar files. It appears not to be deduplicating except within the current archive. Background: Our VM tool kicks out a .tar file per container. It compresses (lzo) the .tar. For discussion purposes, let's pretend it's called vm.tar.lzo So, I call `lzop vm.tar.lzo -d --to-stdout | borg create --verbose --stats --progress --chunker-params 19,23,21,4095 --compression lz4 /dir/borg/::2016-04-21-01-38 -` - I assumed lzo would wreck borg's dedupe, so I pipe in the decompressed version. 
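A note on the numbers in that command: --chunker-params is given as CHUNK_MIN_EXP,CHUNK_MAX_EXP,HASH_MASK_BITS,HASH_WINDOW_SIZE, so the values used here mean roughly:

  # 2^19 = 512 KiB minimum and 2^23 = 8 MiB maximum chunk size, ~2 MiB (2^21) average, 4095-byte rolling-hash window
  borg create --chunker-params 19,23,21,4095 ...

These are also the borg 1.0 defaults, so spelling them out does not change the chunking compared to leaving the option off.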
Even if I generate a .tar file, then immediately generate a second one (within <30s of the first), and then feed them both to borgbackup, it shows about 80% of the blocks as non-duplicates despite 99% of the files not having changed on the disk (and so should not have changed in the .tar) I looked at the FAQ, and it does make specific mention of doing well at VM backups, so I'm wondering if I'm doing something wrong. What can I do to get better dedupe performance? I considered adding tar to the mix and untarring the file before piping it to borg, but that seems suboptimal. If anyone has any suggestions, I'd welcome them! Thanks, William. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sitaramc at gmail.com Thu Apr 21 01:53:06 2016 From: sitaramc at gmail.com (Sitaram Chamarty) Date: Thu, 21 Apr 2016 11:23:06 +0530 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: References: Message-ID: <57186AC2.1090004@gmail.com> On 04/21/2016 11:15 AM, William Gogan wrote: > I'm trying borgbackup out, and so far it's performing really well in almost all tests. > > The one item where I'm seeing odd performance is for tar files. It appears not to be deduplicating except within the current archive. > > Background: Our VM tool kicks out a .tar file per container. It compresses (lzo) the .tar. For discussion purposes, let's pretend it's called vm.tar.lzo Compression changes the bytestream. You may get lucky and the changes only happened to files at the end of a tar file, but that's unlikely. Depending on how many files changed, the probably that something changed at the beginning of the tar file is pretty high. This is what I would guess is happening. From william at conveystudio.com Thu Apr 21 02:52:33 2016 From: william at conveystudio.com (William Gogan) Date: Thu, 21 Apr 2016 00:52:33 -0600 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <57186AC2.1090004@gmail.com> References: <57186AC2.1090004@gmail.com> Message-ID: <571878B1.6090208@conveystudio.com> Sitaram Chamarty wrote: > On 04/21/2016 11:15 AM, William Gogan wrote: >> I'm trying borgbackup out, and so far it's performing really well in almost all tests. >> >> The one item where I'm seeing odd performance is for tar files. It appears not to be deduplicating except within the current archive. >> >> Background: Our VM tool kicks out a .tar file per container. It compresses (lzo) the .tar. For discussion purposes, let's pretend it's called vm.tar.lzo > > Compression changes the bytestream. You may get lucky and the changes > only happened to files at the end of a tar file, but that's unlikely. > Depending on how many files changed, the probably that something changed > at the beginning of the tar file is pretty high. Just to confirm - even though as I mention I'm piping lzop -d --to-stdout vm.tar.lzo to borg (ie: borg is not getting a compressed file, it is being piped the uncompressed .tar file), it sounds like Borg isn't capable of handling duplicate pieces inside a file. I guess, and I'm probably wrong about this.. I had hoped that it would go something like "borg is getting the uncompressed .tar, so it will see that 98% of the files in that tar didn't change, and it will deduplicate all of that". I think what you're telling me though is that, when inside a single big file like a tar, borg doesn't cope very well with small changes, even if that big file is uncompressed like a straight tar.. 
is that right? Would I be better trying to totally extract the tar to a tmp disk and point borg at that each time? > > This is what I would guess is happening. -- William Gogan Convey Studio / Custom. Digital. Branding. 719.278.3736 conveystudio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sitaramc at gmail.com Thu Apr 21 02:58:11 2016 From: sitaramc at gmail.com (Sitaram Chamarty) Date: Thu, 21 Apr 2016 12:28:11 +0530 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <571878B1.6090208@conveystudio.com> References: <57186AC2.1090004@gmail.com> <571878B1.6090208@conveystudio.com> Message-ID: <57187A03.7000002@gmail.com> On 04/21/2016 12:22 PM, William Gogan wrote: > > > Sitaram Chamarty wrote: >> On 04/21/2016 11:15 AM, William Gogan wrote: >>> I'm trying borgbackup out, and so far it's performing really well in almost all tests. >>> >>> The one item where I'm seeing odd performance is for tar files. It appears not to be deduplicating except within the current archive. >>> >>> Background: Our VM tool kicks out a .tar file per container. It compresses (lzo) the .tar. For discussion purposes, let's pretend it's called vm.tar.lzo >> >> Compression changes the bytestream. You may get lucky and the changes >> only happened to files at the end of a tar file, but that's unlikely. >> Depending on how many files changed, the probably that something changed >> at the beginning of the tar file is pretty high. > Just to confirm - even though as I mention I'm piping lzop -d --to-stdout vm.tar.lzo to borg (ie: borg is not getting a compressed file, it is being piped the uncompressed .tar file), it sounds like Borg isn't capable of handling duplicate pieces inside a file. oop; my apologies. I reacted too fast and did not realise that borg was getting an uncompressed file. I assume this means borg gets the file via STDIN? If so, maybe it has something to do with STDIN being less amenable to dedup? sorry again for my previous (useless) mail! From heiko.helmle at horiba.com Thu Apr 21 03:11:12 2016 From: heiko.helmle at horiba.com (heiko.helmle at horiba.com) Date: Thu, 21 Apr 2016 09:11:12 +0200 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <57187A03.7000002@gmail.com> References: <57186AC2.1090004@gmail.com> <571878B1.6090208@conveystudio.com> <57187A03.7000002@gmail.com> Message-ID: > Borg isn't capable of handling duplicate pieces inside a file. > > oop; my apologies. I reacted too fast and did not realise that borg was > getting an uncompressed file. > > I assume this means borg gets the file via STDIN? If so, maybe it has > something to do with STDIN being less amenable to dedup? > > sorry again for my previous (useless) mail! I'm seeing something similar here. I used attic (and many early borg revisions) to back up a few work VMs here. A slightly bigger one (about 100Gigs) was backed up daily. This backup took about half an hour (with -C lzma) and resulted in about 1-2 Gigs of new data (deduped and compressed) each time. Now with recent borg, the amount of new data jumped to about 17-20Gigs per day and it took much longer (I had to scale back to use zlib as compression to have the backup finish before the LVM snapshot filled up). This indicates that the deduplication engine took a hit along the way and feeds much more data to lzma, which makes the overall runtime slower.
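For reference, the old and the current default chunker configurations (discussed just below) look roughly like this; repository path and source are invented for illustration, only the --chunker-params values matter here:

  # attic / early borg "classic" chunking: ~64 KiB average chunks (finer dedup, larger chunk index)
  borg create --chunker-params 10,23,16,4095 -C lzma /backup/repo::daily-vm /mnt/snapshot
  # borg 1.0 default chunking: ~2 MiB average chunks (smaller index, coarser dedup)
  borg create --chunker-params 19,23,21,4095 -C lzma /backup/repo::daily-vm /mnt/snapshot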
This *might* coincide with the change in the default chunker params, but I'm not sure. Unfortunately I didn't pay attention as to which release actually started the drop in dedup performance. If I find the time, I might start a trial run with the "classic" parameters (10,23,16,4095), but not this week :) Best Regards Heiko -------------- next part -------------- An HTML attachment was scrubbed... URL: From public at enkore.de Thu Apr 21 04:28:10 2016 From: public at enkore.de (public at enkore.de) Date: Thu, 21 Apr 2016 10:28:10 +0200 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: References: <57186AC2.1090004@gmail.com> <571878B1.6090208@conveystudio.com> <57187A03.7000002@gmail.com> Message-ID: <058e7f38-9689-4bc6-7417-7c20f5e3c022@enkore.de> Since Borg doesn't know the structure of a tar file my guess is that changed metadata that's stored in-line with file data will make deduplication of the file data impossible for files that are smaller than 1-2 avg chunk sizes (>2 MB). For this specific use case I'd recommend using the old chunker params which should allow better deduplication; still: unchanged, small files with updated metadata won't deduplicate. When deduplicating actual file systems this doesn't seem to be as troublesome ; my guess here is that most file systems tend to put inodes (with the often-changing metadata) in one place and file data in another, hence metadata updates don't affect data deduplication as much. Still, for optimal granularity you'll want Borg to be able to tell files apart. Cheers, Marian On 21.04.2016 09:11, heiko.helmle at horiba.com wrote: >> Borg isn't capable of handling duplicate pieces inside a file. >> >> oop; my apologies. I reacted too fast and did not realise that borg was >> getting an uncompressed file. >> >> I assume this means borg gets the file via STDIN? If so, maybe it has >> something to do with STDIN being less amenable to dedup? >> >> sorry again for my previous (useless) mail! > > I'm seeing something similar here. I used attic (and many early borg > revisions) to backup a few work VMs here. A slightly bigger one (about > 100Gigs) was backupped daily. This backup took about half an hour (with > -C lzma) and resulted in about 1-2 Gigs of new data (deduped and > compressed) each time. > > Now with recent borg, the amount of new data jumped to about 17-20Gigs > per day and it took much longer (i had to scale back to use zlib as > compression to have the backup finnish before the LVM snapshot filled > up). This indicates that the deduplication engine took a hit along the > way and feeds much more data to lzma, which makes the overall runtime > slower. > > This *might* coincide with the change in the default chunker params, but > I'm not sure. Unfortunately I didn't pay attention as to which release > actually started the drop in dedup performance. 
If I find the time, I > might start a trial run with the "classic" parameters (10,23,16,4095), > but not this week :) > > Best Regards > Heiko > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From sitaramc at gmail.com Thu Apr 21 05:03:24 2016 From: sitaramc at gmail.com (Sitaram Chamarty) Date: Thu, 21 Apr 2016 14:33:24 +0530 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <058e7f38-9689-4bc6-7417-7c20f5e3c022@enkore.de> References: <57186AC2.1090004@gmail.com> <571878B1.6090208@conveystudio.com> <57187A03.7000002@gmail.com> <058e7f38-9689-4bc6-7417-7c20f5e3c022@enkore.de> Message-ID: <5718975C.6040407@gmail.com> On 04/21/2016 01:58 PM, public at enkore.de wrote: > Since Borg doesn't know the structure of a tar file my guess is that > changed metadata that's stored in-line with file data will make > deduplication of the file data impossible for files that are smaller > than 1-2 avg chunk sizes (>2 MB). Oh very nice; I had not thought of this but it makes perfect sense! > For this specific use case I'd recommend using the old chunker params > which should allow better deduplication; still: unchanged, small files > with updated metadata won't deduplicate. > > When deduplicating actual file systems this doesn't seem to be as > troublesome ; my guess here is that most file systems tend to put inodes > (with the often-changing metadata) in one place and file data in > another, hence metadata updates don't affect data deduplication as much. My guess would be that borg itself "knows" what is metadata and what is file data, and has different storage/dedup mechanisms for them. regards sitaram > > Still, for optimal granularity you'll want Borg to be able to tell files > apart. > > Cheers, Marian > > On 21.04.2016 09:11, heiko.helmle at horiba.com wrote: >>> Borg isn't capable of handling duplicate pieces inside a file. >>> >>> oop; my apologies. I reacted too fast and did not realise that borg was >>> getting an uncompressed file. >>> >>> I assume this means borg gets the file via STDIN? If so, maybe it has >>> something to do with STDIN being less amenable to dedup? >>> >>> sorry again for my previous (useless) mail! >> >> I'm seeing something similar here. I used attic (and many early borg >> revisions) to backup a few work VMs here. A slightly bigger one (about >> 100Gigs) was backupped daily. This backup took about half an hour (with >> -C lzma) and resulted in about 1-2 Gigs of new data (deduped and >> compressed) each time. >> >> Now with recent borg, the amount of new data jumped to about 17-20Gigs >> per day and it took much longer (i had to scale back to use zlib as >> compression to have the backup finnish before the LVM snapshot filled >> up). This indicates that the deduplication engine took a hit along the >> way and feeds much more data to lzma, which makes the overall runtime >> slower. >> >> This *might* coincide with the change in the default chunker params, but >> I'm not sure. Unfortunately I didn't pay attention as to which release >> actually started the drop in dedup performance. 
If I find the time, I >> might start a trial run with the "classic" parameters (10,23,16,4095), >> but not this week :) >> >> Best Regards >> Heiko >> >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup >> > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From lists at localguru.de Thu Apr 21 05:50:29 2016 From: lists at localguru.de (Marcus Schopen) Date: Thu, 21 Apr 2016 11:50:29 +0200 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: References: Message-ID: <1461232229.21215.3.camel@cosmo.binux.de> Hi, Am Donnerstag, den 21.04.2016, 05:45 +0000 schrieb William Gogan: > I'm trying borgbackup out, and so far it's performing really well in > almost all tests. > > > The one item where I'm seeing odd performance is for tar files. It > appears not to be deduplicating except within the current archive. > > > Background: Our VM tool kicks out a .tar file per container. It > compresses (lzo) the .tar. For discussion purposes, let's pretend it's > called vm.tar.lzo > > > So, I call `lzop vm.tar.lzo -d --to-stdout | borg create --verbose > --stats --progress --chunker-params 19,23,21,4095 --compression > lz4 /dir/borg/::2016-04-21-01-38 -` - I assumed lzo would wreck borg's > dedupe, so I pipe in the decompressed version. > > > Even if I generate a .tar file, then immediately generate a second one > (within <30s of the first), and then feed them both to borgbackup, it > shows about 80% of the blocks as non-duplicates despite 99% of the > files not having changed on the disk (and so should not have changed > in the .tar) > > > I looked at the FAQ, and it does make specific mention of doing well > at VM backups, so I'm wondering if I'm doing something wrong. > > > What can I do to get better dedupe performance? I considered adding > tar to the mix and untarring the file before piping it to borg, but > that seems suboptimal. > > > If anyone has any suggestions, I'd welcome them! I have a similar deduplication problem with partclone images I'd like to backup. Andy ideas of another dumper (instead of raw dd)? Ciao Marcus From public at enkore.de Thu Apr 21 07:41:08 2016 From: public at enkore.de (public at enkore.de) Date: Thu, 21 Apr 2016 13:41:08 +0200 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <5718975C.6040407@gmail.com> References: <57186AC2.1090004@gmail.com> <571878B1.6090208@conveystudio.com> <57187A03.7000002@gmail.com> <058e7f38-9689-4bc6-7417-7c20f5e3c022@enkore.de> <5718975C.6040407@gmail.com> Message-ID: <5eeac430-e531-f24d-d853-a7ffb6a48862@enkore.de> On 21.04.2016 11:03, Sitaram Chamarty wrote: > On 04/21/2016 01:58 PM, public at enkore.de wrote: >> Since Borg doesn't know the structure of a tar file my guess is that >> changed metadata that's stored in-line with file data will make >> deduplication of the file data impossible for files that are smaller >> than 1-2 avg chunk sizes (>2 MB). > > Oh very nice; I had not thought of this but it makes perfect sense! > >> For this specific use case I'd recommend using the old chunker params >> which should allow better deduplication; still: unchanged, small files >> with updated metadata won't deduplicate. 
>> >> When deduplicating actual file systems this doesn't seem to be as >> troublesome ; my guess here is that most file systems tend to put inodes >> (with the often-changing metadata) in one place and file data in >> another, hence metadata updates don't affect data deduplication as much. > > My guess would be that borg itself "knows" what is metadata and what is > file data, and has different storage/dedup mechanisms for them. My bad, I meant to write "deduplicating actual file system *images*". When Borg makes archives from a file system (not FS image) then the physical layout of the FS doesn't matter, it reads files/dirs with normal APIs like most programs would do. File contents directly go into chunks, metadata goes into the item (=files, dirs) stream, which is chunked with a different, very fine-grained chunker. Cheers, Marian > > regards > sitaram > >> >> Still, for optimal granularity you'll want Borg to be able to tell files >> apart. >> >> Cheers, Marian >> >> On 21.04.2016 09:11, heiko.helmle at horiba.com wrote: >>>> Borg isn't capable of handling duplicate pieces inside a file. >>>> >>>> oop; my apologies. I reacted too fast and did not realise that borg was >>>> getting an uncompressed file. >>>> >>>> I assume this means borg gets the file via STDIN? If so, maybe it has >>>> something to do with STDIN being less amenable to dedup? >>>> >>>> sorry again for my previous (useless) mail! >>> >>> I'm seeing something similar here. I used attic (and many early borg >>> revisions) to backup a few work VMs here. A slightly bigger one (about >>> 100Gigs) was backupped daily. This backup took about half an hour (with >>> -C lzma) and resulted in about 1-2 Gigs of new data (deduped and >>> compressed) each time. >>> >>> Now with recent borg, the amount of new data jumped to about 17-20Gigs >>> per day and it took much longer (i had to scale back to use zlib as >>> compression to have the backup finnish before the LVM snapshot filled >>> up). This indicates that the deduplication engine took a hit along the >>> way and feeds much more data to lzma, which makes the overall runtime >>> slower. >>> >>> This *might* coincide with the change in the default chunker params, but >>> I'm not sure. Unfortunately I didn't pay attention as to which release >>> actually started the drop in dedup performance. If I find the time, I >>> might start a trial run with the "classic" parameters (10,23,16,4095), >>> but not this week :) >>> >>> Best Regards >>> Heiko >>> >>> >>> _______________________________________________ >>> Borgbackup mailing list >>> Borgbackup at python.org >>> https://mail.python.org/mailman/listinfo/borgbackup >>> >> >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup >> > From dastapov at gmail.com Thu Apr 21 07:42:38 2016 From: dastapov at gmail.com (Dmitry Astapov) Date: Thu, 21 Apr 2016 12:42:38 +0100 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <5eeac430-e531-f24d-d853-a7ffb6a48862@enkore.de> References: <57186AC2.1090004@gmail.com> <571878B1.6090208@conveystudio.com> <57187A03.7000002@gmail.com> <058e7f38-9689-4bc6-7417-7c20f5e3c022@enkore.de> <5718975C.6040407@gmail.com> <5eeac430-e531-f24d-d853-a7ffb6a48862@enkore.de> Message-ID: If there is a good fuse mounter for tar files, you can achieve better results mounting them and archiving from there. 
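One candidate for that is archivemount. A rough sketch of the idea (untested here, and archivemount may be slow on archives of this size):

  lzop -d vm.tar.lzo          # leaves vm.tar next to the .lzo
  mkdir -p /mnt/vmtar
  archivemount vm.tar /mnt/vmtar
  borg create --stats --compression lz4 /dir/borg::2016-04-21 /mnt/vmtar
  fusermount -u /mnt/vmtar

That way borg sees individual files again instead of one concatenated stream, so small unchanged files can deduplicate as whole chunks.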
On Thu, Apr 21, 2016 at 12:41 PM, wrote: > On 21.04.2016 11:03, Sitaram Chamarty wrote: > > On 04/21/2016 01:58 PM, public at enkore.de wrote: > >> Since Borg doesn't know the structure of a tar file my guess is that > >> changed metadata that's stored in-line with file data will make > >> deduplication of the file data impossible for files that are smaller > >> than 1-2 avg chunk sizes (>2 MB). > > > > Oh very nice; I had not thought of this but it makes perfect sense! > > > >> For this specific use case I'd recommend using the old chunker params > >> which should allow better deduplication; still: unchanged, small files > >> with updated metadata won't deduplicate. > >> > >> When deduplicating actual file systems this doesn't seem to be as > >> troublesome ; my guess here is that most file systems tend to put inodes > >> (with the often-changing metadata) in one place and file data in > >> another, hence metadata updates don't affect data deduplication as much. > > > > My guess would be that borg itself "knows" what is metadata and what is > > file data, and has different storage/dedup mechanisms for them. > > My bad, I meant to write "deduplicating actual file system *images*". > > When Borg makes archives from a file system (not FS image) then the > physical layout of the FS doesn't matter, it reads files/dirs with > normal APIs like most programs would do. > > File contents directly go into chunks, metadata goes into the item > (=files, dirs) stream, which is chunked with a different, very > fine-grained chunker. > > Cheers, Marian > > > > > regards > > sitaram > > > >> > >> Still, for optimal granularity you'll want Borg to be able to tell files > >> apart. > >> > >> Cheers, Marian > >> > >> On 21.04.2016 09:11, heiko.helmle at horiba.com wrote: > >>>> Borg isn't capable of handling duplicate pieces inside a file. > >>>> > >>>> oop; my apologies. I reacted too fast and did not realise that borg > was > >>>> getting an uncompressed file. > >>>> > >>>> I assume this means borg gets the file via STDIN? If so, maybe it has > >>>> something to do with STDIN being less amenable to dedup? > >>>> > >>>> sorry again for my previous (useless) mail! > >>> > >>> I'm seeing something similar here. I used attic (and many early borg > >>> revisions) to backup a few work VMs here. A slightly bigger one (about > >>> 100Gigs) was backupped daily. This backup took about half an hour (with > >>> -C lzma) and resulted in about 1-2 Gigs of new data (deduped and > >>> compressed) each time. > >>> > >>> Now with recent borg, the amount of new data jumped to about 17-20Gigs > >>> per day and it took much longer (i had to scale back to use zlib as > >>> compression to have the backup finnish before the LVM snapshot filled > >>> up). This indicates that the deduplication engine took a hit along the > >>> way and feeds much more data to lzma, which makes the overall runtime > >>> slower. > >>> > >>> This *might* coincide with the change in the default chunker params, > but > >>> I'm not sure. Unfortunately I didn't pay attention as to which release > >>> actually started the drop in dedup performance. 
If I find the time, I > >>> might start a trial run with the "classic" parameters (10,23,16,4095), > >>> but not this week :) > >>> > >>> Best Regards > >>> Heiko > >>> > >>> > >>> _______________________________________________ > >>> Borgbackup mailing list > >>> Borgbackup at python.org > >>> https://mail.python.org/mailman/listinfo/borgbackup > >>> > >> > >> > >> _______________________________________________ > >> Borgbackup mailing list > >> Borgbackup at python.org > >> https://mail.python.org/mailman/listinfo/borgbackup > >> > > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Dmitry Astapov -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Thu Apr 21 09:02:31 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 21 Apr 2016 15:02:31 +0200 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: References: Message-ID: <5718CF67.1000703@waldmann-edv.de> > Background: Our VM tool kicks out a .tar file per container. It > compresses (lzo) the .tar. For discussion purposes, let's pretend it's > called vm.tar.lzo Please provide a tar listing so we can see how many / how big files are in there. Without that, one can only speculate... Also, the specific format of the "vm disk file(s)" in here would be interesting. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From william at conveystudio.com Thu Apr 21 10:40:28 2016 From: william at conveystudio.com (William Gogan) Date: Thu, 21 Apr 2016 08:40:28 -0600 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <058e7f38-9689-4bc6-7417-7c20f5e3c022@enkore.de> References: <57186AC2.1090004@gmail.com> <571878B1.6090208@conveystudio.com> <57187A03.7000002@gmail.com> <058e7f38-9689-4bc6-7417-7c20f5e3c022@enkore.de> Message-ID: <5718E65C.5050501@conveystudio.com> public at enkore.de wrote: > For this specific use case I'd recommend using the old chunker params > which should allow better deduplication; still: unchanged, small files > with updated metadata won't deduplicate. For the sake of testing, I re-ran my same experiment (3 .tar files of the same system, taken ~30 seconds apart, piped to borg) *without* any chunker params, to let the defaults run. I was getting 10% deduplication when using the explicit chunker params, and it's still right at 10% using the default params. However, note that the data is exactly as you predicted - the .tar file comprises almost entirely of small files (the .tar file contains the / directory of a brand-new redhat system with minimal installed services.. all files are small). Total deduplication is running around 20%. So, this test (sample size=3) proved your expectation about small-file behavior was accurate. I am going to now try mounting the tar as suggested in another comment, and will report back on what I get out of that. > > -- William Gogan Convey Studio / Custom. Digital. Branding. 719.278.3736 conveystudio.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anarcat at debian.org Thu Apr 21 10:42:28 2016 From: anarcat at debian.org (Antoine =?utf-8?Q?Beaupr=C3=A9?=) Date: Thu, 21 Apr 2016 10:42:28 -0400 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <5718E65C.5050501@conveystudio.com> References: <57186AC2.1090004@gmail.com> <571878B1.6090208@conveystudio.com> <57187A03.7000002@gmail.com> <058e7f38-9689-4bc6-7417-7c20f5e3c022@enkore.de> <5718E65C.5050501@conveystudio.com> Message-ID: <87bn53t44r.fsf@angela.anarcat.ath.cx> On 2016-04-21 10:40:28, William Gogan wrote: > public at enkore.de wrote: >> For this specific use case I'd recommend using the old chunker params >> which should allow better deduplication; still: unchanged, small files >> with updated metadata won't deduplicate. > For the sake of testing, I re-ran my same experiment (3 .tar files of > the same system, taken ~30 seconds apart, piped to borg) *without* any > chunker params, to let the defaults run. I was getting 10% deduplication > when using the explicit chunker params, and it's still right at 10% > using the default params. > > However, note that the data is exactly as you predicted - the .tar file > comprises almost entirely of small files (the .tar file contains the / > directory of a brand-new redhat system with minimal installed services.. > all files are small). Total deduplication is running around 20%. > > So, this test (sample size=3) proved your expectation about small-file > behavior was accurate. > > I am going to now try mounting the tar as suggested in another comment, > and will report back on what I get out of that. It would be interesting to have unit / perf tests for this stuff. A. -- If Christ were here there is one thing he would not be -- a Christian. - Mark Twain From william at conveystudio.com Thu Apr 21 11:15:39 2016 From: william at conveystudio.com (William Gogan) Date: Thu, 21 Apr 2016 09:15:39 -0600 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <5718CF67.1000703@waldmann-edv.de> References: <5718CF67.1000703@waldmann-edv.de> Message-ID: <5718EE9B.7080901@conveystudio.com> Thomas Waldmann wrote: >> Background: Our VM tool kicks out a .tar file per container. It >> compresses (lzo) the .tar. For discussion purposes, let's pretend it's >> called vm.tar.lzo > > Please provide a tar listing so we can see how many / how big files > are in there. Without that, one can only speculate... I can't give you a listing, but I can tell you this, which should help: This tar is created (using the command below) against a brand-new Redhat OS install with no user data on it yet, and minimal services. It is approx 1GB, and is mostly small files of type OS. I apologize that this isn't exactly what you asked for, but I'm not permitted to give a specific listing of data due to some work policies, even though this is just a blank install. > > Also, the specific format of the "vm disk file(s)" in here would be > interesting. 
The VM 'disk file' is actually just a straight tar file, created with the following process: 1) LVM snapshot is taken 2) Tar is created against the snapshot using `tar cpf - --totals --sparse --numeric-owner --acls --xattrs --xattrs-include=user.* --xattrs-include=security.capability --warning=no-xattr-write --one-file-system --warning=no-file-ignored --directory=/storage/dump/vzdump-lxc-111-2016_04_21-10_39_19.tmp ./etc/vzdump/pct.conf --directory=/mnt/vzsnap0 --no-anchored --exclude=lost+found --anchored --exclude=./var/log/?* --exclude=./tmp/?* --exclude=./var/tmp/?* --exclude=./var/run/?*.pid ./` 3) That tar file is compressed using lzo. As previously mentioned, I lzop -d the file passing it to borg. Pretty much the only benefit this .tar file gives me, vs pointing borg against the mounted LVM snapshot itself, is that should a disaster occur, the recovery process relies on providing the VM server with the .tar file of each VM. A potential workaround to this would be to have borg work on the LVM mount itself, and then, during the restore process, I *might* (subject to testing) be able to run this tar command against the borg restore, in order to re-create the .tar file expected 'on demand' that can be consumed by the VM server. This feels a little wiggly, but I'll do some checking if all else fails. -- William Gogan Convey Studio / Custom. Digital. Branding. 719.278.3736 conveystudio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Thu Apr 21 13:44:28 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 21 Apr 2016 19:44:28 +0200 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <5718EE9B.7080901@conveystudio.com> References: <5718CF67.1000703@waldmann-edv.de> <5718EE9B.7080901@conveystudio.com> Message-ID: <5719117C.1070007@waldmann-edv.de> >>> Background: Our VM tool kicks out a .tar file per container. I was assuming it was kind of a disk image or pieces of a disk image, plus config file. >> Please provide a tar listing so we can see how many / how big files >> are in there. Without that, one can only speculate... > I can't give you a listing, but I can tell you this, which should help: > This tar is created (using the command below) against a brand-new Redhat > OS install with no user data on it yet, and minimal services. It is > approx 1GB, and is mostly small files of type OS. OK, then the problem is as already analyzed. The default granularity of 2MB of the chunker does not match that kind of input. If you feed a lot of single, small files into borg, the chunks are determined automatically: each file of <512K will be automatically 1 chunk. But if you kind of concatenate them all + intersect them with (changing?) metadata, these boundaries do not establish and likely there is always some change in the metadata. So, it looks like you could just drop the tar step completely and just directly use borg to make it behave like you want. > I apologize that this > isn't exactly what you asked for, but I'm not permitted to give a > specific listing of data due to some work policies, even though this is > just a blank install. No problem, I was assuming it was a different kind of listing, but it is clear enough now. > Pretty much the only benefit this .tar file gives me, vs pointing borg > against the mounted LVM snapshot itself, is that should a disaster > occur, the recovery process relies on providing the VM server with the > .tar file of each VM. 
OK, so it's kind of an integration issue. > A potential workaround to this would be to have borg work on the LVM > mount itself, and then, during the restore process, I *might* (subject > to testing) be able to run this tar command against the borg restore, in > order to re-create the .tar file expected 'on demand' that can be > consumed by the VM server. This feels a little wiggly, but I'll do some > checking if all else fails. There could be 2 other ways of solving this: a) ask the VM/container sw provider to integrate borg b) we could have a reader/chunker that reads from tar files instead of the filessystem alternatively to our normal chunker. -- GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt. From william at conveystudio.com Thu Apr 21 14:00:02 2016 From: william at conveystudio.com (William Gogan) Date: Thu, 21 Apr 2016 12:00:02 -0600 Subject: [Borgbackup] Deduplication of tar files - doesn't seem to be giving good performance In-Reply-To: <5719117C.1070007@waldmann-edv.de> References: <5718CF67.1000703@waldmann-edv.de> <5718EE9B.7080901@conveystudio.com> <5719117C.1070007@waldmann-edv.de> Message-ID: <57191522.10106@conveystudio.com> Thomas Waldmann wrote: >>>> Background: Our VM tool kicks out a .tar file per container. > > I was assuming it was kind of a disk image or pieces of a disk image, > plus config file. My apologies - I can see how I wasn't quite specific enough there. The VM tool (Proxmox) actually supports containers (LXC/OpenVZ) that utilize the tar process we have been discussing, and also separately supports KVM virtualization which does result in a disk image type approach (specifically, qcow2 file format as the default) with a config file. My focus has been on the simpler .tar use case partly because we can use r1soft's CDP product for anything in KVM land (although I would like to eventually also have borg backing up our qcow2 files as well, but that's a different discussion I think) > > OK, so it's kind of an integration issue. Precisely. > >> A potential workaround to this would be to have borg work on the LVM >> mount itself, and then, during the restore process, I *might* (subject >> to testing) be able to run this tar command against the borg restore, in >> order to re-create the .tar file expected 'on demand' that can be >> consumed by the VM server. This feels a little wiggly, but I'll do some >> checking if all else fails. > > There could be 2 other ways of solving this: > a) ask the VM/container sw provider to integrate borg I am going to make an attempt to modify their backup tool to support calling borg instead of tar. They have a limited series of hooks during the backup process that can call an external helper script, and it appears one event is emitted right after all snapshot operations are completed and the system has made itself ready for what would normally be the .tar operation. If this is successful, I'll provide this to the community of course so that other Proxmox (the VM software) users have the capability of utilizing borg, at least for LXC/OpenVZ containers (proxmox does a different thing for KVM, as it utilizes qcow2 files usually [but not always]) > b) we could have a reader/chunker that reads from tar files instead of > the filessystem alternatively to our normal chunker. This would be a nice thing to have from my side - however I haven't read through the borg code enough to know if it's outside my abilities (it probably is!). 
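For reference, the borg call replacing the tar step in such a hook would presumably look something like this (archive name invented; mount point and repository path taken from the commands earlier in the thread), plus --exclude options mirroring the tar exclude list:

  borg create --stats --compression lz4 /dir/borg::vzdump-lxc-111-2016-04-21 /mnt/vzsnap0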
It would seem that tar is a pretty popular format that is used a fair amount (our other control panels, such as Plesk, etc, also export .tar files as snapshot backups). I suppose the counter argument would be that borg can operate on the direct files vs tar getting in the way, and 'backing up the backup' (borging the backup.tar file) might not be seen as useful to some. For me, it would be quite useful :) -- William Gogan Convey Studio / Custom. Digital. Branding. 719.278.3736 conveystudio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tgphelps50 at gmail.com Fri Apr 22 09:14:35 2016 From: tgphelps50 at gmail.com (Terry Phelps) Date: Fri, 22 Apr 2016 09:14:35 -0400 Subject: [Borgbackup] Question on "borg mount" performance Message-ID: First: I just discovered Borg a few days ago. I have been searching for some sort of usable deduplicating backup program. I am in the early stages of using Borg to back up VM disk images. Some of these images have lots in common with other disk images, so I have a lot to gain by using some form of deduplication. I was just now going to do some verification of the backups, by comparing the SHA1 sum of the original VM disk image (a clone, really, so it doesn't change) to the SHA1 sum of the backup. I first did a "borg mount", and did "sha1sum /mnt/.../disk.image" directly from the FUSE mount. This was horribly slow. I then did a "borg extract" of that same disk image, and that was much, much faster. Is this what you expected? Is the FUSE mount method known to be very slow when reading, say, a 10-GB file, as compared to extracting that same file? In any case, Borg is wonderfully documented, very complete in the features I need, and is written in Python. What could be better? -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Fri Apr 22 13:09:52 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 22 Apr 2016 19:09:52 +0200 Subject: [Borgbackup] Question on "borg mount" performance In-Reply-To: References: Message-ID: <571A5AE0.605@waldmann-edv.de> > I was just now going to do some verification of the backups, by > comparing the SHA1 sum of the original VM disk image (a clone, really, > so it doesn't change) to the SHA1 sum of the backup. I first did a "borg > mount", and did "sha1sum /mnt/.../disk.image" directly from the FUSE > mount. This was horribly slow. I then did a "borg extract" of that same > disk image, and that was much, much faster. > > Is this what you expected? Well, it is expected that FUSE mount is slower than extract. You didn't write how slow it was so that we could directly compare. Also the operations you did were slightly different: a) you ran sha1sum directly on the file in the fuse mount - that might behave differently than copying the file from the fuse mount to somewhere else with cp depending on how many and how big read calls sha1sum does. The FUSE mount also needs to do some preprocessing when you first enter an "archive directory" so it can know what file names it shall show you there. It also does some caching to avoid having to get same chunks multiple times from the repo b) extract doesn't do most of that In general, if you know that you will deal with a huge amount of data (like a full extract, like extracting gigabytes), borg extract is more efficient. > In any case, Borg is wonderfully documented, very complete in the > features I need, and is written in Python. What could be better?
:) -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From public at enkore.de Fri Apr 22 13:17:21 2016 From: public at enkore.de (public at enkore.de) Date: Fri, 22 Apr 2016 19:17:21 +0200 Subject: [Borgbackup] Question on "borg mount" performance In-Reply-To: <571A5AE0.605@waldmann-edv.de> References: <571A5AE0.605@waldmann-edv.de> Message-ID: sha256sum does 32k reads on my system; cp reads 131k blocks. I'm not sure, but at those sizes I wouldn't expect FUSE to aggregate reads to reduce call overhead. For a large file I would expect to be significantly slower than "borg extract", since it would decrypt/decompress a 2 MB chunk 63 times for 32k reads, and still 15 times for 131 kB reads, assuming those arrive at Borg with that sizing. That being said performance might be improved here by having a LRUCache of partially read chunks. Ref: https://github.com/borgbackup/borg/blob/master/borg/fuse.py#L224 Cheers, Marian On 22.04.2016 19:09, Thomas Waldmann wrote: >> I was just now going to do some verification of of the backups, by >> comparing the SHA1 sum of the original VM disk image ( a clone, really, >> so it doesn't chnage) to the SHA1 sum of the backup. I first did a "borg >> mount", and did "sha1sum /mnt/.../disk.image" directly from the FUSE >> mount. This was horribly slow. I then did a "borg extract" of that same >> disk image, and that was much, much faster. >> >> Is this what you expected? > > Well, it is expected that FUSE mount is slower than extract. > > You didn't write how slow it was so that we could directly compare. > > Also the operations you did were slightly different: > > a) your ran sha1sum directly on the file in the fuse mount - that might > behave differently than copying the file from the fuse mount to > somewhere else with cp depending on how many and how big read calls > sha1sum does. > > The FUSE mount also needs to do some preprocessing when you first enter > a "archive directory" so it can know what file names it shall show you > there. > > It also does some caching to avoid having to get same chunks multiple > times from the repo > > b) extract doesn't do most of that > > In general, if you know that you will deal with a huge amount of data > (like a full extract, like extracting gigabytes), borg extract is more > efficient. > >> In any case, Borg is wonderfully documented, very complete in the >> features I need, and is written in Python. What could be better? > > :) > From anarcat at debian.org Fri Apr 22 13:29:18 2016 From: anarcat at debian.org (Antoine =?utf-8?Q?Beaupr=C3=A9?=) Date: Fri, 22 Apr 2016 13:29:18 -0400 Subject: [Borgbackup] Question on "borg mount" performance In-Reply-To: References: Message-ID: <87pothsgb5.fsf@angela.anarcat.ath.cx> On 2016-04-22 09:14:35, Terry Phelps wrote: > First: I just discovered Borg a few days ago. I have been searching for > some sort of usable de-deduplicating backup program. I am in the early > stages of using Borg to backup VM disk images. Some of these images have > lots in common with other disk images, so I have a lot to gain by using > some form of de-deduplication. > > I was just now going to do some verification of of the backups, by > comparing the SHA1 sum of the original VM disk image ( a clone, really, so > it doesn't chnage) to the SHA1 sum of the backup. I first did a "borg > mount", and did "sha1sum /mnt/.../disk.image" directly from the FUSE mount. > This was horribly slow. I then did a "borg extract" of that same disk > image, and that was much, much faster. 
> > Is this what you expected? Is the FUSE mount method known to be very slow > when reading, say, a 10-GB file, as compared to extracting that same file? It seems to me we should have a `diff` command that would remove the need to extract big files or use the slower FUSE interface (if only because that needs to build a directory tree). The diff command is currently limited to diffing between archives, so right now you could create an archive and diff, but that seems like a huge overhead as well. I have opened a feature request about this: https://github.com/borgbackup/borg/issues/963 a. -- L'art n'est pas un bureau d'anthropom?trie. - L?o Ferr?, "Pr?face" From public at enkore.de Fri Apr 22 13:48:46 2016 From: public at enkore.de (public at enkore.de) Date: Fri, 22 Apr 2016 19:48:46 +0200 Subject: [Borgbackup] Question on "borg mount" performance In-Reply-To: References: Message-ID: <9395cbb0-ede9-d0f5-9164-e60bcb115621@enkore.de> By the way, unreleased Borg 1.1+ can list various hashes (SHA-XXX) with borg list, which makes it unnecessary to extract / mount archives just for shaXXXsum. Cheers, Marian From public at enkore.de Fri Apr 22 16:37:48 2016 From: public at enkore.de (public at enkore.de) Date: Fri, 22 Apr 2016 22:37:48 +0200 Subject: [Borgbackup] Question on "borg mount" performance In-Reply-To: References: <571A5AE0.605@waldmann-edv.de> Message-ID: <2c664a1f-b6f9-414c-01af-4596f58c0991@enkore.de> That's the problem. Patch created, see https://github.com/borgbackup/borg/pull/965 Cheers, Marian On 22.04.2016 19:17, public at enkore.de wrote: > sha256sum does 32k reads on my system; cp reads 131k blocks. I'm not > sure, but at those sizes I wouldn't expect FUSE to aggregate reads to > reduce call overhead. > > For a large file I would expect to be significantly slower than "borg > extract", since it would decrypt/decompress a 2 MB chunk 63 times for > 32k reads, and still 15 times for 131 kB reads, assuming those arrive at > Borg with that sizing. > > That being said performance might be improved here by having a LRUCache > of partially read chunks. > > Ref: https://github.com/borgbackup/borg/blob/master/borg/fuse.py#L224 > > Cheers, Marian > > On 22.04.2016 19:09, Thomas Waldmann wrote: >>> I was just now going to do some verification of of the backups, by >>> comparing the SHA1 sum of the original VM disk image ( a clone, really, >>> so it doesn't chnage) to the SHA1 sum of the backup. I first did a "borg >>> mount", and did "sha1sum /mnt/.../disk.image" directly from the FUSE >>> mount. This was horribly slow. I then did a "borg extract" of that same >>> disk image, and that was much, much faster. >>> >>> Is this what you expected? >> >> Well, it is expected that FUSE mount is slower than extract. >> >> You didn't write how slow it was so that we could directly compare. >> >> Also the operations you did were slightly different: >> >> a) your ran sha1sum directly on the file in the fuse mount - that might >> behave differently than copying the file from the fuse mount to >> somewhere else with cp depending on how many and how big read calls >> sha1sum does. >> >> The FUSE mount also needs to do some preprocessing when you first enter >> a "archive directory" so it can know what file names it shall show you >> there. 
>> >> It also does some caching to avoid having to get same chunks multiple >> times from the repo >> >> b) extract doesn't do most of that >> >> In general, if you know that you will deal with a huge amount of data >> (like a full extract, like extracting gigabytes), borg extract is more >> efficient. >> >>> In any case, Borg is wonderfully documented, very complete in the >>> features I need, and is written in Python. What could be better? >> >> :) >> > From steve at bstage.com Wed May 11 17:03:59 2016 From: steve at bstage.com (Steve Schow) Date: Wed, 11 May 2016 15:03:59 -0600 Subject: [Borgbackup] borg can't find executable on remote Message-ID: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> I am new to borg. I installed using pip on OSX and I installed the standalone binary on linux. From either platform I can init a new repo and create backups to them an mount them with fuse. What I really want to do now is run a backup job over the network from linux, that connects to the OSX borg on another machine and backs itself up. I enabled SSH on the OSX machine and tested that I can login from the linux machine using SSH (check) when I attempt to do the following from linux I get error: borg init me at 192.168.1.50:/Users/me/trial.borg Remote: bash: borg: command not found So looks like borg is not able to find the borg executable on the remote OSX host. On that host, its at /usr/local/bin/borg and works from there, but I just can?t seem to call it remotely and use borg in a networked fashion. What am I missing? From tw at waldmann-edv.de Wed May 11 17:17:14 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 11 May 2016 23:17:14 +0200 Subject: [Borgbackup] borg can't find executable on remote In-Reply-To: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> Message-ID: <5733A15A.7030705@waldmann-edv.de> Hi Steve, > I installed using pip on OSX and I installed the standalone binary > on linux. > > From either platform I can init a new repo and create backups to them an mount them with fuse. > > What I really want to do now is run a backup job over the network from linux, that connects to the OSX borg on another machine and backs itself up. > > I enabled SSH on the OSX machine and tested that I can login from the linux machine using SSH (check) > > when I attempt to do the following from linux I get error: > > borg init me at 192.168.1.50:/Users/me/trial.borg > Remote: bash: borg: command not found Sounds like the "borg" command is not in the path (or not executable or not readable?). > So looks like borg is not able to find the borg executable on the remote OSX host. > On that host, its at /usr/local/bin/borg and works from there, but I just can?t seem to call it remotely and use borg in a networked fashion. > > What am I missing? Is maybe /usr/local/bin not in the PATH of that user you use for ssh? 
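A quick way to check what the non-interactive shell on the remote side sees (same host/user as in the failing command):

  ssh me@192.168.1.50 'echo $PATH; command -v borg'

If that PATH lacks /usr/local/bin, that is the problem.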
Cheers, Thomas -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From steve at bstage.com Wed May 11 18:40:23 2016 From: steve at bstage.com (Steve Schow) Date: Wed, 11 May 2016 16:40:23 -0600 Subject: [Borgbackup] borg can't find executable on remote In-Reply-To: <5733A15A.7030705@waldmann-edv.de> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> Message-ID: On May 11, 2016, at 3:17 PM, Thomas Waldmann wrote: > Hi Steve, > >> when I attempt to do the following from linux I get error: >> >> borg init me at 192.168.1.50:/Users/me/trial.borg >> Remote: bash: borg: command not found > > Sounds like the "borg" command is not in the path (or not executable or not readable?). > It's definitely in the path, I have that dir in the path of the username being used and I am able to execute it when I login to command line using that user name on the remote machine directly. The file is readable and executable by all. is there any reason why borg's method of calling SSH would not get the user's env? From adrian.klaver at aklaver.com Wed May 11 18:43:59 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Wed, 11 May 2016 15:43:59 -0700 Subject: [Borgbackup] borg can't find executable on remote In-Reply-To: References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> Message-ID: <6c219a4a-bd5b-8629-6074-a038db74ab61@aklaver.com> On 05/11/2016 03:40 PM, Steve Schow wrote: > On May 11, 2016, at 3:17 PM, Thomas Waldmann wrote: > >> Hi Steve, >> >>> when I attempt to do the following from linux I get error: >>> >>> borg init me at 192.168.1.50:/Users/me/trial.borg >>> Remote: bash: borg: command not found >> >> Sounds like the "borg" command is not in the path (or not executable or not readable?). >> > > It's definitely in the path, I have that dir in the path of the username being used and I am able to execute it when I login to command line using that user name on the remote machine directly. The file is readable and executable by all. > > is there any reason why borg's method of calling SSH would not get the user's env? There is always --remote-path: https://borgbackup.readthedocs.io/en/stable/usage.html#borg-init "--remote-path PATH set remote path to executable (default: "borg")" > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From jdc at uwo.ca Wed May 11 19:42:52 2016 From: jdc at uwo.ca (Dan Christensen) Date: Wed, 11 May 2016 19:42:52 -0400 Subject: [Borgbackup] borg can't find executable on remote In-Reply-To: References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> Message-ID: <874ma4xitf.fsf@uwo.ca> On May 11, 2016, Steve Schow wrote: > is there any reason why borg's method of calling SSH would not get the user's env? "Login" shells run extra init scripts, such as .profile. Compare what happens if you login to the remote machine, and then type "echo $PATH": ssh user at remote echo $PATH [in the remote shell] vs. what happens if you run ssh user at remote 'echo $PATH' (Note the single quotes, so the $PATH variable gets expanded on the remote end.) The latter doesn't create a login shell. As was suggested, the --remote-path option will solve this for you. Or, you can set your path in your .bashrc file, which is run in both cases.
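Concretely, for the setup described in this thread that is either (host and paths taken from the earlier messages):

  borg init --remote-path /usr/local/bin/borg me@192.168.1.50:/Users/me/trial.borg

or, once, on the OS X side in ~/.bashrc:

  export PATH="/usr/local/bin:$PATH"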
Dan From steve at bstage.com Wed May 11 22:07:13 2016 From: steve at bstage.com (Steve Schow) Date: Wed, 11 May 2016 20:07:13 -0600 Subject: [Borgbackup] borg can't find executable on remote In-Reply-To: <874ma4xitf.fsf@uwo.ca> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> Message-ID: That makes sense. Do you know if there is a way to provide env settings to ssh calls? Sent from my iPhone > On May 11, 2016, at 5:42 PM, Dan Christensen wrote: > >> On May 11, 2016, Steve Schow wrote: >> >> is there any reason by borg?s method of calling SSH would not get the user?s env? > > "Login" shells run extra init scripts, such as .profile. Compare what > happens if you login to the remote machine, and then type "echo $PATH": > > ssh user at remote > echo $PATH [in the remote shell] > > vs. what happens if you run > > ssh user at remote 'echo $PATH' > > (Note the single quotes, so the $PATH variable gets expanded on the > remote end.) > > The latter doesn't create a login shell. > > As was suggested, the --remote-path option will solve this for you. > Or, you can set your path in your .bashrc file, which is run in both > cases. > > Dan From jdc at uwo.ca Wed May 11 22:24:54 2016 From: jdc at uwo.ca (Dan Christensen) Date: Wed, 11 May 2016 22:24:54 -0400 Subject: [Borgbackup] borg can't find executable on remote In-Reply-To: References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> Message-ID: <87a8jwvwqx.fsf@uwo.ca> The shell on the remote end should run the .bashrc init file, and environment variables can be set there. There are other ways too, but this is getting off topic for the list. Dan On May 11, 2016, Steve Schow wrote: > That makes sense. Do you know if there is a way to provide env settings to ssh calls? > > Sent from my iPhone > >> On May 11, 2016, at 5:42 PM, Dan Christensen wrote: >> >>> On May 11, 2016, Steve Schow wrote: >>> >>> is there any reason by borg?s method of calling SSH would not get the user?s env? >> >> "Login" shells run extra init scripts, such as .profile. Compare what >> happens if you login to the remote machine, and then type "echo $PATH": >> >> ssh user at remote >> echo $PATH [in the remote shell] >> >> vs. what happens if you run >> >> ssh user at remote 'echo $PATH' >> >> (Note the single quotes, so the $PATH variable gets expanded on the >> remote end.) >> >> The latter doesn't create a login shell. >> >> As was suggested, the --remote-path option will solve this for you. >> Or, you can set your path in your .bashrc file, which is run in both >> cases. >> >> Dan From steve at bstage.com Wed May 11 22:39:06 2016 From: steve at bstage.com (Steve Schow) Date: Wed, 11 May 2016 20:39:06 -0600 Subject: [Borgbackup] borg can't find executable on remote In-Reply-To: <87a8jwvwqx.fsf@uwo.ca> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> Message-ID: <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> I notice the error message that comes back is a bash error?so can someone confirm that borg is actually calling bash to indirectly call borg through the bash shell, as opposed to call borg directly? On May 11, 2016, at 8:24 PM, Dan Christensen wrote: > The shell on the remote end should run the .bashrc init file, > and environment variables can be set there. There are other ways > too, but this is getting off topic for the list. 
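One hedged caveat on the .bashrc route: many stock .bashrc files begin with a guard that returns immediately for non-interactive shells, and a PATH line placed below that guard never runs for the shell that borg's ssh call spawns. On the remote machine, putting the export above the guard is usually enough, roughly like this:

    # ~/.bashrc on the remote host -- keep the PATH export above any
    # "non-interactive -> return" guard so ssh'd commands see it too
    export PATH="/usr/local/bin:$PATH"

    case $- in
        *i*) ;;          # interactive shell: carry on with aliases etc.
        *)   return ;;   # non-interactive shell: stop here
    esac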
> > Dan > > On May 11, 2016, Steve Schow wrote: > >> That makes sense. Do you know if there is a way to provide env settings to ssh calls? >> >> Sent from my iPhone >> >>> On May 11, 2016, at 5:42 PM, Dan Christensen wrote: >>> >>>> On May 11, 2016, Steve Schow wrote: >>>> >>>> is there any reason by borg?s method of calling SSH would not get the user?s env? >>> >>> "Login" shells run extra init scripts, such as .profile. Compare what >>> happens if you login to the remote machine, and then type "echo $PATH": >>> >>> ssh user at remote >>> echo $PATH [in the remote shell] >>> >>> vs. what happens if you run >>> >>> ssh user at remote 'echo $PATH' >>> >>> (Note the single quotes, so the $PATH variable gets expanded on the >>> remote end.) >>> >>> The latter doesn't create a login shell. >>> >>> As was suggested, the --remote-path option will solve this for you. >>> Or, you can set your path in your .bashrc file, which is run in both >>> cases. >>> >>> Dan From steve at bstage.com Wed May 11 23:17:13 2016 From: steve at bstage.com (Steve Schow) Date: Wed, 11 May 2016 21:17:13 -0600 Subject: [Borgbackup] borg can't find executable on remote In-Reply-To: <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> Message-ID: I was wrong, I had my env in .bash_profile. I moved stuff to .bashrc and it works now. Looks like borg is indeed calling the bash shell to call the borg indirectly for exactly this reason. ?remote-path also worked, thanks for that. Now I just need to figure out how to automate all the passwords and I?ll be good to go. On May 11, 2016, at 8:39 PM, Steve Schow wrote: > I notice the error message that comes back is a bash error?so can someone confirm that borg is actually calling bash to indirectly call borg through the bash shell, as opposed to call borg directly? > > > > On May 11, 2016, at 8:24 PM, Dan Christensen wrote: > >> The shell on the remote end should run the .bashrc init file, >> and environment variables can be set there. There are other ways >> too, but this is getting off topic for the list. >> >> Dan >> >> On May 11, 2016, Steve Schow wrote: >> >>> That makes sense. Do you know if there is a way to provide env settings to ssh calls? >>> >>> Sent from my iPhone >>> >>>> On May 11, 2016, at 5:42 PM, Dan Christensen wrote: >>>> >>>>> On May 11, 2016, Steve Schow wrote: >>>>> >>>>> is there any reason by borg?s method of calling SSH would not get the user?s env? >>>> >>>> "Login" shells run extra init scripts, such as .profile. Compare what >>>> happens if you login to the remote machine, and then type "echo $PATH": >>>> >>>> ssh user at remote >>>> echo $PATH [in the remote shell] >>>> >>>> vs. what happens if you run >>>> >>>> ssh user at remote 'echo $PATH' >>>> >>>> (Note the single quotes, so the $PATH variable gets expanded on the >>>> remote end.) >>>> >>>> The latter doesn't create a login shell. >>>> >>>> As was suggested, the --remote-path option will solve this for you. >>>> Or, you can set your path in your .bashrc file, which is run in both >>>> cases. 
>>>> >>>> Dan > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From steve at bstage.com Fri May 13 02:56:05 2016 From: steve at bstage.com (Steve Schow) Date: Fri, 13 May 2016 00:56:05 -0600 Subject: [Borgbackup] logging questions In-Reply-To: References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> Message-ID: <92D4EA6A-CEFE-4489-8B54-F13309F5B407@bstage.com> New to borg here.. trying to figure out how to get some good logging for my automated backups. I want to see in the log all files that were actually backed up. I?m pretty sure I need these options: --list --stats --verbose --show-rc --filter=AMEds Can anyone confirm that I have the right options to see in my nightly log, a list of all files that were actually backed up, the final status, including the return status, so that I can catch a problem and send email notification. Also I notice from the docs there is this env variable BORG_LOGGING_CONF, but I couldn?t find any information about how to use it. thanks From steve at bstage.com Fri May 13 02:59:26 2016 From: steve at bstage.com (Steve Schow) Date: Fri, 13 May 2016 00:59:26 -0600 Subject: [Borgbackup] --numeric-owner In-Reply-To: References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> Message-ID: <449B12E6-1CCF-4EA3-8050-1B210A691939@bstage.com> Should I be using ?numeric-owner? I as using a similar option previously when I was using rsync to backup stuff, because I?m backing up remotely to a different system. The userid?s and usernames doesn?t always match up between the two systems. So what happens when I backup something up from machine A to machine B without this option? In the case of borg the data is encased in the repo, so is just a text string encoded with the user name or what? when I go to restore to the original machine A, will the stringified username be converted back to the correct UID on that machine again at that time? When would be the use case to use ?numeric-owner? From tw at waldmann-edv.de Fri May 13 08:10:02 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 13 May 2016 14:10:02 +0200 Subject: [Borgbackup] logging questions In-Reply-To: <92D4EA6A-CEFE-4489-8B54-F13309F5B407@bstage.com> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> <92D4EA6A-CEFE-4489-8B54-F13309F5B407@bstage.com> Message-ID: <5735C41A.4040201@waldmann-edv.de> > --list --stats --verbose --show-rc --filter=AMEds Looks ok. There's a complete list of status chars in the docs - but if you just want to see added/modified/errored files, directories, symlinks, that is it. > Can anyone confirm that I have the right options to see in my nightly log, > a list of all files that were actually backed up, the final status, > including the return status, so that I can catch a problem and send email > notification. If you use borg create in a script, you can also directly catch the return status (for bash it is $? ). --show-rc is primarily to see it later / for log analyzers (like in borgweb). 
> Also I notice from the docs there is this env variable BORG_LOGGING_CONF, > but I couldn?t find any information about how to use it. Hmm, didn't I link to the python logging module configuration file format description? It's an ini-like file and you only need it if you have rather special logging needs. By default, we just output the log to stderr and log with the level you give on the commandline (--verbose is same as --info -> INFO log level). -- GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt. From tw at waldmann-edv.de Fri May 13 08:26:55 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 13 May 2016 14:26:55 +0200 Subject: [Borgbackup] --numeric-owner In-Reply-To: <449B12E6-1CCF-4EA3-8050-1B210A691939@bstage.com> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> <449B12E6-1CCF-4EA3-8050-1B210A691939@bstage.com> Message-ID: <5735C80F.2030203@waldmann-edv.de> On 05/13/2016 08:59 AM, Steve Schow wrote: > Should I be using ?numeric-owner? I guess for borg create you usually do not need it, it just omits storing the user and group name and just stores the uid / gid. For borg extract, it depends on what you want. Keep numeric ids as in archive or map the archive user name / group name to the local ids? > I as using a similar option previously when I was using rsync to > backup stuff, because I?m backing up remotely to a different system. While the extract case is somehow similar, borg create does not have the problem that it just copies files 1:1 to a remote filesystem like rsync does (but stores them into an archive). > The userid?s and usernames doesn?t always match up between the two systems. > So what happens when I backup something up from machine A to machine B > without this option? Nothing. It just stores username, groupname, userid, groupid into the file metadata inside the archive. > when I go to restore to the original machine A, will the stringified > username be converted back to the correct UID on that machine again > at that time? Yes, the normal behaviour is to use username / group name. If you use --numeric-ids, then it will just use the uid/gid as they are in the archive. > When would be the use case to use ?numeric-owner? At extract time, if you have booted from a rescue system (e.g. CD or USB stick) that uses a different name -> id mapping than what you want to see in the restored files / dirs. -- GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt. From steve at bstage.com Fri May 13 19:20:40 2016 From: steve at bstage.com (Steve Schow) Date: Fri, 13 May 2016 17:20:40 -0600 Subject: [Borgbackup] logging questions In-Reply-To: <5735C41A.4040201@waldmann-edv.de> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> <92D4EA6A-CEFE-4489-8B54-F13309F5B407@bstage.com> <5735C41A.4040201@waldmann-edv.de> Message-ID: <14EC0595-AE0C-4926-9FB0-1D83FCBA6221@bstage.com> Ic. Yea the INI docs you mentioned was greek to me so I will just capture the stdout and stderr and use that instead. 
It would be nice if we could have that output go to an explicit log file so that when we decide to run borg ad-hoc and don?t remember to redirect stdout/stderr, we will see the output on the console and also have a log file to go look at later? I?m quite impressed by borg. Bye bye rsnapshot/rsync On May 13, 2016, at 6:10 AM, Thomas Waldmann wrote: >> --list --stats --verbose --show-rc --filter=AMEds > > Looks ok. There's a complete list of status chars in the docs - but if you just want to see added/modified/errored files, directories, symlinks, that is it. > >> Can anyone confirm that I have the right options to see in my nightly log, > > a list of all files that were actually backed up, the final status, > > including the return status, so that I can catch a problem and send email > > notification. > > If you use borg create in a script, you can also directly catch the return status (for bash it is $? ). --show-rc is primarily to see it later / for log analyzers (like in borgweb). > >> Also I notice from the docs there is this env variable BORG_LOGGING_CONF, > > but I couldn?t find any information about how to use it. > > Hmm, didn't I link to the python logging module configuration file format description? It's an ini-like file and you only need it if you have rather special logging needs. By default, we just output the log to stderr and log with the level you give on the commandline (--verbose is same as --info -> INFO log level). > > -- > > GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt. > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From sitaramc at gmail.com Sat May 14 01:31:00 2016 From: sitaramc at gmail.com (Sitaram Chamarty) Date: Sat, 14 May 2016 11:01:00 +0530 Subject: [Borgbackup] logging questions In-Reply-To: <5735C41A.4040201@waldmann-edv.de> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> <92D4EA6A-CEFE-4489-8B54-F13309F5B407@bstage.com> <5735C41A.4040201@waldmann-edv.de> Message-ID: <9d8ac8e0-ea02-2df1-ddd9-cbf36145a279@gmail.com> On Fri, May 13, 2016 at 5:40 PM, Thomas Waldmann wrote: >> Also I notice from the docs there is this env variable BORG_LOGGING_CONF, > >> but I couldn?t find any information about how to use it. > > Hmm, didn't I link to the python logging module configuration file format > description? It's an ini-like file and you only need it if you have rather > special logging needs. By default, we just output the log to stderr and log "special" is a very subjective word. Is there any other way to separate the "--list" of files (which are very nicely machine parsable) from the "--progress" output, which is clearly terminal oriented output and doesn't make sense to parse? I couldn't find one, without using the logging conf. Or foregoing one of those two outputs. regards sitaram PS: In case someone is wondering, I use the machine parsable output in a few different ways, but the most common is telling me how much time I spent on my different sub-projects during the past month/quarter etc. (Where "number of times files in directory X have changed" is considered loosely indicative of "time spent on project X"). 
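Since borg writes its log output to stderr, a plain "2>> logfile" in the cron script is enough to capture the --list lines for later analysis, and the per-file status lines are then easy to post-process. A rough sketch of the kind of counting described above, assuming lines of the form "A /path/to/file" and paths without embedded spaces:

    # count added/modified files per top-level directory in last night's log
    grep -E '^[AM] ' /var/log/borg-nightly.log \
        | awk '{ sub(/^\//, "", $2); split($2, p, "/"); print p[1] }' \
        | sort | uniq -c | sort -rn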
From sitaramc at gmail.com Sat May 14 01:06:31 2016 From: sitaramc at gmail.com (Sitaram Chamarty) Date: Sat, 14 May 2016 10:36:31 +0530 Subject: [Borgbackup] logging questions In-Reply-To: <14EC0595-AE0C-4926-9FB0-1D83FCBA6221@bstage.com> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> <92D4EA6A-CEFE-4489-8B54-F13309F5B407@bstage.com> <5735C41A.4040201@waldmann-edv.de> <14EC0595-AE0C-4926-9FB0-1D83FCBA6221@bstage.com> Message-ID: <9c70e69d-6b25-8082-7d3c-30a6903c26f2@gmail.com> On 05/14/2016 04:50 AM, Steve Schow wrote: > Ic. Yea the INI docs you mentioned was greek to me so I will just me too, but I -- eventually -- figured out a "bare minimum". Here's what I use (put this in some file and point the env var BORG_LOGGING_CONF to it when you run your borg command): [loggers] keys=root [handlers] keys=hand01 [formatters] keys=form01 [logger_root] level=NOTSET handlers=hand01 [handler_hand01] class=FileHandler level=DEBUG formatter=form01 args=('/root/borg.log', 'w') [formatter_form01] format=F1 %(asctime)s %(levelname)s %(message)s datefmt= class=logging.Formatter So many levels of indirection to basically set the output file name, the log level, and the format! PS: And now I'm looking at it, I don't know where that "F1" on the format line came from, or if it is even needed! From steve at bstage.com Sat May 14 10:27:57 2016 From: steve at bstage.com (Steve Schow) Date: Sat, 14 May 2016 08:27:57 -0600 Subject: [Borgbackup] logging questions In-Reply-To: <14EC0595-AE0C-4926-9FB0-1D83FCBA6221@bstage.com> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> <92D4EA6A-CEFE-4489-8B54-F13309F5B407@bstage.com> <5735C41A.4040201@waldmann-edv.de> <14EC0595-AE0C-4926-9FB0-1D83FCBA6221@bstage.com> Message-ID: Another reason to have a log file option on borg?is that?.if am going a backup to a remote machine and I want log files to end up on the remote machine. Don?t know if the INI method can do that or not? but? I would like it if log files could end up near the repo..including on a remote machine. On May 13, 2016, at 5:20 PM, Steve Schow wrote: > Ic. Yea the INI docs you mentioned was greek to me so I will just capture the stdout and stderr and use that instead. It would be nice if we could have that output go to an explicit log file so that when we decide to run borg ad-hoc and don?t remember to redirect stdout/stderr, we will see the output on the console and also have a log file to go look at later? > > I?m quite impressed by borg. Bye bye rsnapshot/rsync > > > > > > On May 13, 2016, at 6:10 AM, Thomas Waldmann wrote: > >>> --list --stats --verbose --show-rc --filter=AMEds >> >> Looks ok. There's a complete list of status chars in the docs - but if you just want to see added/modified/errored files, directories, symlinks, that is it. >> >>> Can anyone confirm that I have the right options to see in my nightly log, >>> a list of all files that were actually backed up, the final status, >>> including the return status, so that I can catch a problem and send email >>> notification. >> >> If you use borg create in a script, you can also directly catch the return status (for bash it is $? ). --show-rc is primarily to see it later / for log analyzers (like in borgweb). 
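For anyone trying a config like the one above: the only wiring needed is to point the environment variable at that file before running borg, and the "F1" is just literal text that the formatter prints at the start of every line, so it can be kept or dropped freely. A sketch, with the config path an arbitrary choice and the repo/paths placeholders:

    export BORG_LOGGING_CONF=/root/borg-logging.conf
    borg create --list --filter=AMEds "$REPO::$(date +%Y-%m-%d)" /data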
>> >>> Also I notice from the docs there is this env variable BORG_LOGGING_CONF, >>> but I couldn?t find any information about how to use it. >> >> Hmm, didn't I link to the python logging module configuration file format description? It's an ini-like file and you only need it if you have rather special logging needs. By default, we just output the log to stderr and log with the level you give on the commandline (--verbose is same as --info -> INFO log level). >> >> -- >> >> GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 >> Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt. >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From steve at bstage.com Sat May 14 10:27:57 2016 From: steve at bstage.com (Steve Schow) Date: Sat, 14 May 2016 08:27:57 -0600 Subject: [Borgbackup] logging questions In-Reply-To: <14EC0595-AE0C-4926-9FB0-1D83FCBA6221@bstage.com> References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> <92D4EA6A-CEFE-4489-8B54-F13309F5B407@bstage.com> <5735C41A.4040201@waldmann-edv.de> <14EC0595-AE0C-4926-9FB0-1D83FCBA6221@bstage.com> Message-ID: Another reason to have a log file option on borg?is that?.if am going a backup to a remote machine and I want log files to end up on the remote machine. Don?t know if the INI method can do that or not? but? I would like it if log files could end up near the repo..including on a remote machine. On May 13, 2016, at 5:20 PM, Steve Schow wrote: > Ic. Yea the INI docs you mentioned was greek to me so I will just capture the stdout and stderr and use that instead. It would be nice if we could have that output go to an explicit log file so that when we decide to run borg ad-hoc and don?t remember to redirect stdout/stderr, we will see the output on the console and also have a log file to go look at later? > > I?m quite impressed by borg. Bye bye rsnapshot/rsync > > > > > > On May 13, 2016, at 6:10 AM, Thomas Waldmann wrote: > >>> --list --stats --verbose --show-rc --filter=AMEds >> >> Looks ok. There's a complete list of status chars in the docs - but if you just want to see added/modified/errored files, directories, symlinks, that is it. >> >>> Can anyone confirm that I have the right options to see in my nightly log, >>> a list of all files that were actually backed up, the final status, >>> including the return status, so that I can catch a problem and send email >>> notification. >> >> If you use borg create in a script, you can also directly catch the return status (for bash it is $? ). --show-rc is primarily to see it later / for log analyzers (like in borgweb). >> >>> Also I notice from the docs there is this env variable BORG_LOGGING_CONF, >>> but I couldn?t find any information about how to use it. >> >> Hmm, didn't I link to the python logging module configuration file format description? It's an ini-like file and you only need it if you have rather special logging needs. By default, we just output the log to stderr and log with the level you give on the commandline (--verbose is same as --info -> INFO log level). 
>> >> -- >> >> GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 >> Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt. >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From sebelk at gmail.com Sat May 14 18:24:45 2016 From: sebelk at gmail.com (Sergio Belkin) Date: Sat, 14 May 2016 19:24:45 -0300 Subject: [Borgbackup] Lock error Message-ID: Hi, When I try to resume an interrupted "borg create" it outputs this error: Failed to create/acquire the lock /mnt/backup/sergio_backup/lock.exclusive (timeout). Why? I'm using borg 1.0.2 on Fedora 23. Thanks in advance! -- -- Sergio Belkin LPIC-2 Certified - http://www.lpi.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian.klaver at aklaver.com Sat May 14 18:36:46 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Sat, 14 May 2016 15:36:46 -0700 Subject: [Borgbackup] Lock error In-Reply-To: References: Message-ID: On 05/14/2016 03:24 PM, Sergio Belkin wrote: > Hi, > > When I try to resume an interrupted "borg create" it outputs this error: > > Failed to create/acquire the lock > /mnt/backup/sergio_backup/lock.exclusive (timeout). > > Why? http://borgbackup.readthedocs.io/en/stable/internals.html#lock-files > > I'm using borg 1.0.2 on Fedora 23. > > Thanks in advance! > > -- > -- > Sergio Belkin > LPIC-2 Certified - http://www.lpi.org > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From sebelk at gmail.com Sat May 14 19:10:57 2016 From: sebelk at gmail.com (Sergio Belkin) Date: Sat, 14 May 2016 20:10:57 -0300 Subject: [Borgbackup] Lock error In-Reply-To: References: Message-ID: 2016-05-14 19:36 GMT-03:00 Adrian Klaver : > On 05/14/2016 03:24 PM, Sergio Belkin wrote: > >> Hi, >> >> When I try to resume an interrupted "borg create" it outputs this error: >> >> Failed to create/acquire the lock >> /mnt/backup/sergio_backup/lock.exclusive (timeout). >> >> Why? >> > > http://borgbackup.readthedocs.io/en/stable/internals.html#lock-files Thanks! > > > >> I'm using borg 1.0.2 on Fedora 23. >> >> Thanks in advance! >> >> -- >> -- >> Sergio Belkin >> LPIC-2 Certified - http://www.lpi.org >> >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup >> >> > > -- > Adrian Klaver > adrian.klaver at aklaver.com > -- -- Sergio Belkin LPIC-2 Certified - http://www.lpi.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at bstage.com Sun May 15 15:32:05 2016 From: steve at bstage.com (Steve Schow) Date: Sun, 15 May 2016 13:32:05 -0600 Subject: [Borgbackup] Lock error In-Reply-To: References: Message-ID: So I ran into this problem also today while running my first very large backup. The size of my backup is close to 2TB. it got most of the way through and then froze for some reason. I hit ctrl-C and tried to launch it again and got these errors about locks. Because I was using a remote repository, I found that I had to issue the break-lock command on both computers before I could run my backup again. 
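For reference, the manual cleanup being described is the break-lock subcommand; pointed at the repo URL from the client it should release both the repository lock and the client-side cache lock, but it is only safe after confirming that no borg process is still alive on either machine. A sketch, with this thread's repo path standing in for a real one:

    # make sure nothing is actually running first
    pgrep -l borg                          # on the client
    ssh me@192.168.1.50 pgrep -l borg      # and on the repo host

    # then release the stale locks
    borg break-lock me@192.168.1.50:/Users/me/trial.borg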
I hope this design will get a revisit sometime in the future. In the meantime, what can I do to automate the handling of this problem? I want my backup on a cron job. Anyone have any suggestions for how to automatically clean up the lock problem when it occurs? Particularly since it has to be cleaned up no both machines? On May 14, 2016, at 4:36 PM, Adrian Klaver wrote: > On 05/14/2016 03:24 PM, Sergio Belkin wrote: >> Hi, >> >> When I try to resume an interrupted "borg create" it outputs this error: >> >> Failed to create/acquire the lock >> /mnt/backup/sergio_backup/lock.exclusive (timeout). >> >> Why? > > http://borgbackup.readthedocs.io/en/stable/internals.html#lock-files > From adrian.klaver at aklaver.com Sun May 15 15:52:40 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Sun, 15 May 2016 12:52:40 -0700 Subject: [Borgbackup] Lock error In-Reply-To: References: Message-ID: On 05/15/2016 12:32 PM, Steve Schow wrote: > So I ran into this problem also today while running my first very large backup. The size of my backup is close to 2TB. it got most of the way through and then froze for some reason. I hit ctrl-C and tried to launch it again and got these errors about locks. Because I was using a remote repository, I found that I had to issue the break-lock command on both computers before I could run my backup again. > > I hope this design will get a revisit sometime in the future. > > In the meantime, what can I do to automate the handling of this problem? I want my backup on a cron job. Anyone have any suggestions for how to automatically clean up the lock problem when it occurs? Particularly since it has to be cleaned up no both machines? My advice would be to not back up 2TB at one time, especially for the first backup. I would use: http://borgbackup.readthedocs.io/en/stable/usage.html#borg-create PATH paths to archive to feed it smaller portions at a time. In fact if it where me and depending on what you are backing up I would probably break the backup into smaller repos. > > > > On May 14, 2016, at 4:36 PM, Adrian Klaver wrote: > >> On 05/14/2016 03:24 PM, Sergio Belkin wrote: >>> Hi, >>> >>> When I try to resume an interrupted "borg create" it outputs this error: >>> >>> Failed to create/acquire the lock >>> /mnt/backup/sergio_backup/lock.exclusive (timeout). >>> >>> Why? >> >> http://borgbackup.readthedocs.io/en/stable/internals.html#lock-files >> > -- Adrian Klaver adrian.klaver at aklaver.com From steve at bstage.com Sun May 15 22:09:59 2016 From: steve at bstage.com (Steve Schow) Date: Sun, 15 May 2016 20:09:59 -0600 Subject: [Borgbackup] Lock error In-Reply-To: References: Message-ID: Ok, I will keep that in mind that but that didn?t really answer my question about how to automate the clean up of abandoned locks On May 15, 2016, at 1:52 PM, Adrian Klaver wrote: > On 05/15/2016 12:32 PM, Steve Schow wrote: >> So I ran into this problem also today while running my first very large backup. The size of my backup is close to 2TB. it got most of the way through and then froze for some reason. I hit ctrl-C and tried to launch it again and got these errors about locks. Because I was using a remote repository, I found that I had to issue the break-lock command on both computers before I could run my backup again. >> >> I hope this design will get a revisit sometime in the future. >> >> In the meantime, what can I do to automate the handling of this problem? I want my backup on a cron job. 
Anyone have any suggestions for how to automatically clean up the lock problem when it occurs? Particularly since it has to be cleaned up no both machines? > > My advice would be to not back up 2TB at one time, especially for the first backup. I would use: > > http://borgbackup.readthedocs.io/en/stable/usage.html#borg-create > > PATH paths to archive > > to feed it smaller portions at a time. In fact if it where me and depending on what you are backing up I would probably break the backup into smaller repos. > >> >> >> >> On May 14, 2016, at 4:36 PM, Adrian Klaver wrote: >> >>> On 05/14/2016 03:24 PM, Sergio Belkin wrote: >>>> Hi, >>>> >>>> When I try to resume an interrupted "borg create" it outputs this error: >>>> >>>> Failed to create/acquire the lock >>>> /mnt/backup/sergio_backup/lock.exclusive (timeout). >>>> >>>> Why? >>> >>> http://borgbackup.readthedocs.io/en/stable/internals.html#lock-files >>> >> > > > -- > Adrian Klaver > adrian.klaver at aklaver.com From steve at bstage.com Sun May 15 22:17:40 2016 From: steve at bstage.com (Steve Schow) Date: Sun, 15 May 2016 20:17:40 -0600 Subject: [Borgbackup] Lock error In-Reply-To: References: Message-ID: <1D99031E-1202-47AA-A09B-C1D49C3E6537@bstage.com> By the way, why is Borg not able to handle larger repos? I will definitely consider the idea of breaking it up into some smaller repos, but no question that is less then ideal for me. On May 15, 2016, at 1:52 PM, Adrian Klaver wrote: >> > > My advice would be to not back up 2TB at one time, especially for the first backup. I would use: > > http://borgbackup.readthedocs.io/en/stable/usage.html#borg-create > > PATH paths to archive > > to feed it smaller portions at a time. In fact if it where me and depending on what you are backing up I would probably break the backup into smaller repos. From steve at bstage.com Sun May 15 22:23:46 2016 From: steve at bstage.com (Steve Schow) Date: Sun, 15 May 2016 20:23:46 -0600 Subject: [Borgbackup] Lock error In-Reply-To: References: Message-ID: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> So here is one idea I have for automatically cleaning up the lock files. I would appreciate any feedback.. The only thing is that it sees like I had to go to the remote machine and break the lock there and it didn?t work over SSH. using bash shell script: function finish { borg break-lock $REPO } trap finish EXIT trap finish INT trap finish QUIT trap finish TERM borg create bla bla bla On May 15, 2016, at 1:32 PM, Steve Schow wrote: > So I ran into this problem also today while running my first very large backup. The size of my backup is close to 2TB. it got most of the way through and then froze for some reason. I hit ctrl-C and tried to launch it again and got these errors about locks. Because I was using a remote repository, I found that I had to issue the break-lock command on both computers before I could run my backup again. > > I hope this design will get a revisit sometime in the future. > > In the meantime, what can I do to automate the handling of this problem? I want my backup on a cron job. Anyone have any suggestions for how to automatically clean up the lock problem when it occurs? Particularly since it has to be cleaned up no both machines? > > > > On May 14, 2016, at 4:36 PM, Adrian Klaver wrote: > >> On 05/14/2016 03:24 PM, Sergio Belkin wrote: >>> Hi, >>> >>> When I try to resume an interrupted "borg create" it outputs this error: >>> >>> Failed to create/acquire the lock >>> /mnt/backup/sergio_backup/lock.exclusive (timeout). >>> >>> Why? 
>> >> http://borgbackup.readthedocs.io/en/stable/internals.html#lock-files >> > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From tw at waldmann-edv.de Mon May 16 04:18:04 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 16 May 2016 10:18:04 +0200 Subject: [Borgbackup] Lock error In-Reply-To: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> References: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> Message-ID: <5739823C.6080005@waldmann-edv.de> There is a problem why borg can't finish the backup (crashes or gets killed, leaves the lock behind) That is the root cause you need to find and fix. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From public at enkore.de Mon May 16 05:16:10 2016 From: public at enkore.de (public at enkore.de) Date: Mon, 16 May 2016 11:16:10 +0200 Subject: [Borgbackup] Lock error In-Reply-To: <1D99031E-1202-47AA-A09B-C1D49C3E6537@bstage.com> References: <1D99031E-1202-47AA-A09B-C1D49C3E6537@bstage.com> Message-ID: It can, it just doesn't get there for you because it crashes/locks up. That's what you want to look into. - Does it lock up (no further progress, no or little CPU usage)? - Or does it use CPU but no I/O (at all)? - If it still uses CPU and I/O, use --progress to see whether it's actually stuck or just takes some time. Are things like network mounts in the backup data set? If so, are those reliable? About data set size: 2 TB isn't that much, but if it contains many kinds of data (e.g. an operating system, documents and pictures) it may make sense to split that into multiple archives (not repos), just to have a better overview over the backups. Cheers, Marian On 16.05.2016 04:17, Steve Schow wrote: > > By the way, why is Borg not able to handle larger repos? I will > definitely consider the idea of breaking it up into some smaller > repos, but no question that is less then ideal for me. From adrian.klaver at aklaver.com Mon May 16 10:23:12 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Mon, 16 May 2016 07:23:12 -0700 Subject: [Borgbackup] Lock error In-Reply-To: References: Message-ID: <0fd56c8a-4b47-b25f-39da-e93d7fb297f9@aklaver.com> On 05/15/2016 07:09 PM, Steve Schow wrote: > Ok, I will keep that in mind that but that didn?t really answer my question about how to automate the clean up of abandoned locks I would say the issue would be determining whether they are really abandoned or not? The issue being overriding the locking mechanism when you should not and causing damage to the repo. Any automation would need to examine whether the lock is held by a legitimate Borg process or a defunct one and honestly I do not know to achieve that. > > On May 15, 2016, at 1:52 PM, Adrian Klaver wrote: > >> On 05/15/2016 12:32 PM, Steve Schow wrote: >>> So I ran into this problem also today while running my first very large backup. The size of my backup is close to 2TB. it got most of the way through and then froze for some reason. I hit ctrl-C and tried to launch it again and got these errors about locks. Because I was using a remote repository, I found that I had to issue the break-lock command on both computers before I could run my backup again. >>> >>> I hope this design will get a revisit sometime in the future. >>> >>> In the meantime, what can I do to automate the handling of this problem? I want my backup on a cron job. 
Anyone have any suggestions for how to automatically clean up the lock problem when it occurs? Particularly since it has to be cleaned up no both machines? >> >> My advice would be to not back up 2TB at one time, especially for the first backup. I would use: >> >> http://borgbackup.readthedocs.io/en/stable/usage.html#borg-create >> >> PATH paths to archive >> >> to feed it smaller portions at a time. In fact if it where me and depending on what you are backing up I would probably break the backup into smaller repos. >> >>> >>> >>> >>> On May 14, 2016, at 4:36 PM, Adrian Klaver wrote: >>> >>>> On 05/14/2016 03:24 PM, Sergio Belkin wrote: >>>>> Hi, >>>>> >>>>> When I try to resume an interrupted "borg create" it outputs this error: >>>>> >>>>> Failed to create/acquire the lock >>>>> /mnt/backup/sergio_backup/lock.exclusive (timeout). >>>>> >>>>> Why? >>>> >>>> http://borgbackup.readthedocs.io/en/stable/internals.html#lock-files >>>> >>> >> >> >> -- >> Adrian Klaver >> adrian.klaver at aklaver.com > -- Adrian Klaver adrian.klaver at aklaver.com From adrian.klaver at aklaver.com Mon May 16 10:29:53 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Mon, 16 May 2016 07:29:53 -0700 Subject: [Borgbackup] Lock error In-Reply-To: <1D99031E-1202-47AA-A09B-C1D49C3E6537@bstage.com> References: <1D99031E-1202-47AA-A09B-C1D49C3E6537@bstage.com> Message-ID: On 05/15/2016 07:17 PM, Steve Schow wrote: > > By the way, why is Borg not able to handle larger repos? I will definitely consider the idea of breaking it up into some smaller repos, but no question that is less then ideal for me. As a general principle any time I start using a new program I do the crawl-walk-run sequence, eg start small and work my way up. It is not really a matter of whether Borg can handle a large repo as determining the error that is causing Borg to stop, as others have pointed out. This often easier to do working with a subset of the data. > > On May 15, 2016, at 1:52 PM, Adrian Klaver wrote: > >>> >> >> My advice would be to not back up 2TB at one time, especially for the first backup. I would use: >> >> http://borgbackup.readthedocs.io/en/stable/usage.html#borg-create >> >> PATH paths to archive >> >> to feed it smaller portions at a time. In fact if it where me and depending on what you are backing up I would probably break the backup into smaller repos. > -- Adrian Klaver adrian.klaver at aklaver.com From steve at bstage.com Mon May 16 12:03:00 2016 From: steve at bstage.com (Steve Schow) Date: Mon, 16 May 2016 10:03:00 -0600 Subject: [Borgbackup] Lock error In-Reply-To: <5739823C.6080005@waldmann-edv.de> References: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> <5739823C.6080005@waldmann-edv.de> Message-ID: <48B0F672-771F-4490-B1C8-83CBC563FED3@bstage.com> It hasn?t happened again, so i can?t replicate it. Well actually I can easily..by simply hitting ctrl-C while its running it ends up in the broken lock state. However I have low confidence in using borg for automated backups at this point due to this. On May 16, 2016, at 2:18 AM, Thomas Waldmann wrote: > There is a problem why borg can't finish the backup (crashes or gets killed, leaves the lock behind) > > That is the root cause you need to find and fix. 
> > > -- > > > GPG ID: FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From steve at bstage.com Mon May 16 12:06:20 2016 From: steve at bstage.com (Steve Schow) Date: Mon, 16 May 2016 10:06:20 -0600 Subject: [Borgbackup] Lock error In-Reply-To: References: <1D99031E-1202-47AA-A09B-C1D49C3E6537@bstage.com> Message-ID: <98BDA426-DFC3-4001-9449-3D4F82C4461A@bstage.com> On May 16, 2016, at 3:16 AM, public at enkore.de wrote: > It can, it just doesn't get there for you because it crashes/locks up. It hasn?t happened again yet. > > > Are things like network mounts in the backup data set? If so, are those > reliable? yes > > About data set size: 2 TB isn't that much, but if it contains many kinds > of data (e.g. an operating system, documents and pictures) it may make > sense to split that into multiple archives (not repos), just to have a > better overview over the backups. I agree 2TB is nothing. I hear people telling me to break it up into smaller repos and I?m wondering why. If there is a problem with late repos?then frankly I don?t think it should be trusted with small repos. I?m impressed by Atiic?.and now borg. It does certain things wonderfully. However, this lock file orphaning is a real design flaw that impacts backup automation. The fact that people are warning against ?large? repos is a problem. The fact that I hear rumors about repo corruption being enough of a possibility to have the ?check? command?.is a problem? I?m still going to run with it for a while and see how ti goes. Its not the only form of backup I am using. But I am impressed by it in some ways and I?d like to see where it goes. it may no be ready for prime time yet as far as I can see. From adrian.klaver at aklaver.com Mon May 16 12:08:44 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Mon, 16 May 2016 09:08:44 -0700 Subject: [Borgbackup] Lock error In-Reply-To: <48B0F672-771F-4490-B1C8-83CBC563FED3@bstage.com> References: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> <5739823C.6080005@waldmann-edv.de> <48B0F672-771F-4490-B1C8-83CBC563FED3@bstage.com> Message-ID: On 05/16/2016 09:03 AM, Steve Schow wrote: > It hasn?t happened again, so i can?t replicate it. Well actually I can easily..by simply hitting ctrl-C while its running it ends up in the broken lock state. > > However I have low confidence in using borg for automated backups at this point due to this. Well I use it on multiple machines to back up data from even more machines via cron jobs that run day and night and the only problems I have had are self inflicted. Namely that I was not paying attention and had cron jobs walk over the same repo at the same time. The worse that happened was that particular backup failed, but the next succeeded. > > > On May 16, 2016, at 2:18 AM, Thomas Waldmann wrote: > >> There is a problem why borg can't finish the backup (crashes or gets killed, leaves the lock behind) >> >> That is the root cause you need to find and fix. 
>> >> >> -- >> >> >> GPG ID: FAF7B393 >> GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From steve at bstage.com Mon May 16 12:14:13 2016 From: steve at bstage.com (Steve Schow) Date: Mon, 16 May 2016 10:14:13 -0600 Subject: [Borgbackup] Lock error In-Reply-To: References: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> <5739823C.6080005@waldmann-edv.de> <48B0F672-771F-4490-B1C8-83CBC563FED3@bstage.com> Message-ID: <63EF23E5-83A1-410D-BC25-76B8E57183E8@bstage.com> it is remotely possible that the one time this happened to me before was exactly because of the cron thing because I had scheduled a cron to do it, while this job was still in progress?..and the next day it may have tried to run again while the first one was still running. Unfortunately I wasn?t redirecting logging yet when that ?might? have happened?so I don?t have any way to know. I will be implementing a mechanism to prevent borg from being able to run over itself.. I had to go back and delete several check point files and restart the backup using an archive name from the day before (since the name of the archive was using a datestamp), and its been running fine ever since?I hope almost done. back to my original question? how can I automatically clean up the lock files when this happens again? Did anyone have a look at my bash shell traps? On May 16, 2016, at 10:08 AM, Adrian Klaver wrote: > On 05/16/2016 09:03 AM, Steve Schow wrote: >> It hasn?t happened again, so i can?t replicate it. Well actually I can easily..by simply hitting ctrl-C while its running it ends up in the broken lock state. >> >> However I have low confidence in using borg for automated backups at this point due to this. > > Well I use it on multiple machines to back up data from even more machines via cron jobs that run day and night and the only problems I have had are self inflicted. Namely that I was not paying attention and had cron jobs walk over the same repo at the same time. The worse that happened was that particular backup failed, but the next succeeded. > >> >> >> On May 16, 2016, at 2:18 AM, Thomas Waldmann wrote: >> >>> There is a problem why borg can't finish the backup (crashes or gets killed, leaves the lock behind) >>> >>> That is the root cause you need to find and fix. 
>>> >>> >>> -- >>> >>> >>> GPG ID: FAF7B393 >>> GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 >>> >>> _______________________________________________ >>> Borgbackup mailing list >>> Borgbackup at python.org >>> https://mail.python.org/mailman/listinfo/borgbackup >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup >> > > > -- > Adrian Klaver > adrian.klaver at aklaver.com From steve at bstage.com Mon May 16 14:09:01 2016 From: steve at bstage.com (Steve Schow) Date: Mon, 16 May 2016 12:09:01 -0600 Subject: [Borgbackup] Lock error In-Reply-To: References: <1D99031E-1202-47AA-A09B-C1D49C3E6537@bstage.com> Message-ID: <45159B56-A6A6-49A3-92D9-0C56914D5308@bstage.com> So as an experiment I am trying two runs of borg at the same time to two different repos in parallel and one nice benefit is that I?m getting double the upload speed this way? Running several smaller repos in parallel could substantially decrease the amount of time ti takes to do the backup On May 16, 2016, at 3:16 AM, public at enkore.de wrote: > > About data set size: 2 TB isn't that much, but if it contains many kinds > of data (e.g. an operating system, documents and pictures) it may make > sense to split that into multiple archives (not repos), just to have a > better overview over the backups. From adrian.klaver at aklaver.com Mon May 16 16:12:59 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Mon, 16 May 2016 13:12:59 -0700 Subject: [Borgbackup] Lock error In-Reply-To: <45159B56-A6A6-49A3-92D9-0C56914D5308@bstage.com> References: <1D99031E-1202-47AA-A09B-C1D49C3E6537@bstage.com> <45159B56-A6A6-49A3-92D9-0C56914D5308@bstage.com> Message-ID: <86a5f2b8-d9be-aa66-e36a-4705d36f0c6d@aklaver.com> On 05/16/2016 11:09 AM, Steve Schow wrote: > So as an experiment I am trying two runs of borg at the same time to two different repos in parallel and one nice benefit is that I?m getting double the upload speed this way? Running several smaller repos in parallel could substantially decrease the amount of time ti takes to do the backup It would seem to come down to this: http://borgbackup.readthedocs.io/en/stable/internals.html There are a lot of moving parts involved in populating a repo with an archive, especially in the initial load. Spreading the work load across multiple repos helps, as you have seen. > > On May 16, 2016, at 3:16 AM, public at enkore.de wrote: > >> >> About data set size: 2 TB isn't that much, but if it contains many kinds >> of data (e.g. an operating system, documents and pictures) it may make >> sense to split that into multiple archives (not repos), just to have a >> better overview over the backups. 
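A bare-bones version of that parallel experiment, splitting the data across a few repos and letting the shell run the jobs side by side; the repo names and source paths are made up, each repo needs its own borg init beforehand, and it assumes ssh keys plus BORG_PASSPHRASE (or unencrypted repos) so nothing prompts interactively:

    #!/bin/bash
    HOST=me@192.168.1.50
    DATE=$(date +%Y-%m-%d)

    borg create --stats "$HOST:/Users/me/docs.borg::$DATE"   /data/docs   &
    borg create --stats "$HOST:/Users/me/photos.borg::$DATE" /data/photos &
    borg create --stats "$HOST:/Users/me/vms.borg::$DATE"    /data/vms    &

    wait    # block until all three background jobs have finished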
> > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From steve at bstage.com Mon May 16 17:30:50 2016 From: steve at bstage.com (Steve Schow) Date: Mon, 16 May 2016 15:30:50 -0600 Subject: [Borgbackup] Lock error In-Reply-To: <86a5f2b8-d9be-aa66-e36a-4705d36f0c6d@aklaver.com> References: <1D99031E-1202-47AA-A09B-C1D49C3E6537@bstage.com> <45159B56-A6A6-49A3-92D9-0C56914D5308@bstage.com> <86a5f2b8-d9be-aa66-e36a-4705d36f0c6d@aklaver.com> Message-ID: <1C5DC752-00E1-4785-A2C5-EE44E836F6AB@bstage.com> It doesn?t really have much to do with the locking stuff?it has to do with the fact that Borg is doing compression, encryption, and other CPU oriented tasks?might involve some local disk I/O in the process of handling that ?crunching? also? meanwhile its sending the results of that crunching over SSH to a remote borg what is sitting there waiting for it. I suspect borg is not multi threaded for this?so quite literally, while sending stuff over the net, its not doing any other crunching at the same time, and visa versa, which equates to a LOT of wait time?which means, hardware is not being utilized fully. But even if it does fork threads for handling these different tasks, if one or the other is slower then the other, then one of them will block and wait also. With more then one instance running at a time, we get the same result as multi threads would get?perhaps a little better because at the remote side its also writing to two different repos, so the two instances don?t block each other hardly at all. each one individually may having waiting occurring, but the other instances can take advantage of that to get some hardware time and so forth. I hit 100% CPU util with about 4 concurrent instances of borg running on my little linux NAS, the borg serve on my mac is only hitting about 50% util handling the server side of it. More than that and upload speed actually started to decrease. 4 instances is giving me triple the overall speed. its not clear to me if encryption is happening on the local side or the serve side (note, I would prefer it on the serve side FWIW). On May 16, 2016, at 2:12 PM, Adrian Klaver wrote: > On 05/16/2016 11:09 AM, Steve Schow wrote: >> So as an experiment I am trying two runs of borg at the same time to two different repos in parallel and one nice benefit is that I?m getting double the upload speed this way? Running several smaller repos in parallel could substantially decrease the amount of time ti takes to do the backup > > It would seem to come down to this: > > http://borgbackup.readthedocs.io/en/stable/internals.html > > There are a lot of moving parts involved in populating a repo with an archive, especially in the initial load. Spreading the work load across multiple repos helps, as you have seen. > >> >> On May 16, 2016, at 3:16 AM, public at enkore.de wrote: >> >>> >>> About data set size: 2 TB isn't that much, but if it contains many kinds >>> of data (e.g. an operating system, documents and pictures) it may make >>> sense to split that into multiple archives (not repos), just to have a >>> better overview over the backups. 
>> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup >> > > > -- > Adrian Klaver > adrian.klaver at aklaver.com From steve at bstage.com Mon May 16 17:39:22 2016 From: steve at bstage.com (Steve Schow) Date: Mon, 16 May 2016 15:39:22 -0600 Subject: [Borgbackup] Lock error In-Reply-To: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> References: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> Message-ID: <43DE3496-3439-4681-ACA0-6AA2392741CA@bstage.com> Actually I will improve it a bit more?I will use this same trap mechanism to make sure that I don?t accidentally get a long running backup running so long that the next cron doesn?t stomp on it? The wrapper bash script would be like this, which should make sure that as long as nobody is trying to call borg create directly but only using this wrapper, then it will not run twice at once?.and will clean up any lock if something gets orphaned that way. lock=/tmp/$REPO.lock if [ -d $lock ] then echo ?borg is currently running on $REPO, exiting? exit 1 fi function finish { borg break-lock $REPO rm -rf $lock } trap finish EXIT trap finish INT trap finish QUIT trap finish TERM mkdir -p $lock borg create bla bla bla $REPO On May 15, 2016, at 8:23 PM, Steve Schow wrote: > So here is one idea I have for automatically cleaning up the lock files. I would appreciate any feedback.. The only thing is that it sees like I had to go to the remote machine and break the lock there and it didn?t work over SSH. > > using bash shell script: > > function finish { > borg break-lock $REPO > } > trap finish EXIT > trap finish INT > trap finish QUIT > trap finish TERM > > borg create bla bla bla > > > > > > On May 15, 2016, at 1:32 PM, Steve Schow wrote: > >> So I ran into this problem also today while running my first very large backup. The size of my backup is close to 2TB. it got most of the way through and then froze for some reason. I hit ctrl-C and tried to launch it again and got these errors about locks. Because I was using a remote repository, I found that I had to issue the break-lock command on both computers before I could run my backup again. >> >> I hope this design will get a revisit sometime in the future. >> >> In the meantime, what can I do to automate the handling of this problem? I want my backup on a cron job. Anyone have any suggestions for how to automatically clean up the lock problem when it occurs? Particularly since it has to be cleaned up no both machines? >> >> >> >> On May 14, 2016, at 4:36 PM, Adrian Klaver wrote: >> >>> On 05/14/2016 03:24 PM, Sergio Belkin wrote: >>>> Hi, >>>> >>>> When I try to resume an interrupted "borg create" it outputs this error: >>>> >>>> Failed to create/acquire the lock >>>> /mnt/backup/sergio_backup/lock.exclusive (timeout). >>>> >>>> Why? 
>>> >>> http://borgbackup.readthedocs.io/en/stable/internals.html#lock-files >>> >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From tw at waldmann-edv.de Tue May 17 11:45:47 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 17 May 2016 17:45:47 +0200 Subject: [Borgbackup] Lock error In-Reply-To: <63EF23E5-83A1-410D-BC25-76B8E57183E8@bstage.com> References: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> <5739823C.6080005@waldmann-edv.de> <48B0F672-771F-4490-B1C8-83CBC563FED3@bstage.com> <63EF23E5-83A1-410D-BC25-76B8E57183E8@bstage.com> Message-ID: <573B3CAB.2030802@waldmann-edv.de> On 05/16/2016 06:14 PM, Steve Schow wrote: > I will be implementing a mechanism to prevent borg from being able to run over itself.. Guess for what the cache and repo lock were made for. :) > I had to go back and delete several check point files and You should not delete a checkpoint before having completed a backup of the same data set or you will remove chunks that were already transmitted into the repo and maybe re-usable by your next backup. So, first have a completed backup, then delete checkpoints (or any older backup archive of same data set) - it will be faster that way. > restart the backup using an archive name from the day before > (since the name of the archive was using a datestamp) There is no requirement to match some old archive / checkpoint archive name to "continue / resume" it. If you just run a new backup of same data set with any name BEFORE deleting the checkpoint archive, it will detect that some chunks are already in the repo and reuse them. > how can I automatically clean up the lock files when this happens again? I think you should not automatically do that, because you first have to make sure that there is no backup running and also, you may want to find out why it broke and fix the root cause. >>> However I have low confidence in using borg for automated backups at this point due to this. You never should have too much blind confidence in some automated backup, you rather should check the logs frequently and now and then try if recovering data actually works. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From steve at bstage.com Tue May 17 12:02:31 2016 From: steve at bstage.com (Steve Schow) Date: Tue, 17 May 2016 10:02:31 -0600 Subject: [Borgbackup] Lock error In-Reply-To: <573B3CAB.2030802@waldmann-edv.de> References: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> <5739823C.6080005@waldmann-edv.de> <48B0F672-771F-4490-B1C8-83CBC563FED3@bstage.com> <63EF23E5-83A1-410D-BC25-76B8E57183E8@bstage.com> <573B3CAB.2030802@waldmann-edv.de> Message-ID: <3D9F9EFF-A1D5-4D93-9372-CDA5C1D03C7C@bstage.com> On May 17, 2016, at 9:45 AM, Thomas Waldmann wrote: > On 05/16/2016 06:14 PM, Steve Schow wrote: >> I will be implementing a mechanism to prevent borg from being able to run over itself.. > > Guess for what the cache and repo lock were made for. :) yes in theory. But apparently it doesn?t completely stop it from happening? 
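For the narrower problem of a cron run starting while the previous one is still going, an external lock around the whole wrapper script avoids hand-rolled lock directories; util-linux flock simply refuses to start a second copy while the first still holds the lock. A sketch, with the lock file path chosen arbitrarily and the repo/paths placeholders:

    #!/bin/bash
    # refuse to start if the previous nightly run is still active
    exec 9>/var/lock/borg-nightly.lock
    flock -n 9 || { echo "previous borg run still active, skipping"; exit 0; }

    borg create --verbose --stats "$REPO::nightly-$(date +%Y-%m-%d)" /home /etc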
> >> I had to go back and delete several check point files and > > You should not delete a checkpoint before having completed a backup of the same data set or you will remove chunks that were already transmitted into the repo and maybe re-usable by your next backup. > > So, first have a completed backup, then delete checkpoints (or any older backup archive of same data set) - it will be faster that way. > Well unfortunately the system was left in a bad way with multiple checkpoint files... due to the fact that it started on the first day with a snapshot name of MMDDYY and then when it was incomplete and restarted the next day with a new snapshot name for that day... twice... there were 3 checkpoint files left sitting there. If I understand you correctly I need to delete this whole repo now and start over? borg is losing my confidence from stuff like this. It should not be so easy to foul it up. All of the excuses you are giving me are not answers, they are just excuses for a system which can't be automated. Sorry, but moving on to backuppc now. Good luck with the project From tw at waldmann-edv.de Tue May 17 12:13:38 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 17 May 2016 18:13:38 +0200 Subject: [Borgbackup] Lock error In-Reply-To: <98BDA426-DFC3-4001-9449-3D4F82C4461A@bstage.com> References: <1D99031E-1202-47AA-A09B-C1D49C3E6537@bstage.com> <98BDA426-DFC3-4001-9449-3D4F82C4461A@bstage.com> Message-ID: <573B4332.20309@waldmann-edv.de> On 05/16/2016 06:06 PM, Steve Schow wrote: > I hear people telling me to break it up into smaller repos and I'm wondering why. Well, it's not because there is some (practically interesting) limit in borg. Well, there is some limit, but it is at a much higher capacity. But, you'll need resources to manage the repo and the chunks stored in it (like RAM and disk space) and you need more at once if the repo is bigger. There's a formula in the docs, if you are interested in the details. It's much less of a problem since borg 1.0 due to the bigger default target chunk size, but it bit some attic and borg < 1.0 users with little RAM but large repo (like e.g. on NAS devices with << 1GB RAM, but >> 1TB disk). > If there is a problem with large repos... then frankly I don't think it should be trusted with small repos. If your paint program can't handle a 1000000 x 1000000 pixel image, it should not be trusted with a 1000 x 1000 image. :P > However, this lock file orphaning is a real design flaw that impacts backup automation. BTW, are you running the latest borg release? We recently added quite some code that avoids orphan locks for quite some cases. Of course, we can't clean up the locks if you kill -9 the borg process or power gets interrupted or your machine crashes / freezes for some reason. > The fact that I hear rumors about repo corruption being enough of a possibility to have the "check" command... is a problem. Repo corruption can have many reasons, quite often including hardware reasons. And having "borg check" is not a problem, but one way to find and maybe even fix such issues. Or to see that everything is well, too. There is an old ticket on the attic issue tracker that speculated about repo corruption with large repositories. I (and another borg user also) tried to reproduce corruption with large repos, but we could not reproduce (as far as borg is concerned). What I did find in that experiment is some malfunctioning hardware, though. So I currently have no reason to believe that there is some corruption problem related to big repositories when using borg.
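(To put rough numbers on the resource formula mentioned earlier in this message: the sketch below is only a back-of-the-envelope illustration, and the per-chunk byte cost is an assumed placeholder, not borg's documented constant - see the resource usage section of the docs for the real formula.)

  # rough, illustrative estimate of chunk count and chunks-index RAM for a repo
  data_gib=2000              # total deduplicated data in the repo, in GiB (example value)
  chunk_kib=2048             # ~2 MiB target chunk size (borg >= 1.0 default chunker)
  bytes_per_entry=100        # assumed rough per-chunk cost in the chunks index
  chunks=$(( data_gib * 1024 * 1024 / chunk_kib ))
  echo "approx. chunks: $chunks"
  echo "approx. chunks index RAM: $(( chunks * bytes_per_entry / 1024 / 1024 )) MiB"

With these assumptions a 2 TiB repo works out to roughly a million chunks and on the order of 100 MiB of index; the same repo chunked with the much smaller pre-1.0 target size would need many times that, which is what hurt the small-RAM NAS setups mentioned above.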
See our issue #5 about fixed / superseded / should-be-closed attic issues. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Tue May 17 12:21:38 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 17 May 2016 18:21:38 +0200 Subject: [Borgbackup] logging questions In-Reply-To: References: <77E6A307-4374-474A-9EE8-A1DAAFBEB576@bstage.com> <5733A15A.7030705@waldmann-edv.de> <874ma4xitf.fsf@uwo.ca> <87a8jwvwqx.fsf@uwo.ca> <3AD3EF8C-959D-4879-8293-53BB54F476FB@bstage.com> <92D4EA6A-CEFE-4489-8B54-F13309F5B407@bstage.com> <5735C41A.4040201@waldmann-edv.de> <14EC0595-AE0C-4926-9FB0-1D83FCBA6221@bstage.com> Message-ID: <573B4512.2010409@waldmann-edv.de> On 05/14/2016 04:27 PM, Steve Schow wrote: > Another reason to have a log file option on borg is that... if I am doing a backup to a > remote machine and I want log files to end up on the remote machine. Don't know if > the INI method can do that or not... but... I would like it if log files could end > up near the repo... including on a remote machine. One of the ideas behind borg is that it can encrypt all the data and metadata and back up to a machine that is not fully trusted and still not expose any of your data to somebody looking at the repo. That is why log output is not stored on the repo machine, but transmitted to the client. If you like, you can just redirect it to a file and scp the file onto the (hopefully trustworthy) backup server. If you use a logging configuration file, you can also use a ton of other logging methods, python's logging module is quite powerful (but thus, also not very easy to use or configure). Also, its docs are a bit sub-optimal... -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From public at enkore.de Tue May 17 12:22:43 2016 From: public at enkore.de (public at enkore.de) Date: Tue, 17 May 2016 18:22:43 +0200 Subject: [Borgbackup] Lock error In-Reply-To: <3D9F9EFF-A1D5-4D93-9372-CDA5C1D03C7C@bstage.com> References: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> <5739823C.6080005@waldmann-edv.de> <48B0F672-771F-4490-B1C8-83CBC563FED3@bstage.com> <63EF23E5-83A1-410D-BC25-76B8E57183E8@bstage.com> <573B3CAB.2030802@waldmann-edv.de> <3D9F9EFF-A1D5-4D93-9372-CDA5C1D03C7C@bstage.com> Message-ID: On 17.05.2016 18:02, Steve Schow wrote: > > On May 17, 2016, at 9:45 AM, Thomas Waldmann wrote: > >> On 05/16/2016 06:14 PM, Steve Schow wrote: >>> I will be implementing a mechanism to prevent borg from being able to run over itself.. >> >> Guess for what the cache and repo lock were made for. :) > > > yes in theory. But apparently it doesn't completely stop it from happening... The mechanism is conservative; unless break-lock is used Borg won't run concurrently. The only exception here is inconsistent file systems. Mounting cloud services (S3, ...) as a drive, for example. > > If I understand you correctly I need to delete this whole repo now and start over? > I don't see why that would be necessary, Thomas didn't suggest that either. > > borg is losing my confidence from stuff like this. It should not be so easy to foul it up > > All of the excuses you are giving me are not answers, they are just excuses for a system which can't be automated. Sorry, but moving on to backuppc now. > When in doubt, Borg is conservative and asks for user intervention before proceeding. As has been pointed out in other discussions automatically detecting whether a lock is stale or not is only possible in some cases, but not reliably for other cases.
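(One way to make sure cron never starts two overlapping borg runs on the same client, without touching borg's own repo lock at all, is flock(1). A minimal sketch; the repo location, lock file and source paths are placeholders, and it only serializes runs started on this one machine:)

  #!/bin/bash
  set -u
  REPO="user@backuphost:/backup/borg"     # placeholder, not a required layout
  exec 9>/var/lock/borg-backup.lock
  if ! flock -n 9; then
      echo "previous borg run still active, skipping this run"
      exit 0
  fi
  borg create --stats --show-rc "$REPO::$(hostname)-$(date +%Y-%m-%d)" /etc /home

Note that this sketch deliberately never calls borg break-lock: as explained above, deciding automatically that a repository lock is stale is exactly the part that cannot be done reliably.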
Independent of the software used one should be able to have some confidence in the backup software used ("it works"), but at the same time be cautious ("it software, so it will screw up at some point"), i.e. regularly check that the backup as a whole is working. Reliable backups -- where you can sit back and relax when your house burns down, because you know that your backups will work -- are not 100 % automatable. Cheers, Marian From tw at waldmann-edv.de Tue May 17 12:35:39 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 17 May 2016 18:35:39 +0200 Subject: [Borgbackup] Lock error In-Reply-To: <3D9F9EFF-A1D5-4D93-9372-CDA5C1D03C7C@bstage.com> References: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> <5739823C.6080005@waldmann-edv.de> <48B0F672-771F-4490-B1C8-83CBC563FED3@bstage.com> <63EF23E5-83A1-410D-BC25-76B8E57183E8@bstage.com> <573B3CAB.2030802@waldmann-edv.de> <3D9F9EFF-A1D5-4D93-9372-CDA5C1D03C7C@bstage.com> Message-ID: <573B485B.4070004@waldmann-edv.de> >>> I will be implementing a mechanism to prevent borg from being able to run over itself.. >> Guess for what the cache and repo lock were made for. :) > yes in theory. But apparently it doesn?t completely stop it from happening? Well, if you can reproduce a problem related to "locks not locking", I'ld like to see a bug report about that. Using break-lock without first making sure that no backup is running is not a bug in the software. >>> I had to go back and delete several check point files and >> >> You should not delete a checkpoint before having completed a backup of the same data set or you will remove chunks that were already transmitted into the repo and maybe re-usable by your next backup. >> >> So, first have a completed backup, then delete checkpoints (or any older backup archive of same data set) - it will be faster that way. >> > > > Well unfortunately the system was left in a bad way with multiple checkpoint files?. I guess you could have a thousand checkpoint archives and still the repo would not be in a bad (like "corrupted") state. A checkpoint archive is a valid, but incomplete archive. It is incomplete because something / somebody interrupted it. Broken network connections, users rebooting the machine, users hitting Ctrl-C, power outage, machine crash, ... > due to the fact that it started on the first day with a snapshot name of MMDDYY > and then when it was incomplete and restarted the next day with a new snapshot name > for that day?twice?there were 3 checkpoint files left sitting there? As I said: names do not matter here. FYI: the only place where archive names actually do matter is when you use --prefix, e.g. to limit prune to some of the archives (not: all). > If I understand you correctly I need to delete this whole repo now and start over? I didn't say that. > borg is losing my confidence from stuff like this. It should not be so easy to fowl it up Maybe you are reading too much between the lines? Or old attic tickets? > All of the excuses you are giving me are not answers, they are just excuses for a system which can?t be automated. Sorry, but moving on to backuppc now. My impression is that you didn't read or didn't understand what I was saying. Good luck with any software. 
:) -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From adrian.klaver at aklaver.com Tue May 17 14:09:37 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Tue, 17 May 2016 11:09:37 -0700 Subject: [Borgbackup] Lock error In-Reply-To: <3D9F9EFF-A1D5-4D93-9372-CDA5C1D03C7C@bstage.com> References: <39C99866-2178-4C2E-B6EC-BDDBFAFC17E6@bstage.com> <5739823C.6080005@waldmann-edv.de> <48B0F672-771F-4490-B1C8-83CBC563FED3@bstage.com> <63EF23E5-83A1-410D-BC25-76B8E57183E8@bstage.com> <573B3CAB.2030802@waldmann-edv.de> <3D9F9EFF-A1D5-4D93-9372-CDA5C1D03C7C@bstage.com> Message-ID: On 05/17/2016 09:02 AM, Steve Schow wrote: > > On May 17, 2016, at 9:45 AM, Thomas Waldmann wrote: > >> On 05/16/2016 06:14 PM, Steve Schow wrote: >>> I will be implementing a mechanism to prevent borg from being able to run over itself.. >> >> Guess for what the cache and repo lock were made for. :) > > > yes in theory. But apparently it doesn?t completely stop it from happening? > > >> >>> I had to go back and delete several check point files and >> >> You should not delete a checkpoint before having completed a backup of the same data set or you will remove chunks that were already transmitted into the repo and maybe re-usable by your next backup. >> >> So, first have a completed backup, then delete checkpoints (or any older backup archive of same data set) - it will be faster that way. >> > > > Well unfortunately the system was left in a bad way with multiple checkpoint files?.due to the fact that it started on the first day with a snapshot name of MMDDYY and then when it was incomplete and restarted the next day with a new snapshot name for that day?twice?there were 3 checkpoint files left sitting there? > > If I understand you correctly I need to delete this whole repo now and start over? > > borg is losing my confidence from stuff like this. It should not be so easy to fowl it up > > All of the excuses you are giving me are not answers, they are just excuses for a system which can?t be automated. Sorry, but moving on to backuppc now. Really? http://backuppc.sourceforge.net/ Version 3.3.1 released on January 11th, 2015 Version 4.0.0alpha3 released on December 1st, 2013 I wish you luck on that(really) as my experience with backuppc is that it is orders of magnitude more complex to set up then BorgBackup and not necessarily more stable for the trouble. > > Good luck with the project > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From tw at waldmann-edv.de Fri May 20 16:09:34 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 20 May 2016 22:09:34 +0200 Subject: [Borgbackup] borgbackup 1.0.3 released Message-ID: <573F6EFE.9080900@waldmann-edv.de> https://github.com/borgbackup/borg/releases/tag/1.0.3 Some fixes, please upgrade. Please read the changelog before upgrading: https://github.com/borgbackup/borg/blob/1.0.3/docs/changes.rst Cheers, Thomas -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From aclark at nexient.com Mon May 23 12:08:08 2016 From: aclark at nexient.com (Anthony Clark) Date: Mon, 23 May 2016 16:08:08 +0000 Subject: [Borgbackup] web list of multiple repositories Message-ID: Hello All, First off, I'm liking Borg backup a whole lot. The disk space savings I'm seeing are wonderful, and the "mount to restore" behaviour is great too. 
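For anyone who has not tried it, the "mount to restore" workflow is just a couple of commands (the repo path and archive name below are made-up examples, and FUSE support / llfuse must be installed for borg mount to work):

  mkdir -p /mnt/borg-restore
  borg mount /backup/repos/somehost::somehost-2016-05-22 /mnt/borg-restore
  # browse the archive like a read-only filesystem and copy out whatever is needed
  cp -a /mnt/borg-restore/etc/fstab /tmp/fstab.restored
  fusermount -u /mnt/borg-restore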
I am currently backing up from a few dozen hosts to a central backup server. Each host has its own repository, with one or archives per day. Has anyone written a read only web interface similar to borgweb that can list multiple repositories on a single host? I'd like a central location that I can send others to look at to confirm backups are running, and to show non-technical people the high level statistics. Offtopic: I'd also like to figure out a good way to check the last time a successful borg backup run happened for a given server (ideally within Zabbix, our monitoring system) I'm not the best coder out there so rewriting the borgweb interface feels beyond me. My way of attacking this, if no one else has already, would be to iterate over each repository in /backup and run borg list, capture that output and run borg info on each archive that hasn't been run previously. (I'm a sysadmin and Puppet code DevOps person, not a programmer!). Those iterations would spit out basic read only html to be served up by nginx or whatever. Nothing fancy! :) Warm Regards, Anthony Clark Nexient, providing services for: ISG DevOps, Health and Wellness Solutions, Johnson and Johnson This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Mon May 23 12:47:48 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 23 May 2016 18:47:48 +0200 Subject: [Borgbackup] web list of multiple repositories In-Reply-To: References: Message-ID: <57433434.2090808@waldmann-edv.de> > Has anyone written a read only web interface similar to borgweb that can > list multiple repositories on a single host? I'd like a central > location that I can send others to look at to confirm backups are > running, Assuming that logs are somehow stored in a specific path that depends on the repo / host name, that sounds like a relatively simple extension of borgweb (e.g. /var/logs/borg//*.log). Would be quite some work though as the user interface needs to be changed and UX should be not worse afterwards. Also client (JS) / server (Python) interface likely would need some extension. But it sounds still in scope of borgweb (which does not intend to do everything, just very simple tasks like checking logs and starting a backup). Of course, one could also just use multiple borgweb and link them from a simple index page. > and to show non-technical people the high level statistics. Maybe that also. Would need to get that info from the logs. > Offtopic: I'd also like to figure out a good way to check the last time > a successful borg backup run happened for a given server (ideally within > Zabbix, our monitoring system) A simple way might be to either: - check / remember the return code returned by the borg command in the script where you invoke borg, or - log the return code (using --show-rc option) and analyze the logs later. There is a section in the docs about the return codes / log levels. > I'm not the best coder out there so rewriting the borgweb interface > feels beyond me. You could open an issue in the borgweb tracker with above ideas and see what happens. :) Putting a bounty on it might help. 
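Picking up the return-code idea above: a small cron wrapper can record the outcome of every run somewhere your monitoring can read. A hedged sketch (log and status file locations, repo path and source paths are arbitrary examples, not borg conventions):

  #!/bin/bash
  REPO="/backup/$(hostname)"                  # placeholder repo path
  LOG="/var/log/borg/$(hostname).log"
  STATUS="/var/log/borg/$(hostname).status"
  borg create --list --stats --show-rc "$REPO::$(date +%Y-%m-%d)" /etc /home >>"$LOG" 2>&1
  rc=$?
  # borg return codes: 0 = success, 1 = warning, 2 = error (see the docs section on return codes)
  echo "$(date +'%F %T') rc=$rc" >>"$STATUS"
  exit $rc

A Zabbix item (or any other monitoring check) can then alert when the newest line of the status file is too old or reports rc != 0.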
> My way of attacking this, if no one else has already, > would be to iterate over each repository in /backup and run borg list, > capture that output and run borg info on each archive that hasn't been > run previously. Guess I'ld rather run borg create with --list --stats --show-rc then you already have that info in your logs, plus the rc code. Also, running "borg check" or "borg extract --dry-run" now and then might be good for confirming everything is good. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From pschiffe at redhat.com Sat May 28 08:55:16 2016 From: pschiffe at redhat.com (Peter Schiffer) Date: Sat, 28 May 2016 14:55:16 +0200 Subject: [Borgbackup] Borg Docker image Message-ID: <4b72e22b-b7b1-a375-1079-f79ae2164dda@redhat.com> Hello, recently I found out about Borg and I started to really like it. I was looking for a Docker image with Borg, but didn't find any useful enough which would work more in a container way (configuration mostly done by env vars), so I've created one: https://hub.docker.com/r/pschiffe/borg/ I'm writing here just to let you know, maybe it'll be useful for somebody. If you have any feedback for the image, or if I did something horribly wrong, please let me know :-) peter From tve at voneicken.com Sun Jun 5 15:57:40 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Sun, 5 Jun 2016 19:57:40 +0000 Subject: [Borgbackup] strategy for "off-site" backups Message-ID: <0100015522246ba2-bc09025f-bb85-4302-abfc-4cf3df494ced-000000@email.amazonses.com> I'm using borg for on-site backups (i.e. high bandwidth between client and borg server) but I'd also like to send some backups off-site (to the cloud, really). E.g., if I'm doing daily backups I'd like to ship a weekly or perhaps only monthly off-site. Is there a simple way to do this that doesn't require doing extra backups and ideally also doesn't require a borg server off-site? The current fallback option I have is to rsync borg repositories weekly or monthly, but that then does sync the dailies at that point in time. What would be awesome is if there was a way to get borg to list the repository files that need to be shipped off-site for a specific archive or set of archives. Am I missing something or thinking about it the wrong way? Thanks! TvE -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian.klaver at aklaver.com Sun Jun 5 16:36:41 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Sun, 5 Jun 2016 13:36:41 -0700 Subject: [Borgbackup] strategy for "off-site" backups In-Reply-To: <0100015522246ba2-bc09025f-bb85-4302-abfc-4cf3df494ced-000000@email.amazonses.com> References: <0100015522246ba2-bc09025f-bb85-4302-abfc-4cf3df494ced-000000@email.amazonses.com> Message-ID: On 06/05/2016 12:57 PM, Thorsten von Eicken wrote: > I'm using borg for on-site backups (i.e. high bandwidth between client > and borg server) but I'd also like to send some backups off-site (to the > cloud, really). E.g., if I'm doing daily backups I'd like to ship a > weekly or perhaps only monthly off-site. Is there a simple way to do > this that doesn't require doing extra backups and ideally also doesn't > require a borg server off-site? > The current fallback option I have is to rsync borg repositories weekly > or monthly, but that then does sync the dailies at that point in time. > What would be awesome is if there was a way to get borg to list the > repository files that need to be shipped off-site for a specific archive > or set of archives. 
> Am I missing something or thinking about it the wrong way? http://borgbackup.readthedocs.io/en/stable/internals.html " Repository and Archives ...Deduplication is performed across multiple backups, both on data and metadata, using Chunks created by the chunker using the Buzhash algorithm...." I think there would be a dependency issue when trying to break out specific backups(archives). I can't see how that would be resolved without also moving over the dependent information. > Thanks! > TvE > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From hpj at urpla.net Tue Jun 14 04:39:05 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Tue, 14 Jun 2016 10:39:05 +0200 Subject: [Borgbackup] Version 1.0.3 has grown a new failing test in test_sparse_file Message-ID: <1887645.0sFgsexqOn@xrated> Hi, while preparing 1.0.3 for openSUSE, I noticed two more failing tests: [ 112s] =================================== FAILURES =================================== [ 112s] ______________________ ArchiverTestCase.test_sparse_file _______________________ [ 112s] [ 112s] self = [ 112s] [ 112s] def test_sparse_file(self): [ 112s] # no sparse file support on Mac OS X [ 112s] sparse_support = sys.platform != 'darwin' [ 112s] filename = os.path.join(self.input_path, 'sparse') [ 112s] content = b'foobar' [ 112s] hole_size = 5 * (1 << CHUNK_MAX_EXP) # 5 full chunker buffers [ 112s] with open(filename, 'wb') as fd: [ 112s] # create a file that has a hole at the beginning and end (if the [ 112s] # OS and filesystem supports sparse files) [ 112s] fd.seek(hole_size, 1) [ 112s] fd.write(content) [ 112s] fd.seek(hole_size, 1) [ 112s] pos = fd.tell() [ 112s] fd.truncate(pos) [ 112s] total_len = hole_size + len(content) + hole_size [ 112s] st = os.stat(filename) [ 112s] self.assert_equal(st.st_size, total_len) [ 112s] if sparse_support and hasattr(st, 'st_blocks'): [ 112s] self.assert_true(st.st_blocks * 512 < total_len / 9) # is input sparse? [ 112s] self.cmd('init', self.repository_location) [ 112s] self.cmd('create', self.repository_location + '::test', 'input') [ 112s] with changedir('output'): [ 112s] self.cmd('extract', '--sparse', self.repository_location + '::test') [ 112s] self.assert_dirs_equal('input', 'output/input') [ 112s] filename = os.path.join(self.output_path, 'input', 'sparse') [ 112s] with open(filename, 'rb') as fd: [ 112s] # check if file contents are as expected [ 112s] self.assert_equal(fd.read(hole_size), b'\0' * hole_size) [ 112s] self.assert_equal(fd.read(len(content)), content) [ 112s] self.assert_equal(fd.read(hole_size), b'\0' * hole_size) [ 112s] st = os.stat(filename) [ 112s] self.assert_equal(st.st_size, total_len) [ 112s] if sparse_support and hasattr(st, 'st_blocks'): [ 112s] > self.assert_true(st.st_blocks * 512 < total_len / 9) # is output sparse? 
[ 112s] E AssertionError: False is not true [ 112s] [ 112s] borg/testsuite/archiver.py:416: AssertionError [ 112s] ___________________ RemoteArchiverTestCase.test_sparse_file ____________________ [ 112s] [ 112s] self = [ 112s] [ 112s] def test_sparse_file(self): [ 112s] # no sparse file support on Mac OS X [ 112s] sparse_support = sys.platform != 'darwin' [ 112s] filename = os.path.join(self.input_path, 'sparse') [ 112s] content = b'foobar' [ 112s] hole_size = 5 * (1 << CHUNK_MAX_EXP) # 5 full chunker buffers [ 112s] with open(filename, 'wb') as fd: [ 112s] # create a file that has a hole at the beginning and end (if the [ 112s] # OS and filesystem supports sparse files) [ 112s] fd.seek(hole_size, 1) [ 112s] fd.write(content) [ 112s] fd.seek(hole_size, 1) [ 112s] pos = fd.tell() [ 112s] fd.truncate(pos) [ 112s] total_len = hole_size + len(content) + hole_size [ 112s] st = os.stat(filename) [ 112s] self.assert_equal(st.st_size, total_len) [ 112s] if sparse_support and hasattr(st, 'st_blocks'): [ 112s] self.assert_true(st.st_blocks * 512 < total_len / 9) # is input sparse? [ 112s] self.cmd('init', self.repository_location) [ 112s] self.cmd('create', self.repository_location + '::test', 'input') [ 112s] with changedir('output'): [ 112s] self.cmd('extract', '--sparse', self.repository_location + '::test') [ 112s] self.assert_dirs_equal('input', 'output/input') [ 112s] filename = os.path.join(self.output_path, 'input', 'sparse') [ 112s] with open(filename, 'rb') as fd: [ 112s] # check if file contents are as expected [ 112s] self.assert_equal(fd.read(hole_size), b'\0' * hole_size) [ 112s] self.assert_equal(fd.read(len(content)), content) [ 112s] self.assert_equal(fd.read(hole_size), b'\0' * hole_size) [ 112s] st = os.stat(filename) [ 112s] self.assert_equal(st.st_size, total_len) [ 112s] if sparse_support and hasattr(st, 'st_blocks'): [ 112s] > self.assert_true(st.st_blocks * 512 < total_len / 9) # is output sparse? [ 112s] E AssertionError: False is not true [ 112s] [ 112s] borg/testsuite/archiver.py:416: AssertionError [ 112s] =================== 49 tests deselected by '-knot benchmark' =================== [ 112s] = 2 failed, 415 passed, 57 skipped, 49 deselected, 2 xfailed in 94.72 seconds == [ 112s] error: Bad exit status from /var/tmp/rpm-tmp.bH663H (%check) This is with openSUSE 13.2 and Kernel 4.2.5 on a xfs filesystem. While at it, I noticed, that the document generation also fails. The usual build sequence for packaging is: CFLAGS="%{optflags}" python3 setup.py build make -C docs html man && rm docs/_build/html/.buildinfo resulting in: [ 14s] /home/abuild/rpmbuild/BUILD/borgbackup-1.0.3/docs/api.rst:5: WARNING: autodoc: failed to import module 'borg.archiver'; the following exception was raised: [ 14s] Traceback (most recent call last): [ 14s] File "/usr/lib/python3.4/site-packages/sphinx/ext/autodoc.py", line 507, in import_object [ 14s] __import__(self.modname) [ 14s] File "/home/abuild/rpmbuild/BUILD/borgbackup-1.0.3/borg/archiver.py", line 19, in [ 14s] from .helpers import Error, location_validator, archivename_validator, format_line, format_time, format_file_size, \ [ 14s] File "/home/abuild/rpmbuild/BUILD/borgbackup-1.0.3/borg/helpers.py", line 25, in [ 14s] from . 
import hashindex [ 14s] ImportError: cannot import name 'hashindex' [ 14s] /home/abuild/rpmbuild/BUILD/borgbackup-1.0.3/docs/api.rst:9: WARNING: autodoc: failed to import module 'borg.upgrader'; the following exception was raised: [ 14s] Traceback (most recent call last): [ 14s] File "/usr/lib/python3.4/site-packages/sphinx/ext/autodoc.py", line 507, in import_object [ 14s] __import__(self.modname) [ 14s] File "/home/abuild/rpmbuild/BUILD/borgbackup-1.0.3/borg/upgrader.py", line 9, in [ 14s] from .helpers import get_keys_dir, get_cache_dir, ProgressIndicatorPercent [ 14s] File "/home/abuild/rpmbuild/BUILD/borgbackup-1.0.3/borg/helpers.py", line 25, in [ 14s] from . import hashindex [ 14s] ImportError: cannot import name 'hashindex' [ 14s] /home/abuild/rpmbuild/BUILD/borgbackup-1.0.3/docs/api.rst:13: WARNING: autodoc: failed to import module 'borg.archive'; the following exception was raised: [ 14s] Traceback (most recent call last): Any idea, how to transform this sequence to succeed? Cheers, Pete From tw at waldmann-edv.de Tue Jun 14 07:58:41 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 14 Jun 2016 13:58:41 +0200 Subject: [Borgbackup] Version 1.0.3 has grown a new failing test in test_sparse_file In-Reply-To: <1887645.0sFgsexqOn@xrated> References: <1887645.0sFgsexqOn@xrated> Message-ID: <575FF171.6000905@waldmann-edv.de> Hi, > while preparing 1.0.3 for openSUSE, I noticed two more failing tests: > > [ 112s] ______________________ ArchiverTestCase.test_sparse_file _______________________ > [ 112s] > [ 112s] def test_sparse_file(self): ... > [ 112s] st = os.stat(filename) > [ 112s] self.assert_equal(st.st_size, total_len) > [ 112s] if sparse_support and hasattr(st, 'st_blocks'): > [ 112s] > self.assert_true(st.st_blocks * 512 < total_len / 9) # is output sparse? > [ 112s] E AssertionError: False is not true > [ 112s] > [ 112s] borg/testsuite/archiver.py:416: AssertionError We've seen this recently on Solaris / ZFS also. > [ 112s] ___________________ RemoteArchiverTestCase.test_sparse_file ____________________ Same thing. This is harmless - maybe xfs and zfs just have more overhead, so they do not fullfil that assertion. > While at it, I noticed, that the document generation also fails. > The usual build sequence for packaging is: > > CFLAGS="%{optflags}" python3 setup.py build > make -C docs html man && rm docs/_build/html/.buildinfo > > resulting in: > > [ 14s] ImportError: cannot import name 'hashindex' hashindex is a Cython module. For releases, we bundle the resulting C source, but that needs to be compiled (and usually is, if build deps are present). Cheers, Thomas -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From hpj at urpla.net Tue Jun 14 10:05:44 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Tue, 14 Jun 2016 16:05:44 +0200 Subject: [Borgbackup] Version 1.0.3 has grown a new failing test in test_sparse_file In-Reply-To: <575FF171.6000905@waldmann-edv.de> References: <1887645.0sFgsexqOn@xrated> <575FF171.6000905@waldmann-edv.de> Message-ID: <10810857.SoSjBg9mkU@xrated> On Dienstag, 14. Juni 2016 13:58:41 Thomas Waldmann wrote: > Hi, > > > while preparing 1.0.3 for openSUSE, I noticed two more failing tests: > > > > [ 112s] ______________________ ArchiverTestCase.test_sparse_file > > _______________________ [ 112s] > > > [ 112s] def test_sparse_file(self): > ... 
> > > [ 112s] st = os.stat(filename) > > [ 112s] self.assert_equal(st.st_size, total_len) > > [ 112s] if sparse_support and hasattr(st, 'st_blocks'): > > [ 112s] > self.assert_true(st.st_blocks * 512 < total_len / 9) > > # is output sparse? [ 112s] E AssertionError: False is not > > true > > [ 112s] > > [ 112s] borg/testsuite/archiver.py:416: AssertionError > > We've seen this recently on Solaris / ZFS also. > This is harmless - maybe xfs and zfs just have more overhead, so they do > not fulfil that assertion. Sure, but since we run the test suite on every build, we need to sed out the test. Not that nice. Maybe the assertion is formulated too rigidly? I'm sure that XFS handles sparse files correctly; I tend to believe that this is an optimization. As long as just one sector is occupied anyway, there's really no need to make the sparse dance. > > While at it, I noticed, that the document generation also fails. > > The usual build sequence for packaging is: > > > > CFLAGS="%{optflags}" python3 setup.py build > > make -C docs html man && rm docs/_build/html/.buildinfo > > > > resulting in: > > > > [ 14s] ImportError: cannot import name 'hashindex' > > hashindex is a Cython module. For releases, we bundle the resulting C > source, but that needs to be compiled (and usually is, if build deps are present). Yes, of course. As you see, the "python3 setup.py build" is done before the make call. The build creates the Cython modules successfully (otherwise the test suite wouldn't run successfully). In similar projects, this sequence is enough to build the docs properly. Here's the full log of an example build (issue around line [ 66s]): https://build.opensuse.org/build/home:frispete:python3/openSUSE_Tumbleweed/x86_64/borgbackup/_log Latest attempt looks like this: CFLAGS="%{optflags}" python3 setup.py build pyvenv --system-site-packages --without-pip borg-env source borg-env/bin/activate python3 setup.py install PYTHONPATH=$(pwd)/build/lib.linux-$(uname -m)-%{py3_ver} make -C docs html man && rm docs/_build/html/.buildinfo but that doesn't fix it either. Looks like I have to dive into the sphinx setup. Cheers, Pete From hpj at urpla.net Tue Jun 14 12:17:20 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Tue, 14 Jun 2016 18:17:20 +0200 Subject: [Borgbackup] Wish: option, that don't create an archive, if nothing has changed Message-ID: <3157715.CHKoflpuRq@xrated> Hi, I've successfully moved from rdiff-backup to borg with great success. One feature of rdiff-backup, I like very much: don't create an archive, if nothing has changed. Of course, this has to be a dedicated create option. Why: I'm backing up a lot of VMs that are only used occasionally. It's nice to see the real repo state, that is, the last time a machine from that repo has run. What do you think? Pete From tw at waldmann-edv.de Tue Jun 14 12:45:36 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 14 Jun 2016 18:45:36 +0200 Subject: [Borgbackup] Wish: option, that don't create an archive, if nothing has changed In-Reply-To: <3157715.CHKoflpuRq@xrated> References: <3157715.CHKoflpuRq@xrated> Message-ID: <576034B0.8060206@waldmann-edv.de> Hi Hans-Peter, > I've successfully moved from rdiff-backup to borg with great success. Thanks for the feedback! :) > One > feature of rdiff-backup, I like very much: don't create an archive, if nothing > has changed. Of course, this has to be a dedicated create option. > > Why: I'm backing up a lot of VMs that are only used occasionally.
It's nice to > see the real repo state, that is the last time, a machine from that repo has > run. This is actually hard to do for borg, due to the way it works: - it only does one pass (so it won't know beforehands whatever files there will come) - it always does full backups (plus dedup of data and metadata), it is not an incremental backup scheme. So, there is always something to backup, it might just result in almost 100% of it getting deduplicated. I am currently working on a versions view for FUSE (borg mount), that might come in handy to find out when some specific file has changed. Cheers, Thomas -- GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt. From tw at waldmann-edv.de Wed Jun 15 07:19:40 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 15 Jun 2016 13:19:40 +0200 Subject: [Borgbackup] Version 1.0.3 has grown a new failing test in test_sparse_file In-Reply-To: <10810857.SoSjBg9mkU@xrated> References: <1887645.0sFgsexqOn@xrated> <575FF171.6000905@waldmann-edv.de> <10810857.SoSjBg9mkU@xrated> Message-ID: <7F66FFF9-729C-4D7F-8111-8F66F7FC8C64@waldmann-edv.de> see my recent PR against 1.0-maint, that fixes the sparse test. -- Sent from my mobile device. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmhikaru at gmail.com Wed Jun 15 23:47:55 2016 From: tmhikaru at gmail.com (tmhikaru at gmail.com) Date: Wed, 15 Jun 2016 20:47:55 -0700 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup Message-ID: <20160616034755.GA5964@raspberrypi> Hi, I have a Raspberry pi 1 B (roughly equivalent to a pentium 2 300mhz) acting as a client to a remote repository on a dual cpu Intel Xeon acting as a server. Both are running borg 1.0.3. I am trying to use borg to backup multiple machines to this repository, and it works with other clients and the server itself, but when the Rpi attempts to synchronize its cache with the remote end, it gets stuck doing something incredibly cpu intensive for literally hours while doing the synchronization. To be clear, the first time I run a backup when there is no cache, the synchronization takes a while, but completes within a few minutes, and more importantly the backup proceeds and completes. When cache already exists however, the synchronization process just drags on and on. It would probably eventually complete if I allowed it, but it's just painful to have it take over an hour twiddling its thumbs doing nothing useful when I need it to be actually backing itself up. I have read somewhere that attic kept a copy of the cache used to do the synchronization on the server and the clients had to download it. If I understood what I read correctly, Borg changed this to get generated on the client side so it'd use less bandwidth. Unfortunately in my use case, the cpu of the Raspberry Pi 1B is so painfully slow that this is actually making it quite difficult for me to use it this way. Could it be possible to have a configuration setting to restore the original behavior, where the server side stores the cache so clients can download it? Alternately, being allowed to use a configuration setting to have the server side generate the data needed instead of the *client* side would work just as well, as the disparity in cpu power (2.5Ghz per core of the Xeon machine vs 700mhz Rpi) is quite obviously slanted in one particular direction. It feels very silly to be having this problem honestly. 
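For reference, the workaround suggested in the replies below is to give a slow client its own repository, so its chunks cache only ever has to track its own archives. A minimal sketch, with placeholder paths and repo names:

  # one shared repo for the fast machines ...
  borg init --encryption=repokey backup@server:/backup/main
  # ... and a small dedicated repo for the slow client (here: the rpi)
  borg init --encryption=repokey backup@server:/backup/rpi
  borg create --stats backup@server:/backup/rpi::rpi-$(date +%Y-%m-%d) /etc /home /var

The obvious trade-off, discussed below, is that chunks are then no longer deduplicated across the two repositories.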
Currently, I have had to use sshfs to mount the root of the Rpi on the server that has the repository so it can read the data and write to the repository using a decent cpu. It seems to work fine this way, which I find ironic since there are plenty of places claiming sshfs is very cpu intensive. Sshfs does have a major disadvantage though, in that borgs -x switch doesn't work properly for backups done through it, so I had to add specific exclusions for things like /proc, /sys, /dev/, /var/run, etc. Not fun. Unfortunately although sshfs works for now, if I was trying to use selinux xattrs on the Rpi, it would simply not work since sshfs does not support them. I want to eventually be able to use selinux on that machine, and so this isn't a solution I can use permanently. One solution that would probably work (and be rather stupid) would be to clear the borg cache from the Rpi before each run. If I could find a way to do that without blowing away the local file cache or the bits that inform borg that it has seen the repository before, it might not be as stupid, so if someone knows how to do that, please pipe up. Another idea might be to use a separate repository just for the Rpi, but I want to be able to have my cake and eat it too, as the deduplication capability is a rather nice bit of work, and some of the (very large!) files on the Rpi are total dupes of things on other machines. I want to be able to use this as I am now (backing up to the multiple machine repository) while being able to run the program from the Rpi, so it can be used as intended without strange quirks. If you have other suggestions or recommendations, I'd love hearing about them. I have tried disabling the client side cache, but it doesn't prevent the server synchronization process from getting stuck doing... whatever it is it's doing. Thank you, Tim McGrath -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 465 bytes Desc: Digital signature URL: From tw at waldmann-edv.de Thu Jun 16 04:41:34 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 16 Jun 2016 10:41:34 +0200 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <20160616034755.GA5964@raspberrypi> References: <20160616034755.GA5964@raspberrypi> Message-ID: <5762663E.8060702@waldmann-edv.de> Hi, On 06/16/2016 05:47 AM, tmhikaru at gmail.com wrote: > Hi, I have a Raspberry pi 1 B (roughly equivalent to a pentium 2 > 300mhz) acting as a client to a remote repository on a dual cpu Intel Xeon > acting as a server. Both are running borg 1.0.3. I am trying to use borg > to backup multiple machines to this repository, and it works with other > clients and the server itself, but when the Rpi attempts to synchronize its > cache with the remote end, it gets stuck doing something incredibly cpu > intensive for literally hours while doing the synchronization. Yes, this is expected. If you want to avoid it, have a separate repo for the rpi. > To be clear, > the first time I run a backup when there is no cache, the synchronization > takes a while, but completes within a few minutes, and more importantly the > backup proceeds and completes. When cache already exists however, the > synchronization process just drags on and on. Don't you think it could be rather related to how much data there is (in total) in the repository at the time when you resync? If there isn't much yet, it goes quickly. 
But if you have a lot of archives in there, with a lot of data, resyncing the chunks cache takes quite some time (even on much more powerful machines). > I have read somewhere that attic kept a copy of the cache used to do > the synchronization on the server and the clients had to download it. No, attic just did a complete cache rebuild from file metadata in the repo without using any locally cached data and could be much slower if you have a slow connection. borg uses locally cached per-archive chunk-indexes (except if you do the hack to save space by disallowing this) to save some data-transfer from remote and also to only have to do this computation once per archive. The code that merges these single-archive indexes into the global index is pure C and quite fast. > Could it be possible to have > a configuration setting to restore the original behavior, where the server > side stores the cache so clients can download it? Attic did not work like you think. Maybe you just read some over-simplified explanation of it somewhere. There is a "borgception" ticket in our issue tracker that describes a similar idea, but it is not implemented yet. > Alternately, being > allowed to use a configuration setting to have the server side generate the > data needed instead of the *client* side would work just as well, Borg (and attic) do not store secret keys or process unencrypted data on the server (except the latter, obviously, if you do not use encryption). Thus, it is not able to compute the chunk index. This is a design decision as the repo storage is assumed to be potentially untrusted (e.g. a 3rd party machine, a usb disk). > Sshfs does have a major disadvantage though, in that borgs -x switch doesn't > work properly for backups done through it, so I had to add > specific exclusions for things like /proc, /sys, /dev/, /var/run, etc. Not > fun. Not sure what you mean. > Unfortunately although sshfs works for now, if I was trying to use > selinux xattrs on the Rpi, it would simply not work since sshfs does not > support them. Just use a separate repo for your rpi. Problem solved. > One solution that would > probably work (and be rather stupid) would be to clear the borg cache from > the Rpi before each run. I doubt that. > that, please pipe up. Another idea might be to use a separate repository > just for the Rpi, but I want to be able to have my cake and eat it too, as > the deduplication capability is a rather nice bit of work, and some of the > (very large!) files on the Rpi are total dupes of things on other machines. Sadly, sharing repos currently has its cpu time price (and trying to make that faster had its disk space price). > If you have other suggestions or recommendations, I'd love hearing > about them. I have tried disabling the client side cache, but it doesn't > prevent the server synchronization process from getting stuck doing... > whatever it is it's doing. It doesn't get stuck, it is just taking long on slow CPUs. The little RAM on the rpi might also get you into trouble, if you have a lot of data in the repo, see the formula in the docs. Cheers, Thomas -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tve at voneicken.com Thu Jun 16 02:48:46 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Thu, 16 Jun 2016 06:48:46 +0000 Subject: [Borgbackup] what do I do with inconsistencies? 
Message-ID: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> This is a pretty new repository with just a couple of archives and I'm having consistency issues that don't seem to get repaired: == Starting backup for repo home at 2016-06-15T09:49-0700 Archive: backup at backup:/big/h/home::home-2016-06-15T09:49-0700 borg create --show-rc -s -e .cache -C lzma backup at backup:/big/h/home::home-2016-06-15T09:49-0700 /big/home /etc /root /big/usr-local == Starting check for repo home Repository: backup at backup:/big/h/home borg check --show-rc --last 3 backup at backup:/big/h/home borg.repository Remote: Index object count mismatch. 549207 != 549310 borg.repository Remote: Completed repository check, errors found. terminating with warning status, rc 1 and later I tried to repair it: root at h /b/h/tve# borg check --show-rc --repair backup at backup:/big/h/home 'check --repair' is an experimental feature that might result in data loss. Type 'YES' if you understand this and want to continue: YES 103 orphaned objects found! Archive consistency check complete, problems found. root at h /b/h/tve# borg check --show-rc --repair backup at backup:/big/h/home 'check --repair' is an experimental feature that might result in data loss. Type 'YES' if you understand this and want to continue: YES 1 orphaned objects found! Archive consistency check complete, problems found. Suggestions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian.klaver at aklaver.com Thu Jun 16 09:40:36 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Thu, 16 Jun 2016 06:40:36 -0700 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> Message-ID: On 06/15/2016 11:48 PM, Thorsten von Eicken wrote: > This is a pretty new repository with just a couple of archives and I'm > having consistency issues that don't seem to get repaired: > > == Starting backup for repo home at 2016-06-15T09:49-0700 > Archive: backup at backup:/big/h/home::home-2016-06-15T09:49-0700 > borg create --show-rc -s -e .cache -C lzma > backup at backup:/big/h/home::home-2016-06-15T09:49-0700 /big/home /etc > /root /big/usr-local > == Starting check for repo home > Repository: backup at backup:/big/h/home > borg check --show-rc --last 3 backup at backup:/big/h/home > borg.repository Remote: Index object count mismatch. 549207 != 549310 > borg.repository Remote: Completed repository check, errors found. > terminating with warning status, rc 1 > > and later I tried to repair it: > > root at h /b/h/tve# borg check --show-rc --repair backup at backup:/big/h/home > 'check --repair' is an experimental feature that might result in data loss. > Type 'YES' if you understand this and want to continue: YES > 103 orphaned objects found! > Archive consistency check complete, problems found. > root at h /b/h/tve# borg check --show-rc --repair backup at backup:/big/h/home > 'check --repair' is an experimental feature that might result in data loss. > Type 'YES' if you understand this and want to continue: YES > 1 orphaned objects found! > Archive consistency check complete, problems found. > > Suggestions? I do not know the answer, but I do have some questions: What version of Borg? What are the OSes and file systems involved? What is the history of the repo? 
In other words have you changed the Borg version that was backing up to it? Have there been any incomplete/disconnected create runs? After your last repair what does check show without --repair? > > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From tw at waldmann-edv.de Thu Jun 16 13:30:36 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 16 Jun 2016 19:30:36 +0200 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> Message-ID: <5762E23C.2020502@waldmann-edv.de> To add to Adrian's questions: Where is the repo? Local mount? USB / SATA disk? Network via ssh? nfs? WAN / WiFi / LAN? Did you encounter interruptions while doing a backup (unplug, lan/wifi/wan disconnect)? -- GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt. From tve at voneicken.com Thu Jun 16 13:13:30 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Thu, 16 Jun 2016 17:13:30 +0000 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> Message-ID: <010001555a3413ea-ff67669f-125e-41d0-a4f8-2edc2e72c6e0-000000@email.amazonses.com> The host making the backups is ubuntu 14.04 on x86_64: # borg -V borg 1.0.3 The server holding the archive runs arch linux on ARM (an ODROID): # borg -V borg 1.0.4.dev283+ng8083799 There was a previous backup error due to incorrect permissions on some files causing writes to fail (sorry, I hadn't put 2&2 together): Remote: PermissionError: [Errno 13] Permission denied: '/big/h/home/data/0/245' Last night I reran another check with repair, and now I just re-ran a check without repair: # borg check --show-rc --repair backup at backup:/big/h/home 'check --repair' is an experimental feature that might result in data loss. Type 'YES' if you understand this and want to continue: YES 104 orphaned objects found! Archive consistency check complete, problems found. # borg check --show-rc backup at backup:/big/h/home borg.repository Remote: Index object count mismatch. 560978 != 561083 borg.repository Remote: Completed repository check, errors found. terminating with warning status, rc 1 Some details on the incorrect permissions, the problem is root ownership instead of backup: # ls -ls total 29616 4 -rw------- 1 backup backup 26 Jun 4 05:00 README 4 -rw------- 1 backup backup 635 Jun 4 05:02 config 4 drwx------ 3 backup backup 4096 Jun 4 05:02 data 4 -rw------- 1 root root 526 Jun 5 20:21 hints.245 29596 -rw------- 1 root root 30304298 Jun 5 20:21 index.245 4 -rw------- 1 backup backup 31 Jun 14 19:14 lock.roster and # ls -ls data/0 ... 4 -rw------- 1 backup backup 17 Jun 5 10:43 235 4 -rw------- 1 backup backup 17 Jun 5 10:43 237 4 -rw------- 1 root root 17 Jun 5 20:06 239 4 -rw------- 1 root root 17 Jun 5 20:06 241 4 -rw------- 1 root root 17 Jun 5 20:21 243 9020 -rw------- 1 root root 9234533 Jun 5 20:21 244 4 -rw------- 1 root root 17 Jun 5 20:21 245 330628 -rw------- 1 backup backup 338561277 Jun 4 05:40 54 129184 -rw------- 1 backup backup 132281434 Jun 4 05:46 58 ... 
On 6/16/2016 6:40 AM, Adrian Klaver wrote: > On 06/15/2016 11:48 PM, Thorsten von Eicken wrote: >> This is a pretty new repository with just a couple of archives and I'm >> having consistency issues that don't seem to get repaired: >> >> == Starting backup for repo home at 2016-06-15T09:49-0700 >> Archive: backup at backup:/big/h/home::home-2016-06-15T09:49-0700 >> borg create --show-rc -s -e .cache -C lzma >> backup at backup:/big/h/home::home-2016-06-15T09:49-0700 /big/home /etc >> /root /big/usr-local >> == Starting check for repo home >> Repository: backup at backup:/big/h/home >> borg check --show-rc --last 3 backup at backup:/big/h/home >> borg.repository Remote: Index object count mismatch. 549207 != 549310 >> borg.repository Remote: Completed repository check, errors found. >> terminating with warning status, rc 1 >> >> and later I tried to repair it: >> >> root at h /b/h/tve# borg check --show-rc --repair backup at backup:/big/h/home >> 'check --repair' is an experimental feature that might result in data >> loss. >> Type 'YES' if you understand this and want to continue: YES >> 103 orphaned objects found! >> Archive consistency check complete, problems found. >> root at h /b/h/tve# borg check --show-rc --repair backup at backup:/big/h/home >> 'check --repair' is an experimental feature that might result in data >> loss. >> Type 'YES' if you understand this and want to continue: YES >> 1 orphaned objects found! >> Archive consistency check complete, problems found. >> >> Suggestions? > > I do not know the answer, but I do have some questions: > > What version of Borg? > > What are the OSes and file systems involved? > > What is the history of the repo? > In other words have you changed the Borg version that was backing up > to it? > > Have there been any incomplete/disconnected create runs? > > After your last repair what does check show without --repair? > >> >> >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tve at voneicken.com Fri Jun 17 01:21:38 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Fri, 17 Jun 2016 05:21:38 +0000 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: <5762E23C.2020502@waldmann-edv.de> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> Message-ID: <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> Answers inline (see also my reply to Adrian) On 6/16/2016 10:30 AM, Thomas Waldmann wrote: > To add to Adrian's questions: > > Where is the repo? Local mount? USB / SATA disk? Remote, from Ubuntu 14.04 x86_64 client to Arch ARM server > > Network via ssh? nfs? Ethernet, 100Mbps, via SSH > > WAN / WiFi / LAN? > > Did you encounter interruptions while doing a backup (unplug, > lan/wifi/wan disconnect)? > Yes, permissions issue, see reply to Adrian. Thanks much! Thorsten -------------- next part -------------- An HTML attachment was scrubbed... URL: From tve at voneicken.com Fri Jun 17 01:31:38 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Fri, 17 Jun 2016 05:31:38 +0000 Subject: [Borgbackup] what do I do with inconsistencies? 
In-Reply-To: <5762E23C.2020502@waldmann-edv.de> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> Message-ID: <010001555cd7dce9-e0f68660-49c2-4a93-b3de-ba64cd8ac2a4-000000@email.amazonses.com> Answers inline (see also my reply to Adrian) On 6/16/2016 10:30 AM, Thomas Waldmann wrote: > To add to Adrian's questions: > > Where is the repo? Local mount? USB / SATA disk? Remote, from Ubuntu 14.04 x86_64 client to Arch ARM server > Network via ssh? nfs? Ethernet, 100Mbps, via SSH > WAN / WiFi / LAN? > > Did you encounter interruptions while doing a backup (unplug, > lan/wifi/wan disconnect)? Yes, permissions issue, see reply to Adrian. Thanks much! Thorsten -------------- next part -------------- An HTML attachment was scrubbed... URL: From tve at voneicken.com Fri Jun 17 01:41:39 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Fri, 17 Jun 2016 05:41:39 +0000 Subject: [Borgbackup] borg check without progress? Message-ID: <010001555ce1073f-cdfd1d4e-a210-49c2-b028-78312698b026-000000@email.amazonses.com> I'm running borg check in a script and I get the % done in the log file, is there a trick not to see that but also not to lose any useful messages printed? E.g.: borg check --show-rc --last 3 /big/borg/h Checking segments 0.0%^MChecking segments 0.1%^MChecking segments 0.2%^MChecking segments 0.3%^MChecking seg... Thanks! Thorsten -------------- next part -------------- An HTML attachment was scrubbed... URL: From hpj at urpla.net Fri Jun 17 05:01:03 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Fri, 17 Jun 2016 11:01:03 +0200 Subject: [Borgbackup] Some minor issues Message-ID: <3615163.UrhxtIYZKZ@xrated> Hi Thomas, hello to the collective, in my setup, I'm backing up many similar VMs, which is a perfect ground for borg's excellent deduplication feature. Since they're hosted on different machines and backup is triggered with cron, I'm using a single repo and BORG_RELOCATED_REPO_ACCESS_IS_OK=yes to harvest the maximum result. Of course, it took two attempts to get this working unattended. A quick googling revealed the environment variable workaround. This deserves a note in the documentation of remote backups, probably with mentioning the reasons. What's still a little disturbing is this: Warning: The repository at location /backup/borg was previously located at ssh://user at host/backup/borg Do you want to continue? [yN] Given the answer is defined by the environment, what's the reason to print the question "Do you.."? Since no input is given anyway, it results in a missing newline, which distorts the output. I would even argue that the warning itself isn't justified due to being actively approved beforehand, unless -v is also given. It appears on every backup run, and ever-recurring warnings generally don't improve average users' confidence in the system. While at it, I noticed that providing --stats on the command line doesn't produce any output without also giving -v, which generates other output that is less interesting for unattended runs: Synchronizing chunks cache... Archives: 10, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 10. .... If the stats output was requested explicitly, it should be done, no matter what other options were given or not.
Cheers, Pete From adrian.klaver at aklaver.com Fri Jun 17 10:12:12 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Fri, 17 Jun 2016 07:12:12 -0700 Subject: [Borgbackup] Some minor issues In-Reply-To: <3615163.UrhxtIYZKZ@xrated> References: <3615163.UrhxtIYZKZ@xrated> Message-ID: On 06/17/2016 02:01 AM, Hans-Peter Jansen wrote: > Hi Thomas, > hello to the collective, First, what version of Borg? > > in my setup, I'm backing up many similar VMs, which is a perfect ground for > borgs excelling deduplication feature. Since they're hosted on different > machines and backup is triggered with cron, I'm using a single repo and > > BORG_RELOCATED_REPO_ACCESS_IS_OK=yes > > to harvest the maximum result. Of course, it took two attempts to get this > working unattended. A quick googling revealed the environmental intervention. > This deserves a note in the documentation of remote backups, probably with > mentioning the reasons. > > What's still a little disturbing, is this: > Warning: The repository at location /backup/borg was previously located at > ssh://user at host/backup/borg > Do you want to continue? [yN] > > Given the answer is defined by environment, what's the reason the print the > question "Do you.."? Since no input is given anyway, it results in a missing > newline, which distorts the output. I would even argue, that the warning > itself isn't justified due to being actively approved beforehand, unless > -v is also given. It appears on every backup run and ever recurring warnings > generally doesn't improve average users confidence in the system. > > While at it, I noticed, that providing --stats on the command line doesn't > produce any output without also giving -v, which generates other output, that > is less interesting for unattended runs: > > Synchronizing chunks cache... > Archives: 10, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 10. > .... What command and what is the full command line ? > > If the stats output was requested explicitly, it should be done, no matter, > what other options were given or not. > > Cheers, > Pete > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From hpj at urpla.net Fri Jun 17 10:33:12 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Fri, 17 Jun 2016 16:33:12 +0200 Subject: [Borgbackup] Some minor issues In-Reply-To: References: <3615163.UrhxtIYZKZ@xrated> Message-ID: <3429587.abki0l79xN@xrated> On Freitag, 17. Juni 2016 07:12:12 Adrian Klaver wrote: > On 06/17/2016 02:01 AM, Hans-Peter Jansen wrote: > > Hi Thomas, > > hello to the collective, > > First, what version of Borg? Silly me, 1.0.3 "of course".. [...] > What command and what is the full command line ? borg create --stats --compression lz4 user at server:/backup/borg::prefix-$(date) /path/to/data Pete From adrian.klaver at aklaver.com Fri Jun 17 12:33:56 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Fri, 17 Jun 2016 09:33:56 -0700 Subject: [Borgbackup] Some minor issues In-Reply-To: <3429587.abki0l79xN@xrated> References: <3615163.UrhxtIYZKZ@xrated> <3429587.abki0l79xN@xrated> Message-ID: On 06/17/2016 07:33 AM, Hans-Peter Jansen wrote: > On Freitag, 17. Juni 2016 07:12:12 Adrian Klaver wrote: >> On 06/17/2016 02:01 AM, Hans-Peter Jansen wrote: >>> Hi Thomas, >>> hello to the collective, >> >> First, what version of Borg? > > Silly me, 1.0.3 "of course".. > > [...] 
> >> What command and what is the full command line ? > > borg create --stats --compression lz4 user at server:/backup/borg::prefix-$(date) > /path/to/data Using 1.0.2, when I run create with -v --stats and prune with -v --list I get the below returned from my cron job: ------------------------------------------------------------------------------ Archive name: b_repo-061616_1830 Archive fingerprint: 056a38f007392ab1fad5d69caeb1b1982b6ec8947fa1c286392677ce632c490e Time (start): Thu, 2016-06-16 18:30:02 Time (end): Thu, 2016-06-16 18:30:02 Duration: 0.23 seconds Number of files: 6 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 19.80 kB 20.15 kB 1.57 kB All archives: 372.59 kB 379.13 kB 91.36 kB Unique chunks Total chunks Chunk index: 50 152 ------------------------------------------------------------------------------ Keeping archive: b_repo-061616_1830 Thu, 2016-06-16 18:30:02 Keeping archive: b_repo-061516_1830 Wed, 2016-06-15 18:30:02 Keeping archive: b_repo-061416_1830 Tue, 2016-06-14 18:30:02 Keeping archive: b_repo-061316_1830 Mon, 2016-06-13 18:30:02 Keeping archive: b_repo-061216_1830 Sun, 2016-06-12 18:30:02 Keeping archive: b_repo-061116_1830 Sat, 2016-06-11 18:30:02 Keeping archive: b_repo-061016_1830 Fri, 2016-06-10 18:30:02 Keeping archive: b_repo-060916_1830 Thu, 2016-06-09 18:30:02 Keeping archive: b_repo-060516_1830 Sun, 2016-06-05 18:30:02 Keeping archive: b_repo-053116_1830 Tue, 2016-05-31 18:30:02 Keeping archive: b_repo-052916_1830 Sun, 2016-05-29 18:30:02 Keeping archive: b_repo-052216_1830 Sun, 2016-05-22 18:30:02 Keeping archive: b_repo-051516_1830 Sun, 2016-05-15 18:30:02 Keeping archive: b_repo-043016_1830 Sat, 2016-04-30 18:30:02 Keeping archive: b_repo-033116_1830 Thu, 2016-03-31 18:30:02 Keeping archive: b_repo-022916_1830 Mon, 2016-02-29 18:30:02 Keeping archive: b_repo-013116_1830 Sun, 2016-01-31 18:30:02 Keeping archive: b_repo-123115_1830 Thu, 2015-12-31 18:30:02 Pruning archive: b_repo-060816_1830 Wed, 2016-06-08 18:30:02 > > Pete > -- Adrian Klaver adrian.klaver at aklaver.com From aec at osncs.com Fri Jun 17 12:55:59 2016 From: aec at osncs.com (Andre Charette) Date: Fri, 17 Jun 2016 12:55:59 -0400 Subject: [Borgbackup] borg check without progress? Message-ID: <98367a0a5b0eaab1884352d27bec9f26@osncs.com> I haven't found a way to do this without without modifying a few lines of code in "helpers.py". It's a pain since this has to be re-done every time borg gets a updated. -- /andre From hpj at urpla.net Fri Jun 17 13:31:02 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Fri, 17 Jun 2016 19:31:02 +0200 Subject: [Borgbackup] Some minor issues In-Reply-To: References: <3615163.UrhxtIYZKZ@xrated> <3429587.abki0l79xN@xrated> Message-ID: <3872508.jzagtXHDLy@xrated> Hi Adrian, On Freitag, 17. 
Juni 2016 09:33:56 Adrian Klaver wrote: > > Using 1.0.2, when I run create with -v --stats and prune with -v --list I > get the below returned from my cron job: > > ---------------------------------------------------------------------------- > -- Archive name: b_repo-061616_1830 > Archive fingerprint: > 056a38f007392ab1fad5d69caeb1b1982b6ec8947fa1c286392677ce632c490e Time > (start): Thu, 2016-06-16 18:30:02 > Time (end): Thu, 2016-06-16 18:30:02 > Duration: 0.23 seconds > Number of files: 6 > ---------------------------------------------------------------------------- > -- Original size Compressed size Deduplicated size This archive: > 19.80 kB 20.15 kB 1.57 kB All archives: > 372.59 kB 379.13 kB 91.36 kB > > Unique chunks Total chunks > Chunk index: 50 152 > ---------------------------------------------------------------------------- The point is, if you run "borg create --stats" (without -v), it doesn't print the stats, which is rather counter intuitive... Pete From adrian.klaver at aklaver.com Fri Jun 17 13:33:40 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Fri, 17 Jun 2016 10:33:40 -0700 Subject: [Borgbackup] Some minor issues In-Reply-To: <3872508.jzagtXHDLy@xrated> References: <3615163.UrhxtIYZKZ@xrated> <3429587.abki0l79xN@xrated> <3872508.jzagtXHDLy@xrated> Message-ID: <443d8b7f-8d36-3054-1c04-20b95990c445@aklaver.com> On 06/17/2016 10:31 AM, Hans-Peter Jansen wrote: > Hi Adrian, > > On Freitag, 17. Juni 2016 09:33:56 Adrian Klaver wrote: >> >> Using 1.0.2, when I run create with -v --stats and prune with -v --list I >> get the below returned from my cron job: >> >> ---------------------------------------------------------------------------- >> -- Archive name: b_repo-061616_1830 >> Archive fingerprint: >> 056a38f007392ab1fad5d69caeb1b1982b6ec8947fa1c286392677ce632c490e Time >> (start): Thu, 2016-06-16 18:30:02 >> Time (end): Thu, 2016-06-16 18:30:02 >> Duration: 0.23 seconds >> Number of files: 6 >> ---------------------------------------------------------------------------- >> -- Original size Compressed size Deduplicated size This archive: >> 19.80 kB 20.15 kB 1.57 kB All archives: >> 372.59 kB 379.13 kB 91.36 kB >> >> Unique chunks Total chunks >> Chunk index: 50 152 >> ---------------------------------------------------------------------------- > > The point is, if you run "borg create --stats" (without -v), it doesn't print > the stats, which is rather counter intuitive... Well it is documented: http://borgbackup.readthedocs.io/en/stable/usage.html " Type of log output The log level of the builtin logging configuration defaults to WARNING. This is because we want Borg to be mostly silent and only output warnings, errors and critical messages. ... Warning While some options (like --stats or --list) will emit more informational messages, you have to use INFO (or lower) log level to make them show up in log output. Use -v or a logging configuration. " > > Pete > -- Adrian Klaver adrian.klaver at aklaver.com From hpj at urpla.net Fri Jun 17 14:45:24 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Fri, 17 Jun 2016 20:45:24 +0200 Subject: [Borgbackup] Some minor issues In-Reply-To: <443d8b7f-8d36-3054-1c04-20b95990c445@aklaver.com> References: <3615163.UrhxtIYZKZ@xrated> <3872508.jzagtXHDLy@xrated> <443d8b7f-8d36-3054-1c04-20b95990c445@aklaver.com> Message-ID: <1532819.AYdGQ9r1W8@xrated> On Freitag, 17. Juni 2016 10:33:40 Adrian Klaver wrote: > On 06/17/2016 10:31 AM, Hans-Peter Jansen wrote: > > Hi Adrian, > > > > On Freitag, 17. 
Juni 2016 09:33:56 Adrian Klaver wrote: > >> Using 1.0.2, when I run create with -v --stats and prune with -v --list I > >> get the below returned from my cron job: > >> > >> ------------------------------------------------------------------------- > >> --- -- Archive name: b_repo-061616_1830 > >> Archive fingerprint: > >> 056a38f007392ab1fad5d69caeb1b1982b6ec8947fa1c286392677ce632c490e Time > >> (start): Thu, 2016-06-16 18:30:02 > >> Time (end): Thu, 2016-06-16 18:30:02 > >> Duration: 0.23 seconds > >> Number of files: 6 > >> ------------------------------------------------------------------------- > >> --->> > >> -- Original size Compressed size Deduplicated size This archive: > >> 19.80 kB 20.15 kB 1.57 kB All archives: > >> 372.59 kB 379.13 kB 91.36 kB > >> > >> Unique chunks Total chunks > >> > >> Chunk index: 50 152 > >> ------------------------------------------------------------------------- > >> ---> > > The point is, if you run "borg create --stats" (without -v), it doesn't > > print the stats, which is rather counter intuitive... > > Well it is documented: > > http://borgbackup.readthedocs.io/en/stable/usage.html > " > Type of log output > > The log level of the builtin logging configuration defaults to WARNING. > This is because we want Borg to be mostly silent and only output > warnings, errors and critical messages. > > ... > > Warning > > While some options (like --stats or --list) will emit more informational > messages, you have to use INFO (or lower) log level to make them show up > in log output. Use -v or a logging configuration. > " Well, the term "more" doesn't match users experience, since nothing is printed at all. In other words, without looking at the repo with list after create, you cannot see, if borg did anything, if -v wasn't given (intentionally). Let's stop this fruitless discussion, it's suboptimal behavior, and either Thomas want to fix it or somebody else (incl. me) sends in a PR with a fix. The aim of posting this here was discussing the "right" behavior. Pete From public at enkore.de Fri Jun 17 15:32:00 2016 From: public at enkore.de (public at enkore.de) Date: Fri, 17 Jun 2016 21:32:00 +0200 Subject: [Borgbackup] borg check without progress? In-Reply-To: <98367a0a5b0eaab1884352d27bec9f26@osncs.com> References: <98367a0a5b0eaab1884352d27bec9f26@osncs.com> Message-ID: <53f046bf-b0a9-0d34-814e-e2eea3e808fb@enkore.de> On 06/17/2016 06:55 PM, Andre Charette wrote: > I haven't found a way to do this without without modifying a few lines > of code in "helpers.py". It's a pain since this has to be re-done every > time borg gets a updated. > Fix scheduled for 1.1 Cheers, Marian From public at enkore.de Fri Jun 17 15:35:09 2016 From: public at enkore.de (public at enkore.de) Date: Fri, 17 Jun 2016 21:35:09 +0200 Subject: [Borgbackup] Some minor issues In-Reply-To: <3872508.jzagtXHDLy@xrated> References: <3615163.UrhxtIYZKZ@xrated> <3429587.abki0l79xN@xrated> <3872508.jzagtXHDLy@xrated> Message-ID: <68a7a06e-2b85-5ccb-f388-5089c58c0093@enkore.de> On 06/17/2016 07:31 PM, Hans-Peter Jansen wrote: > The point is, if you run "borg create --stats" (without -v), it doesn't print > the stats, which is rather counter intuitive... Yeah, that'll be fixed in 1.1 See current development log at https://github.com/borgbackup/borg/blob/master/docs/changes.rst > BORG_RELOCATED_REPO_ACCESS_IS_OK=yes This should only be needed once. 
Check if ~/.cache/borg//config contains the correct path (the one you use to access the repository in your scripts), and if that file has the correct permissions etc. Cheers, Marian From hpj at urpla.net Fri Jun 17 16:47:05 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Fri, 17 Jun 2016 22:47:05 +0200 Subject: [Borgbackup] Some minor issues In-Reply-To: <68a7a06e-2b85-5ccb-f388-5089c58c0093@enkore.de> References: <3615163.UrhxtIYZKZ@xrated> <3872508.jzagtXHDLy@xrated> <68a7a06e-2b85-5ccb-f388-5089c58c0093@enkore.de> Message-ID: <1597638.9V5pKXiySv@xrated> On Freitag, 17. Juni 2016 21:35:09 public at enkore.de wrote: > On 06/17/2016 07:31 PM, Hans-Peter Jansen wrote: > > The point is, if you run "borg create --stats" (without -v), it > > doesn't print > > > the stats, which is rather counter intuitive... > > Yeah, that'll be fixed in 1.1 > > See current development log at > https://github.com/borgbackup/borg/blob/master/docs/changes.rst Great. Good news. > > BORG_RELOCATED_REPO_ACCESS_IS_OK=yes > > This should only be needed once. Check if ~/.cache/borg//config > contains the correct path (the one you use to access the repository in > your scripts), and if that file has the correct permissions etc. Aii, I see, unfortunately, I'm using a NFS home, hence I see all repo configs in my .cache/borg dir. Hmm, Pete From tve at voneicken.com Sat Jun 18 13:02:06 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Sat, 18 Jun 2016 17:02:06 +0000 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> Message-ID: <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> I'm having more difficulties with this repo. What I did is to start a fresh repo in the same location. Specifically, I renamed the directory of the repo on the server from "home" to "home-bad" and initialized a fresh repo "home". My backup then ran over night and did a full backup into the new repo and the subsequent check again encountered errors. Is this perhaps because of the local cache on the client machine? Or is there a more intrinsic problem? Suggestions? 
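(Aside: as Thomas confirms further down in this thread, the client-side cache lives in one subdirectory per repository ID under ~/.cache/borg/, and a freshly initialized repo gets a new ID, so the old repo's cache should not be able to interfere. A small sketch to list the existing caches and see which repository each belongs to, assuming the default cache location:)

    import glob
    import os

    for cfg in sorted(glob.glob(os.path.expanduser("~/.cache/borg/*/config"))):
        repo_id = os.path.basename(os.path.dirname(cfg))
        print("cache for repository id:", repo_id)
        # print the cache's own config so its origin/location can be checked
        with open(cfg) as f:
            print(f.read())
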
Here's the init: # borg init --encryption=repokey backup at backup:/big/h/home Here's the create followed by the failing check: == Starting backup for repo home at 2016-06-18T03:15-0700 Archive: backup at backup:/big/h/home::home-2016-06-18T03:15-0700 borg create --show-rc --stats -v -e .cache -C lzma backup at backup:/big/h/home::home-2016-06-18T03:15-0700 /big/home /etc /root /big/usr-local ------------------------------------------------------------------------------ Archive name: home-2016-06-18T03:15-0700 Archive fingerprint: 707cbc9b15b6756a344cf8653bddc5bb2a119293dfee3da72aff0bf8a7395b6c Time (start): Sat, 2016-06-18 03:15:24 Time (end): Sat, 2016-06-18 07:21:32 Duration: 4 hours 6 minutes 7.98 seconds Number of files: 916872 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 38.05 GB 19.38 GB 16.76 GB All archives: 38.05 GB 19.38 GB 16.76 GB Unique chunks Total chunks Chunk index: 537179 935455 ------------------------------------------------------------------------------ terminating with success status, rc 0 == Starting check for repo home Repository: backup at backup:/big/h/home borg check --show-rc --last 3 backup at backup:/big/h/home borg.repository Remote: Index object count mismatch. 537180 != 537185 borg.repository Remote: Completed repository check, errors found. terminating with warning status, rc 1 On 6/16/2016 10:21 PM, Thorsten von Eicken wrote: > > Answers inline (see also my reply to Adrian) > > On 6/16/2016 10:30 AM, Thomas Waldmann wrote: >> To add to Adrian's questions: >> >> Where is the repo? Local mount? USB / SATA disk? > Remote, from Ubuntu 14.04 x86_64 client to Arch ARM server >> >> Network via ssh? nfs? > Ethernet, 100Mbps, via SSH >> >> WAN / WiFi / LAN? >> >> Did you encounter interruptions while doing a backup (unplug, >> lan/wifi/wan disconnect)? >> > Yes, permissions issue, see reply to Adrian. > > Thanks much! > Thorsten > > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -------------- next part -------------- An HTML attachment was scrubbed... URL: From hpj at urpla.net Sat Jun 18 13:59:39 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Sat, 18 Jun 2016 19:59:39 +0200 Subject: [Borgbackup] Some minor issues In-Reply-To: <68a7a06e-2b85-5ccb-f388-5089c58c0093@enkore.de> References: <3615163.UrhxtIYZKZ@xrated> <3872508.jzagtXHDLy@xrated> <68a7a06e-2b85-5ccb-f388-5089c58c0093@enkore.de> Message-ID: <6062925.EGKR569353@xrated> Hi Marian, hi Thomas, On Freitag, 17. Juni 2016 21:35:09 public at enkore.de wrote: > On 06/17/2016 07:31 PM, Hans-Peter Jansen wrote: > > The point is, if you run "borg create --stats" (without -v), it > > doesn't print > > > the stats, which is rather counter intuitive... > > Yeah, that'll be fixed in 1.1 > > See current development log at > https://github.com/borgbackup/borg/blob/master/docs/changes.rst > > > BORG_RELOCATED_REPO_ACCESS_IS_OK=yes > > This should only be needed once. Check if ~/.cache/borg//config > contains the correct path (the one you use to access the repository in > your scripts), and if that file has the correct permissions etc. 
Here's a patch proposal: --- a/src/borg/helpers.py +++ b/src/borg/helpers.py @@ -975,6 +975,25 @@ def yes(msg=None, false_msg=None, true_msg=None, default_msg=None, ofile = sys.stderr if default not in (True, False): raise ValueError("invalid default value, must be True or False") + # silent acceptance via environment: + # if a valid answer is given via environment + # and no env_msg is attached to this question + # print msg only, if a related {true,false}_msg is attached + # and return the related value + if env_var_override and not env_msg: + answer = os.environ.get(env_var_override) + if answer in truish: + if true_msg: + if msg: + print(msg, file=ofile) + print(true_msg, file=ofile) + return True + if answer in falsish: + if false_msg: + if msg: + print(msg, file=ofile) + print(false_msg, file=ofile) + return False if msg: print(msg, file=ofile, end='', flush=True) while True: Would something like that be acceptable? Thanks, Pete -------------- next part -------------- A non-text attachment was scrubbed... Name: borg_silent_envvar_acceptance.patch Type: text/x-patch Size: 1178 bytes Desc: not available URL: From adrian.klaver at aklaver.com Sun Jun 19 15:42:04 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Sun, 19 Jun 2016 12:42:04 -0700 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> Message-ID: <789d17ff-8a7f-06ca-d826-742154f5f823@aklaver.com> On 06/18/2016 10:02 AM, Thorsten von Eicken wrote: > I'm having more difficulties with this repo. What I did is to start a > fresh repo in the same location. Specifically, I renamed the directory > of the repo on the server from "home" to "home-bad" and initialized a > fresh repo "home". My backup then ran over night and did a full backup > into the new repo and the subsequent check again encountered errors. Is > this perhaps because of the local cache on the client machine? Or is > there a more intrinsic problem? Did you correct the permissions issue? Have you made the versions of Borg the same on both machines? Also what are the encodings/locales you are using? I ask because I see the below on my end: ????????????????????????????????????????????????????????????????? Original size??????????????? When I look at the archives everything looks alright. > > Suggestions? > > Here's the init: > > # borg init --encryption=repokey backup at backup:/big/h/home > > Here's the create followed by the failing check: > > == Starting backup for repo home at 2016-06-18T03:15-0700 > Archive: backup at backup:/big/h/home::home-2016-06-18T03:15-0700 > borg create --show-rc --stats -v -e .cache -C lzma > backup at backup:/big/h/home::home-2016-06-18T03:15-0700 /big/home /etc > /root /big/usr-local > ------------------------------------------------------------------------------ > Archive name: home-2016-06-18T03:15-0700 > Archive fingerprint: > 707cbc9b15b6756a344cf8653bddc5bb2a119293dfee3da72aff0bf8a7395b6c > Time (start): Sat, 2016-06-18 03:15:24 > Time (end):?????? 
Sat, 2016-06-18 07:21:32 > Duration: 4 hours 6 minutes 7.98 seconds > Number of files: 916872 > ------------------------------------------------------------------------------ > ?????????????????????????????????????????????????????????????????? > Original size??????????????? Compressed size????????? Deduplicated size > This archive:?????????????????????????????????????????? 38.05 > GB???????????????????????????????????? 19.38 > GB???????????????????????????????????? 16.76 GB > All archives:?????????????????????????????????????????? 38.05 > GB???????????????????????????????????? 19.38 > GB???????????????????????????????????? 16.76 GB > > ?????????????????????????????????????????????????????????????????? > Unique chunks???????????????????????? Total chunks > Chunk index:??????????????????????????????????????????????????? > 537179?????????????????????????????????????????? 935455 > ------------------------------------------------------------------------------ > terminating with success status, rc 0 > == Starting check for repo home > Repository: backup at backup:/big/h/home > borg check --show-rc --last 3 backup at backup:/big/h/home > borg.repository Remote: Index object count mismatch. 537180 != 537185 > borg.repository Remote: Completed repository check, errors found. > terminating with warning status, rc 1 > > > On 6/16/2016 10:21 PM, Thorsten von Eicken wrote: >> >> Answers inline (see also my reply to Adrian) >> >> On 6/16/2016 10:30 AM, Thomas Waldmann wrote: >>> To add to Adrian's questions: >>> >>> Where is the repo? Local mount? USB / SATA disk? >> Remote, from Ubuntu 14.04 x86_64 client to Arch ARM server >>> >>> Network via ssh? nfs? >> Ethernet, 100Mbps, via SSH >>> >>> WAN / WiFi / LAN? >>> >>> Did you encounter interruptions while doing a backup (unplug, >>> lan/wifi/wan disconnect)? >>> >> Yes, permissions issue, see reply to Adrian. >> >> Thanks much! >> Thorsten >> >> >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From tmhikaru at gmail.com Sun Jun 19 16:25:07 2016 From: tmhikaru at gmail.com (tmhikaru at gmail.com) Date: Sun, 19 Jun 2016 13:25:07 -0700 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup Message-ID: <20160619202507.GA2020@raspberrypi> I accidentally replied directly to Thomas Waldmann several days ago and did not realize I hadn't sent my reply to the mailing list. Sorry about that, here it is. I would appreciate feedback on trying to determine exactly where the problem lies, or alternately, proving that none does. Tim McGrath On Thu, Jun 16, 2016 at 10:41:34AM +0200, Thomas Waldmann wrote: > Yes, this is expected. > > If you want to avoid it, have a separate repo for the rpi Not what I want, but I somewhat expected this response. If I absolutely have to do it this way, I will. > Don't you think it could be rather related to how much data there is (in > total) in the repository at the time when you resync? > > If there isn't much yet, it goes quickly. > But if you have a lot of archives in there, with a lot of data, resyncing > the chunks cache takes quite some time (even on much more powerful > machines). 
I can't guarantee it's not related to the amount of data that's in the repository, but I really have a feeling something is going wrong when it tries to sync large archives when cache already exists. The amount of data in the repository I have, along with the amount of archives, has only gotten larger since I started using borg to do backups on the Rpi. If the client side syncs from a totally nonexistent cache (rm -r'd .cache) it takes quite a while to finish the sync, but it actually finishes each archive sync within minutes, rather unlike what happens if cache already exists. When cache already exists, it usually gets stuck at some point early on with the first of the larger archives during the 'Merging into master chunks index' part, and rather than taking a few minutes to process synchronizing the archive like it does when cache is empty, it'll happily spin at 100% cpu for hours seemingly with no progress - there is no noticable disk I/O, memory pressure, or network traffic for this time, something that is quite a different story when it's syncing from an empty cache. I don't know what it's doing, but it's clearly not working well, if at all. I would very much like to know what it's getting hung up on - is there some kind of verbosity setting I could use to find out? Even if this is merely it getting bogged down by the size of the repository, I'd like to verify that it is in fact doing that rather than guessing at its behavior. To be clear, every single time I have blown away the cache and retried the backup operation, borg has synchronized and completed the backup successfully. It's only when the cache already exists that it winds up stuck. I want to know why that makes a difference, and how I could work around it without having to delete the cache every time I run it. >borg uses locally cached per-archive chunk-indexes (except if you do the >hack to save space by disallowing this) to save some data-transfer from >remote and also to only have to do this computation once per archive. The >code that merges these single-archive indexes into the global index is pure >C and quite fast. Before I go and do something monumentally stupid, if I wanted to test if it's getting hung up in this per archive chunk index generation you're talking about, would performing this hack be a good way to find out? I don't care about speed or disk space at this point, I want to find out what's going on. > Attic did not work like you think. Maybe you just read some over-simplified > explanation of it somewhere. > > There is a "borgception" ticket in our issue tracker that describes a > similar idea, but it is not implemented yet. Thank you for clearing that up, I apologize for my ignorance. > Borg (and attic) do not store secret keys or process unencrypted data on the > server (except the latter, obviously, if you do not use encryption). Thus, > it is not able to compute the chunk index. > > This is a design decision as the repo storage is assumed to be potentially > untrusted (e.g. a 3rd party machine, a usb disk). Pity. I don't use encryption on my backups, at least not yet - I figured I'd run into potential problems and didn't want to deal with that can of worms complicating things even further. I understand now why borg cannot do this however, thank you for explaining. > >Sshfs does have a major disadvantage though, in that borgs -x switch doesn't > > work properly for backups done through it, so I had to add > >specific exclusions for things like /proc, /sys, /dev/, /var/run, etc. Not > >fun. 
> > Not sure what you mean. When running on the actual hardware and using the -x switch borg will not go from, say / into /proc. An sshfs mount however seems to show up as one giant filesystem as far as borg is concerned, so using -x does *not* prevent it from going into the mounted /proc filesystem inside the sshfs mount! - My workaround for this was to make explicit excludes for the sshfs mount. It's kludgy, but it works. I don't think this is a bug in borg, just... unexpected behavior. This is the first time I've used sshfs, so this is likely my own ignorance of how it works showing. > The little RAM on the rpi might also get you into trouble, if you have a lot > of data in the repo, see the formula in the docs. Believe it or not, I thought the rpi would be impossible to run borg - but although on my main server borg runs with a bogglesome ~2.1GB of ram allocated, which would *never* fit on the Rpi which has ~490MB of ram available, working on the same remote repository I've seen it use as little as ~130MB and at most a little more than 300MB while doing its thing, and I'm not using any space saving switches either. The remote repo has ~1TB of data in it, and this doesn't seem to push borg's memory constraints too far on the rpi. I was very surprised and impressed to say the least, I wasn't expecting it to work at all. Now I want to make it work better. Thank you for taking the time to explain things to me, I appreciate it. Tim McGrath -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 465 bytes Desc: Digital signature URL: From tw at waldmann-edv.de Sun Jun 19 18:57:26 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 20 Jun 2016 00:57:26 +0200 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <20160619202507.GA2020@raspberrypi> References: <20160619202507.GA2020@raspberrypi> Message-ID: <57672356.6080308@waldmann-edv.de> > To be clear, every single time I have blown away the cache and retried the > backup operation, borg has synchronized and completed the backup > successfully. Can you check how big the "chunks" cache is after that? >>> Sshfs does have a major disadvantage though, in that borgs -x switch doesn't >>> work properly for backups done through it, so I had to add >>> specific exclusions for things like /proc, /sys, /dev/, /var/run, etc. Not >>> fun. >> >> Not sure what you mean. > > [...] Guess that one sshfs mount is one file system, even if the underlying storage on the remote side are multiple file systems. borg checks if the device number changes when doing the -x detection. If that doesn't work, we can conclude sshfs does not pass through that information. > Believe it or not, I thought the rpi would be impossible to run borg - but > although on my main server borg runs with a bogglesome ~2.1GB of ram > allocated, which would *never* fit on the Rpi which has ~490MB of ram > available, working on the same remote repository I've seen it use as little > as ~130MB and at most a little more than 300MB while doing its thing, and > I'm not using any space saving switches either. That sounds strange. I'ld expect the memory usage to be similar. Maybe on a 64bit system a little bit more than on a 32bit system, but the chunks cache would be exactly the same amount of memory for both. Are you maybe seeing the big slowdown in the moment it begins with paging memory to disk / SD card? 
With little memory that can happen suddenly when the hash table 75% full and gets enlarged. In that moment, the old hash table and the new larger one both need to be in memory while it transfers the entries from old to new. Maybe watch "top" (or htop?) while it is resyncing to see that. Have a look at memory and swap usage. Also watch the clock "top" displays to see whether the display is updating at all. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Sun Jun 19 19:12:10 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 20 Jun 2016 01:12:10 +0200 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> Message-ID: <576726CA.1020501@waldmann-edv.de> On 06/18/2016 07:02 PM, Thorsten von Eicken wrote: > I'm having more difficulties with this repo. What I did is to start a > fresh repo in the same location. Specifically, I renamed the directory > of the repo on the server from "home" to "home-bad" and initialized a > fresh repo "home". My backup then ran over night and did a full backup > into the new repo and the subsequent check again encountered errors. Is > this perhaps because of the local cache on the client machine? You mean the cache of the (now) home-bad repo influencing operations on the (new) home repo? No. Repos are identified by a unique id. If you did "borg init" to create a new repo, it has a different id. > # borg init --encryption=repokey backup at backup:/big/h/home Encryption enabled using that key mode is the default for borg >= 1.0. > Time (end):?????? Sat, 2016-06-18 07:21:32 Somehow spaces look like garbage in your output. > == Starting check for repo home > Repository: backup at backup:/big/h/home > borg check --show-rc --last 3 backup at backup:/big/h/home > borg.repository Remote: Index object count mismatch. 537180 != 537185 > borg.repository Remote: Completed repository check, errors found. > terminating with warning status, rc 1 That should not be. Can you reproduce this on different hardware? You could e.g. try almost the same commands on your x86_64 machine, just using a local repo. If that works try again to the ARM machine, but use a different repo storage medium. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tve at voneicken.com Mon Jun 20 03:27:34 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Mon, 20 Jun 2016 07:27:34 +0000 Subject: [Borgbackup] what do I do with inconsistencies? 
In-Reply-To: <576726CA.1020501@waldmann-edv.de> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> <576726CA.1020501@waldmann-edv.de> Message-ID: <010001556cb515a8-e3eb275e-c401-41fd-8c00-7c3c5b1988a9-000000@email.amazonses.com> On 6/19/2016 4:12 PM, Thomas Waldmann wrote: >> == Starting check for repo home >> Repository: backup at backup:/big/h/home >> borg check --show-rc --last 3 backup at backup:/big/h/home >> borg.repository Remote: Index object count mismatch. 537180 != 537185 >> borg.repository Remote: Completed repository check, errors found. >> terminating with warning status, rc 1 > > That should not be. Can you reproduce this on different hardware? > > You could e.g. try almost the same commands on your x86_64 machine, > just using a local repo. If that works try again to the ARM machine, > but use a different repo storage medium. > So I did one test, which is to create the almost-identical backup on the x86_64 machine locally and from the x86_64 machine remotely onto the ARM box. I did the same init/create/check sequence in both cases with just the repo location being different. The result is that the local backup checked out OK and the remote one failed. This is how the good create/check looked: # borg create --show-rc --stats -v -e .cache -C lzma home-test::home-2016-06-19 /big/home /etc /root /big/usr-local ------------------------------------------------------------------------------ Archive name: home-2016-06-19 Archive fingerprint: 4a2f0549b8c15f1ffc5f1cb76bcbecb3eb1845149b2bd234f18de05a1bd8ab73 Time (start): Sun, 2016-06-19 17:43:51 Time (end): Sun, 2016-06-19 22:17:22 Duration: 4 hours 33 minutes 31.50 seconds Number of files: 917266 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 38.08 GB 19.39 GB 16.77 GB All archives: 38.08 GB 19.39 GB 16.77 GB Unique chunks Total chunks Chunk index: 537453 935954 ------------------------------------------------------------------------------ terminating with success status, rc 0 # borg check -v home-test Starting repository check Completed repository check, no problems found. Starting archive consistency check... Analyzing archive home-2016-06-19 (1/1) Archive consistency check complete, no problems found. This is how the bad create/check looked: # borg create --show-rc --stats -v -e .cache -C lzma backup at backup:/big/h/home2::home-2016-06-19 /big/ ------------------------------------------------------------------------------ Archive name: home-2016-06-19 Archive fingerprint: 75c8ef99a5fb4a9d8c0c699c9e85abf7d0a989e1209fa5bead9965144d69b214 Time (start): Sun, 2016-06-19 17:56:20 Time (end): Sun, 2016-06-19 22:27:56 Duration: 4 hours 31 minutes 36.10 seconds Number of files: 917266 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 38.08 GB 19.41 GB 16.76 GB All archives: 38.08 GB 19.41 GB 16.76 GB Unique chunks Total chunks Chunk index: 537312 935857 ------------------------------------------------------------------------------ terminating with success status, rc 0 # borg check -v backup at backup:/big/h/home2 borg.repository Remote: Starting repository check borg.repository Remote: Index object count mismatch. 
537313 != 537318 borg.repository Remote: Completed repository check, errors found. I was hacking on the machine while the backups were running and the second create started a few minutes later, so the minor difference in the chunk counts could be explained by that. I then rsync'd the broken repository to the x86_64 box and ran a check there and got the same error report. So the issue is not in the check code but the create code. I'm wondering what to test next. Some thoughts: - rsync the data to the ARM box and perform a local create/check there - nfs mount the data onto the ARM box and perform a local create/check this way Suggestions? Thorsten From public at enkore.de Mon Jun 20 08:42:02 2016 From: public at enkore.de (public at enkore.de) Date: Mon, 20 Jun 2016 14:42:02 +0200 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: <010001556cb515a8-e3eb275e-c401-41fd-8c00-7c3c5b1988a9-000000@email.amazonses.com> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> <576726CA.1020501@waldmann-edv.de> <010001556cb515a8-e3eb275e-c401-41fd-8c00-7c3c5b1988a9-000000@email.amazonses.com> Message-ID: On 06/20/2016 09:27 AM, Thorsten von Eicken wrote: > I'm wondering what to test next. Some thoughts: > - rsync the data to the ARM box and perform a local create/check there > - nfs mount the data onto the ARM box and perform a local create/check > this way > > Suggestions? > Thorsten Try (1) first to see whether it's a networking-related issue or happens even with local files. Cheers, Marian From tmhikaru at gmail.com Tue Jun 21 04:52:10 2016 From: tmhikaru at gmail.com (tmhikaru at gmail.com) Date: Tue, 21 Jun 2016 01:52:10 -0700 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <57672356.6080308@waldmann-edv.de> References: <20160619202507.GA2020@raspberrypi> <57672356.6080308@waldmann-edv.de> Message-ID: <20160621085210.GA14044@raspberrypi> On Mon, Jun 20, 2016 at 12:57:26AM +0200, Thomas Waldmann wrote: > >To be clear, every single time I have blown away the cache and retried the > >backup operation, borg has synchronized and completed the backup > >successfully. > > Can you check how big the "chunks" cache is after that? I'm not sure how I would do that. Do you mean du -h ~/.cache ? I'll do a test run on the weekend and let you know. On the server right now, I get: 718M /root/.cache/borg > >Believe it or not, I thought the rpi would be impossible to run borg - but > >although on my main server borg runs with a bogglesome ~2.1GB of ram > >allocated, which would *never* fit on the Rpi which has ~490MB of ram > >available, working on the same remote repository I've seen it use as little > >as ~130MB and at most a little more than 300MB while doing its thing, and > >I'm not using any space saving switches either. > > That sounds strange. I'ld expect the memory usage to be similar. Maybe on a > 64bit system a little bit more than on a 32bit system, but the chunks cache > would be exactly the same amount of memory for both. > > Are you maybe seeing the big slowdown in the moment it begins with paging > memory to disk / SD card? With little memory that can happen suddenly when > the hash table 75% full and gets enlarged. 
In that moment, the old hash > table and the new larger one both need to be in memory while it transfers > the entries from old to new. > > Maybe watch "top" (or htop?) while it is resyncing to see that. Have a look > at memory and swap usage. Also watch the clock "top" displays to see whether > the display is updating at all. I have run top before as well as monitored the swap/memory use. Quite simply, there's no memory pressure for it to go into swap - it never uses more than a few megabytes of swap, and the amount generally doesn't change from before, during, and after it runs. When it gets stuck, no memory is being measurably allocated to or freed from the program in top output. No networking data is, as far as I can tell being sent or recieved by the program while monitoring the network traffic from the server the repository is on. While it is attempting to resync and is stuck I can demonstrably access the rpi using ssh, run programs, read my email etc. Disk I/O I don't have a very good measurement of, but from the blinkenlights on the external disk, it doesn't seem do be doing much of anything at all, and general use of the rpi is unburdened - trying to do anything that requires disk to be touched while it is loaded up with say, updatedb is a task in patience normally.. During a clean resync it's VERY active, either pulling as much data as it can via the network (~2.4MB/s, which is about the hardware limit for the Rpi) during the archive sync, or when it's merging into the master chunk index the disk is very busy for an extended time. When it gets stuck, it just sits there - the program is constantly in running state with cpu use at 100%, and gets nowhere. Something is obviously going wrong here. The way my Rpi is configured is a little different from stock - it boots the kernel off the sdcard in it with minimum graphics memory, then mounts root off an external usb hard disk which is much faster than the sd card, as well as not making me worry about using swap on it using up the lifetime of the card. Not that I generally have to worry about it using swap too much. Generally I only run into swap heavily when I've screwed up fiercely, and it is *painfully* noticable how overloaded it becomes - things like echoing back what you typed in ssh suddenly can take several seconds to respond. When I do that test this weekend I'll first try simplifying the required steps and see if I get the same sort of hang; I'll try backing up a single file, then muck around with the server side so it has to resync and tell it to try backing up that file again. If that works on making it hang, it'll reduce the amount of time required to do each no cache test by about an hour. I'll also run a full backup from a clean cache using time so you can see how long it takes when it works, and record the memory use of borg (maybe using a script to write ps output in 1 min intervals?) on both the Rpi and the server when I have them doing tasks. A full backup of the Rpi using the server via sshfs should give you an interesting data point to contrast the Rpi trying to do the same on its own hardware. If I was mistaken about what you are asking for with the chunks cache, please explain and I'll try to get what you want. Tim McGrath -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 465 bytes Desc: Digital signature URL: From tw at waldmann-edv.de Tue Jun 21 07:59:49 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 21 Jun 2016 13:59:49 +0200 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <20160621085210.GA14044@raspberrypi> References: <20160619202507.GA2020@raspberrypi> <57672356.6080308@waldmann-edv.de> <20160621085210.GA14044@raspberrypi> Message-ID: <57692C35.3080901@waldmann-edv.de> >>> To be clear, every single time I have blown away the cache and retried the >>> backup operation, borg has synchronized and completed the backup >>> successfully. >> >> Can you check how big the "chunks" cache is after that? > I'm not sure how I would do that. # find out repo's ID: repo-server$ grep ^id repo/config # check size of chunks cache: raspi$ ls -l ~/.cache/borg//chunks main-server$ ls -l ~/.cache/borg//chunks For the latter 2, you need to be logged in as the user running the borg backups (root?). > When I do that test this weekend I'll first try simplifying the required > steps and see if I get the same sort of hang; I'll try backing up a single > file, then muck around with the server side so it has to resync You can trigger a resync by modifying 1 digit of the manifest in the cache's config - borg will then think the cache is out of sync: [cache] ... manifest = 43867a6f631e3ea4e7520e62904ac26615566b2a3c7a0b42656900f0e2074032 ... > I'll also run a full backup from a clean cache using time so you can see how > long it takes when it works, and record the memory use of borg (maybe using > a script to write ps output in 1 min intervals?) on both the Rpi and the > server when I have them doing tasks. On the client (raspi), you could use: /usr/bin/time borg ... The memory usage of the borg repo server is maybe not that interesting. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Tue Jun 21 08:01:01 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 21 Jun 2016 14:01:01 +0200 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <57692C35.3080901@waldmann-edv.de> References: <20160619202507.GA2020@raspberrypi> <57672356.6080308@waldmann-edv.de> <20160621085210.GA14044@raspberrypi> <57692C35.3080901@waldmann-edv.de> Message-ID: <57692C7D.3090305@waldmann-edv.de> > /usr/bin/time borg ... Or even better readable as: /usr/bin/time -v borg ... -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tve at voneicken.com Tue Jun 21 12:15:51 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Tue, 21 Jun 2016 16:15:51 +0000 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> <576726CA.1020501@waldmann-edv.de> <010001556cb515a8-e3eb275e-c401-41fd-8c00-7c3c5b1988a9-000000@email.amazonses.com> Message-ID: <0100015573bf19db-7884c56f-18be-4c17-a40b-485ce5c45185-000000@email.amazonses.com> More tests, looks like borg 1.0.4 with lzma doesn't work on ARM, or the Arch build is broken. I narrowed the backup that fails to about 6GB of /usr/local. I did a remote backup x86_64->ARM and got the usual inconsistency. 
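(The bisection described here and below - same data set, different compression - could also be scripted; a rough sketch, assuming scratch repo paths under /big/h/, a test data directory, and BORG_PASSPHRASE exported so init/create/check run non-interactively:)

    import subprocess

    DATA = "/big/usr-local"          # test data set (placeholder)
    SCRATCH = "/big/h/comp-test"     # scratch repo prefix (placeholder)

    for comp in ("none", "lz4", "zlib", "lzma"):
        repo = "{}-{}".format(SCRATCH, comp)
        subprocess.check_call(["borg", "init", "--encryption=repokey", repo])
        subprocess.check_call(["borg", "create", "-v", "--stats",
                               "-C", comp, repo + "::test", DATA])
        # borg check returns a non-zero rc when it finds errors
        rc = subprocess.call(["borg", "check", "-v", repo])
        print(comp, "->", "OK" if rc == 0 else "errors, rc {}".format(rc))
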
I then rsync'ed /usr/local to the ARM box and did a local backup there and got the same inconsistency. Something I haven't mentioned before is that I run two other backups x86_64->ARM nightly and they do not produce inconsistencies, but they also do not use any compression (they're all compressed media files). I can continue testing various combinations but maybe one of the borg maintainers has an rPI or ODROID or other ARM box and can run some tests as well? As far as I can tell you need a dir structure of some minimum size (I tried something tiny and it worked fine) and then perform a borg create with lzma. Here's the log: # borg init usr-local2 Enter new passphrase: Enter same passphrase again: Do you want your passphrase to be displayed for verification? [yN]: n # borg create --show-rc --stats -v -e .cache -C lzma usr-local2::usr-local-2016-06-20 /big/usr-local Enter passphrase for key /big/h/usr-local2: ------------------------------------------------------------------------------ Archive name: usr-local-2016-06-20 Archive fingerprint: c2bc42a6f7837cf44ca3e4182ebe2e01437876b3c24a8551668b19dcd9b14ce8 Time (start): Tue, 2016-06-21 02:54:38 Time (end): Tue, 2016-06-21 08:17:51 Duration: 5 hours 23 minutes 13.95 seconds Number of files: 354059 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 5.86 GB 1.80 GB 1.35 GB All archives: 5.86 GB 1.80 GB 1.35 GB Unique chunks Total chunks Chunk index: 190030 359279 ------------------------------------------------------------------------------ terminating with success status, rc 0 # borg check -v usr-local2 Starting repository check Index object count mismatch. 190031 != 190041 Completed repository check, errors found. On 6/20/2016 5:42 AM, public at enkore.de wrote: > On 06/20/2016 09:27 AM, Thorsten von Eicken wrote: >> I'm wondering what to test next. Some thoughts: >> - rsync the data to the ARM box and perform a local create/check there >> - nfs mount the data onto the ARM box and perform a local create/check >> this way >> >> Suggestions? >> Thorsten > Try (1) first to see whether it's a networking-related issue or happens > even with local files. > > Cheers, Marian > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmhikaru at gmail.com Wed Jun 22 00:04:23 2016 From: tmhikaru at gmail.com (tmhikaru at gmail.com) Date: Tue, 21 Jun 2016 21:04:23 -0700 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <57692C35.3080901@waldmann-edv.de> References: <20160619202507.GA2020@raspberrypi> <57672356.6080308@waldmann-edv.de> <20160621085210.GA14044@raspberrypi> <57692C35.3080901@waldmann-edv.de> Message-ID: <20160622040422.GA21457@raspberrypi> ... Well, there's only one repo any of them know about, so this simplifies things immensely: > # check size of chunks cache: > raspi$ ls -l ~/.cache/borg//chunks > main-server$ ls -l ~/.cache/borg//chunks Server side it looks like this: -rw-------. 1 root root 96336214 Jun 13 19:58 .cache/borg/40f5689bc9c5faac015dd94283a4dfb42dc5361bf0c77fd1402c365235ebd8f9/chunks Rpi cache does not exist... Yet. Last time I ran into problems and gave up I deleted the cache on it when I switched to sshfs. 
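(Side note: since each cache is one directory per repository ID under ~/.cache/borg/, the two machines can be compared file by file once the Rpi cache exists again. A small sketch that prints the size of every file in every cache directory - the same information as the ls -l above, just for all caches at once, assuming the default cache location:)

    import os

    cache_root = os.path.expanduser("~/.cache/borg")
    for repo_id in sorted(os.listdir(cache_root)):
        repo_dir = os.path.join(cache_root, repo_id)
        if not os.path.isdir(repo_dir):
            continue
        print(repo_id)
        for name in sorted(os.listdir(repo_dir)):
            path = os.path.join(repo_dir, name)
            if os.path.isfile(path):
                print("  {:>12}  {}".format(os.path.getsize(path), name))
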
Might be able to kick it off tonight, I'll see if I can get it to cooperate and if I have the time, otherwise this test will have to wait until the weekend. > For the latter 2, you need to be logged in as the user running the borg > backups (root?). > > >When I do that test this weekend I'll first try simplifying the required > >steps and see if I get the same sort of hang; I'll try backing up a single > >file, then muck around with the server side so it has to resync > > You can trigger a resync by modifying 1 digit of the manifest in the cache's > config - borg will then think the cache is out of sync: > > [cache] > ... > manifest = 43867a6f631e3ea4e7520e62904ac26615566b2a3c7a0b42656900f0e2074032 > ... Thanks, that'll help a lot! > The memory usage of the borg repo server is maybe not that interesting. Alright, that'll be one less thing I'll have to do then. Tim McGrath -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 465 bytes Desc: Digital signature URL: From tve at voneicken.com Wed Jun 22 00:55:17 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Wed, 22 Jun 2016 04:55:17 +0000 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: <0100015573bf19db-7884c56f-18be-4c17-a40b-485ce5c45185-000000@email.amazonses.com> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> <576726CA.1020501@waldmann-edv.de> <010001556cb515a8-e3eb275e-c401-41fd-8c00-7c3c5b1988a9-000000@email.amazonses.com> <0100015573bf19db-7884c56f-18be-4c17-a40b-485ce5c45185-000000@email.amazonses.com> Message-ID: <0100015576765f1c-b3eaf78a-b910-4404-b6e2-4a4b6215d2d9-000000@email.amazonses.com> Yet another test. BTW, would it be better if I opened a github ticket with all this info? I'm fine either way as long as it leads to a fix :-) I did the same backup as previously (rsync'ed the data to the ARM box and ran a local backup with lzma resulting in insonsistencies) but this time I chose lz4 compression. Guess what... no inconsistency. OK, sample size is one, so who knows... It can't be the lzma code itself since that runs on the x86_64 box when doing a remote backup, so it must be something else around it? Unless it's just chance... The log of the test is: # borg init usr-local3 Enter new passphrase: Enter same passphrase again: Do you want your passphrase to be displayed for verification? 
[yN]: n # borg create --show-rc --stats -v -e .cache -C lz4 usr-local3::usr-local-2016-06-20 /big/usr-local Enter passphrase for key /big/h/usr-local3: ------------------------------------------------------------------------------ Archive name: usr-local-2016-06-20 Archive fingerprint: 7cef44fb78c14ac908b50d4f407e660f680fe95386431c9900fcb6d05d8c23a5 Time (start): Tue, 2016-06-21 16:15:25 Time (end): Tue, 2016-06-21 16:56:37 Duration: 41 minutes 11.83 seconds Number of files: 354059 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 5.86 GB 2.81 GB 2.24 GB All archives: 5.86 GB 2.81 GB 2.24 GB Unique chunks Total chunks Chunk index: 190022 359242 ------------------------------------------------------------------------------ terminating with success status, rc 0 # borg check -v usr-local3 Starting repository check Completed repository check, no problems found. Starting archive consistency check... Enter passphrase for key /big/h/usr-local3: Analyzing archive usr-local-2016-06-20 (1/1) Archive consistency check complete, no problems found. How can I help from here on? On 6/21/2016 9:15 AM, Thorsten von Eicken wrote: > > More tests, looks like borg 1.0.4 with lzma doesn't work on ARM, or > the Arch build is broken. I narrowed the backup that fails to about > 6GB of /usr/local. I did a remote backup x86_64->ARM and got the usual > inconsistency. I then rsync'ed /usr/local to the ARM box and did a > local backup there and got the same inconsistency. Something I haven't > mentioned before is that I run two other backups x86_64->ARM nightly > and they do not produce inconsistencies, but they also do not use any > compression (they're all compressed media files). > > I can continue testing various combinations but maybe one of the borg > maintainers has an rPI or ODROID or other ARM box and can run some > tests as well? As far as I can tell you need a dir structure of some > minimum size (I tried something tiny and it worked fine) and then > perform a borg create with lzma. > > Here's the log: > > # borg init usr-local2 > Enter new passphrase: > Enter same passphrase again: > Do you want your passphrase to be displayed for verification? [yN]: n > # borg create --show-rc --stats -v -e .cache -C lzma > usr-local2::usr-local-2016-06-20 /big/usr-local > Enter passphrase for key /big/h/usr-local2: > ------------------------------------------------------------------------------ > Archive name: usr-local-2016-06-20 > Archive fingerprint: > c2bc42a6f7837cf44ca3e4182ebe2e01437876b3c24a8551668b19dcd9b14ce8 > Time (start): Tue, 2016-06-21 02:54:38 > Time (end):?????? Tue, 2016-06-21 08:17:51 > Duration: 5 hours 23 minutes 13.95 seconds > Number of files: 354059 > ------------------------------------------------------------------------------ > ?????????????????????????????????????????????????????????????????? > Original size??????????????? Compressed size????????? Deduplicated size > This archive:????????????????????????????????????????????? 5.86 > GB??????????????????????????????????????? 1.80 > GB??????????????????????????????????????? 1.35 GB > All archives:????????????????????????????????????????????? 5.86 > GB??????????????????????????????????????? 1.80 > GB??????????????????????????????????????? 1.35 GB > > ?????????????????????????????????????????????????????????????????? > Unique chunks???????????????????????? Total chunks > Chunk index:??????????????????????????????????????????????????? 
> 190030?????????????????????????????????????????? 359279 > ------------------------------------------------------------------------------ > terminating with success status, rc 0 > # borg check -v usr-local2 > Starting repository check > Index object count mismatch. 190031 != 190041 > Completed repository check, errors found. > > > On 6/20/2016 5:42 AM, public at enkore.de wrote: >> On 06/20/2016 09:27 AM, Thorsten von Eicken wrote: >>> I'm wondering what to test next. Some thoughts: >>> - rsync the data to the ARM box and perform a local create/check there >>> - nfs mount the data onto the ARM box and perform a local create/check >>> this way >>> >>> Suggestions? >>> Thorsten >> Try (1) first to see whether it's a networking-related issue or happens >> even with local files. >> >> Cheers, Marian >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From tmhikaru at gmail.com Wed Jun 22 02:55:23 2016 From: tmhikaru at gmail.com (tmhikaru at gmail.com) Date: Tue, 21 Jun 2016 23:55:23 -0700 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <57692C7D.3090305@waldmann-edv.de> References: <20160619202507.GA2020@raspberrypi> <57672356.6080308@waldmann-edv.de> <20160621085210.GA14044@raspberrypi> <57692C35.3080901@waldmann-edv.de> <57692C7D.3090305@waldmann-edv.de> Message-ID: <20160622065523.GA24755@raspberrypi> On Tue, Jun 21, 2016 at 02:01:01PM +0200, Thomas Waldmann wrote: > >/usr/bin/time borg ... > > Or even better readable as: > > /usr/bin/time -v borg ... Just to get a few things straight, I'm running raspbian jessie on my Rpi 1B, which is essentially debian jessie, so the borgbackup package does not exist. The raspberry pi also has the annoyance of being an arm based system, so I can't simply use the standalone 32bit/64bit intel compatible binary on it, nor can I use the standalone binaries someone created that are intended for the Raspberry Pi, as they were made for model 2, which has a different (better!) arm cpu. However, I was able to work around this by using the instructions here: https://borgbackup.readthedocs.io/en/stable/installation.html#using-pip to create a borg binary. Rpi: tm at raspberrypi ~ $ borg-env/bin/borg --version borg 1.0.3 Server: (Fedora 23 on a 64bit Intel machine) tm at roll:~$ borg --version borg 1.0.3 Okay, so I managed to get a quick (~1hr spent working on it) test in tonight from a clean start and got some interesting data, but borg has conspired to make a liar out of me - it's for the first time gotten stuck starting from a rm'd cachedir. :( The weird bit is that right after it started merging the archive chunks into the main index and got stuck the memory usage dropped slightly. After doing that, the memory usage no longer changed, and it's been doing just about nothing ever since other than using lots of cpu. I installed iotop by the way and can verify it is not writing or reading any data while running it as iotop -o -a as root. Mutt of all things shows up, and a number of other minor things like the ext4 journalling kernel thread, but the disk is all but quiet. 
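A minimal cross-check of that "no I/O" observation straight from the kernel's per-process counters (a sketch, assuming a single matching borg process that pgrep -f can find and that you read the counters as root):

  PID=$(pgrep -f borg-env/bin/borg)
  cat /proc/$PID/io

Comparing read_bytes/write_bytes between two samples taken a minute apart shows whether the stuck process touches the disk at all, while rchar/wchar also include socket traffic, so they would still move if it were talking to the server.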
tm at raspberrypi ~ $ free -h total used free shared buffers cached Mem: 482M 465M 16M 18M 34M 192M -/+ buffers/cache: 238M 243M Swap: 2.0G 16M 2.0G As you can see, it's not running heavily into swap, and has plenty of ram available if it needs it. And yes, it's still running right now while I'm writing this message using the Rpi via ssh. I'll try to copy and paste the command I ran and the output sent to me: [root at roll tm]# /usr/bin/time -v ssh raspberrypi /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi\{now\} / /boot Warning: Attempting to access a previously unknown unencrypted repository! Do you want to continue? [yN] y Synchronizing chunks cache... Archives: 16, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 16. Fetching and building archive index for rpi2016-06-07 23:21:46.321762 ... Merging into master chunks index ... Fetching and building archive index for rpi2016-06-13 18:11:35.797978 ... Merging into master chunks index ... Fetching and building archive index for slack2016-06-07 21:58:57.787423 ... Merging into master chunks index ... Fetching and building archive index for main2016-06-08 18:03:16.749427 ... Merging into master chunks index ... And here is where it got stuck. After ctrl-c'ing it, I see: ^CKilled by signal 2. Command exited with non-zero status 255 Command being timed: "ssh raspberrypi /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot" User time (seconds): 0.03 System time (seconds): 0.00 Percent of CPU this job got: 0% Elapsed (wall clock) time (h:mm:ss or m:ss): 1:15:09 Average shared text size (kbytes): 0 Average unshared data size (kbytes): 0 Average stack size (kbytes): 0 Average total size (kbytes): 0 Maximum resident set size (kbytes): 6764 Average resident set size (kbytes): 0 Major (requiring I/O) page faults: 0 Minor (reclaiming a frame) page faults: 424 Voluntary context switches: 28 Involuntary context switches: 2 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 255 Ugh, that's not very useful, it's giving me stats on the ssh command on the server, D'oh. If I do this again I'll run time *on the client side* and you even TOLD me to do that - sorry about this. The borg client on the rpi is still running, and so is the copy on the server. Typically I killall -9 the thing, but lets see if I can get it to respond to other signals. Sending SIGTERM did nothing. Had to kill it. Killing the client command at least also terminated the borg server on the server, something I've had trouble with before when this happens. This is the bashism I used to create some sort of memory/cpu usage log: while true; do ps aux | grep -v grep | grep borg-env >> /home/tm/borglog; sleep 60; done When it swelled to 180104 VSZ 174168 RSS it was still in the process of synchronizing the main archive index and there was noticable network and disk activity for the duration. After it drops to 145432 VSZ 139632 RSS it's only at minute 9, but this is the first entry recorded after it started work on merging into master chunks index for the main archive it got stuck on. 
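One small refinement of that logging loop (same command and same /home/tm/borglog path, just stamping each sample with the wall clock so the entries can be lined up with what borg printed and when):

  while true; do echo "== $(date '+%F %T')" >> /home/tm/borglog; ps aux | grep -v grep | grep borg-env >> /home/tm/borglog; sleep 60; done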
If it is supposed to be doing some kind of I/O to disk/network it's not doing it according to iotop, and you can clearly see it's neither allocating nor freeing memory for an entire hour... I can't fathom that whatever it is trying to process actually would make no noticable Disk/Network I/O or memory usage change for this long. Any way we could find out exactly what it's doing when this happens? Note that the cpu time is greatly inaccurate compared to the wall clock, the actual run was an hour and fifteen minutes, I stopped recording after 22:49. I'm exhausted. Goodnight, Tim McGrath Header added for convenience, here's the result of that log: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 23880 68.4 5.9 35472 29312 ? Rs 21:34 0:13 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 81.6 9.1 51112 45084 ? Rs 21:34 1:04 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 83.1 12.7 68920 62980 ? Rs 21:34 1:55 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 83.4 14.3 76792 70852 ? Rs 21:34 2:46 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 83.7 24.0 124824 118604 ? Rs 21:34 3:37 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 83.7 30.1 155188 148996 ? Rs 21:34 4:28 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 83.7 30.6 157316 151380 ? Rs 21:34 5:19 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 83.7 35.2 180104 174168 ? Rs 21:34 6:09 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 83.5 28.2 145432 139632 ? Rs 21:34 6:59 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 85.2 28.2 145432 139632 ? 
Rs 21:34 7:58 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 86.4 28.2 145432 139632 ? Rs 21:34 8:58 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 86.3 28.2 145432 139632 ? Rs 21:34 9:49 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 84.4 28.2 145432 139632 ? Rs 21:34 10:27 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 81.8 28.2 145432 139632 ? Rs 21:34 10:58 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 79.8 28.2 145432 139632 ? Rs 21:34 11:30 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 79.7 28.2 145432 139632 ? Rs 21:34 12:18 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 79.8 28.2 145432 139632 ? Rs 21:34 13:06 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 79.6 28.2 145432 139632 ? Rs 21:34 13:52 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 79.1 28.2 145432 139632 ? Rs 21:34 14:35 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 78.6 28.2 145432 139632 ? Rs 21:34 15:18 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 78.4 28.2 145432 139632 ? 
Rs 21:34 16:02 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 78.8 28.2 145432 139632 ? Rs 21:34 16:55 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 78.8 28.2 145432 139632 ? Rs 21:34 17:42 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 78.6 28.2 145432 139632 ? Rs 21:34 18:28 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 78.5 28.2 145432 139632 ? Rs 21:34 19:14 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 78.4 28.2 145432 139632 ? Rs 21:34 20:00 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.8 28.2 145432 139632 ? Rs 21:34 20:38 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.7 28.2 145432 139632 ? Rs 21:34 21:24 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.7 28.2 145432 139632 ? Rs 21:34 22:09 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.6 28.2 145432 139632 ? Rs 21:34 22:55 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.6 28.2 145432 139632 ? Rs 21:34 23:41 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.5 28.2 145432 139632 ? 
Rs 21:34 24:27 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.5 28.2 145432 139632 ? Rs 21:34 25:13 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.4 28.2 145432 139632 ? Rs 21:34 25:59 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.4 28.2 145432 139632 ? Rs 21:34 26:46 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.4 28.2 145432 139632 ? Rs 21:34 27:32 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.3 28.2 145432 139632 ? Rs 21:34 28:18 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.3 28.2 145432 139632 ? Rs 21:34 29:03 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.2 28.2 145432 139632 ? Rs 21:34 29:49 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.2 28.2 145432 139632 ? Rs 21:34 30:36 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 77.2 28.2 145432 139632 ? Rs 21:34 31:22 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.9 28.2 145432 139632 ? Rs 21:34 32:00 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.8 28.2 145432 139632 ? 
Rs 21:34 32:46 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.8 28.2 145432 139632 ? Rs 21:34 33:31 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.8 28.2 145432 139632 ? Rs 21:34 34:17 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.7 28.2 145432 139632 ? Rs 21:34 35:03 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.7 28.2 145432 139632 ? Rs 21:34 35:49 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.7 28.2 145432 139632 ? Rs 21:34 36:35 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.7 28.2 145432 139632 ? Rs 21:34 37:21 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.7 28.2 145432 139632 ? Rs 21:34 38:07 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.6 28.2 145432 139632 ? Rs 21:34 38:53 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.6 28.2 145432 139632 ? Rs 21:34 39:39 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.6 28.2 145432 139632 ? Rs 21:34 40:25 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.6 28.2 145432 139632 ? 
Rs 21:34 41:11 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.6 28.2 145432 139632 ? Rs 21:34 41:57 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.5 28.2 145432 139632 ? Rs 21:34 42:40 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.3 28.2 145432 139632 ? Rs 21:34 43:18 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.3 28.2 145432 139632 ? Rs 21:34 44:04 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.2 28.2 145432 139632 ? Rs 21:34 44:48 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.2 28.2 145432 139632 ? Rs 21:34 45:32 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.2 28.2 145432 139632 ? Rs 21:34 46:18 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.1 28.2 145432 139632 ? Rs 21:34 47:03 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.1 28.2 145432 139632 ? Rs 21:34 47:47 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.0 28.2 145432 139632 ? Rs 21:34 48:32 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.0 28.2 145432 139632 ? 
Rs 21:34 49:17 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 76.0 28.2 145432 139632 ? Rs 21:34 50:01 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 75.9 28.2 145432 139632 ? Rs 21:34 50:45 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 75.9 28.2 145432 139632 ? Rs 21:34 51:29 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 75.8 28.2 145432 139632 ? Rs 21:34 52:14 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 75.8 28.2 145432 139632 ? Rs 21:34 52:58 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 75.7 28.2 145432 139632 ? Rs 21:34 53:39 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 75.6 28.2 145432 139632 ? Rs 21:34 54:22 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 75.6 28.2 145432 139632 ? Rs 21:34 55:06 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 75.5 28.2 145432 139632 ? Rs 21:34 55:51 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot root 23880 75.5 28.2 145432 139632 ? Rs 21:34 56:36 /home/tm/borg-env/bin/python3 /home/tm/borg-env/bin/borg create -v --stats --list -x --progress --exclude-caches --keep-tag-files --numeric-owner --compression lz4,6 --exclude-from=/root/backupexcludes.txt 10.0.0.238:/media/backup/borg::rpi{now} / /boot -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 465 bytes Desc: Digital signature URL: From tve at voneicken.com Wed Jun 22 11:57:08 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Wed, 22 Jun 2016 15:57:08 +0000 Subject: [Borgbackup] what do I do with inconsistencies? In-Reply-To: <0100015576765f1c-b3eaf78a-b910-4404-b6e2-4a4b6215d2d9-000000@email.amazonses.com> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> <576726CA.1020501@waldmann-edv.de> <010001556cb515a8-e3eb275e-c401-41fd-8c00-7c3c5b1988a9-000000@email.amazonses.com> <0100015573bf19db-7884c56f-18be-4c17-a40b-485ce5c45185-000000@email.amazonses.com> <0100015576765f1c-b3eaf78a-b910-4404-b6e2-4a4b6215d2d9-000000@email.amazonses.com> Message-ID: <0100015578d450c0-adf1b095-8dfb-44b1-9ca3-a62068a514a9-000000@email.amazonses.com> Yet another test run, same backup on ARM but using zlib compression and I get an inconsistency. # borg check -v usr-local3 Starting repository check Completed repository check, no problems found. Starting archive consistency check... Enter passphrase for key /big/h/usr-local3: Analyzing archive usr-local-2016-06-20 (1/1) Archive consistency check complete, no problems found. # borg init usr-local4 Enter new passphrase: Enter same passphrase again: Do you want your passphrase to be displayed for verification? [yN]: # borg create --show-rc --stats -v -e .cache -C zlib usr-local4::usr-local-2016-06-20 /big/usr-local Enter passphrase for key /big/h/usr-local4: ------------------------------------------------------------------------------ Archive name: usr-local-2016-06-20 Archive fingerprint: ab47bdebc8cbed2a04194139a5d0bd8ce5ec639d578cd65276e51fba043fac41 Time (start): Wed, 2016-06-22 06:23:23 Time (end): Wed, 2016-06-22 07:19:26 Duration: 56 minutes 3.31 seconds Number of files: 354059 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 5.86 GB 2.10 GB 1.63 GB All archives: 5.86 GB 2.10 GB 1.63 GB Unique chunks Total chunks Chunk index: 190159 359426 ------------------------------------------------------------------------------ terminating with success status, rc 0 [root at backup h]# borg check -v usr-local4 Starting repository check Index object count mismatch. 190160 != 190162 Completed repository check, errors found. On 6/21/2016 9:55 PM, Thorsten von Eicken wrote: > Yet another test. BTW, would it be better if I opened a github ticket > with all this info? I'm fine either way as long as it leads to a fix :-) > > I did the same backup as previously (rsync'ed the data to the ARM box > and ran a local backup with lzma resulting in insonsistencies) but > this time I chose lz4 compression. Guess what... no inconsistency. OK, > sample size is one, so who knows... It can't be the lzma code itself > since that runs on the x86_64 box when doing a remote backup, so it > must be something else around it? Unless it's just chance... > > The log of the test is: > > # borg init usr-local3 > Enter new passphrase: > Enter same passphrase again: > Do you want your passphrase to be displayed for verification? 
[yN]: n > # borg create --show-rc --stats -v -e .cache -C lz4 > usr-local3::usr-local-2016-06-20 /big/usr-local > Enter passphrase for key /big/h/usr-local3: > ------------------------------------------------------------------------------ > > Archive name: usr-local-2016-06-20 > Archive fingerprint: > 7cef44fb78c14ac908b50d4f407e660f680fe95386431c9900fcb6d05d8c23a5 > Time (start): Tue, 2016-06-21 16:15:25 > Time (end): Tue, 2016-06-21 16:56:37 > Duration: 41 minutes 11.83 seconds > Number of files: 354059 > ------------------------------------------------------------------------------ > > Original size Compressed size Deduplicated > size > This archive: 5.86 GB 2.81 GB > 2.24 GB > All archives: 5.86 GB 2.81 GB > 2.24 GB > > Unique chunks Total chunks > Chunk index: 190022 359242 > ------------------------------------------------------------------------------ > > terminating with success status, rc 0 > # borg check -v usr-local3 > Starting repository check > Completed repository check, no problems found. > Starting archive consistency check... > Enter passphrase for key /big/h/usr-local3: > Analyzing archive usr-local-2016-06-20 (1/1) > Archive consistency check complete, no problems found. > > How can I help from here on? > > > On 6/21/2016 9:15 AM, Thorsten von Eicken wrote: >> >> More tests, looks like borg 1.0.4 with lzma doesn't work on ARM, or >> the Arch build is broken. I narrowed the backup that fails to about >> 6GB of /usr/local. I did a remote backup x86_64->ARM and got the >> usual inconsistency. I then rsync'ed /usr/local to the ARM box and >> did a local backup there and got the same inconsistency. Something I >> haven't mentioned before is that I run two other backups x86_64->ARM >> nightly and they do not produce inconsistencies, but they also do not >> use any compression (they're all compressed media files). >> >> I can continue testing various combinations but maybe one of the borg >> maintainers has an rPI or ODROID or other ARM box and can run some >> tests as well? As far as I can tell you need a dir structure of some >> minimum size (I tried something tiny and it worked fine) and then >> perform a borg create with lzma. >> >> Here's the log: >> >> # borg init usr-local2 >> Enter new passphrase: >> Enter same passphrase again: >> Do you want your passphrase to be displayed for verification? [yN]: n >> # borg create --show-rc --stats -v -e .cache -C lzma >> usr-local2::usr-local-2016-06-20 /big/usr-local >> Enter passphrase for key /big/h/usr-local2: >> ------------------------------------------------------------------------------ >> >> Archive name: usr-local-2016-06-20 >> Archive fingerprint: >> c2bc42a6f7837cf44ca3e4182ebe2e01437876b3c24a8551668b19dcd9b14ce8 >> Time (start): Tue, 2016-06-21 02:54:38 >> Time (end):?????? Tue, 2016-06-21 08:17:51 >> Duration: 5 hours 23 minutes 13.95 seconds >> Number of files: 354059 >> ------------------------------------------------------------------------------ >> >> ?????????????????????????????????????????????????????????????????? >> Original size??????????????? Compressed size????????? Deduplicated size >> This archive:????????????????????????????????????????????? 5.86 >> GB??????????????????????????????????????? 1.80 >> GB??????????????????????????????????????? 1.35 GB >> All archives:????????????????????????????????????????????? 5.86 >> GB??????????????????????????????????????? 1.80 >> GB??????????????????????????????????????? 1.35 GB >> >> ?????????????????????????????????????????????????????????????????? 
>> Unique chunks Total chunks >> Chunk index: 190030 359279 >> ------------------------------------------------------------------------------ >> terminating with success status, rc 0 >> # borg check -v usr-local2 >> Starting repository check >> Index object count mismatch. 190031 != 190041 >> Completed repository check, errors found. >> >> >> On 6/20/2016 5:42 AM, public at enkore.de wrote: >>> On 06/20/2016 09:27 AM, Thorsten von Eicken wrote: >>>> I'm wondering what to test next. Some thoughts: >>>> - rsync the data to the ARM box and perform a local create/check there >>>> - nfs mount the data onto the ARM box and perform a local create/check >>>> this way >>>> >>>> Suggestions? >>>> Thorsten >>> Try (1) first to see whether it's a networking-related issue or happens >>> even with local files. >>> >>> Cheers, Marian >>> >>> _______________________________________________ >>> Borgbackup mailing list >>> Borgbackup at python.org >>> https://mail.python.org/mailman/listinfo/borgbackup >> >> >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicrel at arcor.de Wed Jun 22 16:11:07 2016 From: nicrel at arcor.de (nicrel at arcor.de) Date: Wed, 22 Jun 2016 22:11:07 +0200 (CEST) Subject: [Borgbackup] borg prune behaviour Message-ID: <549954662.296473.1466626267315.JavaMail.ngmail@webmail24.arcor-online.net> Hi everybody, I'm trying to prune my very first borg repo created on 2016-05-25. I tried the dry-run param to get an idea of what it would prune when leaving only 7 daily backups. When running the suggested command from the documentation site, the command line output I got suggests that no files would be deleted at all. Can somebody shed some light on this issue? From the documentation, I had expected pruning of all archives apart from the last 7 daily ones: 2016-06-15 Wed, 2016-06-15 12:21:33 2016-06-16 Thu, 2016-06-16 12:21:28 2016-06-17 Fri, 2016-06-17 12:21:36 2016-06-18 Sat, 2016-06-18 12:21:16 2016-06-19 Sun, 2016-06-19 12:21:17 2016-06-20 Mon, 2016-06-20 12:21:17 2016-06-22 Wed, 2016-06-22 12:21:26 Summary might be correct (0 bytes gained, as I have probably only added files); but wouldn't borg at least list the archive tags that will be removed?
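For reference, a sketch of a dry run that does print a per-archive decision, assuming borg 1.0.x and the same repository path as in the transcript below:

  borg prune -v --list --dry-run --keep-daily=7 /mnt/remoterepo/repository.borg_a

With -v and --list, prune reports every archive on its own line as either kept or pruned, which is the kind of listing asked about above.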
8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<------- user at host:/mnt/remoterepo# borg list /mnt/remoterepo/repository.borg_a 2016-05-25 Thu, 2016-05-26 00:02:11 2016-05-26 Thu, 2016-05-26 22:50:16 2016-05-26b Thu, 2016-05-26 23:01:38 2016-06-01 Wed, 2016-06-01 23:57:49 2016-06-02 Thu, 2016-06-02 22:55:11 2016-06-03 Fri, 2016-06-03 12:21:10 2016-06-04 Sat, 2016-06-04 12:21:21 2016-06-05 Sun, 2016-06-05 12:21:23 2016-06-06 Mon, 2016-06-06 12:21:14 2016-06-07 Tue, 2016-06-07 12:21:20 2016-06-08 Wed, 2016-06-08 12:21:30 2016-06-09 Thu, 2016-06-09 12:21:20 2016-06-11 Sat, 2016-06-11 15:43:16 2016-06-12 Sun, 2016-06-12 12:21:25 2016-06-13 Mon, 2016-06-13 12:21:18 2016-06-14 Tue, 2016-06-14 12:21:25 2016-06-15 Wed, 2016-06-15 12:21:33 2016-06-16 Thu, 2016-06-16 12:21:28 2016-06-17 Fri, 2016-06-17 12:21:36 2016-06-18 Sat, 2016-06-18 12:21:16 2016-06-19 Sun, 2016-06-19 12:21:17 2016-06-20 Mon, 2016-06-20 12:21:17 2016-06-22 Wed, 2016-06-22 12:21:26 user at host:/mnt/remoterepo# borg prune -s -v --dry-run --keep-daily=7 /mnt/remoterepo/repository.borg_a ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size Deleted data: 0 B 0 B 0 B All archives: 5.66 TB 5.21 TB 215.21 GB Unique chunks Total chunks Chunk index: 259286 7263925 ------------------------------------------------------------------------------ user at host:/mnt/remoterepo# borg --version borg 1.0.3 Thumbs up for this great product & best regards, Carsten From adrian.klaver at aklaver.com Wed Jun 22 16:48:17 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Wed, 22 Jun 2016 13:48:17 -0700 Subject: [Borgbackup] borg prune behaviour In-Reply-To: <549954662.296473.1466626267315.JavaMail.ngmail@webmail24.arcor-online.net> References: <549954662.296473.1466626267315.JavaMail.ngmail@webmail24.arcor-online.net> Message-ID: <2b6e53d3-b1ec-add4-4429-dc38d5108124@aklaver.com> On 06/22/2016 01:11 PM, nicrel at arcor.de wrote: > Hi everybody, > I'm trying to prune my very first borg repo created on 2016-05-25. > > I tried dry-run param to have an idea of what it would prune when leaving only 7 days daily backups. > > When running the suggested command from the documentatioin site, from the command line output I got, I would consider no files to be deleted? > > Can somebody shed some light on this issue? > > From documentation, I had expected pruning of all archives apart from (keep the last 7 daily ones): > > 2016-06-15 Wed, 2016-06-15 12:21:33 > 2016-06-16 Thu, 2016-06-16 12:21:28 > 2016-06-17 Fri, 2016-06-17 12:21:36 > 2016-06-18 Sat, 2016-06-18 12:21:16 > 2016-06-19 Sun, 2016-06-19 12:21:17 > 2016-06-20 Mon, 2016-06-20 12:21:17 > 2016-06-22 Wed, 2016-06-22 12:21:26 > > Summary might be correct (0 bytes gained, as I have probably only added files); but wouldn't borg at least list the archive tags that will be removed? 
> > > 8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<------- > > > user at host:/mnt/remoterepo# borg list /mnt/remoterepo/repository.borg_a > 2016-05-25 Thu, 2016-05-26 00:02:11 > 2016-05-26 Thu, 2016-05-26 22:50:16 > 2016-05-26b Thu, 2016-05-26 23:01:38 > 2016-06-01 Wed, 2016-06-01 23:57:49 > 2016-06-02 Thu, 2016-06-02 22:55:11 > 2016-06-03 Fri, 2016-06-03 12:21:10 > 2016-06-04 Sat, 2016-06-04 12:21:21 > 2016-06-05 Sun, 2016-06-05 12:21:23 > 2016-06-06 Mon, 2016-06-06 12:21:14 > 2016-06-07 Tue, 2016-06-07 12:21:20 > 2016-06-08 Wed, 2016-06-08 12:21:30 > 2016-06-09 Thu, 2016-06-09 12:21:20 > 2016-06-11 Sat, 2016-06-11 15:43:16 > 2016-06-12 Sun, 2016-06-12 12:21:25 > 2016-06-13 Mon, 2016-06-13 12:21:18 > 2016-06-14 Tue, 2016-06-14 12:21:25 > 2016-06-15 Wed, 2016-06-15 12:21:33 > 2016-06-16 Thu, 2016-06-16 12:21:28 > 2016-06-17 Fri, 2016-06-17 12:21:36 > 2016-06-18 Sat, 2016-06-18 12:21:16 > 2016-06-19 Sun, 2016-06-19 12:21:17 > 2016-06-20 Mon, 2016-06-20 12:21:17 > 2016-06-22 Wed, 2016-06-22 12:21:26 > > user at host:/mnt/remoterepo# borg prune -s -v --dry-run --keep-daily=7 /mnt/remoterepo/repository.borg_a Add --list to above. http://borgbackup.readthedocs.io/en/1.0.3/usage.html#borg-prune "--list output verbose list of archives it keeps/prunes" > ------------------------------------------------------------------------------ > Original size Compressed size Deduplicated size > Deleted data: 0 B 0 B 0 B > All archives: 5.66 TB 5.21 TB 215.21 GB > > Unique chunks Total chunks > Chunk index: 259286 7263925 > ------------------------------------------------------------------------------ > user at host:/mnt/remoterepo# borg --version > borg 1.0.3 > > Thumbs up for this great product & best regards, > Carsten > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From nicrel at arcor.de Wed Jun 22 16:59:17 2016 From: nicrel at arcor.de (nicrel at arcor.de) Date: Wed, 22 Jun 2016 22:59:17 +0200 (CEST) Subject: [Borgbackup] borg prune behaviour In-Reply-To: <2b6e53d3-b1ec-add4-4429-dc38d5108124@aklaver.com> References: <2b6e53d3-b1ec-add4-4429-dc38d5108124@aklaver.com> <549954662.296473.1466626267315.JavaMail.ngmail@webmail24.arcor-online.net> Message-ID: <2040155632.296786.1466629157569.JavaMail.ngmail@webmail24.arcor-online.net> Hi Adrian, thanks for your quick response! That solved my issue immediately. Anyways, I'd suggest to add --list to examples shown on https://borgbackup.readthedocs.io/en/stable/usage.html#borg-prune. There it says "It is strongly recommended to always run prune --dry-run ... first so you will see what it would do without it actually doing anything." but with the examples shown, you just don't see what it would acutally be doing as --list is missing from the examples... Maybe even setting --list as default for prune would be a good idea? Best regards, Carsten ----- Original Nachricht ---- Von: Adrian Klaver An: nicrel at arcor.de, borgbackup at python.org Datum: 22.06.2016 22:48 Betreff: Re: [Borgbackup] borg prune behaviour > On 06/22/2016 01:11 PM, nicrel at arcor.de wrote: > > Hi everybody, > > I'm trying to prune my very first borg repo created on 2016-05-25. > > > > I tried dry-run param to have an idea of what it would prune when leaving > only 7 days daily backups. 
> > > > When running the suggested command from the documentatioin site, from the > command line output I got, I would consider no files to be deleted? > > > > Can somebody shed some light on this issue? > > > > From documentation, I had expected pruning of all archives apart from > (keep the last 7 daily ones): > > > > 2016-06-15 Wed, 2016-06-15 12:21:33 > > 2016-06-16 Thu, 2016-06-16 12:21:28 > > 2016-06-17 Fri, 2016-06-17 12:21:36 > > 2016-06-18 Sat, 2016-06-18 12:21:16 > > 2016-06-19 Sun, 2016-06-19 12:21:17 > > 2016-06-20 Mon, 2016-06-20 12:21:17 > > 2016-06-22 Wed, 2016-06-22 12:21:26 > > > > Summary might be correct (0 bytes gained, as I have probably only added > files); but wouldn't borg at least list the archive tags that will be > removed? > > > > > > > 8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<-------8<-- > -----8<-------8<------- > > > > > > user at host:/mnt/remoterepo# borg list /mnt/remoterepo/repository.borg_a > > 2016-05-25 Thu, 2016-05-26 00:02:11 > > 2016-05-26 Thu, 2016-05-26 22:50:16 > > 2016-05-26b Thu, 2016-05-26 23:01:38 > > 2016-06-01 Wed, 2016-06-01 23:57:49 > > 2016-06-02 Thu, 2016-06-02 22:55:11 > > 2016-06-03 Fri, 2016-06-03 12:21:10 > > 2016-06-04 Sat, 2016-06-04 12:21:21 > > 2016-06-05 Sun, 2016-06-05 12:21:23 > > 2016-06-06 Mon, 2016-06-06 12:21:14 > > 2016-06-07 Tue, 2016-06-07 12:21:20 > > 2016-06-08 Wed, 2016-06-08 12:21:30 > > 2016-06-09 Thu, 2016-06-09 12:21:20 > > 2016-06-11 Sat, 2016-06-11 15:43:16 > > 2016-06-12 Sun, 2016-06-12 12:21:25 > > 2016-06-13 Mon, 2016-06-13 12:21:18 > > 2016-06-14 Tue, 2016-06-14 12:21:25 > > 2016-06-15 Wed, 2016-06-15 12:21:33 > > 2016-06-16 Thu, 2016-06-16 12:21:28 > > 2016-06-17 Fri, 2016-06-17 12:21:36 > > 2016-06-18 Sat, 2016-06-18 12:21:16 > > 2016-06-19 Sun, 2016-06-19 12:21:17 > > 2016-06-20 Mon, 2016-06-20 12:21:17 > > 2016-06-22 Wed, 2016-06-22 12:21:26 > > > > user at host:/mnt/remoterepo# borg prune -s -v --dry-run --keep-daily=7 > /mnt/remoterepo/repository.borg_a > > Add --list to above. > > http://borgbackup.readthedocs.io/en/1.0.3/usage.html#borg-prune > "--list output verbose list of archives it keeps/prunes" > > > > ---------------------------------------------------------------------------- > -- > > Original size Compressed size Deduplicated > size > > Deleted data: 0 B 0 B > 0 B > > All archives: 5.66 TB 5.21 TB > 215.21 GB > > > > Unique chunks Total chunks > > Chunk index: 259286 7263925 > > > ---------------------------------------------------------------------------- > -- > > user at host:/mnt/remoterepo# borg --version > > borg 1.0.3 > > > > Thumbs up for this great product & best regards, > > Carsten > > _______________________________________________ > > Borgbackup mailing list > > Borgbackup at python.org > > https://mail.python.org/mailman/listinfo/borgbackup > > > > > -- > Adrian Klaver > adrian.klaver at aklaver.com > From tve at voneicken.com Thu Jun 23 00:17:19 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Thu, 23 Jun 2016 04:17:19 +0000 Subject: [Borgbackup] what do I do with inconsistencies? 
In-Reply-To: <0100015578d450c0-adf1b095-8dfb-44b1-9ca3-a62068a514a9-000000@email.amazonses.com> References: <0100015557f81f63-eda93dfc-24e8-4a98-9c1c-9bc16f01be20-000000@email.amazonses.com> <5762E23C.2020502@waldmann-edv.de> <010001555cceb5ce-23718357-e689-4700-83c2-3a89a07f6bd6-000000@email.amazonses.com> <0100015564765c7c-d026f194-0b1e-46b9-b899-1f6a3c3ba3fc-000000@email.amazonses.com> <576726CA.1020501@waldmann-edv.de> <010001556cb515a8-e3eb275e-c401-41fd-8c00-7c3c5b1988a9-000000@email.amazonses.com> <0100015573bf19db-7884c56f-18be-4c17-a40b-485ce5c45185-000000@email.amazonses.com> <0100015576765f1c-b3eaf78a-b910-4404-b6e2-4a4b6215d2d9-000000@email.amazonses.com> <0100015578d450c0-adf1b095-8dfb-44b1-9ca3-a62068a514a9-000000@email.amazonses.com> Message-ID: <010001557b79f8bf-af1a48c3-6dd0-4925-ac69-031a0fb573c6-000000@email.amazonses.com> The plot thickens a bit. I uninstalled borg-git (which was 1.0.4.dev283+ng8083799) and installed 1.0.3 and now a local backup with zlib compression results in no corruption. I'm going to try a remote backup next... -------------- next part -------------- An HTML attachment was scrubbed... URL: From tve at voneicken.com Thu Jun 23 21:19:10 2016 From: tve at voneicken.com (Thorsten von Eicken) Date: Fri, 24 Jun 2016 01:19:10 +0000 Subject: [Borgbackup] cannot downgrade from 1.0.4 to 1.0.3? Message-ID: <010001557ffd3cad-3aa4a0ec-5794-44a9-a13d-1d1919831dde-000000@email.amazonses.com> Due to the problem with repo inconsistencies (see other thread) I had to downgrade from 1.0.4 to 1.0.3 and now I get the following errors that lead me to believe that downgrading is not supported? borg create --show-rc --stats -v -e .cache -C none backup at backup:/big/h/photos::photos-2016-06-23T07:35-0700 /soumak/photos Remote: Borg 1.0.3: exception in RPC call: Remote: Traceback (most recent call last): Remote: File "/usr/lib/python3.5/site-packages/borg/remote.py", line 96, in serve Remote: res = f(*args) Remote: File "/usr/lib/python3.5/site-packages/borg/repository.py", line 455, in put Remote: self.prepare_txn(self.get_transaction_id()) Remote: File "/usr/lib/python3.5/site-packages/borg/repository.py", line 217, in prepare_txn Remote: raise ValueError('Unknown hints file version: %d' % hints['version']) Remote: KeyError: 'version' Remote: Platform: Linux backup 3.10.96-5-ARCH #1 SMP PREEMPT Thu Apr 28 19:19:32 MDT 2016 armv7l Remote: Linux: arch Remote: Borg: 1.0.3 Python: CPython 3.5.1 Remote: PID: 16144 CWD: /big Remote: sys.argv: ['/usr/bin/borg', 'serve', '--restrict-to-path', '/big/h'] Remote: SSH_ORIGINAL_COMMAND: 'borg serve --umask=077 --info' Remote: Remote: Borg 1.0.3: exception in RPC call: Remote: Traceback (most recent call last): Remote: File "/usr/lib/python3.5/site-packages/borg/remote.py", line 96, in serve Remote: res = f(*args) Remote: File "/usr/lib/python3.5/site-packages/borg/repository.py", line 458, in put Remote: self.segments[segment] -= 1 Remote: AttributeError: 'Repository' object has no attribute 'segments' Remote: Platform: Linux backup 3.10.96-5-ARCH #1 SMP PREEMPT Thu Apr 28 19:19:32 MDT 2016 armv7l Remote: Linux: arch Remote: Borg: 1.0.3 Python: CPython 3.5.1 Remote: PID: 16144 CWD: /big Remote: sys.argv: ['/usr/bin/borg', 'serve', '--restrict-to-path', '/big/h'] Remote: SSH_ORIGINAL_COMMAND: 'borg serve --umask=077 --info' Remote: Remote Exception (see remote log for the traceback) Platform: Linux h 3.13.0-77-generic #121-Ubuntu SMP Wed Jan 20 10:50:42 UTC 2016 x86_64 x86_64 Linux: Ubuntu 14.04 trusty Borg: 1.0.3 Python: CPython 3.4.3 
PID: 5689 CWD: /root sys.argv: ['/usr/bin/borg', 'create', '--show-rc', '--stats', '-v', '-e', '.cache', '-C', 'none', 'backup at backup:/big/h/photos::photos-2016-06-23T07:35-0700', '/soumak/photos'] SSH_ORIGINAL_COMMAND: None terminating with error status, rc 2 From public at enkore.de Fri Jun 24 03:53:48 2016 From: public at enkore.de (Marian Beermann) Date: Fri, 24 Jun 2016 09:53:48 +0200 Subject: [Borgbackup] cannot downgrade from 1.0.4 to 1.0.3? In-Reply-To: <010001557ffd3cad-3aa4a0ec-5794-44a9-a13d-1d1919831dde-000000@email.amazonses.com> References: <010001557ffd3cad-3aa4a0ec-5794-44a9-a13d-1d1919831dde-000000@email.amazonses.com> Message-ID: I assume with 1.0.4 you mean the development branch. If so, delete the hints. and index. files in the repository. Cheers, Marian On 24.06.2016 03:19, Thorsten von Eicken wrote: > Due to the problem with repo inconsistencies (see other thread) I had to > downgrade from 1.0.4 to 1.0.3 and now I get the following errors that > lead me to believe that downgrading is not supported? > > borg create --show-rc --stats -v -e .cache -C none > backup at backup:/big/h/photos::photos-2016-06-23T07:35-0700 /soumak/photos > Remote: Borg 1.0.3: exception in RPC call: > Remote: Traceback (most recent call last): > Remote: File "/usr/lib/python3.5/site-packages/borg/remote.py", line > 96, in serve > Remote: res = f(*args) > Remote: File "/usr/lib/python3.5/site-packages/borg/repository.py", > line 455, in put > Remote: self.prepare_txn(self.get_transaction_id()) > Remote: File "/usr/lib/python3.5/site-packages/borg/repository.py", > line 217, in prepare_txn > Remote: raise ValueError('Unknown hints file version: %d' % > hints['version']) > Remote: KeyError: 'version' > Remote: Platform: Linux backup 3.10.96-5-ARCH #1 SMP PREEMPT Thu Apr 28 > 19:19:32 MDT 2016 armv7l > Remote: Linux: arch > Remote: Borg: 1.0.3 Python: CPython 3.5.1 > Remote: PID: 16144 CWD: /big > Remote: sys.argv: ['/usr/bin/borg', 'serve', '--restrict-to-path', > '/big/h'] > Remote: SSH_ORIGINAL_COMMAND: 'borg serve --umask=077 --info' > Remote: > Remote: Borg 1.0.3: exception in RPC call: > Remote: Traceback (most recent call last): > Remote: File "/usr/lib/python3.5/site-packages/borg/remote.py", line > 96, in serve > Remote: res = f(*args) > Remote: File "/usr/lib/python3.5/site-packages/borg/repository.py", > line 458, in put > Remote: self.segments[segment] -= 1 > Remote: AttributeError: 'Repository' object has no attribute 'segments' > Remote: Platform: Linux backup 3.10.96-5-ARCH #1 SMP PREEMPT Thu Apr 28 > 19:19:32 MDT 2016 armv7l > Remote: Linux: arch > Remote: Borg: 1.0.3 Python: CPython 3.5.1 > Remote: PID: 16144 CWD: /big > Remote: sys.argv: ['/usr/bin/borg', 'serve', '--restrict-to-path', > '/big/h'] > Remote: SSH_ORIGINAL_COMMAND: 'borg serve --umask=077 --info' > Remote: > Remote Exception (see remote log for the traceback) > Platform: Linux h 3.13.0-77-generic #121-Ubuntu SMP Wed Jan 20 10:50:42 > UTC 2016 x86_64 x86_64 > Linux: Ubuntu 14.04 trusty > Borg: 1.0.3 Python: CPython 3.4.3 > PID: 5689 CWD: /root > sys.argv: ['/usr/bin/borg', 'create', '--show-rc', '--stats', '-v', > '-e', '.cache', '-C', 'none', > 'backup at backup:/big/h/photos::photos-2016-06-23T07:35-0700', > '/soumak/photos'] > SSH_ORIGINAL_COMMAND: None > > terminating with error status, rc 2 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From hpj at urpla.net Mon Jun 27 06:55:37 2016 From: hpj at 
urpla.net (Hans-Peter Jansen) Date: Mon, 27 Jun 2016 12:55:37 +0200 Subject: [Borgbackup] Some minor issues In-Reply-To: <6062925.EGKR569353@xrated> References: <3615163.UrhxtIYZKZ@xrated> <68a7a06e-2b85-5ccb-f388-5089c58c0093@enkore.de> <6062925.EGKR569353@xrated> Message-ID: <9231988.Y0NyYby6AT@xrated> On Samstag, 18. Juni 2016 19:59:39 Hans-Peter Jansen wrote: > Hi Marian, hi Thomas, > > On Freitag, 17. Juni 2016 21:35:09 public at enkore.de wrote: > > On 06/17/2016 07:31 PM, Hans-Peter Jansen wrote: > > > The point is, if you run "borg create --stats" (without -v), it > > > > doesn't print > > > > > the stats, which is rather counter intuitive... > > > > Yeah, that'll be fixed in 1.1 > > > > See current development log at > > https://github.com/borgbackup/borg/blob/master/docs/changes.rst > > > > > BORG_RELOCATED_REPO_ACCESS_IS_OK=yes > > > > This should only be needed once. Check if ~/.cache/borg//config > > contains the correct path (the one you use to access the repository in > > your scripts), and if that file has the correct permissions etc. > > Here's a patch proposal: > > --- a/src/borg/helpers.py > +++ b/src/borg/helpers.py > @@ -975,6 +975,25 @@ def yes(msg=None, false_msg=None, true_msg=None, > default_msg=None, > ofile = sys.stderr > if default not in (True, False): > raise ValueError("invalid default value, must be True or False") > + # silent acceptance via environment: > + # if a valid answer is given via environment > + # and no env_msg is attached to this question > + # print msg only, if a related {true,false}_msg is attached > + # and return the related value > + if env_var_override and not env_msg: > + answer = os.environ.get(env_var_override) > + if answer in truish: > + if true_msg: > + if msg: > + print(msg, file=ofile) > + print(true_msg, file=ofile) > + return True > + if answer in falsish: > + if false_msg: > + if msg: > + print(msg, file=ofile) > + print(false_msg, file=ofile) > + return False > if msg: > print(msg, file=ofile, end='', flush=True) > while True: > > Would something like that be acceptable? Is that proposal too silly to be discussed? Pete From hpj at urpla.net Mon Jun 27 06:59:31 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Mon, 27 Jun 2016 12:59:31 +0200 Subject: [Borgbackup] borg 1.0.3 crashes on prune Message-ID: <10193091.PpxXBPIdjl@xrated> Hi, $ borg prune -v --list --keep-daily=7 --keep-weekly=4 --keep-monthly=12 --prefix server-vbox /backup/borg Keeping archive: server-vbox-2016-06-26 Sun, 2016-06-26 23:30:29 Keeping archive: server-vbox-2016-06-25 Sat, 2016-06-25 23:30:34 Keeping archive: server-vbox-2016-06-24 Fri, 2016-06-24 23:30:33 Keeping archive: server-vbox-2016-06-23 Thu, 2016-06-23 23:30:30 Keeping archive: server-vbox-2016-06-22 Wed, 2016-06-22 23:30:32 Keeping archive: server-vbox-2016-06-21 Tue, 2016-06-21 23:30:30 Keeping archive: server-vbox-2016-06-20 Mon, 2016-06-20 23:30:31 Keeping archive: server-vbox-2016-06-19 Sun, 2016-06-19 23:30:30 Keeping archive: server-vbox-2016-06-12 Sun, 2016-06-12 23:30:35 Keeping archive: server-vbox-2016-06-03 Fri, 2016-06-03 20:36:43 Pruning archive: server-vbox-2016-06-18 Sat, 2016-06-18 23:31:11 Pruning archive: server-vbox-2016-06-17 Fri, 2016-06-17 23:31:03 Object with key b'\x91\xd9\xd0\xde\xad\xfaz\xeeEL\xef\x80\x1c?\xfeX"\xcf\xa8\xfe2\x14\xec\xe4\x89\xacy\x97}~\xf1y' not found in repository /backup/borg. 
Traceback (most recent call last): File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 1520, in main exit_code = archiver.run(args) File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 1465, in run return args.func(args) File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 81, in wrapper return method(self, args, repository=repository, **kwargs) File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 610, in do_prune Archive(repository, key, manifest, archive.name, cache).delete(stats) File "/usr/lib64/python3.4/site-packages/borg/archive.py", line 450, in delete self.cache.chunk_decref(chunk_id, stats) File "/usr/lib64/python3.4/site-packages/borg/cache.py", line 392, in chunk_decref self.repository.delete(id, wait=False) File "/usr/lib64/python3.4/site-packages/borg/repository.py", line 476, in delete raise self.ObjectNotFound(id, self.path) from None borg.repository.ObjectNotFound: (b'\x91\xd9\xd0\xde\xad\xfaz\xeeEL\xef\x80\x1c?\xfeX"\xcf\xa8\xfe2\x14\xec\xe4\x89\xacy\x97}~\xf1y', '/backup/borg') Platform: Linux server 4.2.5-12-default #1 SMP PREEMPT Wed Oct 28 17:49:15 UTC 2015 (0491388) x86_64 x86_64 Linux: openSUSE 13.2 x86_64 Borg: 1.0.3 Python: CPython 3.4.4 PID: 20751 CWD: /root sys.argv: ['/usr/bin/borg', 'prune', '-v', '--list', '--keep-daily=7', '--keep-weekly=4', '--keep-monthly=12', '--prefix', 'server-vbox', '/backup/borg'] SSH_ORIGINAL_COMMAND: None Crashing backup software makes me nervous. How can I recover without endangering my data? Pete From adrian.klaver at aklaver.com Mon Jun 27 10:13:07 2016 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Mon, 27 Jun 2016 07:13:07 -0700 Subject: [Borgbackup] borg 1.0.3 crashes on prune In-Reply-To: <10193091.PpxXBPIdjl@xrated> References: <10193091.PpxXBPIdjl@xrated> Message-ID: On 06/27/2016 03:59 AM, Hans-Peter Jansen wrote: > Hi, > > $ borg prune -v --list --keep-daily=7 --keep-weekly=4 --keep-monthly=12 --prefix server-vbox /backup/borg > Keeping archive: server-vbox-2016-06-26 Sun, 2016-06-26 23:30:29 > Keeping archive: server-vbox-2016-06-25 Sat, 2016-06-25 23:30:34 > Keeping archive: server-vbox-2016-06-24 Fri, 2016-06-24 23:30:33 > Keeping archive: server-vbox-2016-06-23 Thu, 2016-06-23 23:30:30 > Keeping archive: server-vbox-2016-06-22 Wed, 2016-06-22 23:30:32 > Keeping archive: server-vbox-2016-06-21 Tue, 2016-06-21 23:30:30 > Keeping archive: server-vbox-2016-06-20 Mon, 2016-06-20 23:30:31 > Keeping archive: server-vbox-2016-06-19 Sun, 2016-06-19 23:30:30 > Keeping archive: server-vbox-2016-06-12 Sun, 2016-06-12 23:30:35 > Keeping archive: server-vbox-2016-06-03 Fri, 2016-06-03 20:36:43 > Pruning archive: server-vbox-2016-06-18 Sat, 2016-06-18 23:31:11 > Pruning archive: server-vbox-2016-06-17 Fri, 2016-06-17 23:31:03 > Object with key b'\x91\xd9\xd0\xde\xad\xfaz\xeeEL\xef\x80\x1c?\xfeX"\xcf\xa8\xfe2\x14\xec\xe4\x89\xacy\x97}~\xf1y' not found in repository /backup/borg. 
> Traceback (most recent call last): > File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 1520, in main > exit_code = archiver.run(args) > File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 1465, in run > return args.func(args) > File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 81, in wrapper > return method(self, args, repository=repository, **kwargs) > File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 610, in do_prune > Archive(repository, key, manifest, archive.name, cache).delete(stats) > File "/usr/lib64/python3.4/site-packages/borg/archive.py", line 450, in delete > self.cache.chunk_decref(chunk_id, stats) > File "/usr/lib64/python3.4/site-packages/borg/cache.py", line 392, in chunk_decref > self.repository.delete(id, wait=False) > File "/usr/lib64/python3.4/site-packages/borg/repository.py", line 476, in delete > raise self.ObjectNotFound(id, self.path) from None > borg.repository.ObjectNotFound: (b'\x91\xd9\xd0\xde\xad\xfaz\xeeEL\xef\x80\x1c?\xfeX"\xcf\xa8\xfe2\x14\xec\xe4\x89\xacy\x97}~\xf1y', '/backup/borg') > > Platform: Linux server 4.2.5-12-default #1 SMP PREEMPT Wed Oct 28 17:49:15 UTC 2015 (0491388) x86_64 x86_64 > Linux: openSUSE 13.2 x86_64 > Borg: 1.0.3 Python: CPython 3.4.4 > PID: 20751 CWD: /root > sys.argv: ['/usr/bin/borg', 'prune', '-v', '--list', '--keep-daily=7', '--keep-weekly=4', '--keep-monthly=12', '--prefix', 'server-vbox', '/backup/borg'] > SSH_ORIGINAL_COMMAND: None So was this all on the same machine or between machines? How did you install Borg? What does borg check show?: http://borgbackup.readthedocs.io/en/1.0.3/usage.html#borg-check > > Crashing backup software makes me nervous. > > How can I recover without endangering my data? > > Pete > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From tw at waldmann-edv.de Mon Jun 27 11:43:45 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 27 Jun 2016 17:43:45 +0200 Subject: [Borgbackup] borg 1.0.3 crashes on prune In-Reply-To: References: <10193091.PpxXBPIdjl@xrated> Message-ID: <577149B1.3010205@waldmann-edv.de> >> $ borg prune -v --list --keep-daily=7 --keep-weekly=4 >> --keep-monthly=12 --prefix server-vbox /backup/borg >> ... >> Object with key >> b'\x91\xd9\xd0\xde\xad\xfaz\xeeEL\xef\x80\x1c?\xfeX"\xcf\xa8\xfe2\x14\xec\xe4\x89\xacy\x97}~\xf1y' >> not found in repository /backup/borg. The object seems to be not in the repo. >> Traceback (most recent call last): >> File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line >> 610, in do_prune >> Archive(repository, key, manifest, archive.name, cache).delete(stats) >> File "/usr/lib64/python3.4/site-packages/borg/archive.py", line 450, >> in delete >> self.cache.chunk_decref(chunk_id, stats) >> File "/usr/lib64/python3.4/site-packages/borg/cache.py", line 392, >> in chunk_decref >> self.repository.delete(id, wait=False) >> File "/usr/lib64/python3.4/site-packages/borg/repository.py", line >> 476, in delete >> raise self.ObjectNotFound(id, self.path) from None >> borg.repository.ObjectNotFound: >> (b'\x91\xd9\xd0\xde\xad\xfaz\xeeEL\xef\x80\x1c?\xfeX"\xcf\xa8\xfe2\x14\xec\xe4\x89\xacy\x97}~\xf1y', >> '/backup/borg') Strange, that (obviously) should not happen. 
It decremented the reference counter in the chunks index, found it 0, then (successfully) deleted the entry from the chunks index, then tried to delete the (now unused) object from the repo, but it was not there - boom. So, it looks like the index and repo contents did not agree. Can it be that 2 borgs accidentally wrote to the repo in parallel because you manually broke the lock (borg break-lock)? Were there any issues before this one when accessing the backup repo? >> '/backup/borg'] Is that the correct repo path? Is it a local device or via network somehow? How? If via network: did you have network interruptions while writing to the repo? borg check -v might be a good idea now. check its result. >> How can I recover without endangering my data? if there is a problem and the repo is valuable, make a copy of it before running borg check --repair on it. if you are super paranoid, you can also run borg extract --dry-run on some or all archives afterwards. takes time, but is best assertion of archive health. >> Crashing backup software makes me nervous. Me, too, it would be good if we could find the root cause of this. In this special case, the failing operation that caused the traceback is not the big problem - it wanted to delete something that was already gone. But a chunk reference counter (from chunk index) disagreeing with repo state is the real problem here. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From hpj at urpla.net Mon Jun 27 15:32:16 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Mon, 27 Jun 2016 21:32:16 +0200 Subject: [Borgbackup] borg 1.0.3 crashes on prune In-Reply-To: <577149B1.3010205@waldmann-edv.de> References: <10193091.PpxXBPIdjl@xrated> <577149B1.3010205@waldmann-edv.de> Message-ID: <4126588.4QyB7L5jpg@xrated> On Montag, 27. Juni 2016 17:43:45 Thomas Waldmann wrote: > >> $ borg prune -v --list --keep-daily=7 --keep-weekly=4 > >> --keep-monthly=12 --prefix server-vbox /backup/borg > >> ... > >> Object with key > >> b'\x91\xd9\xd0\xde\xad\xfaz\xeeEL\xef\x80\x1c?\xfeX"\xcf\xa8\xfe2\x14\xec > >> \xe4\x89\xacy\x97}~\xf1y' not found in repository /backup/borg. > > The object seems to be not in the repo. > > >> Traceback (most recent call last): > >> File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line > >> > >> 610, in do_prune > >> > >> Archive(repository, key, manifest, archive.name, cache).delete(stats) > >> > >> File "/usr/lib64/python3.4/site-packages/borg/archive.py", line 450, > >> > >> in delete > >> > >> self.cache.chunk_decref(chunk_id, stats) > >> > >> File "/usr/lib64/python3.4/site-packages/borg/cache.py", line 392, > >> > >> in chunk_decref > >> > >> self.repository.delete(id, wait=False) > >> > >> File "/usr/lib64/python3.4/site-packages/borg/repository.py", line > >> > >> 476, in delete > >> > >> raise self.ObjectNotFound(id, self.path) from None > >> > >> borg.repository.ObjectNotFound: > >> (b'\x91\xd9\xd0\xde\xad\xfaz\xeeEL\xef\x80\x1c?\xfeX"\xcf\xa8\xfe2\x14\xe > >> c\xe4\x89\xacy\x97}~\xf1y', '/backup/borg') > > Strange, that (obviously) should not happen. > > It decremented the reference counter in the chunks index, found it 0, > then (successfully) deleted the entry from the chunks index, then tried > to delete the (now unused) object from the repo, but it was not there - > boom. > > So, it looks like the index and repo contents did not agree. > > Can it be that 2 borgs accidentally wrote to the repo in parallel > because you manually broke the lock (borg break-lock)? 
> > Were there any issues before this one when accessing the backup repo? No, none I'm aware of. > >> '/backup/borg'] > > Is that the correct repo path? Is it a local device or via network > somehow? How? Yes, of course. It's a local path to a harddisk, dedicated for backup with a single XFS filesystem. (Checked it yesterday for other reasons) > If via network: did you have network interruptions while writing to the > repo? I've suffered from a nightly crash lately, which could be the reason. > borg check -v might be a good idea now. check its result. Analyzing archive client-vmware-2016-06-27 (28/28) Analyzing archive server-vmware-2016-06-27 (27/28) Analyzing archive server-vbox-2016-06-26 (26/28) Analyzing archive client-vmware-2016-06-26 (25/28) Analyzing archive server-vmware-2016-06-26 (24/28) Analyzing archive server-vbox-2016-06-25 (23/28) Analyzing archive client-vmware-2016-06-25 (22/28) Analyzing archive server-vmware-2016-06-25 (21/28) Analyzing archive server-vbox-2016-06-24 (20/28) Analyzing archive client-vmware-2016-06-24 (19/28) Analyzing archive server-vmware-2016-06-24 (18/28) Analyzing archive server-vbox-2016-06-23 (17/28) Analyzing archive client-vmware-2016-06-23 (16/28) Analyzing archive server-vbox-2016-06-22 (15/28) Analyzing archive client-vmware-2016-06-22 (14/28) Analyzing archive server-vmware-2016-06-22 (13/28) Analyzing archive server-vbox-2016-06-21 (12/28) Analyzing archive client-vmware-2016-06-21 (11/28) Analyzing archive server-vmware-2016-06-21 (10/28) Analyzing archive server-vbox-2016-06-20 (9/28) Analyzing archive server-vmware-2016-06-20 (8/28) Analyzing archive server-vbox-2016-06-19 (7/28) Analyzing archive client-vmware-2016-06-19 (6/28) Analyzing archive server-vmware-2016-06-19 (5/28) Analyzing archive server-vbox-2016-06-18 (4/28) Analyzing archive server-vbox-2016-06-17 (3/28) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 3894096588-3902485196) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 4142217435-4143846048) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 5142215674-5146076720) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 9025581416-9027627629) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 16676456346-16677233806) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 19461249913-19464221241) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 19831496815-19837198695) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 21796656876-21798055754) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 25209411354-25211283496) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 34240920044-34242226990) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 34895070800-34895679826) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 40756138824-40759447757) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 42145805194-42149374491) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 42332904931-42335015679) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 42645021918-42647206261) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 
43472741718-43475303611) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 44357472713-44360119382) Analyzing archive server-vbox-2016-06-12 (2/28) Analyzing archive server-vbox-2016-06-03 (1/28) 31 orphaned objects found! Archive consistency check complete, problems found. I guess server-vbox-2016-06-17 is the culprit. Hopefully, repair detects and eliminates the damaged archive (and the orphaned objects, while at it). > >> How can I recover without endangering my data? > > if there is a problem and the repo is valuable, make a copy of it before > running borg check --repair on it. I'm trusting borg and you so far. Doing the repair right now without backups. > In this special case, the failing operation that caused the traceback is > not the big problem - it wanted to delete something that was already gone. > > But a chunk reference counter (from chunk index) disagreeing with repo > state is the real problem here. borg should detect this and bail out gracefully, but let's first see if it gets this repo into a clean state again. Pete From tw at waldmann-edv.de Mon Jun 27 17:18:37 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 27 Jun 2016 23:18:37 +0200 Subject: [Borgbackup] Some minor issues In-Reply-To: <9231988.Y0NyYby6AT@xrated> References: <3615163.UrhxtIYZKZ@xrated> <68a7a06e-2b85-5ccb-f388-5089c58c0093@enkore.de> <6062925.EGKR569353@xrated> <9231988.Y0NyYby6AT@xrated> Message-ID: <5771982D.6080406@waldmann-edv.de> >>>> BORG_RELOCATED_REPO_ACCESS_IS_OK=yes Did you forget to put an "export " in front of that? It needs to be in the environment (not just in a shell variable) to get through into the borg process. That said (as a rather general comment about how to use BORG_*) I don't think security-critical stuff should usually be answered "yes" automatically. > Is that proposal too silly to be discussed? No, I've seen it, but didn't have time immediately and forgot it later. In general, it is a better idea to make tickets (stating 1 problem / use case) and pull requests (against master or 1.0-maint) on GitHub rather than discussing patches on the ML. Don't mix misc. stuff into 1 ticket. GitHub has better facilities for code review, commenting, highlighting etc. It's also not so easy to forget tickets or PRs. :) -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From hpj at urpla.net Tue Jun 28 03:25:01 2016 From: hpj at urpla.net (Hans-Peter Jansen) Date: Tue, 28 Jun 2016 09:25:01 +0200 Subject: [Borgbackup] borg 1.0.3 crashes on prune In-Reply-To: <4126588.4QyB7L5jpg@xrated> References: <10193091.PpxXBPIdjl@xrated> <577149B1.3010205@waldmann-edv.de> <4126588.4QyB7L5jpg@xrated> Message-ID: <14400508.X3rlPNhFUc@xrated> On Montag, 27. Juni 2016 21:32:16 Hans-Peter Jansen wrote: > On Montag, 27. Juni 2016 17:43:45 Thomas Waldmann wrote: > > I'm trusting borg and you so far. Doing the repair right now without > backups. $ borg check --repair /backup/borg 'check --repair' is an experimental feature that might result in data loss. 
Type 'YES' if you understand this and want to continue: YES var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 3894096588-3902485196) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 4142217435-4143846048) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 5142215674-5146076720) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 9025581416-9027627629) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 16676456346-16677233806) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 19461249913-19464221241) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 19831496815-19837198695) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 21796656876-21798055754) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 25209411354-25211283496) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 34240920044-34242226990) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 34895070800-34895679826) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 40756138824-40759447757) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 42145805194-42149374491) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 42332904931-42335015679) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 42645021918-42647206261) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 43472741718-43475303611) var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 44357472713-44360119382) 31 orphaned objects found! Archive consistency check complete, problems found. $ borg check -v /backup/borg Starting repository check Completed repository check, no problems found. Starting archive consistency check... Analyzing archive client-vmware-2016-06-27 (28/28) Analyzing archive server-vmware-2016-06-27 (27/28) Analyzing archive server-vbox-2016-06-26 (26/28) Analyzing archive client-vmware-2016-06-26 (25/28) Analyzing archive server-vmware-2016-06-26 (24/28) Analyzing archive server-vbox-2016-06-25 (23/28) Analyzing archive client-vmware-2016-06-25 (22/28) Analyzing archive server-vmware-2016-06-25 (21/28) Analyzing archive server-vbox-2016-06-24 (20/28) Analyzing archive client-vmware-2016-06-24 (19/28) Analyzing archive server-vmware-2016-06-24 (18/28) Analyzing archive server-vbox-2016-06-23 (17/28) Analyzing archive client-vmware-2016-06-23 (16/28) Analyzing archive server-vbox-2016-06-22 (15/28) Analyzing archive client-vmware-2016-06-22 (14/28) Analyzing archive server-vmware-2016-06-22 (13/28) Analyzing archive server-vbox-2016-06-21 (12/28) Analyzing archive client-vmware-2016-06-21 (11/28) Analyzing archive server-vmware-2016-06-21 (10/28) Analyzing archive server-vbox-2016-06-20 (9/28) Analyzing archive server-vmware-2016-06-20 (8/28) Analyzing archive server-vbox-2016-06-19 (7/28) Analyzing archive client-vmware-2016-06-19 (6/28) Analyzing archive server-vmware-2016-06-19 (5/28) Analyzing archive server-vbox-2016-06-18 (4/28) Analyzing archive server-vbox-2016-06-17 (3/28) Analyzing archive server-vbox-2016-06-12 (2/28) Analyzing archive server-vbox-2016-06-03 (1/28) Archive consistency check complete, no problems found. 
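For reference, the cautious version of this workflow that Thomas recommended earlier in the thread (copy first, repair verbosely, then spot-check) can be sketched roughly like this; the copy destination is made up, and the archive used for the dry-run extract is just one of the names listed above:

$ cp -a /backup/borg /backup/borg.before-repair   # keep an untouched copy of the repo
$ borg check -v --repair /backup/borg             # -v also shows INFO-level messages about what repair is doing
$ borg list /backup/borg                          # confirm the expected archives are still present
$ borg extract --dry-run /backup/borg::server-vbox-2016-06-26   # reads one archive end-to-end without writing files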
Repo looks good again, and backups were fine tonight. One further comment from the usability side: the repair run should give notes about what it is _doing_: here, it removed the incomplete server-vbox-2016-06-17 backup and some orphaned chunks. Thanks, Pete From tw at waldmann-edv.de Tue Jun 28 06:40:15 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 28 Jun 2016 12:40:15 +0200 Subject: [Borgbackup] borg 1.0.3 crashes on prune In-Reply-To: <14400508.X3rlPNhFUc@xrated> References: <10193091.PpxXBPIdjl@xrated> <577149B1.3010205@waldmann-edv.de> <4126588.4QyB7L5jpg@xrated> <14400508.X3rlPNhFUc@xrated> Message-ID: <5772540F.7000505@waldmann-edv.de> >> I'm trusting borg and you so far. Doing the repair right now without >> backups. > > $ borg check --repair /backup/borg -v option would have been very useful. Without it, you only see WARNING level, but not INFO level messages. That's a bit unfortunate in this case, but that is how the logging works. :-| > 'check --repair' is an experimental feature that might result in data loss. > Type 'YES' if you understand this and want to continue: YES Like the archive name it is currently processing, before it encountered this: > var/lib/virtualbox/w2k8vserver/w2k8vserver.vdi: Missing file chunk detected (Byte 3894096588-3902485196) IIRC, when it detects missing chunks, it inserts a same-length chunk made of zero bytes. So the position of the still good stuff in the file stays the same, but the bad (missing) chunks are kind of blanked. That doesn't necessarily mean such a file still works, you have to consider it as corrupted (like if you backed up a file that was already damaged in such a way). So, maybe take a note that this file is damaged in potentially all archives you have right now. A future backup from now on would be ok again (if a chunk is missing, it would be added to the repo). > One further comment from the usability side: the repair run should give > notes about what it is _doing_: here, it removed the incomplete > server-vbox-2016-06-17 backup and some orphaned chunks. Well, with -v there would have been some more info. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tmhikaru at gmail.com Tue Jun 28 21:16:01 2016 From: tmhikaru at gmail.com (tmhikaru at gmail.com) Date: Tue, 28 Jun 2016 18:16:01 -0700 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <57692C7D.3090305@waldmann-edv.de> References: <20160619202507.GA2020@raspberrypi> <57672356.6080308@waldmann-edv.de> <20160621085210.GA14044@raspberrypi> <57692C35.3080901@waldmann-edv.de> <57692C7D.3090305@waldmann-edv.de> Message-ID: <20160629011601.GB12032@raspberrypi> Been a while. Sorry about this, I injured my wrist and it is hard to type one-handed. Did tests over the weekend. Despite everything I tried, I could not coerce the RPI into syncing properly, so out of frustration, I tried doing this to see if it'd do something differently: https://github.com/borgbackup/borg/pull/238 To my surprise, it started working and synced, and did a full backup without any trouble! Immediately afterwards I modified the manifest line in the repo config as you had suggested and ran the test backup again, and it synced and correctly id'd most files as unmodified. I recorded memory use for both and used time correctly this time, and will attach both as gzips to this message. 
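One way to capture that kind of timing and peak-memory data in a single run, assuming GNU time is installed as /usr/bin/time (the repository path, archive name and log file name here are placeholders):

$ /usr/bin/time -v borg create -v --stats /path/to/repo::test-$(date +%F) /path/to/data 2>&1 | tee borglog.txt
# GNU time's "Maximum resident set size (kbytes)" line is the peak RSS of the borg client,
# and "Elapsed (wall clock) time" covers the whole run including the cache sync.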
Here is the cache data you wanted: RPI, 32bit armv6 ~480MB of ram: root at raspberrypi:~# ls -l .cache/borg/40f5689bc9c5faac015dd94283a4dfb42dc5361bf0c77fd1402c365235ebd8f9/chunks -rw------- 1 root root 96336214 Jun 24 21:17 .cache/borg/40f5689bc9c5faac015dd94283a4dfb42dc5361bf0c77fd1402c365235ebd8f9/chunks 64Bit x86 2xXeon 18GB of ram running fedora 23: [root at roll ~]# ls -l .cache/borg/40f5689bc9c5faac015dd94283a4dfb42dc5361bf0c77fd1402c365235ebd8f9/chunks -rw-------. 1 root root 96336214 Jun 26 00:52 .cache/borg/40f5689bc9c5faac015dd94283a4dfb42dc5361bf0c77fd1402c365235ebd8f9/chunks Now, the interesting thing is, this was not a permanent fix! Despite that it was able to sync twice in a row, a few days later when I tried to run a full backup using my script which is run the same way my test was, it got completely jammed and was still trying to sync one of the archives it had worked through days before in less than an hour in both previous tests, but six hours later when I came home it was just sitting there hammering away at the cpu. I have since given up on trying to use it this way, and am back to using sshfs. sshfs is much faster despite its drawbacks since there is no cache to sync, and apparently borg client reading sshfs on the server doing a full backup runs ~1 hr faster than it does running on the RPI even not counting sync time. I will read up on xattrs some more, maybe there is a solution I can use to store them without having to rely on borg to do it directly. I have NOT moved, removed or altered the cachedir since I killed the borg client on the RPI so if anything useful may be in there to find out why it is getting hung up, tell me and I'll pull it out. To be 100% clear, both of the attached tests completely worked and did not freeze up. The first one did a full backup of everything on the RPI to the remote server, and the second one just synced and then skipped 99% of everything on the RPI since it hadn't been modified. I am still utterly clueless as to why this is happening. Ideas? Tim McGrath -------------- next part -------------- A non-text attachment was scrubbed... Name: borglog.first.gz Type: application/gzip Size: 3397 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: borglog.second.gz Type: application/gzip Size: 2182 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 465 bytes Desc: Digital signature URL: From tw at waldmann-edv.de Wed Jun 29 08:32:30 2016 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 29 Jun 2016 14:32:30 +0200 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <20160629011601.GB12032@raspberrypi> References: <20160619202507.GA2020@raspberrypi> <57672356.6080308@waldmann-edv.de> <20160621085210.GA14044@raspberrypi> <57692C35.3080901@waldmann-edv.de> <57692C7D.3090305@waldmann-edv.de> <20160629011601.GB12032@raspberrypi> Message-ID: <5773BFDE.8050606@waldmann-edv.de> On 06/29/2016 03:16 AM, tmhikaru at gmail.com wrote: > Been a while. Sorry about this, I injured my wrist and it is hard to > type one handed. Wish you a good recovery! > Did tests over the weekend. 
Despite everything I tried, I could not > coerce the RPI into syncing properly, so out of frustration, I tried doing > this to see if it'd do something differently: > https://github.com/borgbackup/borg/pull/238 Aside from not caching the single-archive chunk indexes on disk, it does a very similar thing. > RPI, 32bit armv6 ~480MB of ram: > -rw------- 1 root root 96336214 Jun 24 21:17 .cache/borg/40f5689bc9c5faac015dd94283a4dfb42dc5361bf0c77fd1402c365235ebd8f9/chunks OK, so the 96MB of chunk index data will still fit into RAM. Even twice maybe if it has to grow the hash table a bit (copying the smaller current table into a bigger new one). But note that is already half of your memory. Then, while it is merging the single-archive indexes, one of them is also in memory, so now we are at 2/3 of your RAM (only for that data). I still think you are maybe experiencing a memory / paging issue. And if not now, then maybe in future, if your repo or file count grows a bit. > Now, the interesting thing is, this was not a permanent fix! Despite that it > was able to sync twice in a row, a few days later when I tried to run a full > backup using my script which is run the same way my test was, it got > completely jammed If your chunks count in the repo grew significantly due to that, it maybe was using much more than 96MB then. There is also a files cache eating some memory, see the docs for the formula. The memory is needed on the machine running "borg create" (not: "borg serve", there it only needs to hold the repo index in memory). > and was still trying to sync one of the archives it had > worked through days before in less than an hour in both previous tests, but > six hours later when I came home it was just sitting there hammering away at > the cpu. I have since given up on trying to use it this way, and am back to > using sshfs. sshfs is much faster despite its drawbacks since there is no > cache to sync, and apparently borg client reading sshfs on the server doing > a full backup runs ~1 hr faster than it does running on the RPI even not > counting sync time. I assume you mean sshfs as a source of backup data. > I am still utterly clueless as to why this is happening. Ideas? Besides memory issues, it could be also an instance of the suspected "hashtable performance breakdown" (see issue tracker) - this might depend on the specific values stored into the hashtable. -- GPG ID: FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tmhikaru at gmail.com Thu Jun 30 00:34:11 2016 From: tmhikaru at gmail.com (tmhikaru at gmail.com) Date: Wed, 29 Jun 2016 21:34:11 -0700 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <5773BFDE.8050606@waldmann-edv.de> References: <20160619202507.GA2020@raspberrypi> <57672356.6080308@waldmann-edv.de> <20160621085210.GA14044@raspberrypi> <57692C35.3080901@waldmann-edv.de> <57692C7D.3090305@waldmann-edv.de> <20160629011601.GB12032@raspberrypi> <5773BFDE.8050606@waldmann-edv.de> Message-ID: <20160630043410.GA1708@raspberrypi> On Wed, Jun 29, 2016 at 02:32:30PM +0200, Thomas Waldmann wrote: > > Now, the interesting thing is, this was not a permanent fix! Despite that it > > was able to sync twice in a row, a few days later when I tried to run a full > > backup using my script which is run the same way my test was, it got > > completely jammed > > If your chunks count in the repo grew significantly due to that, it > maybe was using much more than 96MB then. 
> > There is also a files cache eating some memory, see the docs for the > formula. > > The memory is needed on the machine running "borg create" (not: "borg > serve", there it only needs to hold the repo index in memory). Chunks count did not increase significantly; in fact the cache size did not change on the server side at all, not even one byte. Regardless, you could be right about the RAM being a problem in the future. For now, I just thought of something I hadn't considered before. In every occurrence of it getting stuck recently that I can recall, it has been AFTER a different machine than the server modified the repo. The third machine is using an AMD Phenom 2, an Intel-compatible 64-bit processor. The Linux distro it has installed does not have Python 3, so I have been using the statically linked binary from the download page for borg 1.0.3. It seems to work fine... but I think I should see what happens if I have the RPI try to sync BEFORE that computer does its thing, and AFTER the server has. Might be nothing, but it's an easy test to try - if memory serves, since my backup script has the server run LAST when I was doing my tests and having them fail, the third machine had been the last one to modify the repo. All I should need to do to test this theory is to change the order that the script executes in. > I assume you mean sshfs as a source of backup data. Yes, I mean that I am having the server mount the remote machine's (RPI) root, and then backing it up that way. It is significantly faster to do it this way, which is a bit weird and annoying since the docs for sshfs claim sshfs is very CPU and I/O intensive on the machine that is mounted. > > I am still utterly clueless as to why this is happening. Ideas? > > Besides memory issues, it could be also an instance of the suspected > "hashtable performance breakdown" (see issue tracker) - this might > depend on the specific values stored into the hashtable. Hmm, yes, from an uneducated perspective that does look suspiciously similar to what I'm seeing. Certainly if this IS the problem it would match the behavior I get. Any tunables for tweaking this yet? I'd love to make it try to allocate more RAM just to see what it would do. Even if it OOMed itself I'd probably learn something. Thanks, Tim McGrath -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 465 bytes Desc: not available URL: From public at enkore.de Thu Jun 30 03:08:06 2016 From: public at enkore.de (Marian Beermann) Date: Thu, 30 Jun 2016 09:08:06 +0200 Subject: [Borgbackup] Baffling behavior on low cpu system with borg backup In-Reply-To: <20160619202507.GA2020@raspberrypi> References: <20160619202507.GA2020@raspberrypi> Message-ID: <0da0c987-4dd1-a1df-7ccc-bfb56f89ff95@enkore.de> You can try enabling faulthandler. Set the environment variable (export) PYTHONFAULTHANDLER to something, say, foobar. When it gets stuck you can send SIGABRT and should get a proper stack trace of where it got stuck. Cheers, Marian
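A concrete example of what Marian describes, with the repository path, archive name and the way of finding the PID left as placeholder assumptions:

$ export PYTHONFAULTHANDLER=foobar    # any non-empty value enables Python's faulthandler
$ borg create -v --stats /path/to/repo::test /path/to/data
# ... when it looks stuck, from another shell:
$ pgrep -f 'borg create'              # find the PID of the stuck borg client
$ kill -ABRT <PID>                    # faulthandler prints a Python traceback of where it is stuck, then the process aborts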