From giorgio.pimpa at gmail.com Mon Apr 1 09:19:43 2019
From: giorgio.pimpa at gmail.com (Giorgio Pintaudi)
Date: Mon, 1 Apr 2019 22:19:43 +0900
Subject: [Borgbackup] borg 1.1.9 crashes on SL6 due to old glibc 2.12
Message-ID: <0a4d2fcf-b7a6-5d85-3a3e-2199bdc62755@gmail.com>

Hello to everybody!

I am a PhD student currently developing the DAQ software for our experiment (experimental particle physics). We need to continuously back up our data to a remote server, and the data, once backed up to the server, has to be deleted on the local DAQ PC to make space for new data. I am trying to use borgbackup, but I have encountered an error that I wasn't able to solve by myself.

The problem is that the remote server is running an old version of Scientific Linux 6.10 (very similar to CentOS 6). Whenever I try to initialize a repository I get the error:

$ borg init --e none ./borgtest
Remote: FATAL: this Python was compiled for a too old (g)libc and misses required functionality.

The glibc version on the remote server is:

$ rpm -q glibc
glibc-2.12-1.212.el6.x86_64
glibc-2.12-1.212.el6.i686

What do you think is the best way to make borg work on the server? *Note that I don't have root access to the server.* My ultimate goal would be to completely control the backups on the server using borg on the local DAQ computer.

Some useful info:
* Scientific Linux 6.10
* Python 3.5.1
* glibc 2.12
* borg 1.1.9
* gcc 6.3.1

Thank you
Giorgio

From tw at waldmann-edv.de Mon Apr 1 09:34:20 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 1 Apr 2019 15:34:20 +0200
Subject: [Borgbackup] borg 1.1.9 crashes on SL6 due to old glibc 2.12
In-Reply-To: <0a4d2fcf-b7a6-5d85-3a3e-2199bdc62755@gmail.com>
References: <0a4d2fcf-b7a6-5d85-3a3e-2199bdc62755@gmail.com>
Message-ID:

> $ borg init --e none ./borgtest
> Remote: FATAL: this Python was compiled for a too old (g)libc and
> misses required functionality.

This is the code which triggers this exception / error message:

required_funcs = {os.stat, os.utime, os.chown}
if not os.supports_follow_symlinks.issuperset(required_funcs):
    raise PythonLibcTooOld

So, as you see, quite basic functionality like not following symlinks is not supported for all functions where borg needs it.
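You can run the same test by hand against any Python build (a minimal sketch, mirroring the check above):

$ python3 -c "import os; req = {os.stat, os.utime, os.chown}; print(os.supports_follow_symlinks.issuperset(req))"
False

A build affected by this problem prints False; a healthy build prints True.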
So much for the bad news.

The good news is that these functions might only be required for using borg as a client, but not for "borg serve" (the repo server part in client/server mode, if repo access is done via ssh:).

So the question now is whether you also get the error message on the client, and also whether you need to run the borg *client* on the repo server machine, too.

borg currently always does this check, but I guess it could perhaps be omitted for the "borg serve" server part.

Another option of course would be to run a non-stone-age OS on your server. :)

-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From mszpak at wp.pl Tue Apr 2 13:45:30 2019
From: mszpak at wp.pl (=?UTF-8?Q?Marcin_Zaj=c4=85czkowski?=)
Date: Tue, 2 Apr 2019 19:45:30 +0200
Subject: [Borgbackup] Get know which files are damaged by: Data integrity error: Segment entry checksum mismatch
Message-ID: <83c88d36-cda8-7a18-102f-e3296f9dab95@wp.pl>

Hi,

The verification of a newly created local backup (on another drive) revealed multiple errors:

Data integrity error: Segment entry checksum mismatch [segment 90, offset XXX]

I've read it may be caused by a hardware malfunction (drive, memory, cpu), but at the moment I would like to know which files were affected by that data corruption. How can I check that? The repository is quite large (600GB+) and it's problematic for me to make a copy of it before performing a repair operation.

Two extra questions.

1. Is it possible to divide "borg check" into smaller "chunks", so I don't have to wait long hours for a result (running it on a given part at a time - of course assuming the repository itself stays untouched in the meantime)?

2. Would performing the backup again "fix" the problem (assuming the cause of the data corruption is detected and fixed before that operation), or do I need to "repair" it first?

Marcin

-- 
https://blog.solidsoft.info/ - Working code is not enough

From tw at waldmann-edv.de Tue Apr 2 14:21:41 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Tue, 2 Apr 2019 20:21:41 +0200
Subject: [Borgbackup] Get know which files are damaged by: Data integrity error: Segment entry checksum mismatch
In-Reply-To: <83c88d36-cda8-7a18-102f-e3296f9dab95@wp.pl>
References: <83c88d36-cda8-7a18-102f-e3296f9dab95@wp.pl>
Message-ID:

> Data integrity error: Segment entry checksum mismatch [segment 90,
> offset XXX]

90 is the segment number and also the file name of the affected file. Have a look into repo_dir/data/... and you'll see.

> 1. Is it possible to divide "borg check" into smaller "chunks", so I don't
> have to wait long hours for a result (running it on a given part at a time
> - of course assuming the repository itself stays untouched in the meantime)?

borg 1.1 supports this:
- --repository-only (only check the repo, low-level)
- --archives-only (only check the archives, high-level)

borg 1.2 (not released yet) will support:
- --max-duration (to limit how long it runs at a time, splitting the whole check into multiple runs; in this mode, it ONLY does the crc32 segment file entry check)
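For example, a split check could look like this (a sketch; the repo path is hypothetical):

$ borg check --repository-only /path/to/repo   # low-level repo check only
$ borg check --archives-only /path/to/repo     # high-level archive check only
$ borg check --repository-only --max-duration 3600 /path/to/repo   # borg >= 1.2: stop after ~1h, resume next run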
> 2. Would performing the backup again "fix" the problem (assuming the
> cause of the data corruption is detected and fixed before that operation),
> or do I need to "repair" it first?

Guess you need to first find and fix the root cause of the corruption (e.g. fix your hardware), then do a borg check --repair.

Whether this is a case of data loss in the repo remains to be seen; this depends on whether the issue is permanent or intermittent.

-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From mszpak at wp.pl Tue Apr 2 15:34:10 2019
From: mszpak at wp.pl (=?UTF-8?Q?Marcin_Zaj=c4=85czkowski?=)
Date: Tue, 2 Apr 2019 21:34:10 +0200
Subject: [Borgbackup] Get know which files are damaged by: Data integrity error: Segment entry checksum mismatch
In-Reply-To:
References: <83c88d36-cda8-7a18-102f-e3296f9dab95@wp.pl>
Message-ID: <1179a9c1-d6e2-39d2-5dad-c49d87fe8f48@wp.pl>

Thanks for your quick reply, Thomas.

On 2019-04-02 20:21, Thomas Waldmann wrote:
>
>> Data integrity error: Segment entry checksum mismatch [segment 90,
>> offset XXX]
>
> 90 is the segment number and also the file name of the affected file.
> Have a look into repo_dir/data/... and you'll see.

Is it possible, with that knowledge, to determine which backed-up file(s) are affected (e.g. to know if I can take them from some other backup)?

Marcin

>> 1. Is it possible to divide "borg check" into smaller "chunks", so I don't
>> have to wait long hours for a result (running it on a given part at a time
>> - of course assuming the repository itself stays untouched in the meantime)?
>
> borg 1.1 supports this:
> - --repository-only (only check the repo, low-level)
> - --archives-only (only check the archives, high-level)
>
> borg 1.2 (not released yet) will support:
> - --max-duration (to limit how long it runs at a time, splitting the
> whole check into multiple runs; in this mode, it ONLY does the crc32
> segment file entry check)
>
>> 2. Would performing the backup again "fix" the problem (assuming the
>> cause of the data corruption is detected and fixed before that operation),
>> or do I need to "repair" it first?
>
> Guess you need to first find and fix the root cause of the corruption
> (e.g. fix your hardware), then do a borg check --repair.
>
> Whether this is a case of data loss in the repo remains to be seen;
> this depends on whether the issue is permanent or intermittent.

-- 
https://blog.solidsoft.info/ - Working code is not enough

From tw at waldmann-edv.de Tue Apr 2 16:52:19 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Tue, 2 Apr 2019 22:52:19 +0200
Subject: [Borgbackup] Get know which files are damaged by: Data integrity error: Segment entry checksum mismatch
In-Reply-To: <1179a9c1-d6e2-39d2-5dad-c49d87fe8f48@wp.pl>
References: <83c88d36-cda8-7a18-102f-e3296f9dab95@wp.pl> <1179a9c1-d6e2-39d2-5dad-c49d87fe8f48@wp.pl>
Message-ID: <6c5de6f5-84b9-73c9-9e20-6e3700ce1768@waldmann-edv.de>

>> 90 is the segment number and also the file name of the affected file.
>> Have a look into repo_dir/data/... and you'll see.
>
> Is it possible, with that knowledge, to determine which backed-up file(s)
> are affected (e.g. to know if I can take them from some other backup)?

Not easily.

It might be though that borg check outputs some hints (not sure, might depend on the specific case).

-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From mszpak at wp.pl Tue Apr 2 17:02:10 2019
From: mszpak at wp.pl (=?UTF-8?Q?Marcin_Zaj=c4=85czkowski?=)
Date: Tue, 2 Apr 2019 23:02:10 +0200
Subject: [Borgbackup] Get know which files are damaged by: Data integrity error: Segment entry checksum mismatch
In-Reply-To: <6c5de6f5-84b9-73c9-9e20-6e3700ce1768@waldmann-edv.de>
References: <83c88d36-cda8-7a18-102f-e3296f9dab95@wp.pl> <1179a9c1-d6e2-39d2-5dad-c49d87fe8f48@wp.pl> <6c5de6f5-84b9-73c9-9e20-6e3700ce1768@waldmann-edv.de>
Message-ID:

On 2019-04-02 22:52, Thomas Waldmann wrote:
>
>>> 90 is the segment number and also the file name of the affected file.
>>> Have a look into repo_dir/data/... and you'll see.
>>
>> Is it possible, with that knowledge, to determine which backed-up file(s)
>> are affected (e.g. to know if I can take them from some other backup)?
>
> Not easily.
>
> It might be though that borg check outputs some hints (not sure, might
> depend on the specific case).

Ok.
To change my question a little: after a repair operation, or when getting my files back from the backup (also after a repair operation), would I know that the accessed files are corrupted?

Or would those files be read like any other files, just occasionally having some zeros inside?

Marcin

-- 
https://blog.solidsoft.info/ - Working code is not enough

From tw at waldmann-edv.de Tue Apr 2 17:47:48 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Tue, 2 Apr 2019 23:47:48 +0200
Subject: [Borgbackup] Get know which files are damaged by: Data integrity error: Segment entry checksum mismatch
In-Reply-To:
References: <83c88d36-cda8-7a18-102f-e3296f9dab95@wp.pl> <1179a9c1-d6e2-39d2-5dad-c49d87fe8f48@wp.pl> <6c5de6f5-84b9-73c9-9e20-6e3700ce1768@waldmann-edv.de>
Message-ID: <9f4dda1a-0906-b1d3-11cd-8f13937da48d@waldmann-edv.de>

> To change my question a little: after a repair operation, or when getting
> my files back from the backup (also after a repair operation), would I
> know that the accessed files are corrupted?
>
> Or would those files be read like any other files, just occasionally
> having some zeros inside?

If you try to extract stuff from a corrupt repo, you will get exceptions like ObjectNotFound or IntegrityError, so you'll definitely notice something is wrong.

borg check --repair tries to get a repo into a consistent state.

That doesn't mean that data which is lost can be magically brought back, but it will either delete corrupt archives or replace missing/corrupt content blocks in files with all-zero blocks of the same size (and it will also remember the correct block hashes).

Repo objects that have invalid contents (invalid crc or invalid MAC) will be removed from the repo.

If you extract such a "zero-patched" file, borg will warn you about it. borg mount will reject reading such files, except when mounting with a special option.

If you do a backup again after such a repair that reproduces objects which were lost / corrupted, and you run borg check --repair again afterwards, borg might be able to heal some "patched" files (because it notices that the lost blocks are there again and it still knows the correct hash of the previously missing blocks).
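As a sketch, that healing cycle could look like this (repo path, archive name and source path are hypothetical):

$ borg check --repair /path/to/repo              # zero-patches damaged blocks, remembers correct hashes
$ borg create /path/to/repo::resupply ~/data     # a fresh backup may re-supply the lost blocks
$ borg check --repair /path/to/repo              # may now heal previously "patched" files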
-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From tw at waldmann-edv.de Tue Apr 2 17:53:46 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Tue, 2 Apr 2019 23:53:46 +0200
Subject: [Borgbackup] borg 1.1.9 crashes on SL6 due to old glibc 2.12
In-Reply-To: <0a4d2fcf-b7a6-5d85-3a3e-2199bdc62755@gmail.com>
References: <0a4d2fcf-b7a6-5d85-3a3e-2199bdc62755@gmail.com>
Message-ID:

https://github.com/borgbackup/borg/pull/4487

This is a PR which omits the python/glibc check for "borg serve".

It would be cool if some people could carefully test "borg serve" using that code on older servers, like SL6 / CentOS 6 / RHEL 6.

The code change I did is rather harmless and I do not see a reason why it should not work, but OTOH such older servers aren't really proven in practice, because that check blocked borg serve from running on them until now.

-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From mszpak at wp.pl Tue Apr 2 18:46:24 2019
From: mszpak at wp.pl (=?UTF-8?Q?Marcin_Zaj=c4=85czkowski?=)
Date: Wed, 3 Apr 2019 00:46:24 +0200
Subject: [Borgbackup] Get know which files are damaged by: Data integrity error: Segment entry checksum mismatch
In-Reply-To: <9f4dda1a-0906-b1d3-11cd-8f13937da48d@waldmann-edv.de>
References: <83c88d36-cda8-7a18-102f-e3296f9dab95@wp.pl> <1179a9c1-d6e2-39d2-5dad-c49d87fe8f48@wp.pl> <6c5de6f5-84b9-73c9-9e20-6e3700ce1768@waldmann-edv.de> <9f4dda1a-0906-b1d3-11cd-8f13937da48d@waldmann-edv.de>
Message-ID: <67acbad1-a14e-5b6d-4208-098452b026b3@wp.pl>

Thanks for your comprehensive explanations!

Marcin

On 2019-04-02 23:47, Thomas Waldmann wrote:
>> To change my question a little: after a repair operation, or when getting
>> my files back from the backup (also after a repair operation), would I
>> know that the accessed files are corrupted?
>>
>> Or would those files be read like any other files, just occasionally
>> having some zeros inside?
>
> If you try to extract stuff from a corrupt repo, you will get exceptions
> like ObjectNotFound or IntegrityError, so you'll definitely notice
> something is wrong.
>
> borg check --repair tries to get a repo into a consistent state.
>
> That doesn't mean that data which is lost can be magically brought back,
> but it will either delete corrupt archives or replace missing/corrupt
> content blocks in files with all-zero blocks of the same size (and it
> will also remember the correct block hashes).
>
> Repo objects that have invalid contents (invalid crc or invalid MAC)
> will be removed from the repo.
>
> If you extract such a "zero-patched" file, borg will warn you about it.
> borg mount will reject reading such files, except when mounting with a
> special option.
>
> If you do a backup again after such a repair that reproduces objects
> which were lost / corrupted, and you run borg check --repair again
> afterwards, borg might be able to heal some "patched" files (because it
> notices that the lost blocks are there again and it still knows the
> correct hash of the previously missing blocks).

From qzwx2007 at gmail.com Wed Apr 3 08:05:06 2019
From: qzwx2007 at gmail.com (JK)
Date: Wed, 3 Apr 2019 15:05:06 +0300
Subject: [Borgbackup] Further development idea/proposal: Heal the damaged repository by replacing damaged blocks from other repositories created from same sources
In-Reply-To: <67acbad1-a14e-5b6d-4208-098452b026b3@wp.pl>
References: <83c88d36-cda8-7a18-102f-e3296f9dab95@wp.pl> <1179a9c1-d6e2-39d2-5dad-c49d87fe8f48@wp.pl> <6c5de6f5-84b9-73c9-9e20-6e3700ce1768@waldmann-edv.de> <9f4dda1a-0906-b1d3-11cd-8f13937da48d@waldmann-edv.de> <67acbad1-a14e-5b6d-4208-098452b026b3@wp.pl>
Message-ID:

Hi,

Further development idea/proposal:

So borg clears damaged blocks to zero but keeps the original hash calculated from the original content. Could it be possible to "heal" the damaged repository by replacing these damaged blocks from other repositories created from the same sources?

I keep most of my repositories on USB disks. There are several USB disks, usually 3, which I switch daily. The most critical data is also backed up to a local disk repository. In these critical cases the backup always runs twice, first to the local disk repo and then to the USB disk repo. Less critical repositories are only on USB disks.
So although the repositories on the different USB disks and on the local disk are not identical, they have lots of common content, because they all have identical source directory settings and are pruned with the same policy.

Now imagine I find that one (or more) of these repositories is partially corrupted. (Most likely the damage is only in one repository, but if there is damage in other repositories as well, it most likely does not concern the same source files or the same file areas.)

Could it be possible to repair the damaged repository by replacing its zeroed blocks from another repository, if the hashes in both repositories are identical? This way we could heal the damaged repository, or at least decrease the number of zeroed blocks, even with another partially damaged repository.

JK

On 3.4.2019 1.46, Marcin Zajączkowski wrote:
> Thanks for your comprehensive explanations!
>
> Marcin
>
> On 2019-04-02 23:47, Thomas Waldmann wrote:
>>> To change my question a little: after a repair operation, or when getting
>>> my files back from the backup (also after a repair operation), would I
>>> know that the accessed files are corrupted?
>>>
>>> Or would those files be read like any other files, just occasionally
>>> having some zeros inside?
>> If you try to extract stuff from a corrupt repo, you will get exceptions
>> like ObjectNotFound or IntegrityError, so you'll definitely notice
>> something is wrong.
>>
>> borg check --repair tries to get a repo into a consistent state.
>>
>> That doesn't mean that data which is lost can be magically brought back,
>> but it will either delete corrupt archives or replace missing/corrupt
>> content blocks in files with all-zero blocks of the same size (and it
>> will also remember the correct block hashes).
>>
>> Repo objects that have invalid contents (invalid crc or invalid MAC)
>> will be removed from the repo.
>>
>> If you extract such a "zero-patched" file, borg will warn you about it.
>> borg mount will reject reading such files, except when mounting with a
>> special option.
>>
>> If you do a backup again after such a repair that reproduces objects
>> which were lost / corrupted, and you run borg check --repair again
>> afterwards, borg might be able to heal some "patched" files (because it
>> notices that the lost blocks are there again and it still knows the
>> correct hash of the previously missing blocks).

From tw at waldmann-edv.de Wed Apr 3 08:11:15 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Wed, 3 Apr 2019 14:11:15 +0200
Subject: [Borgbackup] Further development idea/proposal: Heal the damaged repository by replacing damaged blocks from other repositories created from same sources
In-Reply-To:
References: <83c88d36-cda8-7a18-102f-e3296f9dab95@wp.pl> <1179a9c1-d6e2-39d2-5dad-c49d87fe8f48@wp.pl> <6c5de6f5-84b9-73c9-9e20-6e3700ce1768@waldmann-edv.de> <9f4dda1a-0906-b1d3-11cd-8f13937da48d@waldmann-edv.de> <67acbad1-a14e-5b6d-4208-098452b026b3@wp.pl>
Message-ID: <1fcbec39-7505-9ad5-c68b-5615c8f57da4@waldmann-edv.de>

> So borg clears damaged blocks to zero but keeps the original hash
> calculated from the original content.
>
> Could it be possible to "heal" the damaged repository by replacing these
> damaged blocks from other repositories created from the same sources?

That's not possible, because the chunker seed and the id-hash/mac secret will be different for another repo.

So chunks are cut differently, and even if they were cut in the same way (small files), the id is computed differently, so it does not match.

What you could do though is to extract stuff from a good repo and back it up into the repo that needs healing.
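A sketch of that workaround (paths and archive names hypothetical):

$ mkdir /tmp/restore && cd /tmp/restore
$ borg extract /path/to/good-repo::monday       # restore intact data from the good repo
$ borg create /path/to/damaged-repo::resupply /tmp/restore   # feed it back into the damaged repo
$ borg check --repair /path/to/damaged-repo     # may now heal zero-patched files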
-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From mszpak at wp.pl Thu Apr 4 17:22:53 2019
From: mszpak at wp.pl (=?UTF-8?Q?Marcin_Zaj=c4=85czkowski?=)
Date: Thu, 4 Apr 2019 23:22:53 +0200
Subject: [Borgbackup] Decreasing speed of borg check command?
Message-ID: <27b13557-fc1c-caf2-4609-4e32db03143c@wp.pl>

Hi,

I would like to consult one observation. Performing a check of a 600GB+ repo, the HDD read speed started at ~110MB/s (USB3 HDD drive), but over time it tends to decrease, down to ~60MB/s after 2 hours. Disk transfer data from dstat:

> ----system---- --dsk/sdc-- -net/total- ---system-- ----total-usage----
>      time     | read  writ| recv  send| int   csw |usr sys idl wai stl
...
> 02-04 23:28:20| 111M    0 | 213   269 |2620  5023 |  6   7  84   3   0
> 02-04 23:28:25| 110M    0 | 171   199 |4277  8007 | 13   8  72   7   0
> 02-04 23:28:30| 110M    0 | 342   199 |3926  8305 | 12   7  73   6   0
...
> 03-04 00:03:50|  95M    0 |   0     0 |2827  5536 |  7   6  81   5   0
> 03-04 00:03:55|  93M    0 |   0     0 |4306   14k|  9   7  80   4   0
> 03-04 00:04:00|  95M    0 |   0     0 |3366  6121 |  8   7  79   6   0
...
> 03-04 01:18:20|  57M    0 |   0     0 |1294  2557 |  3   4  91   2   0
> 03-04 01:18:25|  57M    0 |   0     0 |1184  2433 |  2   4  90   3   0
> 03-04 01:18:30|  57M    0 | 592   592 |1301  2339 |  3   4  92   1   0

It may be caused by something else (though probably not fragmentation - it was after an initial test backup performed to an empty disk), but it could also be some kind of memory leak in Borg which leads to keeping more and more data internally over time, causing the slowness.

Do you have an idea what it could be caused by?

I may try to reproduce that behavior later this month if needed. Borg 1.1.8, Fedora 29.

Marcin

-- 
https://blog.solidsoft.info/ - Working code is not enough

From tw at waldmann-edv.de Thu Apr 4 17:57:16 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 4 Apr 2019 23:57:16 +0200
Subject: [Borgbackup] Decreasing speed of borg check command?
In-Reply-To: <27b13557-fc1c-caf2-4609-4e32db03143c@wp.pl>
References: <27b13557-fc1c-caf2-4609-4e32db03143c@wp.pl>
Message-ID:

> I would like to consult one observation. Performing a check of a 600GB+
> repo, the HDD read speed started at ~110MB/s (USB3 HDD drive), but over
> time it tends to decrease, down to ~60MB/s after 2 hours. Disk transfer
> data from dstat.

Depending on the size and internals of your target HDD, part of the problem might be that the data transfer speed on outer tracks is faster than on inner tracks.

> It may be caused by something else (though probably not fragmentation - it
> was after an initial test backup performed to an empty disk), but it could
> also be some kind of memory leak in Borg which leads to keeping more and
> more data internally over time, causing the slowness.

I don't think there is a memory leak (at least not a significant one).

But when building the in-memory data structures, speed may vary depending e.g. on the size and fill ratio of the hashtables. While doing that, it uses more and more memory, but that is used memory, not leaked memory.

You might also observe a property of the python interpreter's memory manager: it does not like to give back memory to the OS memory management, but rather keeps it.

> I may try to reproduce that behavior later this month if needed. Borg
> 1.1.8, Fedora 29.

Was it this borg version you used for your measurement?
Long ago there was a bug in borg that led to performance problems in the hash tables while they filled up (if you want to search for it: "tombstones").

Also watch whether your RAM is sufficient. If borg uses more than is present, performance goes down the drain due to paging.

-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From mszpak at wp.pl Thu Apr 4 18:08:19 2019
From: mszpak at wp.pl (=?UTF-8?Q?Marcin_Zaj=c4=85czkowski?=)
Date: Fri, 5 Apr 2019 00:08:19 +0200
Subject: [Borgbackup] Decreasing speed of borg check command?
In-Reply-To:
References: <27b13557-fc1c-caf2-4609-4e32db03143c@wp.pl>
Message-ID: <6aea7040-7f29-8ddd-3606-0506ec095598@wp.pl>

On 2019-04-04 23:57, Thomas Waldmann wrote:
>> I would like to consult one observation. Performing a check of a 600GB+
>> repo, the HDD read speed started at ~110MB/s (USB3 HDD drive), but over
>> time it tends to decrease, down to ~60MB/s after 2 hours. Disk transfer
>> data from dstat.
>
> Depending on the size and internals of your target HDD, part of the problem
> might be that the data transfer speed on outer tracks is faster than on
> inner tracks.

It could be. Some sources indicate that data on the outer cylinders can be read up to 50% faster than on the inner ones.

> I don't think there is a memory leak (at least not a significant one).
>
> But when building the in-memory data structures, speed may vary
> depending e.g. on the size and fill ratio of the hashtables. While doing
> that, it uses more and more memory, but that is used memory, not leaked
> memory.

That's what I meant by "some kind of memory leak" :).

> You might also observe a property of the python interpreter's memory
> manager: it does not like to give back memory to the OS memory
> management, but rather keeps it.
>
>> I may try to reproduce that behavior later this month if needed. Borg
>> 1.1.8, Fedora 29.
>
> Was it this borg version you used for your measurement?

Yes, the backup was created and checked with Borg 1.1.8.

> Long ago there was a bug in borg that led to performance problems in the
> hash tables while they filled up (if you want to search for it:
> "tombstones").
>
> Also watch whether your RAM is sufficient. If borg uses more than is
> present, performance goes down the drain due to paging.

I don't have memory data collected, but I would say there were at least a few GB of memory available for Borg. I will check it on the next try.

Marcin

-- 
https://blog.solidsoft.info/ - Working code is not enough

From devzero at web.de Fri Apr 5 06:37:32 2019
From: devzero at web.de (Roland @web.de)
Date: Fri, 5 Apr 2019 12:37:32 +0200
Subject: [Borgbackup] Decreasing speed of borg check command?
In-Reply-To: <27b13557-fc1c-caf2-4609-4e32db03143c@wp.pl>
References: <27b13557-fc1c-caf2-4609-4e32db03143c@wp.pl>
Message-ID:

i think "borg check" is cpu bound, not io bound.

so, have a look at "top" - you should see the borg process at 100% cpu for most of the time.

why the amount read from disk varies is a good question... but at least i do see the cpu at 100% most of the time, and memory consumption is constant, too.

On 04.04.19 at 23:22, Marcin Zajączkowski wrote:
> Hi,
>
> I would like to consult one observation. Performing a check of a 600GB+
> repo, the HDD read speed started at ~110MB/s (USB3 HDD drive), but over
> time it tends to decrease, down to ~60MB/s after 2 hours. Disk transfer
> data from dstat.
>
>> ----system---- --dsk/sdc-- -net/total- ---system-- ----total-usage----
>>      time     | read  writ| recv  send| int   csw |usr sys idl wai stl
> ...
>> 02-04 23:28:20| 111M    0 | 213   269 |2620  5023 |  6   7  84   3   0
>> 02-04 23:28:25| 110M    0 | 171   199 |4277  8007 | 13   8  72   7   0
>> 02-04 23:28:30| 110M    0 | 342   199 |3926  8305 | 12   7  73   6   0
> ...
>> 03-04 00:03:50|  95M    0 |   0     0 |2827  5536 |  7   6  81   5   0
>> 03-04 00:03:55|  93M    0 |   0     0 |4306   14k|  9   7  80   4   0
>> 03-04 00:04:00|  95M    0 |   0     0 |3366  6121 |  8   7  79   6   0
> ...
>> 03-04 01:18:20|  57M    0 |   0     0 |1294  2557 |  3   4  91   2   0
>> 03-04 01:18:25|  57M    0 |   0     0 |1184  2433 |  2   4  90   3   0
>> 03-04 01:18:30|  57M    0 | 592   592 |1301  2339 |  3   4  92   1   0
>
> It may be caused by something else (though probably not fragmentation - it
> was after an initial test backup performed to an empty disk), but it could
> also be some kind of memory leak in Borg which leads to keeping more and
> more data internally over time, causing the slowness.
>
> Do you have an idea what it could be caused by?
>
> I may try to reproduce that behavior later this month if needed. Borg
> 1.1.8, Fedora 29.
>
> Marcin

From public at enkore.de Fri Apr 5 07:16:51 2019
From: public at enkore.de (Marian Beermann)
Date: Fri, 5 Apr 2019 13:16:51 +0200
Subject: [Borgbackup] Decreasing speed of borg check command?
In-Reply-To:
References: <27b13557-fc1c-caf2-4609-4e32db03143c@wp.pl>
Message-ID:

Repository check _should be_ I/O bound unless the CPU is very slow.

Archive check should generally be latency / CPU bound.

-Marian

On 05.04.19 at 12:37, Roland @web.de wrote:
> i think "borg check" is cpu bound, not io bound.
>
> so, have a look at "top" - you should see the borg process at 100% cpu
> for most of the time.
>
> why the amount read from disk varies is a good question... but at least
> i do see the cpu at 100% most of the time, and memory consumption is
> constant, too.
>
> On 04.04.19 at 23:22, Marcin Zajączkowski wrote:
>> Hi,
>>
>> I would like to consult one observation. Performing a check of a 600GB+
>> repo, the HDD read speed started at ~110MB/s (USB3 HDD drive), but over
>> time it tends to decrease, down to ~60MB/s after 2 hours. Disk transfer
>> data from dstat.
>>
>>> ----system---- --dsk/sdc-- -net/total- ---system-- ----total-usage----
>>>      time     | read  writ| recv  send| int   csw |usr sys idl wai stl
>> ...
>>> 02-04 23:28:20| 111M    0 | 213   269 |2620  5023 |  6   7  84   3   0
>>> 02-04 23:28:25| 110M    0 | 171   199 |4277  8007 | 13   8  72   7   0
>>> 02-04 23:28:30| 110M    0 | 342   199 |3926  8305 | 12   7  73   6   0
>> ...
>>> 03-04 00:03:50|  95M    0 |   0     0 |2827  5536 |  7   6  81   5   0
>>> 03-04 00:03:55|  93M    0 |   0     0 |4306   14k|  9   7  80   4   0
>>> 03-04 00:04:00|  95M    0 |   0     0 |3366  6121 |  8   7  79   6   0
>> ...
>>> 03-04 01:18:20|  57M    0 |   0     0 |1294  2557 |  3   4  91   2   0
>>> 03-04 01:18:25|  57M    0 |   0     0 |1184  2433 |  2   4  90   3   0
>>> 03-04 01:18:30|  57M    0 | 592   592 |1301  2339 |  3   4  92   1   0
>>
>> It may be caused by something else (though probably not fragmentation -
>> it was after an initial test backup performed to an empty disk), but it
>> could also be some kind of memory leak in Borg which leads to keeping
>> more and more data internally over time, causing the slowness.
>>
>> Do you have an idea what it could be caused by?
>>
>> I may try to reproduce that behavior later this month if needed. Borg
>> 1.1.8, Fedora 29.
>>
>> Marcin

From devzero at web.de Fri Apr 5 08:45:40 2019
From: devzero at web.de (Roland @web.de)
Date: Fri, 5 Apr 2019 14:45:40 +0200
Subject: [Borgbackup] Decreasing speed of borg check command?
In-Reply-To:
References: <27b13557-fc1c-caf2-4609-4e32db03143c@wp.pl>
Message-ID: <44f83f02-1872-06fd-b3c1-b1191e9d4ba1@web.de>

could you define "very slow"?

my repo data is compressed with zstd and i have a rather old L5630 cpu - but looking at https://www.cpubenchmark.net/compare/Intel-Xeon-L5630-vs-Intel-Xeon-E5-2679-v4/2086vs2805 which gives a single thread rating of 922 vs. 1854.

seems you are right, i can confirm that the repository check is i/o bound and the archive check is cpu bound, even with this old cpu.

i wasn't aware of that difference. thanks for pointing it out.

regards
roland

On 05.04.19 at 13:16, Marian Beermann wrote:
> Repository check _should be_ I/O bound unless the CPU is very slow.
>
> Archive check should generally be latency / CPU bound.
>
> -Marian
>
> On 05.04.19 at 12:37, Roland @web.de wrote:
>> i think "borg check" is cpu bound, not io bound.
>>
>> so, have a look at "top" - you should see the borg process at 100% cpu
>> for most of the time.
>>
>> why the amount read from disk varies is a good question... but at least
>> i do see the cpu at 100% most of the time, and memory consumption is
>> constant, too.
>>
>> On 04.04.19 at 23:22, Marcin Zajączkowski wrote:
>>> Hi,
>>>
>>> I would like to consult one observation. Performing a check of a 600GB+
>>> repo, the HDD read speed started at ~110MB/s (USB3 HDD drive), but over
>>> time it tends to decrease, down to ~60MB/s after 2 hours. Disk transfer
>>> data from dstat.
>>>
>>>> ----system---- --dsk/sdc-- -net/total- ---system-- ----total-usage----
>>>>      time     | read  writ| recv  send| int   csw |usr sys idl wai stl
>>> ...
>>>> 02-04 23:28:20| 111M    0 | 213   269 |2620  5023 |  6   7  84   3   0
>>>> 02-04 23:28:25| 110M    0 | 171   199 |4277  8007 | 13   8  72   7   0
>>>> 02-04 23:28:30| 110M    0 | 342   199 |3926  8305 | 12   7  73   6   0
>>> ...
>>>> 03-04 00:03:50|  95M    0 |   0     0 |2827  5536 |  7   6  81   5   0
>>>> 03-04 00:03:55|  93M    0 |   0     0 |4306   14k|  9   7  80   4   0
>>>> 03-04 00:04:00|  95M    0 |   0     0 |3366  6121 |  8   7  79   6   0
>>> ...
>>>> 03-04 01:18:20|  57M    0 |   0     0 |1294  2557 |  3   4  91   2   0
>>>> 03-04 01:18:25|  57M    0 |   0     0 |1184  2433 |  2   4  90   3   0
>>>> 03-04 01:18:30|  57M    0 | 592   592 |1301  2339 |  3   4  92   1   0
>>> It may be caused by something else (though probably not fragmentation -
>>> it was after an initial test backup performed to an empty disk), but it
>>> could also be some kind of memory leak in Borg which leads to keeping
>>> more and more data internally over time, causing the slowness.
>>>
>>> Do you have an idea what it could be caused by?
>>>
>>> I may try to reproduce that behavior later this month if needed. Borg
>>> 1.1.8, Fedora 29.
>>>
>>> Marcin

From liori at exroot.org Fri Apr 5 12:51:50 2019
From: liori at exroot.org (Tomasz Melcer)
Date: Fri, 5 Apr 2019 18:51:50 +0200
Subject: [Borgbackup] Decreasing speed of borg check command?
In-Reply-To:
References: <27b13557-fc1c-caf2-4609-4e32db03143c@wp.pl>
Message-ID: <56e080a8-05ee-69fe-06ce-14dcf3e84b1f@exroot.org>

On 05.04.2019 13:16, Marian Beermann wrote:
> Repository check _should be_ I/O bound unless the CPU is very slow.
>
> Archive check should generally be latency / CPU bound.

If so, would it make sense to run them in parallel?

-- 
Tomasz Melcer

From public at enkore.de Fri Apr 5 14:30:13 2019
From: public at enkore.de (Marian Beermann)
Date: Fri, 5 Apr 2019 20:30:13 +0200
Subject: [Borgbackup] Decreasing speed of borg check command?
In-Reply-To: <56e080a8-05ee-69fe-06ce-14dcf3e84b1f@exroot.org>
References: <27b13557-fc1c-caf2-4609-4e32db03143c@wp.pl> <56e080a8-05ee-69fe-06ce-14dcf3e84b1f@exroot.org>
Message-ID: <31ffe51f-3e14-54c6-0507-3d4a8b46b607@enkore.de>

Depends on the medium.

On hard drives, running them in parallel would devastate the throughput of the repository check, due to the random reads done by the archive check (hence: latency bound).

On SSDs this is more complicated, but basically a function of "depends on whether the repository check saturates the bus interface or just one channel, and how many other channels the SSD has, and how much load the controller experiences, and how much CPU resources are available, and..."

I am assuming you mean a hypothetical case where you'd run either check in independent processes. Running them in one process but multiple threads would almost certainly be much slower.

-Marian

On 05.04.19 at 18:51, Tomasz Melcer wrote:
> On 05.04.2019 13:16, Marian Beermann wrote:
>> Repository check _should be_ I/O bound unless the CPU is very slow.
>>
>> Archive check should generally be latency / CPU bound.
>
> If so, would it make sense to run them in parallel?

From eric at in3x.io Mon Apr 8 20:04:49 2019
From: eric at in3x.io (Eric S. Johansson)
Date: Mon, 8 Apr 2019 20:04:49 -0400
Subject: [Borgbackup] rate limiting local backups.
Message-ID: <739564b4-553f-473e-52d9-8cf464690726@in3x.io>

I am backing up to a local NAS via NFS, in addition to a remote repository. When I back up locally, it saturates the local Ethernet, and netdata sends me many warnings of "Ethernet full". I see there is a remote-ratelimit option. How can I rate limit local borg runs? One option would be defining localhost as a "remote" location and backing up over ssh. Are there any less ugly options? Could we have a local-ratelimit option sometime in the future?

-- 
Eric S. Johansson
eric at in3x.io
http://www.in3x.io
978-512-0272
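The ssh-to-localhost idea can be combined with the existing rate limit option; a sketch (repo path and rate are hypothetical; --remote-ratelimit is in KiB/s):

$ borg create --remote-ratelimit 20000 ssh://localhost/mnt/nas/borg-repo::{hostname}-{now} /home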
From fw at deneb.enyo.de Tue Apr 9 07:36:00 2019
From: fw at deneb.enyo.de (Florian Weimer)
Date: Tue, 09 Apr 2019 13:36:00 +0200
Subject: [Borgbackup] borg 1.1.9 crashes on SL6 due to old glibc 2.12
In-Reply-To: (Thomas Waldmann's message of "Mon, 1 Apr 2019 15:34:20 +0200")
References: <0a4d2fcf-b7a6-5d85-3a3e-2199bdc62755@gmail.com>
Message-ID: <87y34j76n3.fsf@mid.deneb.enyo.de>

* Thomas Waldmann:

>> $ borg init --e none ./borgtest
>> Remote: FATAL: this Python was compiled for a too old (g)libc and
>> misses required functionality.
>
> This is the code which triggers this exception / error message:
>
> required_funcs = {os.stat, os.utime, os.chown}
> if not os.supports_follow_symlinks.issuperset(required_funcs):
>     raise PythonLibcTooOld
>
> So, as you see, quite basic functionality like not following symlinks is
> not supported for all functions where borg needs it.

That must be an issue with the Python build. I just checked the Python 3.6 from the rh-python36 software collection on Red Hat Enterprise Linux 6.10, and it has stat/utime/chown/access support, according to os.supports_follow_symlinks.

From gmatht at gmail.com Wed Apr 10 23:15:36 2019
From: gmatht at gmail.com (John McCabe-Dansted)
Date: Thu, 11 Apr 2019 11:15:36 +0800
Subject: [Borgbackup] rate limiting local backups.
In-Reply-To: <739564b4-553f-473e-52d9-8cf464690726@in3x.io>
References: <739564b4-553f-473e-52d9-8cf464690726@in3x.io>
Message-ID:

Have you tried using the "trickle" wrapper?

https://github.com/mariusae/trickle

On Tue, 9 Apr 2019 at 08:10, Eric S. Johansson wrote:

> I am backing up to a local NAS via NFS, in addition to a remote
> repository. When I back up locally, it saturates the local Ethernet, and
> netdata sends me many warnings of "Ethernet full". I see there is a
> remote-ratelimit option. How can I rate limit local borg runs? One option
> would be defining localhost as a "remote" location and backing up over
> ssh. Are there any less ugly options? Could we have a local-ratelimit
> option sometime in the future?
>
> --
> Eric S. Johansson
> eric at in3x.io
> http://www.in3x.io
> 978-512-0272

-- 
John C. McCabe-Dansted

From eric at in3x.io Thu Apr 11 10:12:46 2019
From: eric at in3x.io (Eric S. Johansson)
Date: Thu, 11 Apr 2019 10:12:46 -0400
Subject: [Borgbackup] rate limiting local backups.
In-Reply-To:
References: <739564b4-553f-473e-52d9-8cf464690726@in3x.io>
Message-ID:

Not sure it applies.
The local borg run uses an NFS-mounted NAS, and it is the NFS traffic that triggers the netdata alarms. I need to try the localhost loopback to see if rate limiting works.

On 4/10/2019 11:15 PM, John McCabe-Dansted wrote:
> Have you tried using the "trickle" wrapper?
>
> https://github.com/mariusae/trickle
>
> On Tue, 9 Apr 2019 at 08:10, Eric S. Johansson wrote:
>> I am backing up to a local NAS via NFS, in addition to a remote
>> repository. When I back up locally, it saturates the local Ethernet, and
>> netdata sends me many warnings of "Ethernet full". I see there is a
>> remote-ratelimit option. How can I rate limit local borg runs? One option
>> would be defining localhost as a "remote" location and backing up over
>> ssh. Are there any less ugly options? Could we have a local-ratelimit
>> option sometime in the future?

-- 
Eric S. Johansson
eric at in3x.io
http://www.in3x.io
978-512-0272

From clickwir at gmail.com Thu Apr 11 12:18:14 2019
From: clickwir at gmail.com (Zack Coffey)
Date: Thu, 11 Apr 2019 10:18:14 -0600
Subject: [Borgbackup] rate limiting local backups.
In-Reply-To:
References: <739564b4-553f-473e-52d9-8cf464690726@in3x.io>
Message-ID:

Is it a problem that it runs full speed, or do you just not want netdata to complain?

Honestly answering that will give you direction.

On Thu, Apr 11, 2019, 8:12 AM Eric S. Johansson wrote:

> Not sure it applies. The local borg run uses an NFS-mounted NAS, and it is
> the NFS traffic that triggers the netdata alarms. I need to try the
> localhost loopback to see if rate limiting works.
>
> On 4/10/2019 11:15 PM, John McCabe-Dansted wrote:
>> Have you tried using the "trickle" wrapper?
>>
>> https://github.com/mariusae/trickle
>>
>> On Tue, 9 Apr 2019 at 08:10, Eric S. Johansson wrote:
>>> I am backing up to a local NAS via NFS, in addition to a remote
>>> repository. When I back up locally, it saturates the local Ethernet, and
>>> netdata sends me many warnings of "Ethernet full". I see there is a
>>> remote-ratelimit option. How can I rate limit local borg runs? One option
>>> would be defining localhost as a "remote" location and backing up over
>>> ssh. Are there any less ugly options? Could we have a local-ratelimit
>>> option sometime in the future?

From matej.kovacic at telefoncek.si Thu Apr 11 14:21:24 2019
From: matej.kovacic at telefoncek.si (=?UTF-8?B?TWF0ZWogS292YcSNacSN?=)
Date: Thu, 11 Apr 2019 20:21:24 +0200
Subject: [Borgbackup] rate limiting local backups.
In-Reply-To:
References: <739564b4-553f-473e-52d9-8cf464690726@in3x.io>
Message-ID: <820d1c03-0f54-ed84-ca8c-ac40f8f04058@telefoncek.si>

Hi,

> Is it a problem that it runs full speed, or do you just not want netdata
> to complain?

For me, the problem is that when BorgBackup is running, it eats all my internet bandwidth, so I am unable to use the internet normally.

So rate limiting would be a good idea. Or even prioritizing its traffic at the lowest rate.

Regards,
Matej

-- 
PGP Fingerprint: CAB3 88B5 69F0 226C 7A5A 8C16 535C 4A5A 666F 1CCE
PGP Key: https://keyserver.ubuntu.com/pks/lookup?search=0x535C4A5A666F1CCE&fingerprint=on&op=vindex
Personal blog: https://telefoncek.si

From matej.kovacic at telefoncek.si Thu Apr 11 16:29:03 2019
From: matej.kovacic at telefoncek.si (=?UTF-8?B?TWF0ZWogS292YcSNacSN?=)
Date: Thu, 11 Apr 2019 22:29:03 +0200
Subject: [Borgbackup] rate limiting local backups.
In-Reply-To: <820d1c03-0f54-ed84-ca8c-ac40f8f04058@telefoncek.si>
References: <739564b4-553f-473e-52d9-8cf464690726@in3x.io> <820d1c03-0f54-ed84-ca8c-ac40f8f04058@telefoncek.si>
Message-ID: <10737e95-08d3-3303-6866-8b47600676af@telefoncek.si>

Hi,

anyway, you can use trickle (on Linux):

trickle -d 500 -u 100 borg

This will run borg with a maximum of 500 KB/s download and 100 KB/s upload.

Regards,
M.

-- 
PGP Fingerprint: CAB3 88B5 69F0 226C 7A5A 8C16 535C 4A5A 666F 1CCE
PGP Key: https://keyserver.ubuntu.com/pks/lookup?search=0x535C4A5A666F1CCE&fingerprint=on&op=vindex
Personal blog: https://telefoncek.si

From eric at in3x.io Thu Apr 11 21:56:22 2019
From: eric at in3x.io (Eric S. Johansson)
Date: Thu, 11 Apr 2019 21:56:22 -0400
Subject: [Borgbackup] rate limiting local backups.
In-Reply-To:
References: <739564b4-553f-473e-52d9-8cf464690726@in3x.io>
Message-ID:

I don't want netdata to trigger when backups fill the link. I need netdata to trigger when something other than backup fills the link (or something else breaks). Right now, I get so many alerts that I'm missing the real ones.

On 4/11/2019 12:18 PM, Zack Coffey wrote:
> Is it a problem that it runs full speed, or do you just not want netdata
> to complain?
>
> Honestly answering that will give you direction.
>
> On Thu, Apr 11, 2019, 8:12 AM Eric S. Johansson wrote:
>> Not sure it applies. The local borg run uses an NFS-mounted NAS, and it
>> is the NFS traffic that triggers the netdata alarms. I need to try the
>> localhost loopback to see if rate limiting works.
>>
>> On 4/10/2019 11:15 PM, John McCabe-Dansted wrote:
>>> Have you tried using the "trickle" wrapper?
>>>
>>> https://github.com/mariusae/trickle
>>>
>>> On Tue, 9 Apr 2019 at 08:10, Eric S. Johansson wrote:
>>>> I am backing up to a local NAS via NFS, in addition to a remote
>>>> repository. When I back up locally, it saturates the local Ethernet,
>>>> and netdata sends me many warnings of "Ethernet full". I see there is
>>>> a remote-ratelimit option. How can I rate limit local borg runs? One
>>>> option would be defining localhost as a "remote" location and backing
>>>> up over ssh. Are there any less ugly options? Could we have a
>>>> local-ratelimit option sometime in the future?

-- 
Eric S. Johansson
eric at in3x.io
http://www.in3x.io
978-512-0272

From brad at undercovergarys.com Fri Apr 12 10:15:27 2019
From: brad at undercovergarys.com (Brad Wilson)
Date: Fri, 12 Apr 2019 16:15:27 +0200
Subject: [Borgbackup] Read-only filesystems
Message-ID: <87pnpr5myo.fsf@gadsden>

Is there any way to borg mount a repository that is inside another mounted borg repository?

Making backups of backups seems like a valid usage for borg, especially when bundling the backups of multiple borg users, where some are encrypted and others are not. The current behavior seemingly prevents borg mount from working on recursive repos. If it's impossible to mount and browse nested borg backups, what's the preferred way of handling them?

Here's the error:

borg mount /root/mounty/backup01-2019-04-05T18\:36\:14.362268/srv/storage/backups/nfs01/ /root/mountymounty/
Failed to create/acquire the lock /root/mounty/backup01-2019-04-05T18:36:14.362268/srv/storage/backups/nfs01/lock.exclusive ([Errno 30] Read-only file system: '/root/mounty/backup01-2019-04-05T18:36:14.362268/srv/storage/backups/nfs01/lock.exclusive').
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/borg/archiver.py", line 4455, in main
    exit_code = archiver.run(args)
  File "/usr/lib/python3/dist-packages/borg/archiver.py", line 4387, in run
    return set_ec(func(args))
  File "/usr/lib/python3/dist-packages/borg/archiver.py", line 1355, in do_mount
    return self._do_mount(args)
  File "/usr/lib/python3/dist-packages/borg/archiver.py", line 139, in wrapper
    with repository:
  File "/usr/lib/python3/dist-packages/borg/repository.py", line 189, in __enter__
    self.open(self.path, bool(self.exclusive), lock_wait=self.lock_wait, lock=self.do_lock)
  File "/usr/lib/python3/dist-packages/borg/repository.py", line 392, in open
    self.lock = Lock(os.path.join(path, 'lock'), exclusive, timeout=lock_wait, kill_stale_locks=hostname_is_unique()).acquire()
  File "/usr/lib/python3/dist-packages/borg/locking.py", line 353, in acquire
    with self._lock:
  File "/usr/lib/python3/dist-packages/borg/locking.py", line 114, in __enter__
    return self.acquire()
  File "/usr/lib/python3/dist-packages/borg/locking.py", line 138, in acquire
    raise LockFailed(self.path, str(err)) from None
borg.locking.LockFailed: Failed to create/acquire the lock /root/mounty/backup01-2019-04-05T18:36:14.362268/srv/storage/backups/nfs01/lock.exclusive ([Errno 30] Read-only file system: '/root/mounty/backup01-2019-04-05T18:36:14.362268/srv/storage/backups/nfs01/lock.exclusive').

Platform: Linux backup01 4.19.0-2-amd64 #1 SMP Debian 4.19.16-1 (2019-01-17) x86_64
Linux: debian buster/sid
Borg: 1.1.9  Python: CPython 3.7.2+
PID: 15280  CWD: /root/mounty/backup01-2019-04-05T18:36:14.362268
sys.argv: ['/usr/bin/borg', 'mount', '/root/mounty/backup01-2019-04-05T18:36:14.362268/srv/storage/backups/nfs01/', '/root/mountymounty/']
SSH_ORIGINAL_COMMAND: None

From imperator at jedimail.de Fri Apr 12 11:01:05 2019
From: imperator at jedimail.de (Sascha Ternes)
Date: Fri, 12 Apr 2019 17:01:05 +0200
Subject: [Borgbackup] Read-only filesystems
In-Reply-To: <87pnpr5myo.fsf@gadsden>
References: <87pnpr5myo.fsf@gadsden>
Message-ID: <06b0ee0e-7711-1f7f-9d6f-41253e155385@jedimail.de>

Hey Brad,

On 12.04.19 at 16:15, Brad Wilson wrote:
> Is there any way to borg mount a repository that is inside another
> mounted borg repository?
>
> Here's the error:
>
> borg mount /root/mounty/backup01-2019-04-05T18\:36\:14.362268/srv/storage/backups/nfs01/ /root/mountymounty/
> Failed to create/acquire the lock /root/mounty/backup01-2019-04-05T18:36:14.362268/srv/storage/backups/nfs01/lock.exclusive ([Errno 30]
> Read-only file system: '/root/mounty/backup01-2019-04-05T18:36:14.362268/srv/storage/backups/nfs01/lock.exclusive').

borgfs is a read-only file system. Every time borg opens a repository, it writes some data (e.g. locks) into it. So you won't be successful mounting a borg repo within a borg repo the direct way.

It may be possible if you create a second directory, copy the repo base files there, and create a symbolic link to the data directory. Then you may borg mount the subrepo from it. borg will write to the copy and can create locks in the second directory.
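A sketch of that workaround (untested; $RO is the read-only inner repo, and the file names assume the standard borg repo layout):

$ RO=/root/mounty/backup01-2019-04-05T18:36:14.362268/srv/storage/backups/nfs01
$ mkdir /root/nfs01-rw
$ cp $RO/config $RO/README /root/nfs01-rw/       # small base files, now writable
$ ln -s $RO/data /root/nfs01-rw/data             # the large data dir stays read-only
$ borg mount /root/nfs01-rw /root/mountymounty   # locks go into the writable copy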
From liori at exroot.org Sat Apr 13 04:25:00 2019
From: liori at exroot.org (Tomasz Melcer)
Date: Sat, 13 Apr 2019 10:25:00 +0200
Subject: [Borgbackup] Read-only filesystems
In-Reply-To: <06b0ee0e-7711-1f7f-9d6f-41253e155385@jedimail.de>
References: <87pnpr5myo.fsf@gadsden> <06b0ee0e-7711-1f7f-9d6f-41253e155385@jedimail.de>
Message-ID: <9c3a4ee5-49a6-e826-098e-4a29242490dc@exroot.org>

On 12.04.2019 17:01, Sascha Ternes wrote:
> borgfs is a read-only file system. Every time borg opens a repository, it
> writes some data (e.g. locks) into it. So you won't be successful
> mounting a borg repo within a borg repo the direct way.

Looks like using some kind of overlayfs/aufs/unionfs on top of the first repository would help here:

https://en.wikipedia.org/wiki/OverlayFS
https://en.wikipedia.org/wiki/Aufs
https://en.wikipedia.org/wiki/UnionFS

...whichever is available to you.

-- 
Tomasz Melcer
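With kernel overlayfs, for example, that could look like this (a sketch; mount points are hypothetical, and whether a FUSE-backed lower layer is accepted may depend on your kernel):

$ mkdir /tmp/upper /tmp/work /mnt/rw-repo
$ mount -t overlay overlay -o lowerdir=/path/to/ro-repo,upperdir=/tmp/upper,workdir=/tmp/work /mnt/rw-repo
$ borg mount /mnt/rw-repo /root/mountymounty    # borg's lock/index writes land in /tmp/upper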
It > says in detail what --archives-only and --repository-only do, but I > don't know what that means... In short: --repository-only will do CRC and size checks of all data chunks in the repo. It won't check the files in the archives. --archives-only will check if for every file in every archive if the chunk(s) are available in the repo. > I'm backing up a ~1TB filesystem daily where almost nothing changes and > I want to run a prune and a check after the create. The check currently > takes 11 hours (slow disk & cpu). Sounds normal, since you did a full check (both repo and archives) I suppose. Doc says: "The archive checks can be time consuming, they can be skipped using the --repository-only option." > Ideally, what I'd like is for check to > do the maximum number of verifications short of reading the whole TB. So > check any indexes, directories, presence of files, etc. Just don't read > the full TB to verify each bit, I'd do that explicitly once a week or > once a month. Is this possible? I think you don't need to check after every create, esp. if "amonst nothing changes". You may check only the latest backup that was just created, via --archives-only and "--last 1" options (see doc). Once in a month you can run a full check, maybe just --repository-only. From dave at gasaway.org Sat Apr 27 14:59:10 2019 From: dave at gasaway.org (David Gasaway) Date: Sat, 27 Apr 2019 11:59:10 -0700 Subject: [Borgbackup] speeding up borg check In-Reply-To: References: <0100016a5b95d094-6021f0a8-58b7-4cfd-96b5-fff8d198dcbf-000000@email.amazonses.com> Message-ID: On Sat, Apr 27, 2019 at 11:42 AM Sascha Ternes wrote: --archives-only will check if for every file in every archive if the > chunk(s) are available in the repo. > Unless things have changed since I discussed this on the list a while back, it's important to note that this checks that the chunks exist in the chunk index. It will not detect missing segment files in the repo. > > Ideally, what I'd like is for check to > > do the maximum number of verifications short of reading the whole TB. So > > check any indexes, directories, presence of files, etc. Just don't read > > the full TB to verify each bit, I'd do that explicitly once a week or > > once a month. Is this possible? > Assuming "presence of files" means check for the expected segment files, then neither --archives-only nor --repository-only meets these requirements. I'd be happy to learn if this is no longer correct. :) -- -:-:- David K. Gasaway -:-:- Email: dave at gasaway.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From avkaplmkt at gmail.com Fri May 3 23:12:14 2019 From: avkaplmkt at gmail.com (Advrk Aplmrkt) Date: Sat, 4 May 2019 03:12:14 +0000 Subject: [Borgbackup] borg mount fails with "Local Exception" error Message-ID: Hello, I am running borg 1.1.8 on a fully updated Manjaro Linux system with Python 3.7. I've been backing up my local files to an external hard drive with no errors (I also monitor backups with the `--progress` argument and everything looks fine). 
From dave at gasaway.org Sat Apr 27 14:59:10 2019
From: dave at gasaway.org (David Gasaway)
Date: Sat, 27 Apr 2019 11:59:10 -0700
Subject: [Borgbackup] speeding up borg check
In-Reply-To:
References: <0100016a5b95d094-6021f0a8-58b7-4cfd-96b5-fff8d198dcbf-000000@email.amazonses.com>
Message-ID:

On Sat, Apr 27, 2019 at 11:42 AM Sascha Ternes wrote:

> --archives-only will check, for every file in every archive, whether its
> chunk(s) are available in the repo.

Unless things have changed since I discussed this on the list a while back, it's important to note that this checks that the chunks exist in the chunk index. It will not detect missing segment files in the repo.

>> Ideally, what I'd like is for check to do the maximum number of
>> verifications short of reading the whole TB. So: check any indexes,
>> directories, presence of files, etc. Just don't read the full TB to
>> verify each bit; I'd do that explicitly once a week or once a month. Is
>> this possible?

Assuming "presence of files" means checking for the expected segment files, then neither --archives-only nor --repository-only meets these requirements. I'd be happy to learn if this is no longer correct. :)

-- 
-:-:- David K. Gasaway
-:-:- Email: dave at gasaway.org

From avkaplmkt at gmail.com Fri May 3 23:12:14 2019
From: avkaplmkt at gmail.com (Advrk Aplmrkt)
Date: Sat, 4 May 2019 03:12:14 +0000
Subject: [Borgbackup] borg mount fails with "Local Exception" error
Message-ID:

Hello,

I am running borg 1.1.8 on a fully updated Manjaro Linux system with Python 3.7. I've been backing up my local files to an external hard drive with no errors (I also monitor backups with the `--progress` argument and everything looks fine).

However, when I try to use `borg mount` to mount and browse the contents of an archive, the command fails with the following traceback:

https://framabin.org/p/?f6a227cfe1915b54#b19ZPT3Jqg/mAERDqkgb17DefePUTCHfHOx2N1v5etY=

The key line in the error seems to be:

```
  File "/usr/lib/python3.7/site-packages/borg/fuse.py", line 160, in iter_archive_items
    item = unpacker.unpack(write_bytes)
TypeError: unpack() takes no arguments (1 given)
```

I've tried mounting archives from different repositories and get the same error. I read the documentation on `borg mount` and the FAQ. Am I missing something obvious? I'd appreciate any help troubleshooting this problem. Thank you!

From florian at whnr.de Sat May 4 01:46:30 2019
From: florian at whnr.de (Florian Wehner)
Date: Sat, 4 May 2019 01:46:30 -0400
Subject: [Borgbackup] borg mount fails with "Local Exception" error
In-Reply-To:
References:
Message-ID: <5D927DDE-2130-4907-9C9A-0E7AF028CCBF@whnr.de>

Hi,

> TypeError: unpack() takes no arguments (1 given)

I think you can start here: https://github.com/borgbackup/borg/issues/4245

There might be a solution in the future.

-Flo

From sammy at posteo.de Mon May 6 11:47:43 2019
From: sammy at posteo.de (Samuel)
Date: Mon, 6 May 2019 17:47:43 +0200
Subject: [Borgbackup] Off-Site Backup
Message-ID: <2f7002ea-56bb-3264-d42a-aea4efe32f10@posteo.de>

Hello,

I guess I have a somewhat specific question. I run a home server which does local backups to an internal hard drive. Additionally, some laptops are doing their backups into the same borg repository.

Now I want to do an off-site backup, but I am not sure how to get the data from the laptops to the off-site backup. For the data on the home server I'm just going to follow the instructions here:

https://borgbackup.readthedocs.io/en/stable/faq.html#can-i-copy-or-synchronize-my-repo-to-another-location

and create two backups in separate locations.

For the data on the laptops this is inconvenient, as I want the off-site backup to run automatically, while on the laptops the backups are started manually. Is there a way to back up the already backed-up data on the server to the off-site location?

Thank you very much
Samuel
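The FAQ entry linked above boils down to synchronizing the repository directory while no borg process is accessing it; as a sketch (host and paths hypothetical):

$ rsync -a --delete /srv/backups/borg-repo/ offsite-host:/srv/backups/borg-repo/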
From sitaramc at gmail.com Tue May 7 00:09:43 2019
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Tue, 7 May 2019 09:39:43 +0530
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu>
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu>
Message-ID:

On 06/05/2019 21.34, Felix Schwarz wrote:

> Unfortunately Fedora 30 ships msgpack 0.6.1, but there is no released
> version of borgbackup which can handle this version of msgpack.
>
> I'm looking for stop-gap measures to get borgbackup running in Fedora 30
> again (must be able to distribute this via Fedora's package manager, so
> no local one-off builds).

Are you the Fedora package maintainer for this tool?

If so, you probably know that this has happened once in the past -- though I cannot quite recall exactly when. Anyway, I definitely got a sense of deja vu seeing those errors, as well as seeing the patch at
https://github.com/borgbackup/borg/commit/0ebfaa5b61a675c22cda301bc20d0b00372dd181

I'm just mentioning it because you said "looking for stop-gap measures". Maybe it's better to think about bundling msgpack with borg as a matter of routine; i.e., ignore whatever msgpack is installed in the system and use its own, on a regular basis.

(My current solution is to downgrade msgpack, since I found nothing else was using it; credit to https://bugzilla.redhat.com/show_bug.cgi?id=1669083#c11 for this)

From tw at waldmann-edv.de Tue May 7 09:26:27 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Tue, 7 May 2019 15:26:27 +0200
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu>
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu>
Message-ID: <5a78a3fd-0ab9-2919-0907-19052f764114@waldmann-edv.de>

> how comprehensive is borgbackup's test suite when it comes to msgpack?

Good question.

borg internally uses msgpack quite a lot, so any major breakage would blow up things quite spectacularly.

OTOH, the test suite is not made to somehow test for the slightly incompatible changes in the msgpack API - borg expects to be run with the msgpack versions as required in setup.py, and it always installs correct versions when using pip / tox / etc.

> If the test suite does not report any errors, may I assume that the used
> version of msgpack is likely working correctly?

For the existing msgpack versions, I guess this could be found out by digging through the issue tracker (incl. closed issues) and the mailing list, to see whether all the problems encountered there would also be encountered when running the tests.

You can't really be sure for future msgpack changes, though.
For borg 1.2, I wrapped the whole msgpack API to avoid such pain in the future (or at least be able to more quickly / easily adapt to it), but that doesn't help us for 1.1.

> Unfortunately Fedora 30 ships msgpack 0.6.1, but there is no released
> version of borgbackup which can handle this version of msgpack.

What I was wondering about:

Is there no process in Linux distributions that holds maintainers back from upgrading to new package versions, if the upgrade breaks existing and well-declared dependencies?

Also, is there something else actually requiring msgpack >= 0.6.0, so the upgrade was needed for that? Or is it just for the fun of having the latest version? :)

> I assume the simplest way to achieve this is to bundle msgpack 0.5.6 into
> borgbackup for distribution.

Sounds good (assuming borg actually only loads the bundled code and not the system-wide one, if also installed).

Make sure the bundled stuff also has the shared libs compiled from the msgpack C code, or borg will be slow and emit a "using slow msgpack" warning if it falls back to the pure-Python msgpack code.

The borg 1.1.9 release has a runtime check for the msgpack version and exits if it does not meet requirements.

> During packaging we run borg's test suite.
> Assuming the test suite passes, may I assume that borg will be fine-ish?

It's better than not running the tests, but I am not totally sure it is sufficient.

> (I know that is a hack and I don't like doing that to such an important
> package like borg. But on the other hand borg is unusable on Fedora 30
> right now. The only thing which could be worse is if borg starts eating
> the users' backup data.)

Assuming you must keep the "too new" msgpack package, bundling is maybe the best option - please give feedback about that on our issue tracker, so maybe we could implement it upstream or at least other package maintainers can find it there.

Another option (for users, independent of the distribution) is to use our "fat binary" from the GitHub releases page.

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From tw at waldmann-edv.de Tue May 7 09:39:39 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Tue, 7 May 2019 15:39:39 +0200
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To:
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu>
Message-ID:

> Anyway, I definitely got a sense of deja vu seeing those errors, as well
> as seeing the patch at
> https://github.com/borgbackup/borg/commit/0ebfaa5b61a675c22cda301bc20d0b00372dd181

That patch is just for the borg master branch, which has already been adapted before to work with recent msgpack versions. If you dig in the issue tracker and git history, you'll see a lot of msgpack-related changes in the master branch (but not in the 1.1-maint branch).

msgpack releases are of quite varying quality; there have been a lot of buggy releases a while ago. Also, there are sometimes problematic API changes.

OTOH, having a working msgpack is quite essential for borg working correctly and for people having working backups and restores.

This is why I got rather picky about msgpack releases, and why the requirements in borg's setup.py are that strict and new msgpack releases only get added there after some testing.
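(In spirit, the 1.2 wrapper mentioned above is a thin module that every other part of the codebase imports instead of msgpack itself, so version-specific defaults live in exactly one place. A sketch; the module layout and parameter choices here are illustrative, not borg's actual code:)

```python
# Illustrative sketch of a msgpack compatibility shim, not borg's
# actual module: callers import packb/unpackb from here instead of
# using msgpack directly.
import msgpack

def packb(obj, **kw):
    # packing defaults changed across msgpack releases; pin them here
    kw.setdefault('use_bin_type', False)
    return msgpack.packb(obj, **kw)

def unpackb(data, **kw):
    # same for unpacking ('raw' exists since the msgpack 0.5 series)
    kw.setdefault('raw', True)
    return msgpack.unpackb(data, **kw)
```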
> (My current solution is to downgrade msgpack, since I found nothing else
> was using it; credit to
> https://bugzilla.redhat.com/show_bug.cgi?id=1669083#c11 for this)

The question here is:
- is nothing else requiring the newest msgpack ON YOUR SYSTEM
- is nothing else requiring it considering all existing dist packages

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From sitaramc at gmail.com Tue May 7 20:15:37 2019
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Wed, 8 May 2019 05:45:37 +0530
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To:
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu>
Message-ID: <20190508001537.GA28830@sita-dell>

On Tue, May 07, 2019 at 03:39:39PM +0200, Thomas Waldmann wrote:
> > (My current solution is to downgrade msgpack, since I found nothing else
> > was using it; credit to
> > https://bugzilla.redhat.com/show_bug.cgi?id=1669083#c11 for this)
>
> The question here is:
> - is nothing else requiring the newest msgpack ON YOUR SYSTEM
> - is nothing else requiring it considering all existing dist packages

Absolutely nothing else on my system even uses msgpack, regardless of version. Checked by "dnf remove" and looking at the list of dependent packages that would be removed.

However, this may not be true for everyone, so it won't help people who are doing package maintenance for distros.

From ndbecker2 at gmail.com Tue May 7 20:40:32 2019
From: ndbecker2 at gmail.com (Neal Becker)
Date: Tue, 7 May 2019 20:40:32 -0400
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To: <20190508001537.GA28830@sita-dell>
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu> <20190508001537.GA28830@sita-dell>
Message-ID:

Dnf repoquery --whatrequires should provide information, IIRC

On Tue, May 7, 2019, 8:16 PM Sitaram Chamarty wrote:

> On Tue, May 07, 2019 at 03:39:39PM +0200, Thomas Waldmann wrote:
> > > (My current solution is to downgrade msgpack, since I found nothing
> > > else was using it; credit to
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1669083#c11 for this)
> >
> > The question here is:
> > - is nothing else requiring the newest msgpack ON YOUR SYSTEM
> > - is nothing else requiring it considering all existing dist packages
>
> Absolutely nothing else on my system even uses msgpack,
> regardless of version. Checked by "dnf remove" and looking at
> the list of dependent packages that would be removed.
>
> However, this may not be true for everyone, so it won't help
> people who are doing package maintenance for distros.
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sitaramc at gmail.com Tue May 7 23:32:11 2019
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Wed, 8 May 2019 09:02:11 +0530
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To:
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu> <20190508001537.GA28830@sita-dell>
Message-ID: <9efd9ed0-0fb7-773f-fd07-5d6fc7bb8e27@gmail.com>

On 08/05/2019 06.10, Neal Becker wrote:
> Dnf repoquery --whatrequires should provide information, IIRC

unfortunately that's not recursive so you could miss something if there's an A -> B -> C dependency.
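For the distro case, dnf can enumerate reverse dependencies from the repo metadata; e.g. (package name as in current Fedora; --recursive assumes a recent dnf):

  # direct reverse dependencies
  dnf repoquery --whatrequires python3-msgpack

  # follow indirect A -> B -> C chains, too
  dnf repoquery --whatrequires python3-msgpack --recursive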
From felix.schwarz at oss.schwarz.eu Fri May 10 02:59:52 2019
From: felix.schwarz at oss.schwarz.eu (Felix Schwarz)
Date: Fri, 10 May 2019 08:59:52 +0200
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To:
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu>
Message-ID: <8a5d0a78-5521-bf39-2a8a-e0cc07c8fbbf@oss.schwarz.eu>

On 07.05.19 06:09, Sitaram Chamarty wrote:
> Are you the Fedora package maintainer for this tool?

Yes and no. Technically I'm a co-maintainer. Practically speaking, I participated only a bit in the initial Fedora review.

> I'm just mentioning it because you said "looking for stop-gap measures".
> Maybe it's better to think about bundling msgpack with borg as a matter
> of routine; i.e., ignore whatever msgpack is installed in the system and
> use its own, on a regular basis.

I don't think this would be a good idea. I'm not strictly opposed to bundling, but only when we have no other choice. In the end, borg does play nice with system libraries and does not require custom patches for half of its dependencies (unlike, e.g., chromium).

But having some infrastructure in place to bundle msgpack if necessary seems to be a good idea.

Felix

From felix.schwarz at oss.schwarz.eu Fri May 10 03:01:58 2019
From: felix.schwarz at oss.schwarz.eu (Felix Schwarz)
Date: Fri, 10 May 2019 09:01:58 +0200
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To: <8a5d0a78-5521-bf39-2a8a-e0cc07c8fbbf@oss.schwarz.eu>
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu> <8a5d0a78-5521-bf39-2a8a-e0cc07c8fbbf@oss.schwarz.eu>
Message-ID:

PS: As you are already aware of the Fedora issue, you probably also got my posts about a Koji build which bundles msgpack:
https://koji.fedoraproject.org/koji/taskinfo?taskID=34754384

I'd be glad if you could try running it and report the findings. Caveats apply - do not use this on your most important backup data :-)

From sitaramc at gmail.com Fri May 10 03:07:17 2019
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Fri, 10 May 2019 12:37:17 +0530
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To: <8a5d0a78-5521-bf39-2a8a-e0cc07c8fbbf@oss.schwarz.eu>
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu> <8a5d0a78-5521-bf39-2a8a-e0cc07c8fbbf@oss.schwarz.eu>
Message-ID: <0829b04a-78f9-dcec-3033-34b0252ebb11@gmail.com>

On 10/05/2019 12.29, Felix Schwarz wrote:
> On 07.05.19 06:09, Sitaram Chamarty wrote:
>> Are you the Fedora package maintainer for this tool?
>
> Yes and no. Technically I'm a co-maintainer. Practically speaking, I
> participated only a bit in the initial Fedora review.
>
>> I'm just mentioning it because you said "looking for stop-gap measures".
>> Maybe it's better to think about bundling msgpack with borg as a matter
>> of routine; i.e., ignore whatever msgpack is installed in the system and
>> use its own, on a regular basis.
>
> I don't think this would be a good idea. I'm not strictly opposed to
> bundling, but only when we have no other choice. In the end, borg does
> play nice with system libraries and does not require custom patches for
> half of its dependencies (unlike, e.g., chromium).

I suggested bundling, not because Borg does not play nice, but because msgpack is not "nice enough", so to speak, and so borg has to "[get] rather picky" about msgpack versions [1].
[1]: https://mail.python.org/pipermail/borgbackup/2019q2/001364.html

As such, it is fair to consider the msgpack *version* a much more tightly coupled dependency for borg, albeit not the fault of borg itself. However, I agree this is somewhat subjective.

regards
sitaram

--
sitaram

From felix.schwarz at oss.schwarz.eu Fri May 10 03:24:04 2019
From: felix.schwarz at oss.schwarz.eu (Felix Schwarz)
Date: Fri, 10 May 2019 09:24:04 +0200
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To: <5a78a3fd-0ab9-2919-0907-19052f764114@waldmann-edv.de>
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu> <5a78a3fd-0ab9-2919-0907-19052f764114@waldmann-edv.de>
Message-ID:

Hi Thomas,

thank you very much for your response. I built an RPM with a bundled version of msgpack 0.5.6 (which is the current version in Fedora 29 anyway).

On 07.05.19 15:26, Thomas Waldmann wrote:
> borg internally uses msgpack quite a lot, so any major breakage would
> blow up things quite spectacularly.

Tests were run with

py.test-3 -x -vk "not test_non_ascii_acl and not test_fuse and not benchmark and not test_dash_open and not test_mount_hardlinks" $PYTHONPATH/borg/testsuite/*.py

and everything passed, so I hope everything is fine.

One thing I noticed was that I got test failures when running 1.1.9 with a pure-Python msgpack. That cost me a bit of time until I noticed this seems to be a problem in upstream borg (not my initial RPM packaging). Are you interested in a bug report? (I don't care too much now, as I'm looking forward to 1.2.)

> What I was wondering about:
>
> Is there no process in Linux distributions that holds maintainers back
> from upgrading to new package versions, if the upgrade breaks existing
> and well-declared dependencies?

Short answer: "no".

Basically the current breakage seems to be a volunteer/maintainer problem: There is a maintainer for python-msgpack who wants to provide the latest msgpack, which should fix a security issue. I guess this is not relevant for borgbackup as it does not work on untrusted data. However, there are also other dependent packages in Fedora (+ user code which might use python-msgpack) which might be affected.

On the other side, borgbackup in Fedora is really only maintained by a single volunteer. The breakage was spotted early on (in January), but for whatever reason no action was taken for the Fedora borgbackup package. As I had not been doing "maintainer stuff" for borgbackup in years, I did not prioritize the bug until F30 was published and it became clear to me that nobody but me was willing to invest time in that.

Yes, Fedora's tooling is inadequate for that - allowing maintainers to push packages which break the upgrade path or other packages. However, in this situation it would have probably resulted in borgbackup being removed altogether from Fedora 30.

> Assuming you must keep the "too new" msgpack package, bundling is maybe
> the best option - please give feedback about that on our issue tracker,
> so maybe we could implement it upstream or at least other package
> maintainers can find it there.

I'm attaching my patches here - not sure if they really help upstream.

1. I patched python-msgpack to use only relative imports. That way I can move the msgpack sources into src/borg/_msgpack (Python package "borg._msgpack"). I don't know why upstream only uses absolute imports, but that part might actually go upstream.

2. Adding the two Cython files to borg's setup.py. msgpack uses Cython's C++ compiler, so I did the same.

3. Patching borg to use msgpack from borg._msgpack - that patch would be much smaller with borg 1.2, as you added a msgpack "shim" module.

So if you'd like to support bundling inside of borgbackup, I could try to find time to refine my patch #2.

Felix
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0010-msgpack-use-relative-imports.patch
Type: text/x-patch
Size: 3021 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0011-also-build-msgpack.patch
Type: text/x-patch
Size: 4234 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0012-use-bundled-msgpack.patch
Type: text/x-patch
Size: 5446 bytes
Desc: not available
URL:

From tw at waldmann-edv.de Sun May 12 15:39:24 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sun, 12 May 2019 21:39:24 +0200
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To:
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu> <5a78a3fd-0ab9-2919-0907-19052f764114@waldmann-edv.de>
Message-ID:

> I built an RPM with a bundled version of msgpack 0.5.6 (which is the
> current version in Fedora 29 anyway).

Great, that's the latest supported version.

> py.test-3 -x -vk "not test_non_ascii_acl and not test_fuse and not
> benchmark and not test_dash_open and not test_mount_hardlinks"
> $PYTHONPATH/borg/testsuite/*.py

Hmm, what's the problem with test_dash_open?

Guess I'm gonna rename test_mount_hardlinks to test_fuse_mount_hardlinks to make excluding it easier.

> One thing I noticed was that I got test failures when running 1.1.9 with
> a pure-Python msgpack. That cost me a bit of time until I noticed this
> seems to be a problem in upstream borg (not my initial RPM packaging).
>
> Are you interested in a bug report?

Yes, please!

>> What I was wondering about:
>>
>> Is there no process in Linux distributions that holds maintainers back
>> from upgrading to new package versions, if the upgrade breaks existing
>> and well-declared dependencies?
>
> Short answer: "no".

That's a pity; guess there could be some automated process that detects breakage when it happens and notifies the one who broke it.

> Basically the current breakage seems to be a volunteer/maintainer problem:
>
> There is a maintainer for python-msgpack who wants to provide the latest
> msgpack, which should fix a security issue.

IIRC it is not really a security fix, rather safer defaults. Not relevant for borg, as we don't use the defaults at critical places anyway.

> As I had not been doing "maintainer stuff" for borgbackup in years, I did
> not prioritize the bug until F30 was published and it became clear to me
> that nobody but me was willing to invest time in that.

Thanks for jumping in!

> I'm attaching my patches here - not sure if they really help upstream.

Thanks, I'll have a look. Not sure yet if I want to bundle that, we'll see.

> 1. I patched python-msgpack to use only relative imports. That way I can
> move the msgpack sources into src/borg/_msgpack (Python package
> "borg._msgpack"). I don't know why upstream only uses absolute imports,
> but that part might actually go upstream.

Guess that would be useful.
> 2. Adding the two Cython files to borg's setup.py. msgpack uses Cython's
> C++ compiler, so I did the same.
>
> 3. Patching borg to use msgpack from borg._msgpack - that patch would be
> much smaller with borg 1.2, as you added a msgpack "shim" module.
>
> So if you'd like to support bundling inside of borgbackup, I could try to
> find time to refine my patch #2.

I'll have a look at the patches.

Guess what might be a nice way would be to do it like for the other libs / bundled sources:

- default to "system" (lib)
- have a switch for "bundled"
- also fall back to bundled if not found on the system

Guess that way everybody would be happy.

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From felix.schwarz at oss.schwarz.eu Sun May 12 15:56:46 2019
From: felix.schwarz at oss.schwarz.eu (Felix Schwarz)
Date: Sun, 12 May 2019 21:56:46 +0200
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To:
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu> <5a78a3fd-0ab9-2919-0907-19052f764114@waldmann-edv.de>
Message-ID: <2c186a65-d4ca-eb90-3d0f-772cc28d3abc@oss.schwarz.eu>

On 12.05.19 21:39, Thomas Waldmann wrote:
>> py.test-3 -x -vk "not test_non_ascii_acl and not test_fuse and not
>> benchmark and not test_dash_open and not test_mount_hardlinks"
>> $PYTHONPATH/borg/testsuite/*.py
>
> Hmm, what's the problem with test_dash_open?

I did not add the exclusion, but the spec file has a comment:

# exclude test_dash_open: pytest stub has a bug and is fixed in 3.0.2 (epel7 uses 2.8.5)

I think the spec file is shared between Fedora + CentOS 7 (= EPEL), and just excluding that test might make the spec simpler. (EPEL 7 now uses 2.9.2.)

Right now I see a hang in the test suite on Fedora's s390x builder:
https://koji.fedoraproject.org/koji/taskinfo?taskID=34754384

Jerry James mentioned that it hangs in RemoteArchiverTestCase::test_extract_hardlinks, see also:
https://lists.fedoraproject.org/archives/list/devel at lists.fedoraproject.org/thread/SDBGO7IEEMFW3VHMOVZQ73RGISNOJ6CU/

Just in case you have some idea off the top of your head, please let me know.

>> 1. I patched python-msgpack to use only relative imports. That way I can
>> move the msgpack sources into src/borg/_msgpack (Python package
>> "borg._msgpack"). I don't know why upstream only uses absolute imports,
>> but that part might actually go upstream.
>
> Guess that would be useful.

Most of that patch was merged by upstream today:
https://github.com/msgpack/msgpack-python/pull/357

> Guess what might be a nice way would be to do it like for the other libs
> / bundled sources:
> - default to "system" (lib)
> - have a switch for "bundled"
> - also fall back to bundled if not found on the system
>
> Guess that way everybody would be happy.

Yes, that would be the best way for sure.

Felix
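(Thomas's default-to-system / fall-back-to-bundled idea, expressed as an import sketch. This is illustrative only, reusing the borg._msgpack package name from Felix's patches:)

```python
# Illustrative sketch, not actual borg code:
try:
    import msgpack  # prefer the system / virtualenv msgpack
except ImportError:
    # fall back to a bundled copy, e.g. as prepared by patch #1 above
    from borg import _msgpack as msgpack
```

A build-time switch could additionally force the bundled copy even when a system msgpack is installed.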
From tw at waldmann-edv.de Sun May 12 19:57:17 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 13 May 2019 01:57:17 +0200
Subject: [Borgbackup] bundling msgpack for borgback distribution
In-Reply-To: <2c186a65-d4ca-eb90-3d0f-772cc28d3abc@oss.schwarz.eu>
References: <46679f43-1b2d-c8f6-3447-d9b74a5775d9@oss.schwarz.eu> <5a78a3fd-0ab9-2919-0907-19052f764114@waldmann-edv.de> <2c186a65-d4ca-eb90-3d0f-772cc28d3abc@oss.schwarz.eu>
Message-ID: <6e9b6581-0439-a999-8dbc-69e74be14c32@waldmann-edv.de>

>> Hmm, what's the problem with test_dash_open?
>
> I did not add the exclusion, but the spec file has a comment:
> # exclude test_dash_open: pytest stub has a bug and is fixed in 3.0.2
> (epel7 uses 2.8.5)
>
> I think the spec file is shared between Fedora + CentOS 7 (= EPEL), and
> just excluding that test might make the spec simpler. (EPEL 7 now uses
> 2.9.2.)

Ah, ok.

> Right now I see a hang in the test suite on Fedora's s390x builder:
> https://koji.fedoraproject.org/koji/taskinfo?taskID=34754384

build/lib.linux-s390x-3.7/borg/testsuite/archiver.py::RemoteArchiverTestCase::test_extract_capabilities SKIPPED [ 15%]

That is the last line in the log. But as it says skipped, I suspect it hangs in the test afterwards.

> Jerry James mentioned that it hangs in
> RemoteArchiverTestCase::test_extract_hardlinks, see also:
> https://lists.fedoraproject.org/archives/list/devel at lists.fedoraproject.org/thread/SDBGO7IEEMFW3VHMOVZQ73RGISNOJ6CU/

If it is that, it will be fixed in borg 1.1.10. Somehow this is a bug that only bites sometimes; just rerunning the test can make it pass.

The real fix will be in 1.1.10. It was a problem when the master hardlink's content chunks did not get preloaded (because not selected for extraction), but the (selected) slave hardlink expected them to be preloaded...

> Most of that patch was merged by upstream today:
> https://github.com/msgpack/msgpack-python/pull/357

Cool. :)

I've made a PR from your patches and will try to have that in 1.1.10 also.

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From tw at waldmann-edv.de Thu May 16 00:00:08 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 16 May 2019 06:00:08 +0200
Subject: [Borgbackup] borgbackup 1.1.10 released!
Message-ID: <5f087a71-08ec-65db-eae9-84c82b9a6bfb@waldmann-edv.de>

borgbackup 1.1.10 released with bug fixes and bundled msgpack.

https://github.com/borgbackup/borg/releases/tag/1.1.10

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From delovan at gmail.com Fri May 17 04:56:19 2019
From: delovan at gmail.com (Damien Gustave)
Date: Fri, 17 May 2019 10:56:19 +0200
Subject: [Borgbackup] Backup process of a huge archive is sometimes fast, and sometimes very slow
Message-ID:

Hello all,

I back up a huge folder of 700+ GB, with daily backups of it. Most of the time the process is quick enough, only taking 15 minutes to complete:

Time (start): Thu, 2019-05-16 03:53:42
Time (end):   Thu, 2019-05-16 10:55:10
Duration: 7 hours 1 minutes 27.98 seconds
Number of files: 1650426
Utilization of max. archive size: 0%
---------
                   Original size      Compressed size      Deduplicated
This archive:          726.59 GB            706.02 GB              4.60
All archives:           45.60 TB             44.16 TB            729.28
                   Unique chunks         Total chunks
Chunk index:             1692888            127730487

But other times, it takes several hours, like this:

Time (start): Thu, 2019-05-16 03:53:42
Time (end):   Thu, 2019-05-16 10:55:10
Duration: 7 hours 1 minutes 27.98 seconds
Number of files: 1650426
Utilization of max. archive size: 0%
---------
                   Original size      Compressed size      Deduplicated
This archive:          726.59 GB            706.02 GB              4.60
All archives:           45.60 TB             44.16 TB            729.28
                   Unique chunks         Total chunks
Chunk index:             1692888            127730487

I obviously do not control what files/folders are changed in the tree.

I run two borg backups in parallel to save to two different backup servers. They process the files at the same time, whether the result is fast or slow.

The source server is on AWS; it's an EBS running on an m5.large server.

Do you have any idea why such a difference exists? Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tw at waldmann-edv.de Fri May 17 05:46:29 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 17 May 2019 11:46:29 +0200
Subject: [Borgbackup] Backup process of a huge archive is sometimes fast, and sometimes very slow
In-Reply-To:
References:
Message-ID:

> Most of the time the process is quick enough, only taking 15 minutes to
> complete:
>
> Time (start): Thu, 2019-05-16 03:53:42
> Time (end):   Thu, 2019-05-16 10:55:10
> Duration: 7 hours 1 minutes 27.98 seconds

Guess you inserted the wrong log here, this was also a slow run.

> This archive:          726.59 GB            706.02 GB              4.60

Also, the unit for the rightmost column got truncated, but would be important.

> I obviously do not control what files/folders are changed in the tree.

Run the borg create with the --list option, so you'll see the status for each file.

I suspect that for the slow runs, borg detects many (all?) files as potentially changed.

That can be either a content change or a metadata change of size / ctime / inode number (size of course also means a content change in any case, ctime could also be just a metadata change like ACLs or xattrs, and inodes could just be unstable due to the filesystem you are using - network filesystems often have unstable inodes).

ls -i shows the inode number for a file.

> I run two borg backups in parallel to save to two different backup
> servers. They process the files at the same time, whether the result is
> fast or slow.

Not the best setup for time measurements, though. Also, when you have a cloud server, there might be a lot of other circumstances influencing your measurement.

Maybe not the root cause for this huge difference, just saying.

> The source server is on AWS; it's an EBS running on an m5.large server.

The source filesystem with the data you back up is ...?

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From delovan at gmail.com Fri May 17 06:45:24 2019
From: delovan at gmail.com (Damien Gustave)
Date: Fri, 17 May 2019 12:45:24 +0200
Subject: [Borgbackup] Backup process of a huge archive is sometimes fast, and sometimes very slow
In-Reply-To:
References:
Message-ID:

Hello Thomas, thank you for your answer!

> Guess you inserted the wrong log here, this was also a slow run.

I indeed copy/pasted the exact same log, sorry for that. The quicker one is this one:

Time (start): Wed, 2019-05-15 09:09:13
Time (end):   Wed, 2019-05-15 09:21:23
Duration: 12 minutes 9.79 seconds
Number of files: 1648114
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                   Original size      Compressed size    Deduplicated size
This archive:          722.00 GB            701.46 GB              3.90 GB
All archives:           44.88 TB             43.45 TB            724.68 GB
                   Unique chunks         Total chunks
Chunk index:             1687853            125875061
------------------------------------------------------------------------------

In both cases the unit of the deduplicated column was GB, so not so much difference in size.

> Run the borg create with the --list option, so you'll see the status for
> each file.

Great idea! I'll try with that, hoping the log size will not kill my system :).

> I suspect that for the slow runs, borg detects many (all?) files as
> potentially changed.
>
> That can be either a content change or a metadata change of size / ctime /
> inode number (size of course also means a content change in any case,
> ctime could also be just a metadata change like ACLs or xattrs, and
> inodes could just be unstable due to the filesystem you are using -
> network filesystems often have unstable inodes).

Once I get the result of the --list option, it will be easier to check these parameters. I may also try to map all the inodes from the tree to check whether they have changed.

> Not the best setup for time measurements, though.
> Also, when you have a cloud server, there might be a lot of other
> circumstances influencing your measurement.

Actually, I have one borg repo for each of my clients. This one is the biggest, and only this one is causing me trouble. I mean, maybe the others can have huge time differences too, but they are too small compared to this fat one to be noticeable in the overall process. I have other repos of 500 GB or 400 GB and I have not noticed any performance issues with those. So most likely there is a huge metadata change happening with this specific client.

> The source filesystem with the data you back up is ...?

It's ext4 over LVM.

Thank you again for your time.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
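(One way to do the inode/ctime mapping mentioned above, as a sketch using GNU find; the data path and output files are placeholders:)

  # record inode, ctime (epoch seconds), size and path for every file
  find /data -xdev -printf '%i %C@ %s %p\n' | sort -k4 > /tmp/tree-before.txt

  # repeat after the next slow run as /tmp/tree-after.txt, then compare:
  diff /tmp/tree-before.txt /tmp/tree-after.txt | less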
From tw at waldmann-edv.de Fri May 17 13:30:33 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 17 May 2019 19:30:33 +0200
Subject: [Borgbackup] Backup process of a huge archive is sometimes fast, and sometimes very slow
In-Reply-To:
References:
Message-ID: <6f12de4b-90ec-4098-9942-365444832321@waldmann-edv.de>

> It's ext4 over LVM.

If the source data is on a locally mounted ext4, inodes should be stable, so no problem there.

So you need to check if something touches the ctime, the xattrs, the ACLs, or the file contents.

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From hpj at urpla.net Thu May 23 07:03:55 2019
From: hpj at urpla.net (Hans-Peter Jansen)
Date: Thu, 23 May 2019 13:03:55 +0200
Subject: [Borgbackup] borgbackup 1.1.10 released!
In-Reply-To: <5f087a71-08ec-65db-eae9-84c82b9a6bfb@waldmann-edv.de>
References: <5f087a71-08ec-65db-eae9-84c82b9a6bfb@waldmann-edv.de>
Message-ID: <3306782.T9Sq7Ne3th@xrated>

Dear Thomas, dear Felix,

On Thursday, 16 May 2019 at 06:00:08 CEST, Thomas Waldmann wrote:
> borgbackup 1.1.10 released with bug fixes and bundled msgpack.
>
> https://github.com/borgbackup/borg/releases/tag/1.1.10

Thanks Thomas (as always) and Felix (welcome) for bundling msgpack. This is the sanest solution to borg's pickiness in this regard and eliminates a big packaging headache.

I've submitted this version to openSUSE Archiving:Backup, where it builds for a couple of archs (including ARM, PowerPC, and zSystems) and distributions with tests enabled, before submitting it to Factory. If all production tests go well, it will appear in Tumbleweed sometime next week.

@Thomas: unfortunately, build specs typically aren't very specific when it comes to package version dependencies (apart from the essentials), for a couple of reasons:

* any package version limitation is a source of more work
* quite often, developers just nail down versions out of laziness (not in your case, of course, given the number of your attempts to get this straight)
* a package that passes the provided tests is expected to be operational in the wild (again unfortunate in this case, because I haven't found a way to run the FUSE-based tests successfully in OBS)

Therefore, you will quite often find packaging patches that raise such version limits, in order to keep the distribution consistent and avoid falling into the multiple-version dependency hell you find in the rubygem ecosystem, where a package like GitLab easily depends on multiple versions of the same package for dozens of its dependencies.

This is why I love the Python ecosystem. We build a lot of PyPI packages in OBS with only a handful of version dependencies, mostly stemming from the fact that a Py3 port is still missing. OTOH, the rest of those 1600+ packages builds for Python 2 and Python 3 from a single spec, with tests enabled for the majority of them.

Since I'm guilty of being misguided in patching the borgbackup package build, I feel very sorry. Sorry. At the same time, I'm happy with the solution you provided with this version (and the continuous support you deliver). Much appreciated.

Assimilation wasn't more serious fun, ever!

Cheers,
Pete

From hpj at urpla.net Fri May 24 08:07:19 2019
From: hpj at urpla.net (Hans-Peter Jansen)
Date: Fri, 24 May 2019 14:07:19 +0200
Subject: [Borgbackup] borgbackup 1.1.10 released!
In-Reply-To: <3306782.T9Sq7Ne3th@xrated>
References: <5f087a71-08ec-65db-eae9-84c82b9a6bfb@waldmann-edv.de> <3306782.T9Sq7Ne3th@xrated>
Message-ID: <2969495.nW2buZ5Doq@xrated>

On Thursday, 23 May 2019 at 13:03:55 CEST, Hans-Peter Jansen wrote:
> I've submitted this version to openSUSE Archiving:Backup, where it builds
> for a couple of archs (including ARM, PowerPC, and zSystems) and
> distributions with tests enabled, before submitting it to Factory. If all
> production tests go well, it will appear in Tumbleweed sometime next week.

All production tests were fine. It's in Factory right now, and will be part of the next Tumbleweed release.

Thanks again.

Cheers,
Pete

From dudman8 at gmail.com Thu May 30 16:41:17 2019
From: dudman8 at gmail.com (Dudman8)
Date: Thu, 30 May 2019 22:41:17 +0200
Subject: [Borgbackup] Permission denied- Problem accessing files from mounted archive
Message-ID: <8c25c997e224b4925029492738fa96ddd3ff1e55.camel@gmail.com>

I've mounted an archive, but can't cat files with permissions like this (ug+r):

-rw-r----- 1 root root 449 Jan 16 10:23 media.mount

However, the following I can (ugo+r):

-rw-r--r-- 1 root root 330 Jan 3 18:54 zmencfs.service

I tried "sudo cat media.mount", and also mounting the archive as root and directly catting the file, but I still get permission denied. Obviously it's mounted read-only, so I can't change the file permissions.

Am I missing something obvious? :)

Thanks
Neil

From ndbecker2 at gmail.com Fri May 31 07:30:43 2019
From: ndbecker2 at gmail.com (Neal Becker)
Date: Fri, 31 May 2019 07:30:43 -0400
Subject: [Borgbackup] How do I tell if backup completed?
Message-ID:

If my client locked up while running a backup, how do I know if the backup has completed OK?

Thanks,
Neal

--
Those who don't understand recursion are doomed to repeat it

From public at enkore.de Fri May 31 07:44:30 2019
From: public at enkore.de (Marian Beermann)
Date: Fri, 31 May 2019 13:44:30 +0200
Subject: [Borgbackup] How do I tell if backup completed?
In-Reply-To:
References:
Message-ID:

When you see the archive in the "borg list" output.

On 31.05.19 13:30, Neal Becker wrote:
> If my client locked up while running a backup, how do I know if the
> backup has completed OK?
>
> Thanks,
> Neal

From w at swtk.info Tue Jun 4 11:49:10 2019
From: w at swtk.info (Wojtek Swiatek)
Date: Tue, 4 Jun 2019 17:49:10 +0200
Subject: [Borgbackup] Are mounted filesystems skipped?
Message-ID:

Hello everyone

I am trying to back up some local and mounted filesystems (mounted partitions, not network ones) and the following command

/usr/bin/borg create --filter AME --list --stats /services/backup/borg::srv-test1 /mnt/1TB1/seafile-client/Misc /mnt/1TB1/seafile-client/dev-perso /etc

only backs up /etc. There is not a single line for /mnt/... despite there being files which changed. The command runs so fast that I am almost sure the /mnt/... directories were not checked.

Is there a special way to back up files in mounted filesystems?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tw at waldmann-edv.de Tue Jun 4 12:13:01 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Tue, 4 Jun 2019 18:13:01 +0200
Subject: [Borgbackup] Are mounted filesystems skipped?
In-Reply-To:
References:
Message-ID: <4800bc39-ea85-4a53-a7b6-42cf936b4a41@waldmann-edv.de>

> I am trying to back up some local and mounted filesystems (mounted
> partitions, not network ones) and the following command
>
> /usr/bin/borg create --filter AME --list --stats
> /services/backup/borg::srv-test1 /mnt/1TB1/seafile-client/Misc
> /mnt/1TB1/seafile-client/dev-perso /etc
>
> only backs up /etc. There is not a single line for /mnt/... despite there
> being files which changed. The command runs so fast that I am almost sure
> the /mnt/... directories were not checked.

Strange. Guess the easiest explanation could be some typo in the paths (also be careful with uppercase/lowercase).

> Is there a special way to back up files in mounted filesystems?

borg does not care about filesystem boundaries and crosses mountpoints, EXCEPT if you use the --one-file-system option.

You could remove the --filter AME so it displays all the files / their status, until you have found the problem.

borg can check unchanged files very fast (and even faster if you run multiple tries one directly after the other).

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From w at swtk.info Tue Jun 4 13:01:47 2019
From: w at swtk.info (Wojtek Swiatek)
Date: Tue, 4 Jun 2019 19:01:47 +0200
Subject: [Borgbackup] Are mounted filesystems skipped?
In-Reply-To: <4800bc39-ea85-4a53-a7b6-42cf936b4a41@waldmann-edv.de>
References: <4800bc39-ea85-4a53-a7b6-42cf936b4a41@waldmann-edv.de>
Message-ID:

On Tue, 4 Jun 2019 at 18:19, Thomas Waldmann wrote:

> You could remove the --filter AME so it displays all the files / their
> status, until you have found the problem.

Thank you - this helped me realize that I was looking at the files in one place, while trying to back them up in another (this is a file synchronization system replica). Everything works fine.

Thanks again and sorry for the noise.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jimstr at gmail.com Tue Jun 18 01:18:21 2019
From: jimstr at gmail.com (Jim S.)
Date: Mon, 17 Jun 2019 22:18:21 -0700
Subject: [Borgbackup] BORG_CACHE_DIR with multiple backup copies
Message-ID:

Hi,

Read all the docs thoroughly, I think; sorry if this question has already been addressed there.

Re: BORG_CACHE_DIR (default ~/.cache/borg). I am backing up a single dataset to three separate external drives, each in weekly rotation (so each drive contains a full backup of the dataset). Can they share a common BORG_CACHE_DIR, or should I define a different cache directory for each external drive?

I suppose another way to ask the question: does BORG_CACHE_DIR reflect the content of the dataset being backed up, or the content of the media being written to?

Many thanks for your help,
JimS
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From dave at gasaway.org Tue Jun 18 13:06:36 2019
From: dave at gasaway.org (David Gasaway)
Date: Tue, 18 Jun 2019 10:06:36 -0700
Subject: [Borgbackup] BORG_CACHE_DIR with multiple backup copies
In-Reply-To:
References:
Message-ID:

On Mon, Jun 17, 2019 at 10:19 PM Jim S. wrote:

> Can they share a common BORG_CACHE_DIR, or should I define a different
> cache directory for each external drive?
>
> I suppose another way to ask the question: does BORG_CACHE_DIR reflect
> the content of the dataset being backed up, or the content of the media
> being written to?

The borg cache directory should have a subdirectory for each destination repository. Yes, two backups on the same client can have the same BORG_CACHE_DIR, but they aren't actually going to be sharing local resources.

--
-:-:- David K. Gasaway
-:-:- Email: dave at gasaway.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jimstr at gmail.com Tue Jun 18 16:32:48 2019
From: jimstr at gmail.com (Jim S.)
Date: Tue, 18 Jun 2019 13:32:48 -0700
Subject: [Borgbackup] BORG_CACHE_DIR with multiple backup copies
In-Reply-To:
References:
Message-ID:

On Tue, Jun 18, 2019 at 10:06 AM David Gasaway wrote:

> The borg cache directory should have a subdirectory for each destination
> repository. Yes, two backups on the same client can have the same
> BORG_CACHE_DIR, but they aren't actually going to be sharing local
> resources.

Ah-ha! Makes perfect sense. Many thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tw at waldmann-edv.de Tue Jun 25 05:55:10 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Tue, 25 Jun 2019 11:55:10 +0200
Subject: [Borgbackup] Permission denied- Problem accessing files from mounted archive
In-Reply-To: <8c25c997e224b4925029492738fa96ddd3ff1e55.camel@gmail.com>
References: <8c25c997e224b4925029492738fa96ddd3ff1e55.camel@gmail.com>
Message-ID: <8b1347da-fa1f-5974-9c55-800dc122499b@waldmann-edv.de>

(Replying late to an older post as I've noted there was no reply.)

On 5/30/19 10:41 PM, Dudman8 wrote:
> I've mounted an archive, but can't cat files with permissions like this
> (ug+r):
>
> -rw-r----- 1 root root 449 Jan 16 10:23 media.mount
>
> I tried "sudo cat media.mount", and also mounting the archive as root and
> directly catting the file, but I still get permission denied.
>
> Am I missing something obvious? :)

FUSE is a bit special about permissions...

Read "man mount.fuse" about FUSE configuration options, esp. about allow_other and allow_root (and related).

There was also a recent change in borg 1.1.x (see changelog) about FUSE permission handling.

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
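(Concretely, this usually means passing the FUSE option through borg mount. A sketch; repo, archive name and mountpoint are placeholders, and non-root users additionally need user_allow_other enabled in /etc/fuse.conf:)

  borg mount -o allow_other /path/to/repo::archive-name /mnt/borg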
From billk at iinet.net.au Thu Jun 27 09:13:58 2019
From: billk at iinet.net.au (Bill Kenworthy)
Date: Thu, 27 Jun 2019 21:13:58 +0800
Subject: [Borgbackup] best way to stop borgbackup?
Message-ID: <90d76e35-7b47-090d-342a-ff5266a6eb3d@iinet.net.au>

Hi all,

what is the best way to interrupt a running backup?

I sometimes need to do a quick stop to a running backup - usually via ctrl-c, or a system shutdown. Sometimes it's a clean restart to the backups, but often there is a lock left behind, and also other forms of corruption. Is there a less abrupt way of stopping?

BillK

From eluther at smartleaf.com Thu Jun 27 11:13:12 2019
From: eluther at smartleaf.com (Eric Luther)
Date: Thu, 27 Jun 2019 11:13:12 -0400
Subject: [Borgbackup] best way to stop borgbackup?
In-Reply-To: <90d76e35-7b47-090d-342a-ff5266a6eb3d@iinet.net.au>
References: <90d76e35-7b47-090d-342a-ff5266a6eb3d@iinet.net.au>
Message-ID: <2fb662ac-462c-0fba-ee5e-4aacfa1c1c93@smartleaf.com>

Per the documentation, borg supports resuming backups.

https://borgbackup.readthedocs.io/en/stable/faq.html?highlight=safely%20stop#if-a-backup-stops-mid-way-does-the-already-backed-up-data-stay-there

As the documentation says, checkpoints are created by default every 30 mins. No matter how borg is stopped, the archive created is valid up to the last checkpoint. We have only been using borg for a few months, but when we need to stop a running backup I will generally run `ps aux | grep borg` to find the pid of the running borg process, followed by `kill $PID`, where $PID is the process id we identified in the first command. This does not abruptly end the process; it will usually wrap itself up within 5 minutes or so. In our limited experience this does not leave a lock on the repository, and the archive is perfectly valid up to the last checkpoint.

Hope that helps,

Eric

On 6/27/19 9:13 AM, Bill Kenworthy wrote:
> Hi all,
>
> what is the best way to interrupt a running backup?
>
> I sometimes need to do a quick stop to a running backup - usually via
> ctrl-c, or a system shutdown. Sometimes it's a clean restart to the
> backups, but often there is a lock left behind, and also other forms of
> corruption. Is there a less abrupt way of stopping?
>
> BillK

--
Eric Luther
Ops
Smartleaf Inc.
eluther at smartleaf.com
Work: (617) 453-2597
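(If 30 minutes between checkpoints is too coarse for machines that get shut down a lot, the interval can be tightened at create time. A sketch; the value is in seconds and the paths are placeholders:)

  borg create --checkpoint-interval 600 /path/to/repo::archive-{now} /data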
From billk at iinet.net.au Thu Jun 27 18:25:48 2019
From: billk at iinet.net.au (Bill Kenworthy)
Date: Fri, 28 Jun 2019 06:25:48 +0800
Subject: [Borgbackup] best way to stop borgbackup?
In-Reply-To: <2fb662ac-462c-0fba-ee5e-4aacfa1c1c93@smartleaf.com>
References: <90d76e35-7b47-090d-342a-ff5266a6eb3d@iinet.net.au> <2fb662ac-462c-0fba-ee5e-4aacfa1c1c93@smartleaf.com>
Message-ID: <86baf462-7fe8-42cb-4e8b-7427ea4a000f@iinet.net.au>

On 27/6/19 11:13 pm, Eric Luther wrote:
> As the documentation says, checkpoints are created by default every 30
> mins. No matter how borg is stopped, the archive created is valid up to
> the last checkpoint. [...] when we need to stop a running backup I will
> generally run `ps aux | grep borg` to find the pid of the running borg
> process, followed by `kill $PID` [...]. In our limited experience this
> does not leave a lock on the repository, and the archive is perfectly
> valid up to the last checkpoint.

I am aware of this; it's how to avoid an unclean shutdown that I am asking about. Accidental terminations will occur (accidents), but I am asking if there is a way to ensure a clean, requested shutdown of a borg instance before normal completion. A kill leads to locks left behind, and recently other errors requiring time-consuming investigation and recovery.

BillK

From borgbackup at aluaces.fastmail.com Sat Jun 29 05:50:26 2019
From: borgbackup at aluaces.fastmail.com (Alberto Luaces)
Date: Sat, 29 Jun 2019 11:50:26 +0200
Subject: [Borgbackup] best way to stop borgbackup?
In-Reply-To: <86baf462-7fe8-42cb-4e8b-7427ea4a000f@iinet.net.au>
References: <90d76e35-7b47-090d-342a-ff5266a6eb3d@iinet.net.au> <2fb662ac-462c-0fba-ee5e-4aacfa1c1c93@smartleaf.com> <86baf462-7fe8-42cb-4e8b-7427ea4a000f@iinet.net.au>
Message-ID: <1a69348e-1902-47fc-b9e9-f3bca5dd1864@www.fastmail.com>

On Fri, Jun 28, 2019, at 00:29, Bill Kenworthy wrote:
> I am aware of this; it's how to avoid an unclean shutdown that I am
> asking about. Accidental terminations will occur (accidents), but I am
> asking if there is a way to ensure a clean, requested shutdown of a borg
> instance before normal completion. A kill leads to locks left behind,
> and recently other errors requiring time-consuming investigation and
> recovery.

In my experience, CTRL+C works ok, and lets me resume the backup afterwards, without leaving any locks behind. It is the equivalent of "kill -SIGINT <pid>", right?

--
Alberto
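(Spelled out, assuming a single borg process on the machine; SIGINT is indeed the signal Ctrl+C delivers to the foreground process:)

  kill -SIGINT "$(pgrep -x borg)"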