From cata at geniusnet.ro Sat Jan 9 04:30:08 2021
From: cata at geniusnet.ro (Catalin Bucur)
Date: Sat, 9 Jan 2021 11:30:08 +0200
Subject: [Borgbackup] borg adds same files every time

Hello,

I have some files mounted with davfs (read-only) on Linux. These files are archived every day with borg to a repository. Even though the files do not change over time, borg adds them every day as if they were new. In the borg logs, some lines are prefixed with A (for files that have not been modified) and some with M (for those that have actually been modified in the meantime). With a large number of files, this becomes very annoying and time-consuming.

Does anyone know whether this is a bug in the davfs+borg combination? I don't have the same problem with files mounted via samba or sshfs. Should I check something else to make borg ignore unmodified files?

Thank you for your time.

Best regards,
--
Catalin Bucur

From tw at waldmann-edv.de Sun Jan 10 20:02:29 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 11 Jan 2021 02:02:29 +0100
Subject: [Borgbackup] borg adds same files every time

> Even though the files do not change over time, borg adds them every
> day as if they were new.

IIRC, this is covered by an FAQ entry.

> Does anyone know whether this is a bug in the davfs+borg combination?

In general, borg expects a filesystem to work correctly. There are some filesystems that are not great in that respect, with local filesystems usually being better than LAN-style remote filesystems, which in turn are usually better than cloud/WAN-style remote filesystems.

> Should I check something else to make borg ignore unmodified files?

IIRC, the FAQ has a workaround and also some information about how to debug such issues in general.

From cata at geniusnet.ro Tue Jan 12 03:31:10 2021
From: cata at geniusnet.ro (Catalin Bucur)
Date: Tue, 12 Jan 2021 10:31:10 +0200
Subject: [Borgbackup] borg adds same files every time

Yes, you are right, there is some information in the FAQ, but it's difficult to debug even with that. I know borg is not at fault in this story, but I am trying to solve it somehow.

As far as I can see, the information in the borg cache seems to differ from the davfs-mounted filesystem. Is there any way to "decompile" the borg cache files, to be able to compare the size, ctime, mtime etc. of the files? Or to make borg write the cache files in some "clear mode", so they are human-readable?

Thank you for your help.

Best regards,
Catalin Bucur
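One way to make that comparison without touching borg's cache files: record the fields borg's files cache checks (size, ctime, mtime, inode) across two backup runs and diff the results. A minimal sketch, assuming GNU coreutils stat and a davfs mount at /mnt/davfs (both are placeholders):

    # capture name, size, ctime, mtime, inode for every file
    find /mnt/davfs -type f -print0 |
        xargs -0 stat --format '%n %s %Z %Y %i' | sort > /tmp/stat.run1
    # ... run "borg create" once, then capture again ...
    find /mnt/davfs -type f -print0 |
        xargs -0 stat --format '%n %s %Z %Y %i' | sort > /tmp/stat.run2
    # lines that differ show which field the filesystem keeps changing
    diff /tmp/stat.run1 /tmp/stat.run2
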
From tw at waldmann-edv.de Tue Jan 12 05:49:13 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Tue, 12 Jan 2021 11:49:13 +0100
Subject: [Borgbackup] borg adds same files every time

> As far as I can see, the information in the borg cache seems to differ
> from the davfs-mounted filesystem. Is there any way to "decompile" the
> borg cache files,

There is no existing code to do that.

> to be able to compare the size, ctime, mtime etc. of the files?

You can do that directly on the file system, using "stat", as the FAQ mentions.

The FAQ also mentions the misc. special cases and conditions that have to be considered.

From lazyvirus at gmx.com Tue Jan 12 18:24:36 2021
From: lazyvirus at gmx.com (Bzzzz)
Date: Wed, 13 Jan 2021 00:24:36 +0100
Subject: [Borgbackup] Suggestion

Hi listers,

I wonder if it would be possible to comment each backup - maybe with a switch allowing one to see either the comment or the backup reference number.

This would be very welcome, especially on test machines where I sometimes execute a backup with only half the configuration working as intended (Thomas, not on the head, nor the balls!)

Jean-Yves

From public at enkore.de Tue Jan 12 19:17:35 2021
From: public at enkore.de (Marian Beermann)
Date: Wed, 13 Jan 2021 01:17:35 +0100
Subject: [Borgbackup] Suggestion

Has been there since 1.1.0 or so:

borg create
  Archive options
    --comment COMMENT    add a comment text to the archive

-Marian

> Hi listers,
>
> I wonder if it would be possible to comment each backup [...]

From lazyvirus at gmx.com Tue Jan 12 19:45:03 2021
From: lazyvirus at gmx.com (Bzzzz)
Date: Wed, 13 Jan 2021 01:45:03 +0100
Subject: [Borgbackup] Suggestion

On Wed, 13 Jan 2021 01:17:35 +0100 Marian Beermann wrote:

Whoops, my very bad (not ready enough to type without watching the keyboard :/)

Thanks.

> Has been there since 1.1.0 or so
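For reference, the comment workflow that option enables looks roughly like this in borg 1.1 (repo path, archive name and comment texts are placeholders):

    # create an archive with a comment
    borg create --comment "half-configured test box" /path/to/repo::test-2021-01-13 /etc
    # borg info prints a "Comment:" line for the archive
    borg info /path/to/repo::test-2021-01-13
    # a comment can also be changed later, without rechunking the data
    borg recreate --comment "config fixed" /path/to/repo::test-2021-01-13
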
From sebastian at silef.de Wed Jan 13 03:32:35 2021
From: sebastian at silef.de (Sebastian Felis)
Date: Wed, 13 Jan 2021 09:32:35 +0100
Subject: [Borgbackup] Recreate segments after changing max_segment_size

Hi,

first of all: Thank you so much for this awesome backup tool. It's really a joy using it.

I have several borg repos with small segment sizes of 5 MB. I would like to recreate the segments with a larger size, for better repo backup with rsync/rclone. I am aware of the recreate command and its documentation, and I am using borg 1.1.14 on Debian buster.

A conversion was successful for a smaller repo with "recreate --recompress always" after changing the max_segment_size. For my TB-sized media repo it seems to be a bit slow. Without the option "--recompress always", the smaller segments are not repacked into larger segments.

My questions:

1) What is the best way to just repack all chunks of a repo into segments of a new size?

2) Does the recreate command on a repo honor deduplication and repack the unique chunks into segments only once? Even with the "--recompress always" option?

BR
Sebastian

From sebastian at silef.de Thu Jan 14 12:35:09 2021
From: sebastian at silef.de (Sebastian Felis)
Date: Thu, 14 Jan 2021 18:35:09 +0100
Subject: [Borgbackup] Recreate segments after changing max_segment_size

Hi again,

I stumbled across issue #3631, "borg recreate optimisations" [1], which explains the slow speed of recreating a whole repo. It says that the current algorithm reads archive per archive, byte by byte. So deduplicated files are read several times from the repo.

Just to clarify my use case of concatenating segments, and to speed things up: Regarding the data structures [2], the segments are just a successive list of log entries prefixed with a 'BORG_SEG' magic header. So it should be possible to concatenate several segments by removing the magic header from all but the first segment. According to the index doc, the index, hints and integrity files can be deleted and are rebuilt on the next run [3]. It should also be possible to rename the segments, as long as the sequence of log entries remains.

e.g. given segments 0, 1, 2, 3 will be concatenated into segment 0:

tail -c +9 data/0/1 >> data/0/0
tail -c +9 data/0/2 >> data/0/0
tail -c +9 data/0/3 >> data/0/0
rm data/0/[123]
rm hints.* index.* integrity.*

Question: Is there any risk doing this?

Sebastian

[1] https://github.com/borgbackup/borg/issues/3631
[2] https://borgbackup.readthedocs.io/en/stable/internals/data-structures.html#segments
[3] https://borgbackup.readthedocs.io/en/stable/internals/data-structures.html#index-hints-and-integrity
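A slightly more defensive sketch of the same surgery - explicitly unsupported, assuming no other borg process uses the repo and that the 'BORG_SEG' magic is the 8 bytes the data-structures doc describes (repo path is a placeholder):

    set -e
    REPO=/path/to/repo
    cp -a "$REPO" "$REPO.bak"        # backup of the backup first
    head -c 8 "$REPO/data/0/0"       # sanity check: must print BORG_SEG
    for seg in 1 2 3; do
        tail -c +9 "$REPO/data/0/$seg" >> "$REPO/data/0/0"
        rm "$REPO/data/0/$seg"
    done
    rm "$REPO"/hints.* "$REPO"/index.* "$REPO"/integrity.*
    borg check "$REPO"               # let borg rebuild index/hints and verify
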
From tw at waldmann-edv.de Thu Jan 14 13:12:35 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 14 Jan 2021 19:12:35 +0100
Subject: [Borgbackup] Recreate segments after changing max_segment_size

> e.g. given segments 0, 1, 2, 3 will be concatenated into segment 0:
>
> tail -c +9 data/0/1 >> data/0/0
> tail -c +9 data/0/2 >> data/0/0
> tail -c +9 data/0/3 >> data/0/0
> rm data/0/[123]
> rm hints.* index.* integrity.*

A borg repo is not supposed to be used like that.

Also, I don't think anyone wants to support dealing with a borg repo on that level - at least not me. :-)

> Is there any risk doing this?

The risk is that you accidentally damage your repo, lose data, or cause inconsistencies. The other risk is that other people read this and try it (maybe less carefully / knowledgeably than you).

>> I have several borg repos with small segment sizes of 5 MB. I would
>> like to recreate the segments with a larger size, for better repo
>> backup with rsync/rclone.

Why should it be "better" with a larger size?

The 500 MiB default of borg 1.1.x is primarily optimized for local repos. Doing some sync-to-remote could rather improve by using a smaller-than-default segment size.

>> Without the option "--recompress always", the smaller segments are
>> not repacked into larger segments.

Not sure if it really needs to be "always" (and not sure if "always" causes a major slowdown with many archives, really recompressing the same stuff again and again). You could try just switching to another algorithm than the one you used for the 5 MB segments.
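For the supported route, the segment size is a repo config value; a sketch with example values (note it only affects segments written from then on, so existing segments are only repacked when something rewrites them, e.g. recreate with recompression):

    # show / set the target segment size (in bytes) for future writes
    borg config /path/to/repo max_segment_size
    borg config /path/to/repo max_segment_size 134217728   # 128 MiB
    # force chunks to be rewritten (and thus repacked into new segments)
    # by recompressing with a different algorithm
    borg recreate --recompress always -C zstd,3 /path/to/repo
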
From sebastian at silef.de Thu Jan 14 16:19:44 2021
From: sebastian at silef.de (Sebastian Felis)
Date: Thu, 14 Jan 2021 22:19:44 +0100
Subject: [Borgbackup] Recreate segments after changing max_segment_size

On 1/14/21 7:12 PM, Thomas Waldmann wrote:
> A borg repo is not supposed to be used like that.
>
> Also, I don't think anyone wants to support dealing with a borg repo
> on that level - at least not me. :-)
>
> The risk is that you accidentally damage your repo, lose data, or
> cause inconsistencies. The other risk is that other people read this
> and try it (maybe less carefully / knowledgeably than you).

For sure, it is a low-level operation with an I-know-what-I-do attitude, covered by a backup of the backup.

And your answers don't say any no-s ;-)

> Why should it be "better" with a larger size?
>
> The 500 MiB default of borg 1.1.x is primarily optimized for local
> repos. Doing some sync-to-remote could rather improve by using a
> smaller-than-default segment size.

I target 128 MiB segment sizes for "better" remote backups. 128 MiB is more a gut decision than a technically evaluated optimum.

Thank you for your answers
Sebastian

From lazyvirus at gmx.com Sat Jan 16 09:48:41 2021
From: lazyvirus at gmx.com (Bzzzz)
Date: Sat, 16 Jan 2021 15:48:41 +0100
Subject: [Borgbackup] Reboot with no warning when check/repair

borgbackup 1.1.9-2+deb10u1 armhf (distro package)
====================================

Hi list,

Every 4 months, I launch a check/repair on all machines:
borg check --verify --repair --progress --show-rc /BORG/
(/BORG being an NFS mount).

On my recent Raspberry Pi 4 8GB, I get a weird behavior: after ~5-6 h (still in the checking segments phase), it reboots without any warning and without a trace in the log files :/

At 33.3% of checking segments, memory isn't impacted very much:
root 3374 0.9 0.9 90644 78896 pts/6 D+ 14:01 0:55 | \_ /usr/bin/python3 /usr/bin/borg check --verify --repair --progress --show-rc /BORG/

Two attempts have led to the same reboot - at this point, the only option I see would be to wipe the rpi4 repo and recreate it from scratch.

What could trigger such an event?
And is there a way to avoid it?

Jean-Yves

From lazyvirus at gmx.com Sat Jan 16 17:37:28 2021
From: lazyvirus at gmx.com (Bzzzz)
Date: Sat, 16 Jan 2021 23:37:28 +0100
Subject: [Borgbackup] Reboot with no warning when check/repair

On Sat, 16 Jan 2021 15:48:41 +0100 Bzzzz wrote:

There's something new, and it is bad.

I enabled the intrinsic watchdog and restarted the procedure, leaving the machine's display always on to watch what was happening.

In fact, it breaks in the second phase (verifying data) at ~45%, and the watchdog had the time to send me an e-mail saying it was rebooting because the system could not allocate memory (armhf being a 32-bit OS, this means borg is claiming more than 3 GB of RAM :/)

So the question has changed to: is there a way to limit the amount of memory borg wants when in check/repair mode?

Jean-Yves
From l0f4r0 at tuta.io Sun Jan 17 04:30:22 2021
From: l0f4r0 at tuta.io (l0f4r0 at tuta.io)
Date: Sun, 17 Jan 2021 10:30:22 +0100 (CET)
Subject: [Borgbackup] Reboot with no warning when check/repair

Hi,

On 16 Jan 2021 at 23:37, lazyvirus at gmx.com wrote:
> In fact, it breaks in the second phase (verifying data) at ~45%, and
> the watchdog had the time to send me an e-mail saying it was rebooting
> because the system could not allocate memory (armhf being a 32-bit OS,
> this means borg is claiming more than 3 GB of RAM :/)
>
> So the question has changed to: is there a way to limit the amount of
> memory borg wants when in check/repair mode?

Maybe you could have a look at ulimit.

Best regards,
l0f4r0

From tw at waldmann-edv.de Sun Jan 17 07:50:39 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sun, 17 Jan 2021 13:50:39 +0100
Subject: [Borgbackup] Reboot with no warning when check/repair

>> borgbackup 1.1.9-2+deb10u1 armhf (distro package)

Can you use a more recent borg?

Maybe one from backports or the arm fat binary (see github issues / borg community resources)?

1.1.15 would be good. Then try to reproduce. I've recently fixed some memory leaks (not specifically in borg check though, iirc).

>> borg check --verify --repair --progress --show-rc /BORG/
>> (/BORG being an NFS mount).

borg does not have a "--verify" parameter. So please give the full command exactly as you used it. In case you actually used --verify-data, you could try to reproduce without that.

Also, try to specifically watch the memory consumption of borg, to see if it really is the borg process. A reboot is a bit strange - isn't the OOM killer usually just shooting the biggest process?

If you do not have swap, maybe add some.

From lazyvirus at gmx.com Sun Jan 17 08:04:05 2021
From: lazyvirus at gmx.com (Bzzzz)
Date: Sun, 17 Jan 2021 14:04:05 +0100
Subject: [Borgbackup] Reboot with no warning when check/repair

On Sun, 17 Jan 2021 10:30:22 +0100 (CET) l0f4r0--- via Borgbackup wrote:

Doesn't work; I used firejail with a limit of 2 GB RAM, and the only result was that it crashed faster than ever.

> > So the question has changed to: is there a way to limit the amount
> > of memory borg wants when in check/repair mode?
>
> Maybe you could have a look at ulimit.
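A minimal way to record what Thomas asks for - the memory use of the borg process over time (assumes procps; the log path is a placeholder):

    # log VSZ/RSS of the running borg check once a minute, until it exits
    while pid=$(pgrep -of 'borg check'); do
        ps -o pid,vsz,rss,etime -p "$pid" --no-headers >> /var/log/borg-mem.log
        sleep 60
    done
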
From lazyvirus at gmx.com Sun Jan 17 08:33:11 2021
From: lazyvirus at gmx.com (Bzzzz)
Date: Sun, 17 Jan 2021 14:33:11 +0100
Subject: [Borgbackup] Reboot with no warning when check/repair

On Sun, 17 Jan 2021 13:50:39 +0100 Thomas Waldmann wrote:

> Can you use a more recent borg?
>
> Maybe one from backports or the arm fat binary (see github issues /
> borg community resources)?
>
> 1.1.15 would be good. Then try to reproduce.

Nope, I downloaded the 1.1.15-1 pkg, but its installation failed because of the Python 3 version:

Preparing to unpack .../borgbackup_1.1.15-1_armhf.deb ...
Unpacking borgbackup (1.1.15-1) over (1.1.9-2+deb10u1) ...
dpkg: dependency problems prevent configuration of borgbackup:
 borgbackup depends on python3 (>= 3.9~); however:
  Version of python3 on system is 3.7.3-1.
 borgbackup depends on libgcc-s1 (>= 3.5); however:
  Package libgcc-s1 is not installed.

Ah, pip install borgbackup agreed to set 1.1.15 up, testing now.

> borg does not have a "--verify" parameter. So please give the full
> command exactly as you used it.

Strange, I don't know where I picked that up; corrected to --verify-data.

> Also, try to specifically watch the memory consumption of borg, to see
> if it really is the borg process. A reboot is a bit strange - isn't
> the OOM killer usually just shooting the biggest process?

I have to dig into the watchdog configuration possibilities (a reboot being not particularly a problem at this time, but it will be later).

> If you do not have swap, maybe add some.

I don't have one, due to an SSD.
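When the distro package is too old for the system's Python, a pip install is usually done in a virtualenv, so nothing of the distro's Python is touched; a sketch (install path and pinned version follow the thread and are placeholders):

    # needs the distro's python3-venv package and build dependencies
    python3 -m venv /opt/borg-venv
    /opt/borg-venv/bin/pip install --upgrade pip wheel
    /opt/borg-venv/bin/pip install 'borgbackup==1.1.15'
    /opt/borg-venv/bin/borg --version
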
From ndbecker2 at gmail.com Sun Jan 17 08:58:31 2021
From: ndbecker2 at gmail.com (Neal Becker)
Date: Sun, 17 Jan 2021 08:58:31 -0500
Subject: [Borgbackup] Reboot with no warning when check/repair

Perhaps you can temporarily add file swap.

On Sun, Jan 17, 2021, 8:33 AM Bzzzz wrote:
> > If you do not have swap, maybe add some.
>
> I don't have one, due to an SSD.

From tw at waldmann-edv.de Sun Jan 17 09:03:08 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sun, 17 Jan 2021 15:03:08 +0100
Subject: [Borgbackup] Reboot with no warning when check/repair

>> If you do not have swap, maybe add some.
>
> I don't have one, due to an SSD.

There recently was a very good article on the web about why one should always have swap. Not that ancient kind of tip from the 1990ies, but very recent and very detailed.

That you have an SSD is no problem, just do not use a too aggressive swappiness.
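A sketch of the temporary file swap Neal suggests (size, path and swappiness value are only examples; on Raspbian, the dphys-swapfile package is the prepackaged way to do this):

    fallocate -l 2G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    sysctl vm.swappiness=10      # conservative value, easy on SSDs; not persistent
    # when done: swapoff /swapfile && rm /swapfile
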
From lazyvirus at gmx.com Sun Jan 17 09:45:07 2021
From: lazyvirus at gmx.com (Bzzzz)
Date: Sun, 17 Jan 2021 15:45:07 +0100
Subject: [Borgbackup] Reboot with no warning when check/repair

On Sun, 17 Jan 2021 08:58:31 -0500 Neal Becker wrote:

Rahhh, sorry for the PM @#\! ML new behavior :(

> Perhaps you can temporarily add file swap.

This is what the Raspbian OS does natively; however, with only a few daemons and several Erlang VMs running on it, I never even reached half of the memory (4 GB) - so I lowered swappiness to 3, removed the 1 GB swap file and, until now, never had a problem - until BB triggered one.

From lazyvirus at gmx.com Sun Jan 17 09:54:07 2021
From: lazyvirus at gmx.com (Bzzzz)
Date: Sun, 17 Jan 2021 15:54:07 +0100
Subject: [Borgbackup] Reboot with no warning when check/repair

On Sun, 17 Jan 2021 15:03:08 +0100 Thomas Waldmann wrote:

> There recently was a very good article on the web about why one should
> always have swap. Not that ancient kind of tip from the 1990ies, but
> very recent and very detailed.
>
> That you have an SSD is no problem, just do not use a too aggressive
> swappiness.

Found it, but I don't want my Erlang VMs to be swapped out, even after a long sleep, because they must wake at once when solicited. Their RAM consumption sometimes peaks (not that much), but as soon as processing ends, the GC kicks in if needed and everything comes back to normal.

But thanks for the article, it is interesting - bookmarked.

JY
From lazyvirus at gmx.com Sun Jan 17 09:59:53 2021
From: lazyvirus at gmx.com (Bzzzz)
Date: Sun, 17 Jan 2021 15:59:53 +0100
Subject: [Borgbackup] Reboot with no warning when check/repair

On Sun, 17 Jan 2021 13:50:39 +0100 Thomas Waldmann wrote:

> 1.1.15 would be good. Then try to reproduce.
>
> I've recently fixed some memory leaks (not specifically in borg check
> though, iirc).

It seems the problem is solved with 1.1.15, as a watch of 'ps aux|grep borg' shows that VSZ/RSS never got over 149M/141M - it was indeed a memory leak problem. At this time, it is in the verifying data phase and has reached +90% :)

Thanks.

> borg does not have a "--verify" parameter. So please give the full
> command exactly as you used it.

It is now:
borg check --verify-data --repair --progress --show-rc /BORG/

JY

From devzero at web.de Sun Jan 17 10:20:21 2021
From: devzero at web.de (Roland privat)
Date: Sun, 17 Jan 2021 16:20:21 +0100
Subject: [Borgbackup] Reboot with no warning when check/repair

I also got a reboot some days ago, with a larger prune run. Anyhow, borg may trigger that reboot, but IMHO a userspace app can never be the root cause of a reboot.

Sent from my iPhone

> On 17.01.2021 at 14:58, Neal Becker wrote:
>
> Perhaps you can temporarily add file swap.
From cata at geniusnet.ro Fri Jan 22 09:52:54 2021
From: cata at geniusnet.ro (Catalin Bucur)
Date: Fri, 22 Jan 2021 16:52:54 +0200
Subject: [Borgbackup] borg adds same files every time

On 12/01/2021 12:49, Thomas Waldmann wrote:
> You can do that directly on the file system, using "stat", as the FAQ
> mentions.
>
> The FAQ also mentions the misc. special cases and conditions that have
> to be considered.

I finished the tests; it seems there is a problem with the way the davfs file system is mounted. Every time borg creates a new backup, the atime of each file changes, and as a result borg will back it up again next time. Another thing that brought major improvements was to create a new (empty) file on that mount point.

Thank you for your information.

Best regards,
Catalin Bucur

From tw at waldmann-edv.de Fri Jan 22 12:56:31 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 22 Jan 2021 18:56:31 +0100
Subject: [Borgbackup] borg adds same files every time

> I finished the tests; it seems there is a problem with the way the
> davfs file system is mounted. Every time borg creates a new backup,
> the atime of each file changes, and as a result borg will back it up
> again next time.

For change detection, borg does NOT use the atime (but ctime [default in borg 1.1+] or mtime [if you tell it so]).
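If a filesystem reports unstable ctimes or inodes, borg 1.1 also lets you pick the fields its files cache compares; a sketch (mount point and archive naming are placeholders):

    # default comparison is ctime,size,inode; filesystems like davfs or
    # sshfs may behave better with mtime, or with the inode left out
    borg create --files-cache=mtime,size,inode ::'{hostname}-{now}' /mnt/davfs
    borg create --files-cache=mtime,size       ::'{hostname}-{now}' /mnt/davfs
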
From cata at geniusnet.ro Fri Jan 22 13:18:43 2021
From: cata at geniusnet.ro (Catalin Bucur)
Date: Fri, 22 Jan 2021 20:18:43 +0200
Subject: [Borgbackup] borg adds same files every time

On 22/01/2021 19:56, Thomas Waldmann wrote:
> For change detection, borg does NOT use the atime (but ctime [default
> in borg 1.1+] or mtime [if you tell it so]).

You're right, I forgot that the atime option has been removed from the newest version of borg. This means that adding a new file solved the situation:
https://borgbackup.readthedocs.io/en/stable/faq.html#i-am-seeing-a-added-status-for-an-unchanged-file

"If you want to avoid unnecessary chunking, just create or touch a small or empty file in your backup source file set [...]"

Catalin Bucur

From amuza at riseup.net Fri Feb 5 15:39:46 2021
From: amuza at riseup.net (amuza)
Date: Fri, 5 Feb 2021 21:39:46 +0100
Subject: [Borgbackup] Borg through Tor

Hello everyone!

I'm new to Borg. I am playing with it, trying to make it work through Tor. I tried everything without Tor and it works great.

Then I installed Tor on both computers and configured an onion SSH service on the remote Borg server. I can SSH to it through Tor from the local computer without a problem. However, when trying to use Borg through Tor, I am getting the following errors (as well as a success status):

------------------
terminating with success status, rc 0
Fri Feb 5 20:41:40 2021 Pruning repository
Remote: ssh: Could not resolve hostname theverymuchlongversion3onionaddress.onion: Name or service not known
Connection closed by remote host. Is borg working on the server?
terminating with error status, rc 2
Fri Feb 5 20:41:43 CET 2021 Backup and/or Prune finished with errors
------------------

Why do I get those messages? What do they mean? Below is the script I use (I simply tell Borg to use torsocks and give it the onion address). Please let me know if you have any suggestions.

Thank you!

------------------
#!/bin/sh

export BORG_REPO=ssh://remoteuser@theverymuchlongversion3onionaddress.onion:2222/path/to/repo
export BORG_PASSPHRASE='my-passphrase-here'
export BORG_RSH='ssh -i /home/localuser/.ssh/id_rsa'

info() { printf "\n%s %s\n\n" "$( date )" "$*" >&2; }
trap 'echo $( date ) Backup interrupted >&2; exit 2' INT TERM

info "Starting backup"

torsocks borg create \
    --verbose \
    --filter AME \
    --list \
    --stats \
    --show-rc \
    --compression lz4 \
    --exclude-caches \
    --exclude '/home/*/.cache/*' \
    --exclude '/var/cache/*' \
    ::'{hostname}-{now}' \
    /home

backup_exit=$?

info "Pruning repository"

borg prune \
    --list \
    --prefix '{hostname}-' \
    --show-rc \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 6

prune_exit=$?

global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

if [ ${global_exit} -eq 0 ]; then
    info "Backup and Prune finished successfully"
elif [ ${global_exit} -eq 1 ]; then
    info "Backup and/or Prune finished with warnings"
else
    info "Backup and/or Prune finished with errors"
fi

exit ${global_exit}
------------------
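A likely explanation for the log above: borg create is wrapped in torsocks, but borg prune is not, so prune's ssh does a normal DNS lookup of the .onion name and fails. Two possible fixes, sketched under that assumption:

    # either wrap prune (and every other remote borg command) in torsocks too:
    torsocks borg prune --list --prefix '{hostname}-' --show-rc \
        --keep-daily 7 --keep-weekly 4 --keep-monthly 6

    # or route just ssh through Tor, so all plain borg invocations work:
    export BORG_RSH='torsocks ssh -i /home/localuser/.ssh/id_rsa'
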
From sebastian at silef.de Fri Feb 5 16:55:15 2021
From: sebastian at silef.de (Sebastian Felis)
Date: Fri, 5 Feb 2021 22:55:15 +0100
Subject: [Borgbackup] Recreate segments after changing max_segment_size

Hi,

On 14.01.21 22:19, Sebastian Felis wrote:
> I target 128 MiB segment sizes for "better" remote backups. 128 MiB is
> more a gut decision than a technically evaluated optimum.

In the meanwhile, I was able to successfully concatenate my 5 MB segments into 128 MB ones, and borg check ran fine. For reference, I created a bash script which can be found here:
https://gist.github.com/xemle/725b817b6fc485dfc231ff7c99868f0f

Again: Thank you for building such a great backup tool!

BR
Sebastian

From tw at waldmann-edv.de Sat Feb 6 08:39:11 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sat, 6 Feb 2021 14:39:11 +0100
Subject: [Borgbackup] Borg through Tor

> Remote: ssh: Could not resolve hostname
> theverymuchlongversion3onionaddress.onion: Name or service not known

ssh tries to do a DNS lookup and fails - not a borg problem.

From tw at waldmann-edv.de Sat Feb 6 08:41:18 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sat, 6 Feb 2021 14:41:18 +0100
Subject: [Borgbackup] borg adds same files every time

>> For change detection, borg does NOT use the atime (but ctime [default
>> in borg 1.1+] or mtime [if you tell it so]).
>
> You're right, I forgot that the atime option has been removed from the
> newest version of borg.

That's yet another, unrelated thing. The change detection is unrelated to what borg writes into an archive (by default).

> This means that adding a new file solved the situation:
> https://borgbackup.readthedocs.io/en/stable/faq.html#i-am-seeing-a-added-status-for-an-unchanged-file
>
> "If you want to avoid unnecessary chunking, just create or touch a
> small or empty file in your backup source file set [...]"

Yup, that's it!
From tw at waldmann-edv.de Sat Feb 6 08:31:45 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sat, 6 Feb 2021 14:31:45 +0100
Subject: [Borgbackup] borgbackup release 1.2.0b2

Released borgbackup 1.2.0b2 with some fixes and new features.

Please help testing:
https://github.com/borgbackup/borg/releases/tag/1.2.0b2

From ndbecker2 at gmail.com Sat Feb 6 09:03:32 2021
From: ndbecker2 at gmail.com (Neal Becker)
Date: Sat, 6 Feb 2021 09:03:32 -0500
Subject: [Borgbackup] not running borg as root

After a disk disaster in which I had backed up my home directory, but not /etc, I decided I should add /etc to my backup. The problem is that I run backups as my normal user, and many things in /etc are not readable by that user.

I suppose that to read the files on the client side, borg would need to run as root, but the server side does not need root. What do you suggest?

--
Those who don't understand recursion are doomed to repeat it

From tw at waldmann-edv.de Sat Feb 6 09:10:34 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sat, 6 Feb 2021 15:10:34 +0100
Subject: [Borgbackup] not running borg as root

> After a disk disaster in which I had backed up my home directory, but
> not /etc, I decided I should add /etc to my backup.

Yup. Alternatively, make 2 backups:

- hostname-system-timestamp with all except home (also exclude /proc /sys)
- hostname-home-timestamp with just your home(s)

When pruning those, you MUST use --prefix accordingly (use --dry-run until you are sure).

> The problem is that I run backups as my normal user, and many things
> in /etc are not readable by that user.

You can run the borg client as root and still connect to the borg server as non-root, just use ssh://borg@borgserver or so.

> I suppose that to read the files on the client side, borg would need
> to run as root, but the server side does not need root.

Exactly.

From tw at waldmann-edv.de Sat Feb 6 09:16:48 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sat, 6 Feb 2021 15:16:48 +0100
Subject: [Borgbackup] not running borg as root

> Alternatively, make 2 backups:
>
> - hostname-system-timestamp with all except home (also exclude /proc /sys)
> - hostname-home-timestamp with just your home(s)

The point of that is to have 2 different prune policies:

- get rid of the system backups rather quickly
- keep the home backups for a long time
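A sketch of those two prune policies side by side (retention values are invented and the repo URL is a placeholder; keep --dry-run until the listed selection looks right):

    # system backups: short retention
    borg prune --dry-run --list --prefix '{hostname}-system-' \
        --keep-daily 7 --keep-weekly 4 ssh://borg@borgserver/./repo
    # home backups: long retention
    borg prune --dry-run --list --prefix '{hostname}-home-' \
        --keep-daily 7 --keep-weekly 8 --keep-monthly 24 ssh://borg@borgserver/./repo
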
From amuza at riseup.net Sun Feb 7 19:20:19 2021
From: amuza at riseup.net (amuza)
Date: Mon, 8 Feb 2021 01:20:19 +0100
Subject: [Borgbackup] Borg through Tor

Thomas Waldmann:
>> Remote: ssh: Could not resolve hostname
>> theverymuchlongversion3onionaddress.onion: Name or service not known
>
> ssh tries to do a DNS lookup and fails - not a borg problem.

Thank you for the answer. Yes, Borg does its job; the backup is ok.

From ndbecker2 at gmail.com Mon Feb 8 08:27:28 2021
From: ndbecker2 at gmail.com (Neal Becker)
Date: Mon, 8 Feb 2021 08:27:28 -0500
Subject: [Borgbackup] not running borg as root

Thanks for the suggestions! I implemented a scheme using a single repo, but with backups named hostname-system and hostname-home as you suggested. The client and server run as myself for the backup of my home, but the client runs as root for the system backup (while the server runs as myself). This requires (of course) allowing root at the client to ssh to myself at the server without a password.

All of this seems to work, except for the minor annoyance that each time a backup starts I get:
Synchronizing chunks cache...
I suppose this is because on the client side there are two different caches, one for myself and one for root. But that's OK; it doesn't take very long, and I think it's probably harmless. I guess an alternative would have been to have separate repos for home and system. Perhaps that would have been better.

A small annoyance was that I needed to rename all the existing backups from hostname-date to hostname-home-date. There wasn't an obvious way to automate this that wouldn't have been more work than just doing it manually, one by one, for each backup.

On Sat, Feb 6, 2021 at 9:17 AM Thomas Waldmann wrote:
> The point of that is to have 2 different prune policies [...]

From tw at waldmann-edv.de Mon Feb 8 08:49:15 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 8 Feb 2021 14:49:15 +0100
Subject: [Borgbackup] not running borg as root

> All of this seems to work, except for the minor annoyance that each
> time a backup starts I get:
> Synchronizing chunks cache...
> I suppose this is because on the client side there are two different
> caches, one for myself and one for root.

Exactly.

> But that's OK; it doesn't take very long, and I think it's probably
> harmless.

You could also just back up everything as root on the client side. The repo-side borg can still run as a non-privileged user if you like. That would avoid the resync.

> A small annoyance was that I needed to rename all the existing backups
> from hostname-date to hostname-home-date. There wasn't an obvious way
> to automate this that wouldn't have been more work than just doing it
> manually, one by one, for each backup.

There is no batch rename, but there are borg list and borg rename, plus shell or python scripting.
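A sketch of scripting such a batch rename, assuming archives named myhost-<timestamp> (hostname, repo path and the match pattern are placeholders; borg rename cannot be undone, so test against a copy first):

    REPO=/path/to/repo
    borg list --short "$REPO" | while read -r name; do
        case "$name" in
            myhost-2*)   # only the plain hostname-date archives
                borg rename "$REPO::$name" "myhost-home-${name#myhost-}"
                ;;
        esac
    done
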
From andrea.gelmini at gmail.com Thu Mar 11 07:23:15 2021
From: andrea.gelmini at gmail.com (Andrea Gelmini)
Date: Thu, 11 Mar 2021 13:23:15 +0100
Subject: [Borgbackup] Stat of looots of files

Dear developers,

thanks a lot for your work on Borg!

I kindly ask your advice about my setup. I have a repository (the usual file sharing) of ~65 TB spread over ~45 million files. Borg works perfectly!

My worries are about the weekly backup. Just stat-ing all the files takes days. Reading the new/changed files is super fast, of course (more than 200 MB/s). Traversing the whole tree can take more than the Friday-evening-to-Sunday-evening window.

So, is it possible to parallelize the scan part of the filesystem? I found references in old threads and tickets on GitHub, but I didn't understand whether they fit my need.

At the moment I have tried to push ZFS using caching drives and moving the Borg cache into tmpfs (I know the risk; I take care in case of reboot), but with no significant improvement.

A quick glance/try with Restic seems to fix it, because of its parallel scan. Sorry, I still haven't completed the benchmarks - they take weeks. But maybe I am on the wrong path and can avoid wasting time and resources.

Thanks a lot again (really),
Andrea

From tw at waldmann-edv.de Thu Mar 11 07:48:17 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 11 Mar 2021 13:48:17 +0100
Subject: [Borgbackup] Stat of looots of files

Hi Andrea,

> I have a repository (the usual file sharing) of ~65 TB spread over ~45
> million files.

That's a lot.

You could consider partitioning that set and using multiple repositories. Then you could also run multiple borg in parallel. Whether that is faster of course depends on the CPU and especially the I/O speed of your systems. Not more borgs than CPU cores; not more borgs than your I/O can take before saturating.

Also, of course make sure borg's files-cache-based "no change" detection works; that should usually give a processing speed of multiple 1000 files per second (if unchanged).

> So, is it possible to parallelize the scan part of the filesystem?

A borg process is internally not parallelized yet, no multithreading. See the multithreading github ticket (to be addressed after the crypto changes, will take quite some time). But you can run multiple in parallel, you just need multiple repos.

> At the moment I have tried to push ZFS using caching drives and moving
> the Borg cache into tmpfs (I know the risk; I take care in case of
> reboot), but with no significant improvement.

If you use ZFS, you could just make a snapshot at some stable mountpoint and then do a backup of that. As the contents of the snapshot will be stable, it could even take longer than the weekend.

Cheers, Thomas
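A sketch of that snapshot approach (dataset name and repo path are placeholders; a fixed snapshot name keeps the source paths identical between runs, which borg's files cache needs, and the .zfs directory may require 'zfs set snapdir=visible'):

    # recreate a fixed-name snapshot, then back up its stable contents
    zfs destroy tank/data@borg 2>/dev/null || true
    zfs snapshot tank/data@borg
    borg create /path/to/repo::'{hostname}-{now}' /tank/data/.zfs/snapshot/borg
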
From andrea.gelmini at gmail.com Thu Mar 11 09:59:40 2021
From: andrea.gelmini at gmail.com (Andrea Gelmini)
Date: Thu, 11 Mar 2021 15:59:40 +0100
Subject: [Borgbackup] Stat of looots of files

On Thu, 11 Mar 2021 at 13:56, Thomas Waldmann wrote:

Thanks a lot for your quick and detailed reply.

> You could consider partitioning that set and using multiple
> repositories. Then you could also run multiple borg in parallel.

Yeap, I guess so. But I would like to avoid this, because I have ~17% of the data deduplicated. These are machine-learning datasets, and a lot of data is shared between them.

> Whether that is faster of course depends on the CPU and especially the
> I/O speed of your systems. Not more borgs than CPU cores; not more
> borgs than your I/O can take before saturating.

Yeap. My setup (just in case you have suggestions) is:

Server: Dell R730xd
CPU: dual Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz (so 24 cores)
HDs: 16x ST10000NM0256 Seagate 10TB 7.2K SAS 12Gbps 3.5 (excluding the 2 for ZFS cache)
RAM: 48GB
Controller: PERC H730 Mini with 2G of battery-backed cache

While a backup is in progress, only a single core runs the Borg process. No work on the disks from other processes (the Ubuntu installation is also on one partition of the caching disks).

> See the multithreading github ticket (to be addressed after the crypto
> changes, will take quite some time).

Yeap, I saw it weeks ago; I was just asking. We know it's a voluntary effort, so we just have to say "thank you". No hurry.

> If you use ZFS, you could just make a snapshot at some stable
> mountpoint and then do a backup of that. As the contents of the
> snapshot will be stable, it could even take longer than the weekend.

True, but my worry is that I'm facing increasing backup times as the number of files increases. I mean, the distance between the "right now" data and the last complete backup keeps growing (I'm comfortable with one week, less so with a month).

I also explored "zfs diff" to compare snapshots and try to submit to borg just the differences to process. But it's the same story: zfs diff walks the two trees in a single-threaded way and compares them. So, no gain.

I'm also thinking of writing a simple daemon with inotify to keep a list of changes to offer to borg.

Well, thanks a lot for your answers. It's important for me to know that at the moment there is no other way, so I can stop investigating. I bet we can wait for the multithreading improvement. If we can help with tests on our hardware, just write me.

Thanks again,
Andrea

From tw at waldmann-edv.de Thu Mar 11 10:29:59 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 11 Mar 2021 16:29:59 +0100
Subject: [Borgbackup] Stat of looots of files

> HDs: 16x ST10000NM0256 Seagate 10TB 7.2K SAS 12Gbps 3.5 (excluding the
> 2 for ZFS cache)

The 2 for ZFS cache are SSDs?

> I also explored "zfs diff" to compare snapshots and try to submit to
> borg just the differences to process. But it's the same story: zfs
> diff walks the two trees in a single-threaded way and compares them.
> So, no gain.

Not sure if one could use "zfs send" somehow?

> I'm also thinking of writing a simple daemon with inotify to keep a
> list of changes to offer to borg.

borg always creates full backups - all files are contained in the backup archive (even if a file is unchanged, it will be in the archive [== it will have a metadata item in the archive metadata stream] - borg just will not store the unchanged content chunks of it again in the repo). Therefore, knowing a list of changed files does not help, as borg will look at all files anyway to create that **full** metadata stream.

If you'd go away from that, it would totally change the semantics from "always full" to "full or incremental/differential", and you'd have to be very careful when pruning to not kill stuff you still need.
Cheers, Thomas

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From billk at iinet.net.au Thu Mar 11 10:29:25 2021
From: billk at iinet.net.au (William Kenworthy)
Date: Thu, 11 Mar 2021 23:29:25 +0800
Subject: [Borgbackup] Stat of looots of files
In-Reply-To:
References: <2e639d79-9da2-7671-e281-c0a30d4a1f74@waldmann-edv.de>
Message-ID: <5de2d61e-a378-3e3a-4097-14f22bdb1ba5@iinet.net.au>

On 11/3/21 10:59 pm, Andrea Gelmini wrote:
> On Thu, 11 Mar 2021 at 13:56, Thomas Waldmann wrote:
>> Hi Andrea,
> Thanks a lot for your quick and detailed reply.
>
>> You could consider partitioning that set and using multiple
>> repositories. Then you could also run multiple borgs in parallel.

Rather than multiple repos, how about only backing up the new data sets
to the repo each time, and not iterating over the old/stable data sets?
Restoration is a little more complex (i.e., restore the sets from day
1, then add those from day 2, etc.).
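Roughly like this (directory names are made up, assuming each data set
lands in its own dated directory):

    # day 1: archive the first data set
    borg create /backup/repo::sets-2021-03-01 /data/sets/2021-03-01

    # day 2: only the new set gets scanned and archived
    borg create /backup/repo::sets-2021-03-02 /data/sets/2021-03-02

    # restore: extract each archive into the same target directory
    cd /restore
    borg extract /backup/repo::sets-2021-03-01
    borg extract /backup/repo::sets-2021-03-02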
BillK

From liori at exroot.org Tue Mar 16 16:25:16 2021
From: liori at exroot.org (Tomasz Melcer)
Date: Tue, 16 Mar 2021 21:25:16 +0100
Subject: [Borgbackup] Read-only filesystems
In-Reply-To: <87pnpr5myo.fsf@gadsden>
References: <87pnpr5myo.fsf@gadsden>
Message-ID:

Hi,

I've recently found another use case for this functionality.

The inevitable happened and I had to restore my /home from a backup. I
didn't want to accidentally break my local backup (if something
happened, I'd have to fetch my remote one, and that would take some
time), so I initially mounted the filesystem with the borgbackup
repository as read-only. Only then did I realize that, well, even for
a restore operation, the backup directory needs to be writable.

Maybe a switch like BORG_READ_ONLY_FILESYSTEM_IS_OK might actually be
useful?

--
Tomasz Melcer

From tw at waldmann-edv.de Mon Mar 22 18:58:01 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 22 Mar 2021 23:58:01 +0100
Subject: [Borgbackup] borgbackup 1.1.16 released!
Message-ID:

Just released borgbackup 1.1.16 - for details, please see:

https://github.com/borgbackup/borg/releases/tag/1.1.16

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From below at judiz.de Sun Mar 28 15:03:50 2021
From: below at judiz.de (Michael Below)
Date: Sun, 28 Mar 2021 21:03:50 +0200
Subject: [Borgbackup] Backup to current QNAP NAS?
Message-ID: <59ea5afeb7a6a2b55db8b25630f1cec6b566c28f.camel@judiz.de>

Hi,

I am a longtime Debian user and amateur photographer, and I have now
acquired my first NAS and want to set up a proper backup solution. The
NAS is intended for local backups and also for media storage (RAW
images, music, movies - obviously I need a separate backup strategy
for the images, but that is not my current issue).

I am wondering what is currently the best way to send local backups to
a QNAP NAS (TS-451D2, Intel Celeron, 4GB RAM). There seem to be a lot
of different possibilities, which is a bit overwhelming, so I am
looking for "best practice" pointers:

1. Should I install borg on the QNAP QTS system?
   a) As prepackaged by the QNAP community?
      (currently 1.1.14, https://www.qnapclub.eu/de/qpkg/488)
   b) Is there a package by the borg community, for use directly on
      the QNAP QTS system?
   c) It seems possible to install it from source via Entware, but
      there were some issues
      (https://github.com/Entware/Entware-ng/issues/851)
   d) Should I install it in some kind of container (using QNAP
      Container Station)? Docker or LXC?
   e) In some kind of VM (using QNAP Virtualization Station)?

2. Should I install openmediavault / Debian on the QNAP system and run
   borg on that?

3. Should I run borg on the local systems and back up to a remote file
   system?
   a) Backup via NFS?
   b) Backup via SMB?
   c) Some other file system?

4. Some completely different solution, e.g. borg talking to QNAP
   Hybrid Backup Sync?

I would like to set up a low-maintenance system that keeps my data
secure for some years. IMHO Debian has a good track record in this
regard, and it's a known environment, so that would argue for #2.
OTOH, I read about people booting openmediavault permanently from a
USB stick, and that seems sketchy... Plus, QNAP QTS seems to be one of
the better NAS operating systems for the media server part. So I guess
borg in a container would be the preferred solution in that context?
That seems to be less overhead than a VM? Or would you recommend using
the NAS as a "dumb" file system? Some other solution?

I am concerned about software updates - I would prefer a solution with
some kind of easy path to updates/security fixes over a "homebuilt" /
"fire and forget" solution that will become outdated unless I
continuously put in some effort. I like the way Debian handles
updates. I am not sure about QTS, or about updates in a Docker image
(I guess it would have to be rebuilt periodically? Is a VM able to
update itself?).

Any hints are welcome...

Cheers
Michael

From tw at waldmann-edv.de Sun Mar 28 15:38:01 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sun, 28 Mar 2021 21:38:01 +0200
Subject: [Borgbackup] Backup to current QNAP NAS?
In-Reply-To: <59ea5afeb7a6a2b55db8b25630f1cec6b566c28f.camel@judiz.de>
References: <59ea5afeb7a6a2b55db8b25630f1cec6b566c28f.camel@judiz.de>
Message-ID:

I don't use a QNAP, nor do I develop/package for QNAP, so only some
replies:

> a QNAP NAS (TS-451D2, Intel Celeron, 4GB RAM)

4GB is sufficient to run borg on it, but it depends on the amount of
data you have in a repo; see the formula in our docs.

> 1. Should I install borg on the QNAP QTS system?

If possible, and if you do not have so much data that the RAM would
not suffice: yes.

Running client/server borg is often more efficient than just running
the borg client and having the repo on a network filesystem.
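For example (user, host and paths are made up):

    # client/server: "borg serve" runs on the NAS, accessed via ssh,
    # so only the borg protocol goes over the wire
    borg create ssh://borg@qnap/share/backups/repo::{hostname}-{now} ~

    # repo on a network filesystem: every low-level repo access goes
    # through NFS/SMB, which is usually slower and more fragile
    borg create /mnt/qnap/backups/repo::{hostname}-{now} ~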
> a) As prepackaged by the QNAP community?
>    (currently 1.1.14, https://www.qnapclub.eu/de/qpkg/488)

Not the latest, but recent enough. Maybe motivate the packager to
package more often. :-)

> 2. Should I install openmediavault / Debian on the QNAP system and
>    run borg on that?

Debian (and Ubuntu) packages are well-maintained, so that would be a
good way. But please note that once accepted into stable, the borg
version won't be changed and will only receive rather few, rather
critical fixes. So, if you want something recent for borg on Debian,
use the backports repo; for Ubuntu, use the maintainer's PPA. There
you will usually find the latest borg stable release.

> 4. Some completely different solution, e.g. borg talking to QNAP
>    Hybrid Backup Sync?

No idea about that.

Our usual recommendation for redundancy is to make backups to
multiple, separate repositories.

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393