From bkborg at kirk.de Thu Jan 26 09:34:25 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Thu, 26 Jan 2023 15:34:25 +0100
Subject: [Borgbackup] first backup of large amounts
Message-ID:

Hello,
I want to back up a somewhat larger amount of data, approximately 35 TB.
My first tries suggest that 70 GB need about 1 hour to back up, so these
35 TB would take about 2-3 weeks in the given environment. That means the
server would be unavailable for an unacceptably long time. So I wonder
whether it would be possible to split the backup into pieces of max.
400 GB (~6 hours) while still getting a single repository at the end.

What would be the best way to begin the backup?

--
Mit freundlichem Gruß / Best regards
Kirkorowicz


From borgbackup at aluaces.fastmail.com Thu Jan 26 10:10:59 2023
From: borgbackup at aluaces.fastmail.com (borgbackup at aluaces.fastmail.com)
Date: Thu, 26 Jan 2023 16:10:59 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: (Boris Kirkorowicz's message of "Thu, 26 Jan 2023 15:34:25 +0100")
References:
Message-ID: <87sffxe3gc.fsf@eps142.cdf.udc.es>

Boris Kirkorowicz writes:

> So I wonder whether it would be possible to split the backup into
> pieces of max. 400 GB (~6 hours) while still getting a single
> repository at the end.

I think you can just stop the process when that time limit (6 hours) is
reached. A checkpoint is created, so the next time you start the backup,
all the already processed files and chunks will already be there, and
the backup will be resumed implicitly.

Alberto


From tschoening at am-soft.de Thu Jan 26 10:19:43 2023
From: tschoening at am-soft.de (Thorsten Schöning)
Date: Thu, 26 Jan 2023 16:19:43 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To:
References:
Message-ID: <152822182.20230126161943@am-soft.de>

Hello Boris Kirkorowicz,
on Thursday, 26 January 2023 at 15:34 you wrote:

> I want to back up a somewhat larger amount of data, approximately 35
> TB. My first tries suggest that 70 GB need about 1 hour to back up,
> so these 35 TB would take about 2-3 weeks in the given environment.
> That means the server would be unavailable for an unacceptably long
> time.

That needs to be explained in more detail: what kind of data, or what
use case for that data, makes you believe that you need to shut services
down? Concepts like snapshots in file systems like BTRFS and ZFS, or in
volume managers like LVM, are used to keep files consistent even over a
longer period of time.

> So I wonder whether it would be possible to split the backup into
> pieces of max. 400 GB (~6 hours) while still getting a single
> repository at the end.

That use case is supported by creating checkpoints as often as you like.
Though, whenever a backup is started, a complete directory listing is
done, which adds some overhead each time. How much depends heavily on
the number of files, their average size and the like.
> -c SECONDS, --checkpoint-interval SECONDS
>     write checkpoint every SECONDS seconds (Default: 1800)

https://borgbackup.readthedocs.io/en/stable/usage/create.html
https://borgbackup.readthedocs.io/en/stable/faq.html#if-a-backup-stops-mid-way-does-the-already-backed-up-data-stay-there

Best regards

Thorsten Schöning

--
AM-SoFT IT-Service - Bitstore Hameln GmbH
E-Mail: Thorsten.Schoening at AM-SoFT.de
Web: http://www.AM-SoFT.de/
Tel: +49 5151 9468-55


From billk at iinet.net.au Thu Jan 26 22:25:17 2023
From: billk at iinet.net.au (William Kenworthy)
Date: Fri, 27 Jan 2023 11:25:17 +0800
Subject: [Borgbackup] first backup of large amounts
In-Reply-To:
References:
Message-ID: <157b5de4-0745-8683-8646-f51fdaa70908@iinet.net.au>

1. Create the repo. Back up a subset of the data; next time, increase
the size of the subset - the already backed-up data will go through the
fast heuristics, though any new data will be added at the normal
first-time rate. Repeat until the whole dataset is covered.

2. Create a largish number of smaller repos in parallel (ideally using
a number of hosts to maximise throughput - I am doing ~20 at once for
different hosts/data sets into separate repos on a single MooseFS file
system, MUCH faster than trying to do it serially) - then extract and
add everything back into a single main repo. This last bit will be slow.

3. Create a largish number of smaller repos as above, keep them that
way, and manage them by scripts - recommended. This will be more
reliable, faster and less susceptible to corruption across the whole
data set - while corruption is rare, it DOES happen - and the time to
recover a large, single repo is really, really large, to the point that
it's usually quicker to recreate it!

BillK


On 26/1/23 22:34, Boris Kirkorowicz wrote:
> Hello,
> I want to back up a somewhat larger amount of data, approximately 35 TB.
> My first tries suggest that 70 GB need about 1 hour to back up, so
> these 35 TB would take about 2-3 weeks in the given environment. That
> means the server would be unavailable for an unacceptably long time.
> So I wonder whether it would be possible to split the backup into
> pieces of max. 400 GB (~6 hours) while still getting a single
> repository at the end.
>
> What would be the best way to begin the backup?


From bkborg at kirk.de Fri Jan 27 07:29:37 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Fri, 27 Jan 2023 13:29:37 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <87sffxe3gc.fsf@eps142.cdf.udc.es>
References: <87sffxe3gc.fsf@eps142.cdf.udc.es>
Message-ID:

Hello,

On 26.01.23 at 16:10, borgbackup at aluaces.fastmail.com wrote:
> Boris Kirkorowicz writes:
>
>> So I wonder whether it would be possible to split the backup into
>> pieces of max. 400 GB (~6 hours) while still getting a single
>> repository at the end.
>
> I think you can just stop the process when that time limit (6 hours)
> is reached. A checkpoint is created, so the next time you start the
> backup, all the already processed files and chunks will already be
> there, and the backup will be resumed implicitly.

Thanks. That sounds easy - just kill -9 after 6 hours, or is there a
better way?

--
Mit freundlichem Gruß / Best regards
Kirkorowicz


From bkborg at kirk.de Fri Jan 27 07:41:33 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Fri, 27 Jan 2023 13:41:33 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <152822182.20230126161943@am-soft.de>
References: <152822182.20230126161943@am-soft.de>
Message-ID: <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de>

Hello,

On 26.01.23 at 16:19, Thorsten Schöning wrote:
> Hello Boris Kirkorowicz,
> on Thursday, 26 January 2023 at 15:34 you wrote:
>
>> I want to back up a somewhat larger amount of data, approximately 35
>> TB. My first tries suggest that 70 GB need about 1 hour to back up,
>> so these 35 TB would take about 2-3 weeks in the given environment.
>> That means the server would be unavailable for an unacceptably long
>> time.
>
> That needs to be explained in more detail: what kind of data, or what
> use case for that data, makes you believe that you need to shut
> services down?

The server is to be taken offline during backup, 1. to prevent files
from changing, and 2. for security reasons.

> Concepts like snapshots in file systems like BTRFS and ZFS, or in
> volume managers like LVM, are used to keep files consistent even over
> a longer period of time.

The target file system is ext4, no snapshots so far.

>> So I wonder whether it would be possible to split the backup into
>> pieces of max. 400 GB (~6 hours) while still getting a single
>> repository at the end.
>
> That use case is supported by creating checkpoints as often as you
> like. Though, whenever a backup is started, a complete directory
> listing is done, which adds some overhead each time. How much depends
> heavily on the number of files, their average size and the like.
>
>> -c SECONDS, --checkpoint-interval SECONDS
>>     write checkpoint every SECONDS seconds (Default: 1800)
>
> https://borgbackup.readthedocs.io/en/stable/usage/create.html
> https://borgbackup.readthedocs.io/en/stable/faq.html#if-a-backup-stops-mid-way-does-the-already-backed-up-data-stay-there

Thanks. As far as I understand, these checkpoints are created by
default, and setting the interval is optional, to adapt it to individual
preferences - right?
So if borg create is stopped (killed?) while backing up files, the next
time it is invoked it just checks hashes up to the last checkpoint,
which is very fast, and then continues the normal way until it finishes
or is stopped again. Thus, I could simply start borg create at night,
stop it after 6 hours, and repeat this every night until it has got all
files (return code = 0). Did I get it right?

--
Mit freundlichem Gruß / Best regards
Kirkorowicz


From bkborg at kirk.de Fri Jan 27 07:48:38 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Fri, 27 Jan 2023 13:48:38 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <157b5de4-0745-8683-8646-f51fdaa70908@iinet.net.au>
References: <157b5de4-0745-8683-8646-f51fdaa70908@iinet.net.au>
Message-ID: <9289aa97-c196-65f4-9e7e-bc6d366f0fbd@kirk.de>

Hello,

On 27.01.23 at 04:25, William Kenworthy wrote:
> 1. Create the repo. Back up a subset of the data; next time, increase
> the size of the subset - the already backed-up data will go through
> the fast heuristics, though any new data will be added at the normal
> first-time rate. Repeat until the whole dataset is covered.
>
> 2. Create a largish number of smaller repos in parallel (ideally
> using a number of hosts to maximise throughput - I am doing ~20 at
> once for different hosts/data sets into separate repos on a single
> MooseFS file system, MUCH faster than trying to do it serially) -
> then extract and add everything back into a single main repo. This
> last bit will be slow.
>
> 3. Create a largish number of smaller repos as above, keep them that
> way, and manage them by scripts - recommended. This will be more
> reliable, faster and less susceptible to corruption across the whole
> data set - while corruption is rare, it DOES happen - and the time to
> recover a large, single repo is really, really large, to the point
> that it's usually quicker to recreate it!

Thanks, good point. I'll follow this and see how to split the data into
reasonable chunks. But it looks like there will be some parts that
should not be split and will still remain very large.

> BillK
>
> On 26/1/23 22:34, Boris Kirkorowicz wrote:
>> Hello,
>> I want to back up a somewhat larger amount of data, approximately 35
>> TB. My first tries suggest that 70 GB need about 1 hour to back up,
>> so these 35 TB would take about 2-3 weeks in the given environment.
>> That means the server would be unavailable for an unacceptably long
>> time. So I wonder whether it would be possible to split the backup
>> into pieces of max. 400 GB (~6 hours) while still getting a single
>> repository at the end.
>>
>> What would be the best way to begin the backup?

--
Mit freundlichem Gruß / Best regards
Kirkorowicz


From tw at waldmann-edv.de Fri Jan 27 08:45:34 2023
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 27 Jan 2023 14:45:34 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de>
References: <152822182.20230126161943@am-soft.de> <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de>
Message-ID: <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de>

> The server is to be taken offline during backup, 1. to prevent files
> from changing,

You could consider your first loooong backups "dirty" and just not care
about that.

After you have transferred most of the data and things are going
quicker, you could start with "clean" backups.
And after the first clean backup has finished successfully, delete the
dirty ones.

> 2. for security reasons.

Whatever that means?

> Thanks. As far as I understand, these checkpoints are created by
> default, and setting the interval is optional, to adapt it to
> individual preferences - right?

Yes.

> So if borg create is stopped (killed?) while backing up files, the
> next time it is invoked it just checks hashes up to the last
> checkpoint, which is very fast, and then continues the normal way
> until it finishes or is stopped again.

That's not how it works.

borg never transfers chunks it already has in the repo, that's all.
It knows the hashes of all chunks it already has in the repo (via the
chunks index and repo index).

> Thus, I could simply start borg create at night, stop it after 6
> hours, and repeat this every night until it has got all files
> (return code = 0).

Correct. The return code might as well be 1 (warning), which means that
you have to check the logs. rc 2 would be an error.


From tw at waldmann-edv.de Fri Jan 27 08:37:59 2023
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 27 Jan 2023 14:37:59 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To:
References: <87sffxe3gc.fsf@eps142.cdf.udc.es>
Message-ID:

> Thanks. That sounds easy - just
>
> kill -9

No, please NOT -9. Just a normal, gentle kill.

kill -9 terminates a process immediately, without giving it the chance
to cleanly get to an end.


From devzero at web.de Thu Feb 2 11:47:55 2023
From: devzero at web.de (Roland)
Date: Thu, 2 Feb 2023 17:47:55 +0100
Subject: [Borgbackup] introducing new disk imaging concept with borg (including iotop weirdness with blocksync.py)
Message-ID:

Hello,

I started using borg together with blocksync.py from
https://github.com/shodanshok/blocksync to create an intelligent new
image backup tool, which is a space-saver and a disk-write-saver on
repetitive image save and restore (should be good when used with
flash/SSD, for example when restoring notebooks to a default state after
workshop usage and then refreshing the images because of OS/software
updates).

I know this question should perhaps better be brought up in the iotop
project, but I thought somebody here could find this new backup imaging
concept interesting. (See https://github.com/borgbackup/borg/issues/671
- maybe someone would like to peer-review it, help create an ncurses
GUI, or get this integrated into Clonezilla.)

Now my issue: does anybody have a clue why borg is shown with so much
more (58% vs. 0.08%) in the IO column of iotop in comparison to the
blocksync.py process? (Especially as blocksync is doing much more I/O:
it reads the whole disk /dev/sdc uncompressed, whereas borg reads the
same disk image compressed and deduplicated from the disk holding the
repo, which is on /dev/sdb.)

A bug in iotop? A different read block size with borg? It looks totally
weird, I have no clue...

regards
Roland

Total DISK READ:       159.44 M/s | Total DISK WRITE:        19.67 K/s
Current DISK READ:     159.44 M/s | Current DISK WRITE:     149.52 K/s
    TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN      IO COMMAND
   2264 be/4 root        452.83 M      0.00 B  0.00 % 58.78 % borg extract --stdout /backup/test::second
   2263 be/4 root        818.25 M      0.00 B  0.00 %  0.08 % python2 /root/blocksync2.py server /dev/sdc -a sha512 -b 131072 -k 0
    195 be/3 root          0.00 B     20.00 K  0.00 %  0.02 % [jbd2/sda1-8]
   2095 be/4 root          0.00 B      0.00 B  0.00 %  0.01 % [kworker/u8:5-events_unbound]
   2235 be/4 root          0.00 B      0.00 B  0.00 %  0.01 % [kworker/2:0-events]
    232 be/4 root         64.00 K      8.00 K  0.00 %  0.00 % systemd-journald
      1 be/4 root          0.00 B      0.00 B  0.00 %  0.00 % init

# /root/borg extract --stdout /backup/test::second | mbuffer -q | /root/blocksync.py - localhost /dev/sdc
Dry run     : False
Local       : True
Block size  : 128.0 KB
Skipped     : 0 blocks
Hash alg    : sha512
Crypto alg  : aes128-cbc
Compression : False
Read cache  : True
SRC command : /root/blocksync.py - localhost /dev/sdc
DST command : /root/blocksync.py server /dev/sdc -a sha512 -b 131072 -k 0
Synching...
skipped: 0, same: 262144, diff: 0, 262144/0, 150.7 MB/s
Completed in 217 seconds


From tschoening at am-soft.de Thu Feb 2 15:12:11 2023
From: tschoening at am-soft.de (Thorsten Schöning)
Date: Thu, 2 Feb 2023 21:12:11 +0100
Subject: [Borgbackup] introducing new disk imaging concept with borg (including iotop weirdness with blocksync.py)
In-Reply-To:
References:
Message-ID: <1516741965.20230202211211@am-soft.de>

Hello Roland,
on Thursday, 2 February 2023 at 17:47 you wrote:

> Total DISK READ:       159.44 M/s | Total DISK WRITE:        19.67 K/s
> Current DISK READ:     159.44 M/s | Current DISK WRITE:     149.52 K/s
>     TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN      IO COMMAND
>    2264 be/4 root        452.83 M      0.00 B  0.00 % 58.78 % borg extract --stdout /backup/test::second
>    2263 be/4 root        818.25 M      0.00 B  0.00 %  0.08 % python2 /root/blocksync2.py server /dev/sdc -a sha512 -b 131072 -k 0

From my understanding, the IO column is actually showing how limited the
process is by I/O, because it needs to wait on I/O. So high numbers are
a bad thing: your I/O is the bottleneck in those cases. You need to look
at the actual physical discs behind /backup and /dev/sdc - most likely
SSDs performing at different speeds, or /backup even being backed by
some HDD(-RAID?).

Best regards

Thorsten Schöning
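One quick way to confirm which device is the limiting factor is to watch
per-device utilization while the sync runs. A minimal sketch, assuming
the sysstat package's iostat is installed; the device names are the ones
from this thread and may differ on your system:

  # Extended stats every 5 seconds for the two devices involved;
  # a device sitting near 100% in the %util column is the bottleneck.
  iostat -x 5 sdb sdc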
From devzero at web.de Fri Feb 3 06:39:29 2023
From: devzero at web.de (Roland)
Date: Fri, 3 Feb 2023 12:39:29 +0100
Subject: [Borgbackup] introducing new disk imaging concept with borg (including iotop weirdness with blocksync.py)
In-Reply-To: <1516741965.20230202211211@am-soft.de>
References: <1516741965.20230202211211@am-soft.de>
Message-ID: <03111db4-3913-24f5-655c-737f0ccc7bad@web.de>

Hello Thorsten,

you are right, this column is wait-I/O % and not "percentage of I/O
volume". It is as you say: the borg repo was on a slower disk, and after
moving the appropriate virtual disk to the same storage as sdc, things
look totally different.

I should read the man pages more carefully...

Thanks
Roland

On 02.02.23 at 21:12, Thorsten Schöning wrote:
> Hello Roland,
> on Thursday, 2 February 2023 at 17:47 you wrote:
>
>> Total DISK READ:       159.44 M/s | Total DISK WRITE:        19.67 K/s
>> Current DISK READ:     159.44 M/s | Current DISK WRITE:     149.52 K/s
>>     TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN      IO COMMAND
>>    2264 be/4 root        452.83 M      0.00 B  0.00 % 58.78 % borg extract --stdout /backup/test::second
>>    2263 be/4 root        818.25 M      0.00 B  0.00 %  0.08 % python2 /root/blocksync2.py server /dev/sdc -a sha512 -b 131072 -k 0
>
> From my understanding, the IO column is actually showing how limited
> the process is by I/O, because it needs to wait on I/O. So high
> numbers are a bad thing: your I/O is the bottleneck in those cases.
> You need to look at the actual physical discs behind /backup and
> /dev/sdc - most likely SSDs performing at different speeds, or
> /backup even being backed by some HDD(-RAID?).
>
> Best regards
>
> Thorsten Schöning


From bkborg at kirk.de Wed Feb 8 10:20:07 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Wed, 8 Feb 2023 16:20:07 +0100
Subject: [Borgbackup] Restore
Message-ID:

Hi,
problem with restore... What I did:

mount sshfs:
> sshfs backup@192.168.6.2:/homes/backup/backup /mnt/.rs1219a -o IdentityFile=/home/backup/.ssh/id_rsa,debug,sshfs_sync,allow_other,default_permissions,allow_root,auto_unmount &>>/dev/null &

mount cryptfs:
> printf "%s" $CRYPTFSPASS | ecryptfs-add-passphrase
> mount -i $CRYPTDIR

backup:
> borg init -e=none $CRYPTDIR/Kalender
> borg create -v -p --stats $CRYPTDIR/Kalender::'{now:%Y-%m-%d_%H-%M}' $HOME/Dokumente/Sicherungen/Kalender

> sfo2205:~ # ls -l /$CRYPTDIR/
> insgesamt 8
> drwx------ 1 1027 users 4096  8. Feb 16:04 Kalender

check:
> sfo2205:~ # ls -l $CRYPTDIR/Kalender/
> insgesamt 104
> -rwx------ 1 1027 users   209  8. Feb 13:35 config
> drwx------ 1 1027 users  4096  8. Feb 13:35 data
> -rwx------ 1 1027 users    88  8. Feb 13:35 hints.5
> -rwx------ 1 1027 users 41258  8. Feb 13:35 index.5
> -rwx------ 1 1027 users   190  8. Feb 13:35 integrity.5
> -rwx------ 1 1027 users    73  8. Feb 13:35 README

then:
> borg list $CRYPTDIR/Kalender
> 2023-02-08_13-32  Wed, 2023-02-08 13:32:22 [a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042]

of course:
> sfo2205:~ # borg mount /mnt/rs1219a/a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042 /mnt/borg
> Repository /mnt/rs1219a/a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042 does not exist.

What did I do wrong?

--
Mit freundlichem Gruß / Best regards
Kirkorowicz


From borgbackup at aluaces.fastmail.com Wed Feb 8 11:03:33 2023
From: borgbackup at aluaces.fastmail.com (Alberto Luaces)
Date: Wed, 08 Feb 2023 17:03:33 +0100
Subject: [Borgbackup] Restore
In-Reply-To: (Boris Kirkorowicz's message of "Wed, 8 Feb 2023 16:20:07 +0100")
References:
Message-ID: <87v8kcm9ey.fsf@eps142.cdf.udc.es>

Hi, Boris Kirkorowicz writes:

[...]

> backup:
>> borg init -e=none $CRYPTDIR/Kalender
>> borg create -v -p --stats $CRYPTDIR/Kalender::'{now:%Y-%m-%d_%H-%M}' $HOME/Dokumente/Sicherungen/Kalender

[...]

> then:
>> borg list $CRYPTDIR/Kalender
>> 2023-02-08_13-32  Wed, 2023-02-08 13:32:22 [a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042]
>
> of course:
>> sfo2205:~ # borg mount /mnt/rs1219a/a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042 /mnt/borg
>> Repository /mnt/rs1219a/a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042 does not exist.
>
> What did I do wrong?

If you created the archive named Kalender::2023-02-08_13-32, you have to
refer to it as such:

borg mount /path_to/Kalender::2023-02-08_13-32 /mnt/borg


From bkborg at kirk.de Wed Feb 8 13:46:51 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Wed, 8 Feb 2023 19:46:51 +0100
Subject: [Borgbackup] Restore
In-Reply-To: <87v8kcm9ey.fsf@eps142.cdf.udc.es>
References: <87v8kcm9ey.fsf@eps142.cdf.udc.es>
Message-ID: <396c382e-3893-241f-191a-810616a66afe@kirk.de>

Hi,

On 08.02.23 at 17:03, Alberto Luaces wrote:
> Hi, Boris Kirkorowicz writes:
>
> [...]
>
>> backup:
>>> borg init -e=none $CRYPTDIR/Kalender
>>> borg create -v -p --stats $CRYPTDIR/Kalender::'{now:%Y-%m-%d_%H-%M}' $HOME/Dokumente/Sicherungen/Kalender
>
> [...]
>
>> then:
>>> borg list $CRYPTDIR/Kalender
>>> 2023-02-08_13-32  Wed, 2023-02-08 13:32:22 [a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042]
>>
>> of course:
>>> sfo2205:~ # borg mount /mnt/rs1219a/a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042 /mnt/borg
>>> Repository /mnt/rs1219a/a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042 does not exist.
>>
>> What did I do wrong?
>
> If you created the archive named Kalender::2023-02-08_13-32, you have
> to refer to it as such:
>
> borg mount /path_to/Kalender::2023-02-08_13-32 /mnt/borg

I see, thanks.

Do I have to mount and look into all existing repos one by one to find
a certain file, e.g. one created at a certain date/time?

--
Mit freundlichem Gruß / Best regards
Kirkorowicz


From stephen.berg at nrlssc.navy.mil Wed Feb 8 14:04:44 2023
From: stephen.berg at nrlssc.navy.mil (Stephen Berg, Code 7309)
Date: Wed, 8 Feb 2023 13:04:44 -0600
Subject: [Borgbackup] repo and archive size
Message-ID:

Is there an easy way to get 'borg info' to output just the size of a
repo or an individual archive inside that repo? I'm trying to script
getting those two numbers, and borg info returns all sorts of info that
I'm not really interested in, so I'd have to grep through all that to
get the numbers I want.

--
Stephen Berg, IT Specialist, Ocean Sciences Division, Code 7309
Naval Research Laboratory
W: (228) 688-5738  DSN: (312) 823-5738  C: (228) 365-0162
Email: stephen.berg at nrlssc.navy.mil <- (Preferred contact)
Flank Speed: stephen.p.berg.civ at us.navy.mil
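For scripting this, a hedged sketch of one way to pull out single
numbers, assuming the jq tool and borg 1.2's JSON layout (the
.cache.stats.unique_size path matches the answer further down this
thread; the archive-level .archives[0].stats.deduplicated_size path is
an assumption from the same layout; repo path and archive name are
placeholders):

  # Deduplicated size of all unique data in the repo, in bytes:
  borg info --json /path/to/repo | jq '.cache.stats.unique_size'

  # Deduplicated size of a single archive, in bytes:
  borg info --json /path/to/repo::2023-02-08_13-32 \
      | jq '.archives[0].stats.deduplicated_size'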
From peter at crazymonkeys.de Wed Feb 8 15:12:32 2023
From: peter at crazymonkeys.de (Peter Albrecht)
Date: Wed, 8 Feb 2023 21:12:32 +0100
Subject: [Borgbackup] Restore
In-Reply-To: <396c382e-3893-241f-191a-810616a66afe@kirk.de>
References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de>
Message-ID: <5fb97d3b-1619-c6cd-5f78-da53112c5e39@crazymonkeys.de>

Hello Boris,

On 08.02.23 19:46, Boris Kirkorowicz wrote:
> Hi,
>
> On 08.02.23 at 17:03, Alberto Luaces wrote:
>> Hi, Boris Kirkorowicz writes:
>>
>> [...]
>>
>>> backup:
>>>> borg init -e=none $CRYPTDIR/Kalender
>>>> borg create -v -p --stats $CRYPTDIR/Kalender::'{now:%Y-%m-%d_%H-%M}' $HOME/Dokumente/Sicherungen/Kalender
>>
>> [...]
>>
>>> then:
>>>> borg list $CRYPTDIR/Kalender
>>>> 2023-02-08_13-32  Wed, 2023-02-08 13:32:22 [a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042]
>>>
>>> of course:
>>>> sfo2205:~ # borg mount /mnt/rs1219a/a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042 /mnt/borg
>>>> Repository /mnt/rs1219a/a7a60d216705d749901f565e2acbb7d5af544dfbe41b3e9886467eafcaf4d042 does not exist.
>>>
>>> What did I do wrong?
>>
>> If you created the archive named Kalender::2023-02-08_13-32, you
>> have to refer to it as such:
>>
>> borg mount /path_to/Kalender::2023-02-08_13-32 /mnt/borg
>
> I see, thanks.
>
> Do I have to mount and look into all existing repos one by one to
> find a certain file, e.g. one created at a certain date/time?

You can omit the specific archive ID while mounting:

borg mount /path_to/Kalender /mnt/borg

In this case, the whole repository with all archives will be mounted.
Then you can look for versions of your file in this directory with a
tool like "find" (https://manpages.org/find).

Regards,

Peter


From jolson at kth.se Wed Feb 8 15:29:50 2023
From: jolson at kth.se (Jonas Olson)
Date: Wed, 8 Feb 2023 21:29:50 +0100
Subject: [Borgbackup] repo and archive size
In-Reply-To:
References:
Message-ID: <7162a398-3726-7f67-b4e0-dc67f242db63@kth.se>

On 2023-02-08 20:04, Stephen Berg, Code 7309 via Borgbackup wrote:
> Is there an easy way to get 'borg info' to output just the size of a
> repo or an individual archive inside that repo? I'm trying to script
> getting those two numbers, and borg info returns all sorts of info
> that I'm not really interested in, so I'd have to grep through all
> that to get the numbers I want.

Applying the "--json" flag, and then picking out the desired value
using existing JSON tools, might be almost as easy.

borg info --json $REPO | jq .cache.stats.unique_size

Sincerely,
Jonas Olson


From bkborg at kirk.de Thu Feb 9 05:46:36 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Thu, 9 Feb 2023 11:46:36 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de>
References: <152822182.20230126161943@am-soft.de> <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de> <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de>
Message-ID: <4c1a164d-286a-5320-0f51-542c1eee824f@kirk.de>

Hi again,

On 27.01.23 at 14:45, Thomas Waldmann wrote:
>> The server is to be taken offline during backup, 1. to prevent files
>> from changing,
>
> You could consider your first loooong backups "dirty" and just not
> care about that.
>
> After you have transferred most of the data and things are going
> quicker, you could start with "clean" backups.
> And after the first clean backup has finished successfully, delete
> the dirty ones.
>
>> 2. for security reasons.
>
> Whatever that means?
>
>> Thanks. As far as I understand, these checkpoints are created by
>> default, and setting the interval is optional, to adapt it to
>> individual preferences - right?
>
> Yes.
>
>> So if borg create is stopped (killed?) while backing up files, the
>> next time it is invoked it just checks hashes up to the last
>> checkpoint, which is very fast, and then continues the normal way
>> until it finishes or is stopped again.
>
> That's not how it works.
>
> borg never transfers chunks it already has in the repo, that's all.
> It knows the hashes of all chunks it already has in the repo (via the
> chunks index and repo index).
>
>> Thus, I could simply start borg create at night, stop it after 6
>> hours, and repeat this every night until it has got all files
>> (return code = 0).
>
> Correct. The return code might as well be 1 (warning), which means
> that you have to check the logs. rc 2 would be an error.

For now, I invoke several borg create tasks in parallel and kill them
all after about 5 hours. That seems to work fine, and after the first
night, there were about 350 GB of borg data stored. To check, I
connected again to the backup NAS and mounted

borg mount /mnt/<repo>/ /mnt/borg/<repo>/

That also works without error messages. But while some of the
/mnt/borg/<repo>/ contain subdirs and data, some other /mnt/borg/<repo>/
are empty. At the same time, the corresponding /mnt/<repo>/ dir contains
e.g. hundreds of GB of data. What might be wrong here?

--
Mit freundlichem Gruß / Best regards
Kirkorowicz


From tschoening at am-soft.de Fri Feb 10 02:19:53 2023
From: tschoening at am-soft.de (Thorsten Schöning)
Date: Fri, 10 Feb 2023 08:19:53 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <4c1a164d-286a-5320-0f51-542c1eee824f@kirk.de>
References: <152822182.20230126161943@am-soft.de> <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de> <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de> <4c1a164d-286a-5320-0f51-542c1eee824f@kirk.de>
Message-ID: <1905493134.20230210081953@am-soft.de>

Hello Boris Kirkorowicz,
on Thursday, 9 February 2023 at 11:46 you wrote:

> For now, I invoke several borg create tasks in parallel and kill
> them all after about 5 hours. That seems to work fine, and after the
> first night, there were about 350 GB of borg data stored.

You can't back up concurrently into the same repo; borg uses file locks
to prevent that.

> [...] Also keep in mind that Borg will keep an exclusive lock on the
> repository while creating or deleting archives, which may make
> simultaneous backups fail.

https://borgbackup.readthedocs.io/en/stable/faq.html#can-i-backup-from-multiple-servers-into-a-single-repository

> But while some of the /mnt/borg/<repo>/ contain subdirs and data,
> some other /mnt/borg/<repo>/ are empty. At the same time, the
> corresponding /mnt/<repo>/ dir contains e.g. hundreds of GB of data.

From my understanding this is as expected for intermediate checkpoints:
you are only guaranteed to see all files after a successful archive has
been created at some point.

OTOH, your concurrent process might simply be wrong and not behave as
you expect - possibly not backing up some parts of the directory tree
when you expect it to at all, e.g. because the target repo is locked or
the like.
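Whether a given repo already holds a finished archive or only
interrupted runs can be checked directly. A minimal sketch (the repo
path is a placeholder); borg names the archives written at checkpoints
with a ".checkpoint" suffix, so they are easy to tell apart:

  # Completed archives show up under their plain name; interrupted
  # runs leave entries like "2023-02-09_23-00.checkpoint".
  borg list /path/to/repo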
From bkborg at kirk.de Fri Feb 10 03:29:41 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Fri, 10 Feb 2023 09:29:41 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <1905493134.20230210081953@am-soft.de>
References: <152822182.20230126161943@am-soft.de> <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de> <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de> <4c1a164d-286a-5320-0f51-542c1eee824f@kirk.de> <1905493134.20230210081953@am-soft.de>
Message-ID: <50aeefb6-fe9f-0313-de84-17ce94849dc7@kirk.de>

Hi,

On 10.02.23 at 08:19, Thorsten Schöning wrote:
> Hello Boris Kirkorowicz,
> on Thursday, 9 February 2023 at 11:46 you wrote:
>
>> For now, I invoke several borg create tasks in parallel and kill
>> them all after about 5 hours. That seems to work fine, and after the
>> first night, there were about 350 GB of borg data stored.
>
> You can't back up concurrently into the same repo; borg uses file
> locks to prevent that.

I know, and this is not what I did. Each task writes to its own repo:

## $1: Instance number NR
## $2: Source Q
## $3: Target Repository ZR
## $4: borg options BO
> borgrun()
> {
> [...]
>     borg init -e=none $ZR &>> $LOG$NR
> [...]
>     borg create $BO $ZR::'{now:%Y-%m-%d_%H-%M}' $Q &>> $LOG$NR
> [...]
> }
[...]
> borgrun "1" "/home/backup" "$CRYPTDIR/Backup" "-v --stats --show-rc" & > borgrun "2" "/home/bianca" "$CRYPTDIR/Bianca" "-v --stats --show-rc" & > borgrun "3" "/home/boris" "$CRYPTDIR/Boris" "-v --stats --show-rc" & > borgrun "4" "/home/de131567" "$CRYPTDIR/MBI" "-v --stats --show-rc" & > borgrun "5" "/home/groups" "$CRYPTDIR/Groups" "-v --stats --show-rc" & > borgrun "6" "/home/hv" "$CRYPTDIR/HV" "-v --stats --show-rc" & > borgrun "7" "/home/kirk" "$CRYPTDIR/Kirk" "-v --stats --show-rc" & > borgrun "8" "/home/robert" "$CRYPTDIR/Robert" "-v --stats --show-rc" & > borgrun "9" "/home/sabine" "$CRYPTDIR/Sabine" "-v --stats --show-rc" & I hope I understand it right -these tasks should not disturb each other. >> But while some of the /mnt/borg/ contain subdirs and data, >> some other /mnt/borg/ are empty. At the same time, the >> corresponding /mnt// dir contains e.g. hundreds of GB of data. > > From my understanding this is as expected for intermediate > checkpoints, you are only guaranteed to see all files after a > successful archive has been created at some point. Ah, OK. After the second nightly backup, some more borg mounted repos show dirs and files, and the repos that appear empty are those which are ended by SIGTERM (rc: 143). So I assume that I have to wait a few weeks until these tasks complete within the given time by themselves (rc: 0) without being stopped by a kill command. Right? Of course, nicer would be to get them usable earlier... > OTOH, your concurrent process might simply be wrong and not behave as > you expect. Possibly not backing up some parts of the directory tree > when you expect it at all, e.g. because the target repo is locker or > stuff. Therefore I posted the main lines from my script. Are they OK? -- Mit freundlichem Gru? Best regards ? Kirkorowicz From tschoening at am-soft.de Fri Feb 10 03:50:35 2023 From: tschoening at am-soft.de (=?utf-8?Q?Thorsten_Sch=C3=B6ning?=) Date: Fri, 10 Feb 2023 09:50:35 +0100 Subject: [Borgbackup] first backup of large amounts In-Reply-To: <50aeefb6-fe9f-0313-de84-17ce94849dc7@kirk.de> References: <152822182.20230126161943@am-soft.de> <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de> <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de> <4c1a164d-286a-5320-0f51-542c1eee824f@kirk.de> <1905493134.20230210081953@am-soft.de> <50aeefb6-fe9f-0313-de84-17ce94849dc7@kirk.de> Message-ID: <107591783.20230210095035@am-soft.de> Guten Tag Boris Kirkorowicz, am Freitag, 10. Februar 2023 um 09:29 schrieben Sie: > Of course, nicer would be to get them usable earlier... This isn't as easy as it sounds, because files might simply not have been transferred fully at all. But you might have a look at what is available so far: > Note: the checkpointing mechanism creates hidden, partial files in > an archive, so that checkpoints even work while a big file is being > processed. They are named .borg_part_ and all > operations usually ignore these files, but you can make them > considered by giving the option --consider-part-files. You usually > only need that option if you are really desperate (e.g. if you have > no completed backup of that file and you?ld rather get a partial > file extracted than nothing). You do not want to give that option > under any normal circumstances. https://borgbackup.readthedocs.io/en/stable/faq.html#if-a-backup-stops-mid-way-does-the-already-backed-up-data-stay-there > Therefore I posted the main lines from my script. Are they OK? Looks good to me. 
From bkborg at kirk.de Fri Feb 10 10:40:16 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Fri, 10 Feb 2023 16:40:16 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <107591783.20230210095035@am-soft.de>
References: <152822182.20230126161943@am-soft.de> <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de> <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de> <4c1a164d-286a-5320-0f51-542c1eee824f@kirk.de> <1905493134.20230210081953@am-soft.de> <50aeefb6-fe9f-0313-de84-17ce94849dc7@kirk.de> <107591783.20230210095035@am-soft.de>
Message-ID: <96dafe5e-c13d-009e-ee89-fd96ed09bf5b@kirk.de>

Hi,

On 10.02.23 at 09:50, Thorsten Schöning wrote:
> This isn't as easy as it sounds, because files might simply not have
> been transferred fully at all. But you might have a look at what is
> available so far:
>
>> Note: the checkpointing mechanism creates hidden, partial files in
>> an archive, so that checkpoints even work while a big file is being
>> processed. They are named <filename>.borg_part_<N> and all
>> operations usually ignore these files, but you can make them
>> considered by giving the option --consider-part-files. You usually
>> only need that option if you are really desperate (e.g. if you have
>> no completed backup of that file and you'd rather get a partial
>> file extracted than nothing). You do not want to give that option
>> under any normal circumstances.
>
> https://borgbackup.readthedocs.io/en/stable/faq.html#if-a-backup-stops-mid-way-does-the-already-backed-up-data-stay-there

Since I am not desperate, just curious, I wonder whether this would
change anything in my repos. Does it harm or destroy anything?

--
Mit freundlichem Gruß / Best regards
Kirkorowicz


From tschoening at am-soft.de Fri Feb 10 11:21:04 2023
From: tschoening at am-soft.de (Thorsten Schöning)
Date: Fri, 10 Feb 2023 17:21:04 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <96dafe5e-c13d-009e-ee89-fd96ed09bf5b@kirk.de>
References: <152822182.20230126161943@am-soft.de> <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de> <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de> <4c1a164d-286a-5320-0f51-542c1eee824f@kirk.de> <1905493134.20230210081953@am-soft.de> <50aeefb6-fe9f-0313-de84-17ce94849dc7@kirk.de> <107591783.20230210095035@am-soft.de> <96dafe5e-c13d-009e-ee89-fd96ed09bf5b@kirk.de>
Message-ID: <687275107.20230210172104@am-soft.de>

Hello Boris Kirkorowicz,
on Friday, 10 February 2023 at 16:40 you wrote:

>>> https://borgbackup.readthedocs.io/en/stable/faq.html#if-a-backup-stops-mid-way-does-the-already-backed-up-data-stay-there
>
>> Since I am not desperate, just curious, I wonder whether this would
>> change anything in my repos. Does it harm or destroy anything?

Not that I'm aware of. It's an option that only additionally shows
things that are already available, or does nothing if those special
part files aren't there for some reason.
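For the record, the option is a global flag placed before the
subcommand. A minimal sketch with placeholder paths (the checkpoint
archive name is hypothetical; part files only exist inside checkpoint
archives that were written while a big file was in flight):

  # Mount including partial .borg_part_* files:
  borg --consider-part-files mount /path/to/repo /mnt/borg/test

  # Or list a checkpoint archive including part files:
  borg --consider-part-files list /path/to/repo::2023-02-09_23-00.checkpoint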
From bkborg at kirk.de Fri Feb 10 15:16:05 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Fri, 10 Feb 2023 21:16:05 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <687275107.20230210172104@am-soft.de>
References: <152822182.20230126161943@am-soft.de> <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de> <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de> <4c1a164d-286a-5320-0f51-542c1eee824f@kirk.de> <1905493134.20230210081953@am-soft.de> <50aeefb6-fe9f-0313-de84-17ce94849dc7@kirk.de> <107591783.20230210095035@am-soft.de> <96dafe5e-c13d-009e-ee89-fd96ed09bf5b@kirk.de> <687275107.20230210172104@am-soft.de>
Message-ID: <39ae0d41-4008-0675-c84b-6b5b7175898c@kirk.de>

Hello,

On 10.02.23 at 17:21, Thorsten Schöning wrote:
> Hello Boris Kirkorowicz,
> on Friday, 10 February 2023 at 16:40 you wrote:
>
>>>> https://borgbackup.readthedocs.io/en/stable/faq.html#if-a-backup-stops-mid-way-does-the-already-backed-up-data-stay-there
>
>>> Since I am not desperate, just curious, I wonder whether this would
>>> change anything in my repos. Does it harm or destroy anything?
>
> Not that I'm aware of. It's an option that only additionally shows
> things that are already available, or does nothing if those special
> part files aren't there for some reason.

I tried

> mkdir /mnt/borg/test
> borg --consider-part-files mount /path/to/repo /mnt/borg/test

but /mnt/borg/test is still empty.

--
Mit freundlichem Gruß / Best regards
Kirkorowicz


From tschoening at am-soft.de Sat Feb 11 09:12:54 2023
From: tschoening at am-soft.de (Thorsten Schöning)
Date: Sat, 11 Feb 2023 15:12:54 +0100
Subject: [Borgbackup] Fwd: Re: first backup of large amounts
In-Reply-To: <39ae0d41-4008-0675-c84b-6b5b7175898c@kirk.de>
References: <152822182.20230126161943@am-soft.de> <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de> <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de> <4c1a164d-286a-5320-0f51-542c1eee824f@kirk.de> <1905493134.20230210081953@am-soft.de> <50aeefb6-fe9f-0313-de84-17ce94849dc7@kirk.de> <107591783.20230210095035@am-soft.de> <96dafe5e-c13d-009e-ee89-fd96ed09bf5b@kirk.de> <687275107.20230210172104@am-soft.de> <39ae0d41-4008-0675-c84b-6b5b7175898c@kirk.de>
Message-ID: <611696884.20230211151254@am-soft.de>

This is a forwarded message
From:    Boris Kirkorowicz
To:      Thorsten Schöning
Date:    Friday, 10 February 2023, 21:16
Subject: [Borgbackup] first backup of large amounts

===8<=================== Original message text ===================
Hello,

On 10.02.23 at 17:21, Thorsten Schöning wrote:
> Not that I'm aware of. It's an option that only additionally shows
> things that are already available, or does nothing if those special
> part files aren't there for some reason.

I tried

> mkdir /mnt/borg/test
> borg --consider-part-files mount /path/to/repo /mnt/borg/test

but /mnt/borg/test is still empty.

===8<============== End of original message text =============

Hello,

Best regards

Thorsten Schöning
From bkborg at kirk.de Sun Feb 12 16:14:10 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Sun, 12 Feb 2023 22:14:10 +0100
Subject: [Borgbackup] first backup of large amounts
In-Reply-To: <39ae0d41-4008-0675-c84b-6b5b7175898c@kirk.de>
References: <152822182.20230126161943@am-soft.de> <20b44d2b-1a56-679d-8d0a-a0b1ed5bfdd9@kirk.de> <6c14b9ea-09fb-8814-e832-4721bc55aa2e@waldmann-edv.de> <4c1a164d-286a-5320-0f51-542c1eee824f@kirk.de> <1905493134.20230210081953@am-soft.de> <50aeefb6-fe9f-0313-de84-17ce94849dc7@kirk.de> <107591783.20230210095035@am-soft.de> <96dafe5e-c13d-009e-ee89-fd96ed09bf5b@kirk.de> <687275107.20230210172104@am-soft.de> <39ae0d41-4008-0675-c84b-6b5b7175898c@kirk.de>
Message-ID:

On 10.02.23 at 17:21, Thorsten Schöning wrote:
> Hello Boris Kirkorowicz,
> on Friday, 10 February 2023 at 16:40 you wrote:
>
>>>> https://borgbackup.readthedocs.io/en/stable/faq.html#if-a-backup-stops-mid-way-does-the-already-backed-up-data-stay-there
>
>>> Since I am not desperate, just curious, I wonder whether this would
>>> change anything in my repos. Does it harm or destroy anything?
>
> Not that I'm aware of. It's an option that only additionally shows
> things that are already available, or does nothing if those special
> part files aren't there for some reason.

I tried

> mkdir /mnt/borg/test
> borg --consider-part-files mount /path/to/repo /mnt/borg/test

but /mnt/borg/test is still empty.

--
Mit freundlichem Gruß / Best regards
Kirkorowicz


From tw at waldmann-edv.de Wed Feb 15 11:48:21 2023
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Wed, 15 Feb 2023 17:48:21 +0100
Subject: [Borgbackup] Restore
In-Reply-To: <396c382e-3893-241f-191a-810616a66afe@kirk.de>
References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de>
Message-ID: <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de>

> Do I have to mount and look into all existing repos one by one to
> find a certain file, e.g. one created at a certain date/time?

You can use the -o versions mount option to get a "merged" view of all
backups. Instead of a file, you'll see a directory, and inside it all
the distinct versions of that file.


From bkborg at kirk.de Wed Feb 15 13:13:07 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Wed, 15 Feb 2023 19:13:07 +0100
Subject: [Borgbackup] Restore
In-Reply-To: <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de>
References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de> <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de>
Message-ID: <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de>

Hi,

On 15.02.23 at 17:48, Thomas Waldmann wrote:
>> Do I have to mount and look into all existing repos one by one to
>> find a certain file, e.g. one created at a certain date/time?
>
> You can use the -o versions mount option to get a "merged" view of
> all backups. Instead of a file, you'll see a directory, and inside it
> all the distinct versions of that file.

Not here: the mount dir still remains empty.

> borg mount -o versions /mnt/rs1219/SRV /mnt/borg/SRV
>
> ls -l /mnt/borg/SRV/
> insgesamt 0
>
> ls -l /mnt/rs1219a/SRV/
> insgesamt 2636
> -rwx------ 1 1027 users     209 14. Feb 00:16 config
> drwx------ 1 1027 users    4096 14. Feb 00:16 data
> -rwx------ 1 1027 users    1181 14. Feb 05:39 hints.111
> -rwx------ 1 1027 users 2621498 14. Feb 05:40 index.111
> -rwx------ 1 1027 users     190 14. Feb 05:40 integrity.111
> -rwx------ 1 1027 users      50 15. Feb 2023  lock.roster
> -rwx------ 1 1027 users      73 14. Feb 00:16 README

What might I be doing wrong?
From ndbecker2 at gmail.com Wed Feb 15 16:47:39 2023
From: ndbecker2 at gmail.com (Neal Becker)
Date: Wed, 15 Feb 2023 16:47:39 -0500
Subject: [Borgbackup] Restore
In-Reply-To: <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de>
References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de> <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de> <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de>
Message-ID:

Interesting, let's try it:

borg mount -o versions nbecker@nbecker8:BACKUP/nbecker0 ~/borgmnt/

Nope, I don't see multiple versions of files.

[nbecker@nbecker0 s-band]$ borg --version
borg 1.2.3
[nbecker@nbecker0 s-band]$ ssh nbecker8 borg --version
borg 1.2.3

On Wed, Feb 15, 2023 at 1:13 PM Boris Kirkorowicz wrote:
> Hi,
>
> On 15.02.23 at 17:48, Thomas Waldmann wrote:
> >> Do I have to mount and look into all existing repos one by one to find a
> >> certain file, e.g. created at a certain date/time?
> >
> > You can use the "-o versions" mount option to get a "merged" view of all
> > backups. Instead of a file, you'll see a directory, and inside it all
> > the distinct versions of that file.
>
> Not here: the mount dir still remains empty.
>
> > borg mount -o versions /mnt/rs1219/SRV /mnt/borg/SRV
> >
> > ls -l /mnt/borg/SRV/
> > insgesamt 0
> >
> > ls -l /mnt/rs1219a/SRV/
> > insgesamt 2636
> > -rwx------ 1 1027 users     209 14. Feb 00:16 config
> > drwx------ 1 1027 users    4096 14. Feb 00:16 data
> > -rwx------ 1 1027 users    1181 14. Feb 05:39 hints.111
> > -rwx------ 1 1027 users 2621498 14. Feb 05:40 index.111
> > -rwx------ 1 1027 users     190 14. Feb 05:40 integrity.111
> > -rwx------ 1 1027 users      50 15. Feb 2023  lock.roster
> > -rwx------ 1 1027 users      73 14. Feb 00:16 README
>
> What might I do wrong?
>
> --
> Mit freundlichem Gruß                              Best regards
>                                                    Kirkorowicz
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup

--
*Those who don't understand recursion are doomed to repeat it*
-------------- next part --------------
An HTML attachment was scrubbed...
From bkborg at kirk.de Thu Feb 16 11:47:43 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Thu, 16 Feb 2023 17:47:43 +0100
Subject: [Borgbackup] Restore
In-Reply-To:
References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de> <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de> <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de>
Message-ID: <88af258e-fdc0-6f4b-4951-769aeff10727@kirk.de>

Hi,

now I tried the following:

> sfo2205:~ # borg --show-rc check --repository-only /mnt/rs1219a/SRV
> Old config file not securely erased on previous config update
> Local Exception
> Traceback (most recent call last):
>   File "/usr/lib64/python3.10/site-packages/borg/archiver.py", line 5168, in main
>     exit_code = archiver.run(args)
>   File "/usr/lib64/python3.10/site-packages/borg/archiver.py", line 5099, in run
>     return set_ec(func(args))
>   File "/usr/lib64/python3.10/site-packages/borg/archiver.py", line 183, in wrapper
>     return method(self, args, repository=repository, **kwargs)
>   File "/usr/lib64/python3.10/site-packages/borg/archiver.py", line 343, in do_check
>     if not repository.check(repair=args.repair, save_space=args.save_space, max_duration=args.max_duration):
>   File "/usr/lib64/python3.10/site-packages/borg/repository.py", line 1026, in check
>     self.save_config(self.path, self.config)
>   File "/usr/lib64/python3.10/site-packages/borg/repository.py", line 332, in save_config
>     secure_erase(old_config_path, avoid_collateral_damage=True)
>   File "/usr/lib64/python3.10/site-packages/borg/helpers/fs.py", line 199, in secure_erase
>     with open(path, 'r+b') as fd:
> OSError: [Errno 5] Input/output error: '/mnt/rs1219a/SRV/config.old'
>
> Platform: Linux sfo2205 6.1.7-1-default #1 SMP PREEMPT_DYNAMIC Wed Jan 18 11:12:34 UTC 2023 (872045c) x86_64
> Linux: Unknown Linux
> Borg: 1.2.3  Python: CPython 3.10.9  msgpack: 1.0.4  fuse: pyfuse3 3.2.2 [pyfuse3,llfuse]
> PID: 30721  CWD: /root
> sys.argv: ['/usr/bin/borg', '--show-rc', 'check', '--repository-only', '/mnt/rs1219a/SRV']
> SSH_ORIGINAL_COMMAND: None
>
> terminating with error status, rc 2

> sfo2205:~ # borg --show-rc check --verify-data /mnt/rs1219a/SRV
> Old config file not securely erased on previous config update
> Local Exception
> Traceback (most recent call last):
>   File "/usr/lib64/python3.10/site-packages/borg/archiver.py", line 5168, in main
>     exit_code = archiver.run(args)
>   File "/usr/lib64/python3.10/site-packages/borg/archiver.py", line 5099, in run
>     return set_ec(func(args))
>   File "/usr/lib64/python3.10/site-packages/borg/archiver.py", line 183, in wrapper
>     return method(self, args, repository=repository, **kwargs)
>   File "/usr/lib64/python3.10/site-packages/borg/archiver.py", line 343, in do_check
>     if not repository.check(repair=args.repair, save_space=args.save_space, max_duration=args.max_duration):
>   File "/usr/lib64/python3.10/site-packages/borg/repository.py", line 1026, in check
>     self.save_config(self.path, self.config)
>   File "/usr/lib64/python3.10/site-packages/borg/repository.py", line 332, in save_config
>     secure_erase(old_config_path, avoid_collateral_damage=True)
>   File "/usr/lib64/python3.10/site-packages/borg/helpers/fs.py", line 199, in secure_erase
>     with open(path, 'r+b') as fd:
> OSError: [Errno 5] Input/output error: '/mnt/rs1219a/SRV/config.old'
>
> Platform: Linux sfo2205 6.1.7-1-default #1 SMP PREEMPT_DYNAMIC Wed Jan 18 11:12:34 UTC 2023 (872045c) x86_64
> Linux: Unknown Linux
> Borg: 1.2.3  Python: CPython 3.10.9  msgpack: 1.0.4  fuse: pyfuse3 3.2.2 [pyfuse3,llfuse]
> PID: 30724  CWD: /root
> sys.argv: ['/usr/bin/borg', '--show-rc', 'check', '--verify-data', '/mnt/rs1219a/SRV']
> SSH_ORIGINAL_COMMAND: None
>
> terminating with error status, rc 2

What does this mean?

The borg backup task was invoked using

> borg create -v --stats --show-rc -C lzma,9 /mnt/rs1219a/SRV::'{now:%Y-%m-%d_%H-%M}' /srv -e /srv/home

and was ended by killall (SIGTERM) with rc 143 after about 5 hours
(start and kill via cron job).

On 15.02.23 at 22:47, Neal Becker wrote:
> Interesting, let's try it:
> borg mount -o versions nbecker@nbecker8:BACKUP/nbecker0 ~/borgmnt/
> Nope, I don't see multiple versions of files.
> [nbecker@nbecker0 s-band]$ borg --version
> borg 1.2.3
> [nbecker@nbecker0 s-band]$ ssh nbecker8 borg --version
> borg 1.2.3
>
> On Wed, Feb 15, 2023 at 1:13 PM Boris Kirkorowicz wrote:
>
>     Hi,
>
>     On 15.02.23 at 17:48, Thomas Waldmann wrote:
>     >> Do I have to mount and look into all existing repos one by one
>     to find a
>     >> certain file, e.g. created at a certain date/time?
>     >
>     > You can use the "-o versions" mount option to get a "merged" view of all
>     > backups. Instead of a file, you'll see a directory, and inside
>     it all
>     > the distinct versions of that file.
>
>     Not here: the mount dir still remains empty.
>
>     > borg mount -o versions /mnt/rs1219/SRV /mnt/borg/SRV
>     >
>     > ls -l /mnt/borg/SRV/
>     > insgesamt 0
>     >
>     > ls -l /mnt/rs1219a/SRV/
>     > insgesamt 2636
>     > -rwx------ 1 1027 users     209 14. Feb 00:16 config
>     > drwx------ 1 1027 users    4096 14. Feb 00:16 data
>     > -rwx------ 1 1027 users    1181 14. Feb 05:39 hints.111
>     > -rwx------ 1 1027 users 2621498 14. Feb 05:40 index.111
>     > -rwx------ 1 1027 users     190 14. Feb 05:40 integrity.111
>     > -rwx------ 1 1027 users      50 15. Feb 2023  lock.roster
>     > -rwx------ 1 1027 users      73 14. Feb 00:16 README
>
>     What might I do wrong?
>
>     --
>     Mit freundlichem Gruß                              Best regards
>                                                        Kirkorowicz
>     _______________________________________________
>     Borgbackup mailing list
>     Borgbackup at python.org
>     https://mail.python.org/mailman/listinfo/borgbackup
>
> --
> /Those who don't understand recursion are doomed to repeat it/

--
Mit freundlichem Gruß                              Best regards
                                                   Kirkorowicz
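A gentler variant of that cron setup, sketched under the assumption that GNU coreutils timeout is available: SIGINT is what borg receives on Ctrl-C, and the periodic checkpoints (every 1800 s by default) are what the next run resumes from. The script name nightly-borg.sh is hypothetical.

    #!/bin/sh
    # nightly-borg.sh: give borg at most 6 hours, then interrupt it with
    # SIGINT instead of SIGTERM via killall; the last checkpoint lets the
    # next nightly run continue where this one stopped
    timeout --signal=INT 6h \
        borg create -v --stats --show-rc --checkpoint-interval 1800 -C lzma,9 \
        '/mnt/rs1219a/SRV::{now:%Y-%m-%d_%H-%M}' /srv -e /srv/home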
From tw at waldmann-edv.de Wed Feb 22 18:47:26 2023
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 23 Feb 2023 00:47:26 +0100
Subject: [Borgbackup] Restore
In-Reply-To: <88af258e-fdc0-6f4b-4951-769aeff10727@kirk.de>
References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de> <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de> <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de> <88af258e-fdc0-6f4b-4951-769aeff10727@kirk.de>
Message-ID: <486fbc45-fd1e-b4f8-ea6f-5bfd694b0eec@waldmann-edv.de>

>> OSError: [Errno 5] Input/output error: '/mnt/rs1219a/SRV/config.old'

That fs / device is not working correctly.

That is an error "below" borg.
From bkborg at kirk.de Thu Feb 23 14:53:24 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Thu, 23 Feb 2023 20:53:24 +0100
Subject: [Borgbackup] Restore
In-Reply-To: <486fbc45-fd1e-b4f8-ea6f-5bfd694b0eec@waldmann-edv.de>
References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de> <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de> <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de> <88af258e-fdc0-6f4b-4951-769aeff10727@kirk.de> <486fbc45-fd1e-b4f8-ea6f-5bfd694b0eec@waldmann-edv.de>
Message-ID: <7c690c16-ab01-0460-d2ea-58a28e213e22@kirk.de>

Hi,

On 23.02.23 at 00:47, Thomas Waldmann wrote:
>>> OSError: [Errno 5] Input/output error: '/mnt/rs1219a/SRV/config.old'
>
> That fs / device is not working correctly.
>
> That is an error "below" borg.

I wonder why, because this only occurs with archives that were created
when borg was terminated by a kill command. Those that ended normally
(rc=0 instead of rc=143) don't show this. Additionally, the file is
readable without problems:

> sfo2205:~ # cat /mnt/rs1219a/SRV/config.old
> [repository]
> version = 1
> segments_per_dir = 1000
> max_segment_size = 524288000
> append_only = 0
> storage_quota = 0
> additional_free_space = 0
> id = b784edb64520690cceba896f6809da733766ee870b11d55145e8383e072927b8

as is the other one, without '.old':

> sfo2205:~ # cat /mnt/rs1219a/SRV/config
> [repository]
> version = 1
> segments_per_dir = 1000
> max_segment_size = 524288000
> append_only = 0
> storage_quota = 0
> additional_free_space = 0
> id = b784edb64520690cceba896f6809da733766ee870b11d55145e8383e072927b8

What might have caused this?

--
Mit freundlichem Gruß                              Best regards
                                                   Kirkorowicz

From tw at waldmann-edv.de Sun Feb 26 19:40:19 2023
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 27 Feb 2023 01:40:19 +0100
Subject: [Borgbackup] borgbackup 2.0.0 beta5 released!
Message-ID: <48d22336-f7fb-a73c-e6e3-700eb09448f8@waldmann-edv.de>

borgbackup 2.0.0 beta5 was just released, please help testing!

Please read the changelog and docs. This is not for production and not
directly compatible with old repos.

https://github.com/borgbackup/borg/releases/tag/2.0.0b5

From tw at waldmann-edv.de Wed Mar 1 16:12:02 2023
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Wed, 1 Mar 2023 22:12:02 +0100
Subject: [Borgbackup] Get Paid by Google to Work on Borg-related Open Source Projects this Summer
Message-ID:

See there:

https://www.reddit.com/r/BorgBackup/comments/11ffz2v/get_paid_by_google_to_work_on_borgrelated_open/

From lazyvirus at gmx.com Thu Mar 2 18:21:55 2023
From: lazyvirus at gmx.com (Bzzzz)
Date: Fri, 3 Mar 2023 00:21:55 +0100
Subject: [Borgbackup] A small compression test
Message-ID: <20230303002155.52ba1a2a@msi.defcon1.lan>

Hi folks,

I was reviewing old posts from this ML and found an answer from Thomas
Waldmann about a disk that had problems and trashed a BB repo, in which
he told me that the 'zlib' compression type was too old for modern
machines. Spoiler: he was right.

As I had a bit of free time, I made some manual tests with programs
available from the command line - as with any test, they're worth what
they're worth ;-p)

They were conducted on a piece of an old XP 32-bit 13 GB VM file which
was truncated to its first 1.3 GB (IIRC, there's a lot going on at the
beginning of an M$ "product").

So, if this ML accepts attachments (?) and they may help somebody make
a choice, here they are.

Jean-Yves
-------------- next part --------------
A non-text attachment was scrubbed...
Name: BORBACKUP_COMPRESSION_TESTS.explic
Type: application/octet-stream
Size: 7051 bytes
Desc: not available
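For repeating such measurements with borg itself rather than standalone compressors, one scratch repo per algorithm is needed: chunk IDs are computed over the uncompressed data, so a second run against the same repo would deduplicate against already-stored chunks and store almost nothing. A rough sketch, assuming borg 1.2 and /data/sample as a placeholder:

    for c in lz4 zstd,3 zstd,9 zlib,6 lzma,6; do
        repo="/tmp/bench-$c"
        borg init --encryption=none "$repo"
        # --stats reports original, compressed and deduplicated sizes
        /usr/bin/time -v borg create -C "$c" --stats "$repo::test" /data/sample
    done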
From tschoening at am-soft.de Tue Mar 7 03:31:40 2023
From: tschoening at am-soft.de (Thorsten Schöning)
Date: Tue, 7 Mar 2023 09:31:40 +0100
Subject: [Borgbackup] A small compression test
In-Reply-To: <20230303002155.52ba1a2a@msi.defcon1.lan>
References: <20230303002155.52ba1a2a@msi.defcon1.lan>
Message-ID: <154779022.20230307093140@am-soft.de>

Hello Bzzzz,
on Friday, 3 March 2023 at 00:21, you wrote:

> So, if this ML accepts attachments (?) and they may help somebody make
> a choice, here they are.

Hi everyone,

thanks for sharing these numbers; I found them pretty interesting, and
they made me reconsider which compression mode to use. I tried "zstd,3"
now and see somewhat strange results for my two use cases: backing up
many individual files within a host vs. few large files of VM images
and database dumps.

According to your numbers I would have expected backup times to
increase by about a factor of 4, but for some hosts I see a factor of
16! What makes me especially wonder is that things seem to scale with
the number of files being backed up and not with the amount of actual
data to store. The latter is MUCH larger for my VM images and database
dumps, but their processing times mostly didn't increase at all.

From my understanding, compression times shouldn't depend on the number
of files or such, but only on the amount of data to be compressed and
how well it can be compressed, shouldn't they?

Beyond changing compression for new archives, I've recompressed all
existing ones as well, to test how much space is saved, how slow things
are etc. Especially the repos containing many individual host files
weren't that slow to recompress.

> sudo borgmatic --config ~/.config/borgmatic.d/[...].yaml --verbosity 2 borg recreate --recompress

Though, might that have introduced some problem? E.g. because creating
new archives additionally scales with the compression mode of existing
archives? From my understanding of how Borg works I would say no,
because deduplication is independent of compression, and compression is
really only applied when reading or writing actual chunks of data.

The following is an example of processing times with the new
compression mode. All the mentioned files are pretty small, mostly text
stuff, and the multiple hours covered by this log excerpt are already
FAR LONGER than the overall backup of that host took before changing to
"zstd,3". :-)
> borg create --exclude-from /tmp/tmp4fo42514 --compression zstd,3
> --numeric-ids --files-cache ctime,size --remote-path borg-1.2
> --umask 7 --list --filter AME- --debug --show-rc --umask 0007
> bts.ams-sbox:bak_borg/hosts/bts.ams::de.am-soft.potsdam.potsdam-{utcnow:%Y%m%dT%H%M%SZ}
> /home/ams_d_bak_borg/.borgmatic
> /mnt/ams_d_bak_borg/backup/ams.pdm.pdm/root_wo_dbs
> [2023-03-06 11:22:49,502] INFO: M /[...]/home/jenkins/jobs/Libs Java AMS/scm-polling.log
> [2023-03-06 11:26:04,195] INFO: M /[...]/home/jenkins/jobs/Libs Perl AMS/scm-polling.log
> [2023-03-06 11:32:40,381] INFO: M /[...]/home/jenkins/jobs/Smart-Metering (Bin)/scm-polling.log
> [2023-03-06 11:55:09,610] INFO: M /[...]/home/jenkins/jobs/Smart-Metering (Src)/scm-polling.log
> [2023-03-06 11:55:09,782] INFO: Remote: check_free_space: required bytes 1035718058, free bytes 1639959844864
> [2023-03-06 11:55:10,259] INFO: security: saving state for c0bb9740b53cd69ac70b2414879f0d80280b3a1c08353e56693a1a6f578b7251 to /home/ams_d_bak_borg/.config/borg/security/c0bb9740b53cd69ac70b2414879f0d80280b3a1c08353e56693a1a6f578b7251
> [2023-03-06 11:55:10,259] INFO: security: current location ssh://bts.ams-sbox/./bak_borg/hosts/bts.ams
> [2023-03-06 11:55:21,159] INFO: security: key type 0
> [2023-03-06 11:55:21,159] INFO: security: manifest timestamp 2023-03-06T10:55:09.625553
> [2023-03-06 11:55:21,159] INFO: Remote: Verified integrity of /home/bak_borg/hosts/bts.ams/index.59682
> [2023-03-06 11:55:21,159] INFO: Remote: Cleaned up 0 uncommitted segment files (== everything after segment 59682).
> [2023-03-06 11:55:21,161] INFO: Remote: Verified integrity of /home/bak_borg/hosts/bts.ams/hints.59682
> [2023-03-06 11:58:43,238] INFO: M /[...]/home/jenkins/jobs/de.am_soft.docbeam.raw/scm-polling.log
> [2023-03-06 12:03:14,950] INFO: M /[...]/home/jenkins/jobs/fwbuilder/scm-polling.log
> [2023-03-06 12:08:14,870] INFO: M /[...]/home/jenkins/jobs/de.am_soft.docbeam.egvp_int/scm-polling.log
> [2023-03-06 12:08:52,546] INFO: M /[...]/home/jenkins/jobs/de.am_soft.docbeam.printing.d/scm-polling.log
> [2023-03-06 12:08:54,241] INFO: M /[...]/home/jenkins/logs/tasks/Workspace clean-up.log
> [2023-03-06 12:08:54,285] INFO: M /[...]/home/jenkins/logs/tasks/Fingerprint cleanup.log.5
> [2023-03-06 12:08:54,329] INFO: M /[...]/home/jenkins/logs/tasks/Fingerprint cleanup.log.4
> [2023-03-06 12:08:54,373] INFO: M /[...]/home/jenkins/logs/tasks/Fingerprint cleanup.log.3
> [2023-03-06 12:08:54,417] INFO: M /[...]/home/jenkins/logs/tasks/Fingerprint cleanup.log.2
> [2023-03-06 12:08:54,461] INFO: M /[...]/home/jenkins/logs/tasks/Fingerprint cleanup.log.1
> [2023-03-06 12:08:54,505] INFO: M /[...]/home/jenkins/logs/tasks/Fingerprint cleanup.log
> [2023-03-06 12:08:54,549] INFO: M /[...]/home/jenkins/logs/tasks/Periodic background build discarder.log.5
> [2023-03-06 12:08:54,597] INFO: M /[...]/home/jenkins/logs/tasks/Periodic background build discarder.log.4
> [2023-03-06 12:08:54,645] INFO: M /[...]/home/jenkins/logs/tasks/Periodic background build discarder.log.3
> [2023-03-06 12:08:54,650] INFO: M /[...]/home/jenkins/logs/tasks/Periodic background build discarder.log.2
> [2023-03-06 12:08:54,697] INFO: M /[...]/home/jenkins/logs/tasks/Periodic background build discarder.log.1
> [2023-03-06 12:08:54,745] INFO: M /[...]/home/jenkins/logs/tasks/Periodic background build discarder.log
> [2023-03-06 12:10:35,270] INFO: M /[...]/home/jenkins/updates/default.json
> [2023-03-06 12:10:35,348] INFO: M /[...]/home/jenkins/updates/hudson.tasks.Maven.MavenInstaller
> [2023-03-06 12:10:35,397] INFO: M /[...]/home/jenkins/updates/hudson.tasks.Ant.AntInstaller
> [2023-03-06 12:10:35,445] INFO: M /[...]/home/jenkins/updates/hudson.tools.JDKInstaller
> [2023-03-06 12:10:36,130] INFO: M /[...]/home/jenkins/.owner
> [2023-03-06 12:38:17,991] INFO: M /[...]/home/sm-mtg/sp1/de.am_soft.sm_mtg/de.am_soft.sm_mtg.backend.open_vpn/data/.svn/wc.db
> [2023-03-06 12:38:18,166] INFO: Remote: check_free_space: required bytes 1035718138, free bytes 1639958956032
> [2023-03-06 12:38:18,895] INFO: security: saving state for c0bb9740b53cd69ac70b2414879f0d80280b3a1c08353e56693a1a6f578b7251 to /home/ams_d_bak_borg/.config/borg/security/c0bb9740b53cd69ac70b2414879f0d80280b3a1c08353e56693a1a6f578b7251
> [2023-03-06 12:38:18,895] INFO: security: current location ssh://bts.ams-sbox/./bak_borg/hosts/bts.ams
> [2023-03-06 12:38:30,078] INFO: security: key type 0
> [2023-03-06 12:38:30,078] INFO: security: manifest timestamp 2023-03-06T11:38:18.014040
> [2023-03-06 12:38:30,078] INFO: Remote: Verified integrity of /home/bak_borg/hosts/bts.ams/index.59686
> [2023-03-06 12:38:30,078] INFO: Remote: Cleaned up 0 uncommitted segment files (== everything after segment 59686).
> [2023-03-06 12:38:30,080] INFO: Remote: Verified integrity of /home/bak_borg/hosts/bts.ams/hints.59686
> [2023-03-06 12:38:30,129] INFO: M /[...]/home/sm-mtg/sp1/de.am_soft.sm_mtg/de.am_soft.sm_mtg.backend.open_vpn/data/.svn/wc.db-journal
> [2023-03-06 12:38:30,497] INFO: M /[...]/home/sm-mtg/sp1/de.am_soft.sm_mtg/de.am_soft.sm_mtg.backend.open_vpn/data/IPC-CL/1.lock
> [2023-03-06 15:49:35,014] INFO: M /[...]/tmp/ams_cookies/db_login/referenz/system/ZMAIWOVEQVVLWIFYUNSFRDUKCJTQMKWIITNCQGQDJVSQKHBGKD.pag
> [2023-03-06 15:49:35,145] INFO: Remote: check_free_space: required bytes 1035718218, free bytes 1639958796800
> [2023-03-06 15:49:35,718] INFO: security: saving state for c0bb9740b53cd69ac70b2414879f0d80280b3a1c08353e56693a1a6f578b7251 to /home/ams_d_bak_borg/.config/borg/security/c0bb9740b53cd69ac70b2414879f0d80280b3a1c08353e56693a1a6f578b7251
> [2023-03-06 15:49:35,718] INFO: security: current location ssh://bts.ams-sbox/./bak_borg/hosts/bts.ams
> [2023-03-06 15:49:47,080] INFO: security: key type 0
> [2023-03-06 15:49:47,081] INFO: security: manifest timestamp 2023-03-06T14:49:35.026277
> [2023-03-06 15:49:47,081] INFO: Remote: Verified integrity of /home/bak_borg/hosts/bts.ams/index.59690
> [2023-03-06 15:49:47,081] INFO: Remote: Cleaned up 0 uncommitted segment files (== everything after segment 59690).
> [2023-03-06 15:49:47,082] INFO: Remote: Verified integrity of /home/bak_borg/hosts/bts.ams/hints.59690
> [2023-03-06 15:49:47,785] INFO: A /[...]/tmp/hsperfdata_tomcat/1024
> [2023-03-06 15:49:47,834] INFO: A /[...]/tmp/hsperfdata_root/1324
> [2023-03-06 15:49:48,329] INFO: A /[...]/tmp/hsperfdata_sm-mtg/1717
> [2023-03-06 15:49:48,377] INFO: A /[...]/tmp/hsperfdata_sm-mtg/1716

Does anyone have any idea what might be the problem here? Thanks!

Best regards
Thorsten Schöning

--
AM-SoFT IT-Service - Bitstore Hameln GmbH
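One hedged way to narrow down where the time goes (REPO and the path are placeholders, assuming borg 1.2): a dry run walks the file tree and applies excludes but writes no chunks, so comparing it with a real run roughly separates traversal cost from chunking/compression/storage cost.

    # walk the tree only; nothing is chunked or written
    time borg create --dry-run REPO::probe /path/to/files

    # full run over the same input for comparison
    time borg create --stats REPO::probe /path/to/files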
From bkborg at kirk.de Tue Mar 7 04:34:05 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Tue, 7 Mar 2023 10:34:05 +0100
Subject: [Borgbackup] A small compression test
In-Reply-To: <154779022.20230307093140@am-soft.de>
References: <20230303002155.52ba1a2a@msi.defcon1.lan> <154779022.20230307093140@am-soft.de>
Message-ID:

Hello,

On 07.03.23 at 09:31, Thorsten Schöning wrote:
> Hello Bzzzz,
> on Friday, 3 March 2023 at 00:21, you wrote:
>
>> So, if this ML accepts attachments (?) and they may help somebody make
>> a choice, here they are.
>
> Hi everyone,
>
> thanks for sharing these numbers; I found them pretty interesting, and
> they made me reconsider which compression mode to use. I tried "zstd,3"
> now and see somewhat strange results for my two use cases: backing up
> many individual files within a host vs. few large files of VM images
> and database dumps.

Is it possible to change the compression mode while using the same
(large) repo? And, if it is possible, does it make sense?

> According to your numbers I would have expected backup times to
> increase by about a factor of 4, but for some hosts I see a factor of
> 16! What makes me especially wonder is that things seem to scale with
> the number of files being backed up and not with the amount of actual
> data to store. The latter is MUCH larger for my VM images and database
> dumps, but their processing times mostly didn't increase at all.
>
> From my understanding, compression times shouldn't depend on the number
> of files or such, but only on the amount of data to be compressed and
> how well it can be compressed, shouldn't they?

I am not sure if I understand you right. Here is what I see: I am
backing up a large amount of data, which will take several weeks. So I
interrupt the nightly backup every morning via a kill command, meaning
borg runs about 6 hours every night. On some days it is possible to let
borg run longer, say for 20 hours. While during normal nights (~6 h)
the throughput is about 11~14 GB/h of compressed data, the longer
sessions (~20 h) reach 19~20 GB/h. That means e.g. 73 GB in 5:45 h vs.
421 GB in 21:55 h.

--
Mit freundlichem Gruß                              Best regards
                                                   Kirkorowicz
From lazyvirus at gmx.com Tue Mar 7 07:47:40 2023
From: lazyvirus at gmx.com (Bzzzz)
Date: Tue, 7 Mar 2023 13:47:40 +0100
Subject: [Borgbackup] A small compression test
In-Reply-To: <154779022.20230307093140@am-soft.de>
References: <20230303002155.52ba1a2a@msi.defcon1.lan> <154779022.20230307093140@am-soft.de>
Message-ID: <20230307134740.54493c99@msi.defcon1.lan>

On Tue, 7 Mar 2023 09:31:40 +0100 Thorsten Schöning wrote:

> Hello Bzzzz,
> on Friday, 3 March 2023 at 00:21, you wrote:
>
> > So, if this ML accepts attachments (?) and they may help somebody make
> > a choice, here they are.
>
> Hi everyone,
>
> thanks for sharing these numbers; I found them pretty interesting, and
> they made me reconsider which compression mode to use. Tried with
> "zstd,3" now and see somewhat strange results with my two use cases of
> backing up many individual files within a host vs. few large files of
> VM images and database dumps.
>
> According to your numbers I would have expected backup times to
> increase by about a factor of 4, but for some hosts I see a factor of 16!

As said, these tests were conducted on the first 1.3 GB of a VBox VM
using window$ XP, as this area is generally well filled. In my eyes, it
represents a good example of (very) different data, some of which can
compress very well, some absolutely not - so a discrepancy is logical
with machines that store much more redundant files into a BB compressed
repo.

> What makes me especially wonder is that things seem to scale with the
> number of files being backed up and not with the amount of actual data
> to store.

No, it scales because the compressed/stored data possesses much more
redundant parts. Compression is about scanning a file (or files)
multiple times, spotting identical 'strings' in it/them, writing a
dictionary that holds the correspondence between these 'strings' and
the (much shorter) special codes that will replace them, and replacing
each 'string' with its special code in the file.

> The latter is MUCH more for my VM-images and database dumps, but their
> processing times mostly didn't increase at all.

This probably means you either have a lot of empty/zeroed space in them
and/or only a little data has changed - remember, large files are
chunked, specifically to only back up new chunks (think ZFS snapshots),
not the whole VM - both of them do not represent much data from one
backup to another and compress really fast with a very high ratio.

[…]

> Does anyone have any idea what might be the problem here? Thanks!

Nope.

Jean-Yves

From tschoening at am-soft.de Tue Mar 7 08:18:35 2023
From: tschoening at am-soft.de (Thorsten Schöning)
Date: Tue, 7 Mar 2023 14:18:35 +0100
Subject: [Borgbackup] A small compression test
In-Reply-To: <20230303002155.52ba1a2a@msi.defcon1.lan>
References: <20230303002155.52ba1a2a@msi.defcon1.lan> <154779022.20230307093140@am-soft.de>
Message-ID: <333734883.20230307141835@am-soft.de>

Hello Boris Kirkorowicz,
on Tuesday, 7 March 2023 at 10:34, you wrote:

> Is it possible to change the compression mode while using the same
> (large) repo?

Yes, you can change the compression mode per invocation of "borg
create". Compression is maintained per chunk of data, not per repo or
archive.

> If some specific chunk was once compressed and stored into the repo,
> creating another backup that also uses this chunk will not change
> the stored chunk. So if you use different compression specs for the
> backups, whichever stores a chunk first determines its compression.
https://manpages.debian.org/testing/borgbackup/borg-compression.1.en.html

> [...]New compression settings will only be applied to new chunks,
> not existing chunks. So it's safe to change them.

https://borgbackup.readthedocs.io/en/stable/faq.html#can-i-safely-change-the-compression-level-or-algorithm

> And, if it is possible, does it make sense?

That depends on the actual data you have, of course; compared to lz4,
(re-)compressing with zstd might save you additional storage.

> I am not sure if I understand you right.[...]

After changing from lz4 to zstd, things are slower than expected for
backing up many individual, mostly small files compared to very few but
large ones. I reverted some of the affected backups and am pretty
interested to see the backup times tomorrow. Though, I don't understand
what I see right now and therefore asked for some advice.

Best regards
Thorsten Schöning

--
AM-SoFT IT-Service - Bitstore Hameln GmbH
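A concrete sketch of the quoted behaviour, assuming borg 1.2 and REPO as a placeholder: after switching -C, old chunks keep their old compression until they are explicitly rewritten, which mirrors the borgmatic-driven recreate mentioned earlier in the thread.

    # new backups compress new chunks with zstd,3; existing chunks stay as-is
    borg create -C zstd,3 REPO::'{now}' /path/to/data

    # rewrite already-stored chunks to match -C (can take long on big repos)
    borg recreate --recompress -C zstd,3 REPO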
From tschoening at am-soft.de Tue Mar 7 08:24:00 2023
From: tschoening at am-soft.de (Thorsten Schöning)
Date: Tue, 7 Mar 2023 14:24:00 +0100
Subject: [Borgbackup] A small compression test
In-Reply-To: <20230307134740.54493c99@msi.defcon1.lan>
References: <20230303002155.52ba1a2a@msi.defcon1.lan> <154779022.20230307093140@am-soft.de> <20230307134740.54493c99@msi.defcon1.lan>
Message-ID: <975039186.20230307142400@am-soft.de>

Hello Bzzzz,
on Tuesday, 7 March 2023 at 13:47, you wrote:

> As said, these tests were conducted on the first 1.3 GB of a VBox VM
> using window$ XP, as this area is generally well filled. In my eyes, it
> represents a good example of (very) different data, some of which can
> compress very well, some absolutely not - so a discrepancy is logical
> with machines that store much more redundant files into a BB compressed
> repo.

That's exactly what I thought as well, and why I considered changing my
compression settings.

> No, it scales because the compressed/stored data possesses much more
> redundant parts.

We are talking about two different things: with scaling I mean that
processing times seem to increase with the number of processed files
instead of the amount of processed data. The systems with many
individually pretty small files perform worst compared to backing up
e.g. my VMs.

> This probably means you either have a lot of empty/zeroed space in them
> and/or only a little data has changed[...]

This doesn't apply here: the data in VMs and databases didn't change
drastically from one day to another. Borg always scans the whole file
anyway and doesn't take differences between ZFS-level snapshots into
account.

Best regards
Thorsten Schöning

--
AM-SoFT IT-Service - Bitstore Hameln GmbH

From lazyvirus at gmx.com Tue Mar 7 08:42:17 2023
From: lazyvirus at gmx.com (Bzzzz)
Date: Tue, 7 Mar 2023 14:42:17 +0100
Subject: [Borgbackup] A small compression test
In-Reply-To: <975039186.20230307142400@am-soft.de>
References: <20230303002155.52ba1a2a@msi.defcon1.lan> <154779022.20230307093140@am-soft.de> <20230307134740.54493c99@msi.defcon1.lan> <975039186.20230307142400@am-soft.de>
Message-ID: <20230307144217.12e36bab@msi.defcon1.lan>

On Tue, 7 Mar 2023 14:24:00 +0100 Thorsten Schöning wrote:

> Hello Bzzzz,
> on Tuesday, 7 March 2023 at 13:47, you wrote:
>
> > No, it scales because the compressed/stored data possesses much more
> > redundant parts.
>
> We are talking about two different things: with scaling I mean that
> processing times seem to increase with the number of processed files
> instead of the amount of processed data. The systems with many
> individually pretty small files perform worst compared to backing up
> e.g. my VMs.

Normal: it is single-threaded _and_ you have a lot more files to scan,
to compare to what's in the repo and, eventually, compress.

> > This probably means you either have a lot of empty/zeroed space in
> > them and/or only a little data has changed[...]
>
> This doesn't apply here: the data in VMs and databases didn't change
> drastically from one day to another. Borg always scans the whole file
> anyway and doesn't take differences between ZFS-level snapshots into
> account.

I meant: think about only adding changed VM chunks to the repo - for
the sake of the example, let's say your VM is chopped into 100 chunks;
if the changes are located in chunks 4, 34, 67, 68 and 69, then BB will
only store these chunks after comparing their checksums to those
already stored in the repo - this is blazing fast.

Jean-Yves

From tschoening at am-soft.de Tue Mar 7 09:18:47 2023
From: tschoening at am-soft.de (Thorsten Schöning)
Date: Tue, 7 Mar 2023 15:18:47 +0100
Subject: [Borgbackup] A small compression test
In-Reply-To: <20230307144217.12e36bab@msi.defcon1.lan>
References: <20230303002155.52ba1a2a@msi.defcon1.lan> <154779022.20230307093140@am-soft.de> <20230307134740.54493c99@msi.defcon1.lan> <975039186.20230307142400@am-soft.de> <20230307144217.12e36bab@msi.defcon1.lan>
Message-ID: <678094907.20230307151847@am-soft.de>

Hello Bzzzz,
on Tuesday, 7 March 2023 at 14:42, you wrote:

> Normal: it is single-threaded _and_ you have a lot more files to scan,
> to compare to what's in the repo and, eventually, compress.

The only change I'm aware of was lz4 to zstd, and that doesn't
influence scan performance for changed files; that should be like
before. It only influences CPU load and compression time of changed
data.

> I meant: think about only adding changed VM chunks to the repo[...]

The changes per day to the VM images are larger than the changes to the
individually backed-up files. So if X GiB is pretty fast for VM images
and database dumps, I'm wondering why (X-Y) GiB of data is that slow
when backing up individual files. That doesn't make too much sense.

Best regards
Thorsten Schöning

--
AM-SoFT IT-Service - Bitstore Hameln GmbH
From lazyvirus at gmx.com Tue Mar 7 13:00:12 2023 From: lazyvirus at gmx.com (Bzzzz) Date: Tue, 7 Mar 2023 19:00:12 +0100 Subject: [Borgbackup] A small compression test In-Reply-To: <678094907.20230307151847@am-soft.de> References: <20230303002155.52ba1a2a@msi.defcon1.lan> <154779022.20230307093140@am-soft.de> <20230307134740.54493c99@msi.defcon1.lan> <975039186.20230307142400@am-soft.de> <20230307144217.12e36bab@msi.defcon1.lan> <678094907.20230307151847@am-soft.de> Message-ID: <20230307190012.35769c91@msi.defcon1.lan> On Tue, 7 Mar 2023 15:18:47 +0100 Thorsten Sch?ning wrote: > Guten Tag Bzzzz, > am Dienstag, 7. M?rz 2023 um 14:42 schrieben Sie: > > > Normal : it is single threaded _and_ you have a lot more files to > > scan, to compare to what's in the repo and, eventually, compress. > > The only change I'm aware of was lz4 to zstd and that doesn't > influence scan performance for changed files, that should be like > before. It only influences CPU load and compression time of changed > data. It does, as you have more compressed files in a BB file, so checksums are read faster than with lz4 because they're more concentrated. > > I meant think about only add changed VM chunks to the repo[...] > > The changes per day to the VM images are larger than the changes to > the individually backed up files. So if X GiB are pretty fast for > VM-images and database dumps, I'm wondering why (X-Y) GiB of data is > that slow when backing up individual files. That doesn't make too much > sense. I reformulate to see if I understand correctly : * VM images & DB dumps are many GB of changed data and backup fast, * regular smaller files are not that often changed but backup slower. If I have to make a guess, I'd say that if a very few readings on either the client and the server, you have all what's needed for a VM/DB, when for regular files, that might not dwell into the same BB file and different areas on the HDD of the client, there's many more head movements (hence latency), plus BB have to calculate many more checksums when files are small than when they are made of big chunks. Jean-Yves From bkborg at kirk.de Tue Mar 7 13:25:37 2023 From: bkborg at kirk.de (Boris Kirkorowicz) Date: Tue, 7 Mar 2023 19:25:37 +0100 Subject: [Borgbackup] Restore In-Reply-To: <05949e75-e639-9cd9-db90-6d68eb48d5c2@waldmann-edv.de> References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de> <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de> <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de> <88af258e-fdc0-6f4b-4951-769aeff10727@kirk.de> <486fbc45-fd1e-b4f8-ea6f-5bfd694b0eec@waldmann-edv.de> <7c690c16-ab01-0460-d2ea-58a28e213e22@kirk.de> <05949e75-e639-9cd9-db90-6d68eb48d5c2@waldmann-edv.de> Message-ID: <16770842-c5ed-7d86-0667-5f56d246dae6@kirk.de> Hello, Am 07.03.23 um 17:44 schrieb Thomas Waldmann: >>>>> OSError: [Errno 5] Input/output error: '/mnt/rs1219a/SRV/config.old' > > On what kind of filesystem and storage device is that file? it's ext4 running on an Synology NAS RS1219+, mounted via sshfs. > Anything special in "dmesg" output (kernel log) after that? After mounting or after borg run? -- Mit freundlichem Gru? Best regards ? 
Kirkorowicz From bkborg at kirk.de Tue Mar 7 14:42:22 2023 From: bkborg at kirk.de (Boris Kirkorowicz) Date: Tue, 7 Mar 2023 20:42:22 +0100 Subject: [Borgbackup] Restore In-Reply-To: <83d57f30-f222-07bd-3ebd-27da65ae4a54@waldmann-edv.de> References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de> <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de> <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de> <88af258e-fdc0-6f4b-4951-769aeff10727@kirk.de> <83d57f30-f222-07bd-3ebd-27da65ae4a54@waldmann-edv.de> Message-ID: <9a6c3811-9ed7-84e0-1204-9b7e5d59b2db@kirk.de> Hello, Am 07.03.23 um 17:09 schrieb Thomas Waldmann: > And at 3. it ran into an issue on your system: > >>> ? File "/usr/lib64/python3.10/site-packages/borg/repository.py", line >>> 332, in save_config >>> ??? secure_erase(old_config_path, avoid_collateral_damage=True) >>> ? File "/usr/lib64/python3.10/site-packages/borg/helpers/fs.py", line >>> 199, in secure_erase >>> ??? with open(path, 'r+b') as fd: >>> OSError: [Errno 5] Input/output error: '/mnt/rs1219a/SRV/config.old' > As I already noted, an I/O error is something below borg. > > Often, it is a hardware issue, but it could be also a malfunctioning > driver or filesystem. I wonder why it only occurs when borg ended via SIGTERM. Just an idea, to find out if any race conditions could influence this: is it possible to insert little pauses between these three steps? If yes, I need detailed instructions how to do this, since I never wrote a single python line. -- Mit freundlichem Gru? Best regards ? Kirkorowicz From billk at iinet.net.au Wed Mar 8 00:03:59 2023 From: billk at iinet.net.au (William Kenworthy) Date: Wed, 8 Mar 2023 13:03:59 +0800 Subject: [Borgbackup] A small compression test In-Reply-To: <20230307190012.35769c91@msi.defcon1.lan> References: <20230303002155.52ba1a2a@msi.defcon1.lan> <154779022.20230307093140@am-soft.de> <20230307134740.54493c99@msi.defcon1.lan> <975039186.20230307142400@am-soft.de> <20230307144217.12e36bab@msi.defcon1.lan> <678094907.20230307151847@am-soft.de> <20230307190012.35769c91@msi.defcon1.lan> Message-ID: <164d9371-fb75-2773-4a8f-e5e4610ad822@iinet.net.au> From experience: ??? 1. borg repos on a network file system (moosefs in my case) can be very very slow ??? 2. Borg has to read a complete VM image before it can calculate checksums - and if you store the VM's on a network filesystem it is time consuming just to read 500Mb of data in one image let alone process it and then have to go on to do a number of other images. ??? 3. Consider if you can avoid large VM's and use the OS files natively on a filesystem/partition, or backup the inside of the VM rather than the image - the borg algorithms skip files they see as not having changed from metadata (but does do a safety recheck after a certain number of runs - see docs).? VM's by their nature have to be read in their entirety every time to figure out what has changed, even if its just one byte of data in it.? I have found that reading a VM images` contents a much faster operation after the first time.? If (as in my case) both the VM's and the repos are on a network filesystem, you will need to carefully consider where the work (reading files and calculating checksums) is to be done - reading multiple VM and storage images of many hundreds of megabytes will take time and cant be avoided.? The good news is borg is still faster than most other backup systems even in this scenario. ??? 4. 
Consider paralleling as much as possible - running borgbackup on multiple hosts pushing into individual repos at the same time takes only a little longer than doing 1 backup. e.g. doing it serially is 1+1+1+1 etc., while parallel would be something like 1.5 in total.? Note that in my case, this is also leveraging the internal parallelisation of moosefs running on a number of separate hosts. ** I found I reached the limits of my moosefs filesystem storing decades of email, hundreds of thousands of photos, borg repos and other files which it did quite well until I went too far for my hardware :(? Moving millions of smaller files in to loopback mounted images solved that problem, at the expense of blowing out a 15 minute backup sequence to many hours.? Backing up the files by reading into the image made quite a large timesaving. BillK On 8/3/23 02:00, Bzzzz wrote: > On Tue, 7 Mar 2023 15:18:47 +0100 > Thorsten Sch?ning wrote: > >> Guten Tag Bzzzz, >> am Dienstag, 7. M?rz 2023 um 14:42 schrieben Sie: >> >>> Normal : it is single threaded _and_ you have a lot more files to >>> scan, to compare to what's in the repo and, eventually, compress. >> The only change I'm aware of was lz4 to zstd and that doesn't >> influence scan performance for changed files, that should be like >> before. It only influences CPU load and compression time of changed >> data. > It does, as you have more compressed files in a BB file, so checksums > are read faster than with lz4 because they're more concentrated. > >>> I meant think about only add changed VM chunks to the repo[...] >> The changes per day to the VM images are larger than the changes to >> the individually backed up files. So if X GiB are pretty fast for >> VM-images and database dumps, I'm wondering why (X-Y) GiB of data is >> that slow when backing up individual files. That doesn't make too much >> sense. > I reformulate to see if I understand correctly : > * VM images & DB dumps are many GB of changed data and backup fast, > * regular smaller files are not that often changed but backup slower. > > If I have to make a guess, I'd say that if a very few readings on > either the client and the server, you have all what's needed for a > VM/DB, when for regular files, that might not dwell into the same BB > file and different areas on the HDD of the client, there's many more > head movements (hence latency), plus BB have to calculate many more > checksums when files are small than when they are made of big chunks. > > Jean-Yves > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From lazyvirus at gmx.com Wed Mar 8 00:19:22 2023 From: lazyvirus at gmx.com (Bzzzz) Date: Wed, 8 Mar 2023 06:19:22 +0100 Subject: [Borgbackup] A small compression test In-Reply-To: <164d9371-fb75-2773-4a8f-e5e4610ad822@iinet.net.au> References: <20230303002155.52ba1a2a@msi.defcon1.lan> <154779022.20230307093140@am-soft.de> <20230307134740.54493c99@msi.defcon1.lan> <975039186.20230307142400@am-soft.de> <20230307144217.12e36bab@msi.defcon1.lan> <678094907.20230307151847@am-soft.de> <20230307190012.35769c91@msi.defcon1.lan> <164d9371-fb75-2773-4a8f-e5e4610ad822@iinet.net.au> Message-ID: <20230308061922.6b50ff49@msi.defcon1.lan> On Wed, 8 Mar 2023 13:03:59 +0800 William Kenworthy wrote: Thanks for the advices :) Jean-Yves > From experience: > > ??? 1. borg repos on a network file system (moosefs in my case) can > be very very slow > > ??? 2. 
Borg has to read a complete VM image before it can calculate > checksums - and if you store the VM's on a network filesystem it is > time consuming just to read 500Mb of data in one image let alone > process it and then have to go on to do a number of other images. > > ??? 3. Consider if you can avoid large VM's and use the OS files > natively on a filesystem/partition, or backup the inside of the VM > rather than the image - the borg algorithms skip files they see as > not having changed from metadata (but does do a safety recheck after > a certain number of runs - see docs).? VM's by their nature have to > be read in their entirety every time to figure out what has changed, > even if its just one byte of data in it.? I have found that reading a > VM images` contents a much faster operation after the first time.? If > (as in my case) both the VM's and the repos are on a network > filesystem, you will need to carefully consider where the work > (reading files and calculating checksums) is to be done - reading > multiple VM and storage images of many hundreds of megabytes will > take time and cant be avoided.? The good news is borg is still faster > than most other backup systems even in this scenario. > > ??? 4. Consider paralleling as much as possible - running borgbackup > on multiple hosts pushing into individual repos at the same time > takes only a little longer than doing 1 backup. e.g. doing it > serially is 1+1+1+1 etc., while parallel would be something like 1.5 > in total.? Note that in my case, this is also leveraging the internal > parallelisation of moosefs running on a number of separate hosts. > > ** I found I reached the limits of my moosefs filesystem storing > decades of email, hundreds of thousands of photos, borg repos and > other files which it did quite well until I went too far for my > hardware :(? Moving millions of smaller files in to loopback mounted > images solved that problem, at the expense of blowing out a 15 minute > backup sequence to many hours.? Backing up the files by reading into > the image made quite a large timesaving. > > BillK From tw at waldmann-edv.de Wed Mar 8 09:25:00 2023 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 8 Mar 2023 15:25:00 +0100 Subject: [Borgbackup] Restore In-Reply-To: <16770842-c5ed-7d86-0667-5f56d246dae6@kirk.de> References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de> <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de> <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de> <88af258e-fdc0-6f4b-4951-769aeff10727@kirk.de> <486fbc45-fd1e-b4f8-ea6f-5bfd694b0eec@waldmann-edv.de> <7c690c16-ab01-0460-d2ea-58a28e213e22@kirk.de> <05949e75-e639-9cd9-db90-6d68eb48d5c2@waldmann-edv.de> <16770842-c5ed-7d86-0667-5f56d246dae6@kirk.de> Message-ID: <607fe004-8df3-1523-fa7c-9dd81ee1edce@waldmann-edv.de> > Am 07.03.23 um 17:44 schrieb Thomas Waldmann: >>>>>> OSError: [Errno 5] Input/output error: '/mnt/rs1219a/SRV/config.old' >> >> On what kind of filesystem and storage device is that file? > > it's ext4 running on an Synology NAS RS1219+, mounted via sshfs. OK, than you likely have discovered a bug in sshfs (where it does not fully emulate a normal fs's behaviour). >> Anything special in "dmesg" output (kernel log) after that? > > After mounting or after borg run? I meant directly after you encountered the input/output error. But as sshfs is a FUSE (userspace) filesystem, I guess you won't find anything in the kernel logs. 
From tschoening at am-soft.de Fri Mar 10 10:25:11 2023 From: tschoening at am-soft.de (=?utf-8?Q?Thorsten_Sch=C3=B6ning?=) Date: Fri, 10 Mar 2023 16:25:11 +0100 Subject: [Borgbackup] A small compression test In-Reply-To: <154779022.20230307093140@am-soft.de> References: <20230303002155.52ba1a2a@msi.defcon1.lan> <154779022.20230307093140@am-soft.de> Message-ID: <1343748366.20230310162511@am-soft.de> Guten Tag Thorsten Sch?ning, am Dienstag, 7. M?rz 2023 um 09:31 schrieben Sie: > Does anyone have any idea what might be the problem here? Thanks! Pretty much as expected, changing the compression method wasn't the root cause for the bad performance, it was SSH/SFTP instead. The server in question hosts PROXMOX with various VMs and the contents of those VMs are backed up using SSHFS. Some of those VMs create their own private internal network with some publicly available firewall and sharing VMBR1 of PROXMOX. For some reason, ALL of the internal VMs at VMBR1 were backed up using the publicly available firewall as SSH jump host. This was fast enough in the past, but FAR slower now for some reason. After removing the jump host and accessing the internal VMs directly using their internal IPs backup performance is back to normal. The PROXMOX host backing up those VMs is part of the internal network anyway, so no need to use a SSH jump host at all. Though, the interesting thing is that the direct SSH connection to other hosts didn't become slower. Only the setup with having an additional jump host became far slower for some unknown reason. At the day I changed the compression method to ZSTD, the server needed to be restarted for some unknown reason after almost a year running. During that year updates have been applied by APT and services restarted and stuff, but e.g. no new kernel was ever in use. I guess something has changed somewhere to make the former setup unusable slow for some unknown reason. Mit freundlichen Gr??en Thorsten Sch?ning -- AM-SoFT IT-Service - Bitstore Hameln GmbH Mitglied der Bitstore Gruppe - Ihr Full-Service-Dienstleister f?r IT und TK E-Mail: Thorsten.Schoening at AM-SoFT.de Web: http://www.AM-SoFT.de/ Tel: +49 5151- 9468- 0 Tel: +49 5151- 9468-55 Mobil: +49 178-8 9468-04 AM-SoFT IT-Service - Bitstore Hameln GmbH, Brandenburger Str. 7c, 31789 Hameln AG Hannover HRB 221853 - Gesch?ftsf?hrer: Janine Galonska F?r R?ckfragen stehe ich Ihnen jederzeit zur Verf?gung. Mit freundlichen Gr??en, Thorsten Sch?ning Telefon: +49 5151 9468-55 Fax: E-Mail: TSchoening at am-soft.de AM-Soft IT-Service - Bitstore Hameln GmbH Brandenburger Stra?e 7c 31789 Hameln Diese E-Mail enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen und ist ausschliesslich f?r den Adressaten bestimmt. Jeglicher Zugriff auf diese E-Mail durch andere Personen als den Adressaten ist untersagt. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese E-Mail. Sollten Sie nicht der f?r diese E-Mail bestimmte Adressat sein, ist Ihnen jede Ver?ffentlichung, Vervielf?ltigung oder Weitergabe wie auch das Ergreifen oder Unterlassen von Massnahmen im Vertrauen auf erlangte Information untersagt. This e-mail may contain confidential and/or privileged information and is intended solely for the addressee. Access to this email by anyone else is unauthorized. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. 
From bkborg at kirk.de Sat Mar 11 03:38:31 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Sat, 11 Mar 2023 09:38:31 +0100
Subject: [Borgbackup] Restore
In-Reply-To: <607fe004-8df3-1523-fa7c-9dd81ee1edce@waldmann-edv.de>
References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de> <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de> <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de> <88af258e-fdc0-6f4b-4951-769aeff10727@kirk.de> <486fbc45-fd1e-b4f8-ea6f-5bfd694b0eec@waldmann-edv.de> <7c690c16-ab01-0460-d2ea-58a28e213e22@kirk.de> <05949e75-e639-9cd9-db90-6d68eb48d5c2@waldmann-edv.de> <16770842-c5ed-7d86-0667-5f56d246dae6@kirk.de> <607fe004-8df3-1523-fa7c-9dd81ee1edce@waldmann-edv.de>
Message-ID:

Hi,

On 08.03.23 at 15:25, Thomas Waldmann wrote:
>> On 07.03.23 at 17:44, Thomas Waldmann wrote:
>>>>>>> OSError: [Errno 5] Input/output error: '/mnt/rs1219a/SRV/config.old'
>>>
>>> On what kind of filesystem and storage device is that file?
>>
>> it's ext4 running on a Synology NAS RS1219+, mounted via sshfs.
>
> OK, then you likely have discovered a bug in sshfs (where it does not
> fully emulate a normal fs's behaviour).

I remember similar incidents, e.g. with configuring network interfaces,
which showed unexpected results when executed in a row within a script
vs. launched one by one at the command line. The solution (AKA
workaround) was just to insert a sleep command between the individual
commands.

I'd like to try analogous steps here: it looks easier to achieve, and
so far this incident only occurs with borg when it is killed by SIGTERM.
I suspect there could be a timing issue, so some short pauses between
the individual steps while closing the tasks and files could help to
clarify this.

I have no clue about coding in Python; I don't even know whether it
needs a compiler or something like that to run. So I would appreciate
detailed instructions on how to do that.

--
Mit freundlichem Gruß / Best regards

Kirkorowicz

From tw at waldmann-edv.de Sat Mar 11 19:05:36 2023
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sun, 12 Mar 2023 01:05:36 +0100
Subject: [Borgbackup] Restore
In-Reply-To:
References: <87v8kcm9ey.fsf@eps142.cdf.udc.es> <396c382e-3893-241f-191a-810616a66afe@kirk.de> <88428cb9-e72d-4a6c-06f6-764b0351bed2@waldmann-edv.de> <5d24462f-7c3d-8015-5512-a6b764258f75@kirk.de> <88af258e-fdc0-6f4b-4951-769aeff10727@kirk.de> <486fbc45-fd1e-b4f8-ea6f-5bfd694b0eec@waldmann-edv.de> <7c690c16-ab01-0460-d2ea-58a28e213e22@kirk.de> <05949e75-e639-9cd9-db90-6d68eb48d5c2@waldmann-edv.de> <16770842-c5ed-7d86-0667-5f56d246dae6@kirk.de> <607fe004-8df3-1523-fa7c-9dd81ee1edce@waldmann-edv.de>
Message-ID:

>>>>>>>> OSError: [Errno 5] Input/output error: '/mnt/rs1219a/SRV/config.old'
>>>>
>>>> On what kind of filesystem and storage device is that file?
>>>
>>> it's ext4 running on a Synology NAS RS1219+, mounted via sshfs.
>>
>> OK, then you likely have discovered a bug in sshfs (where it does not
>> fully emulate a normal fs's behaviour).
>
> I remember similar incidents, e.g. with configuring network interfaces,
> which showed unexpected results when executed in a row within a script
> vs. launched one by one at the command line. The solution (AKA
> workaround) was just to insert a sleep command between the individual
> commands.
If the amount of required sleeping can have a safe and still rather
short upper limit, that might be a solution. If that can't be
determined, it will just break under different circumstances - then I'd
rather tend to say "it's not borg's problem if your fs does not work
correctly".

> I'd like to try analogous steps here: it looks easier to achieve, and
> so far this incident only occurs with borg when it is killed by
> SIGTERM. I suspect there could be a timing issue, so some short pauses
> between the individual steps while closing the tasks and files could
> help to clarify this.
>
> I have no clue about coding in Python; I don't even know whether it
> needs a compiler or something like that to run. So I would appreciate
> detailed instructions on how to do that.

You can just edit the file you see in the traceback (and, to be sure,
remove the .pyc file of the same name).

From knoth at mpdl.mpg.de Tue Mar 21 06:39:28 2023
From: knoth at mpdl.mpg.de (Benjamin Knoth)
Date: Tue, 21 Mar 2023 11:39:28 +0100
Subject: [Borgbackup] Borg-Backup failed with Command in authorized_keys behind a ssh-tunnel?
Message-ID:

Dear all,

I set up a PoC for a pull backup with Borg Backup. In this example, the
client can only be reached by the backup server through a proxy server.
The server where Borg Backup runs opens a temporary SSH tunnel via the
proxy server and starts the pull backup on the client. After the backup
is done, the SSH tunnel is closed. Everything works in this scenario.

For more security I created a separate SSH key for each action. In
authorized_keys I then also started adding a command= restriction for
every action. Without the command restriction the Borg backup runs
successfully, but with the command restriction it fails every time with
the following message:

    Remote: ssh_exchange_identification: read: Connection reset by peer
    Connection closed by remote host. Is borg working on the server?

I tried different commands in authorized_keys, without success:

    # example from the Borg website, which works in a simple scenario
    # where borg server and client can reach each other directly
    command="borg serve --append-only --restrict-to-repo ~/backup/",restrict ssh-...

    # to get the needed command, but no output
    command="/bin/echo You invoked: $SSH_ORIGINAL_COMMAND",restrict ssh-...

    # I also tried to capture the ssh command via a script, likewise
    # without any record in the log file
    command="/home/borg/logssh.sh",restrict ssh-....

    $ cat logssh.sh
    #!/bin/sh
    if [ -n "$SSH_ORIGINAL_COMMAND" ]
    then
        echo "`/bin/date`: $SSH_ORIGINAL_COMMAND" >> $HOME/ssh-command-log
        exec $SSH_ORIGINAL_COMMAND
    fi

Without any restrictions on the key, the script can run the pull backup
with Borg Backup successfully; with the command restriction it fails
every time. Is there any way to allow this key to run only the Borg
backup behind the SSH tunnel, and what would be the correct command=
entry in this setup?

Best regards
Benjamin

--
Benjamin Knoth
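One pattern worth trying in such a tunnel setup - purely a sketch, with
hypothetical key names and paths, and untested against the exact failure
above - is to split the restrictions across two keys: the key that only
builds the tunnel must have port forwarding re-enabled (OpenSSH's
restrict option turns all forwarding off), while the key borg actually
uses is pinned to borg serve:

    # authorized_keys sketch (keys, comments and paths hypothetical)

    # key used only to establish the tunnel through the proxy:
    # "restrict" disables all forwarding, so re-allow it explicitly;
    # a tunnel-only connection (ssh -N / ssh -W) runs no remote command
    restrict,port-forwarding,command="/bin/false" ssh-ed25519 AAAA... tunnel-only

    # key used for the borg connection itself:
    command="borg serve --append-only --restrict-to-path /home/borg/backup",restrict ssh-ed25519 AAAA... borg-only

The ssh_exchange_identification error quoted above is what the client
side sees when the connection is cut off before the SSH banner exchange,
which would be consistent with the tunnel, not borg, being blocked - but
that would need verifying with ssh -v.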
From tw at waldmann-edv.de Thu Mar 23 18:35:23 2023
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 23 Mar 2023 23:35:23 +0100
Subject: [Borgbackup] borgbackup 1.2.4 released!
Message-ID:

borgbackup 1.2.4 was just released, see there:

https://github.com/borgbackup/borg/releases/tag/1.2.4

#linux #macos #freebsd #netbsd #openbsd #openindiana

From lazyvirus at gmx.com Fri Mar 24 05:21:54 2023
From: lazyvirus at gmx.com (Bzzzz)
Date: Fri, 24 Mar 2023 10:21:54 +0100
Subject: [Borgbackup] 1.2.4 something's strange
Message-ID: <20230324102154.31934d13@msi.defcon1.lan>

Hi,

I upgraded BB to 1.2.4 following the full changelog (I previously had
the latest 1.1.x stable, both installed with pip), but there is
something strange (or not?).

I launched:

    borg info --show-rc --progress -v /my/repo

but contrary to what the changelog says (can take significant time, but
after that it will be fast), it took only seconds!?

After that I found a ~/.cache/borg/nnn...nnn/pre12-meta file, but its
length is only 2 bytes and it contains: {}.

Did I miss something, and if so, how do I fix this problem?

Can I launch a backup now, or must this problem be solved first?

Jean-Yves

From lazyvirus at gmx.com Fri Mar 24 05:47:28 2023
From: lazyvirus at gmx.com (Bzzzz)
Date: Fri, 24 Mar 2023 10:47:28 +0100
Subject: [Borgbackup] 1.2.4 something's strange
In-Reply-To: <20230324102154.31934d13@msi.defcon1.lan>
References: <20230324102154.31934d13@msi.defcon1.lan>
Message-ID: <20230324104728.7aba4ffa@msi.defcon1.lan>

On Fri, 24 Mar 2023 10:21:54 +0100 Bzzzz wrote:

I added --first 7 (HD died 10 days ago), which triggered a scan of each
of the 7 backups, but the pre12-meta file is unchanged (?)

> Hi,
>
> I upgraded BB to 1.2.4 following the full changelog (I previously had
> the latest 1.1.x stable, both installed with pip), but there is
> something strange (or not?).
>
> I launched:
>
>     borg info --show-rc --progress -v /my/repo
>
> but contrary to what the changelog says (can take significant time,
> but after that it will be fast), it took only seconds!?
>
> After that I found a ~/.cache/borg/nnn...nnn/pre12-meta file, but its
> length is only 2 bytes and it contains: {}.
>
> Did I miss something, and if so, how do I fix this problem?
>
> Can I launch a backup now, or must this problem be solved first?
>
> Jean-Yves

From lazyvirus at gmx.com Fri Mar 24 18:37:05 2023
From: lazyvirus at gmx.com (Bzzzz)
Date: Fri, 24 Mar 2023 23:37:05 +0100
Subject: [Borgbackup] 1.2.4 something's strange [SOLVED]
In-Reply-To: <20230324104728.7aba4ffa@msi.defcon1.lan>
References: <20230324102154.31934d13@msi.defcon1.lan> <20230324104728.7aba4ffa@msi.defcon1.lan>
Message-ID: <20230324233705.3e412b71@msi.defcon1.lan>

On Fri, 24 Mar 2023 10:47:28 +0100 Bzzzz wrote:

> On Fri, 24 Mar 2023 10:21:54 +0100 Bzzzz wrote:
>
> I added --first 7 (HD died 10 days ago), which triggered a scan of
> each of the 7 backups, but the pre12-meta file is unchanged (?)

Sooo, it seems that if you only have a few backups, even if they are
large, 'borg info' runs very fast.

The first backup with 1.2.4 worked like a charm and the second is en
route :)

Sorry for the noise.

Jean-Yves

From bkborg at kirk.de Fri Mar 31 05:36:14 2023
From: bkborg at kirk.de (Boris Kirkorowicz)
Date: Fri, 31 Mar 2023 11:36:14 +0200
Subject: [Borgbackup] lzma, 3-2-1 rule
Message-ID:

Hi,

in the German computer magazine c't I found an article about borgbackup
- very nice. It mentions that the compression method lzma is deprecated
and only kept for compatibility reasons.
This is new to me; maybe I overlooked something when choosing
"-C lzma,9". My concern: my current backup has been running every night
since February, and the estimated completion might be in June or July,
given the large amount of data and the steadily decreasing backup rate,
now at ~5 GB/h of compressed/deduplicated data. The backup is "somewhat"
time-consuming, so it would hurt if lzma were removed some day. So
should I stop backing up with lzma and start over with another
compression method, or is it safe to continue?

Second question: the article says that it is not a good idea to fulfill
the 3-2-1 rule by copying the borg repositories, since errors might be
copied along with them. This sounds plausible, but on the other hand
running separate backups would multiply the backup time - in my case it
could take more than one year. Any advice on how to handle this?

--
Mit freundlichem Gruß / Best regards

Kirkorowicz
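Should moving away from lzma ever become necessary, borg 1.2 can in
principle recompress chunks that are already stored, so the source data
would not have to be read again - a minimal sketch, assuming a repo at
/backup/repo (hypothetical path; check the caveats in the borg recreate
docs before running it on a production repo):

    # re-encode already stored chunks with zstd instead of lzma;
    # this rewrites every chunk in the repo, so it is slow and is
    # best tried on a small test repo first
    borg recreate --recompress always -C zstd,9 /backup/repo

    # new archives can then simply be created with the new method
    borg create -C zstd,9 /backup/repo::{now} /data

Whether zstd,9 is the right trade-off depends on CPU versus storage
space; the compression level is an assumption here, not a
recommendation.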