From MS at TechDesignPsych.com Sat Oct 3 15:12:55 2020 From: MS at TechDesignPsych.com (Michael Siepmann) Date: Sat, 3 Oct 2020 13:12:55 -0600 Subject: [Borgbackup] FileExistsError re nonce.tmp / nonce Message-ID: <1f0c55a3-4ad3-541e-67e3-308550b9202e@TechDesignPsych.com> I'd appreciate any help anyone can offer on this error I'm getting when trying to do a backup - with a script that was previously working fine. I'll paste the relevant part of the log below, but the main error seems to be this: FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' I tried deleting nonce and nonce.tmp before running the backup, but I still got this error. Here's the full log excerpt: === Creating archive at "/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}" Local Exception Traceback (most recent call last): ? File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4529, in main ??? exit_code = archiver.run(args) ? File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4461, in run ??? return set_ec(func(args)) ? File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 166, in wrapper ??? return method(self, args, repository=repository, **kwargs) ? File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 574, in do_create ??? create_inner(archive, cache) ? File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 537, in create_inner ??? read_special=args.read_special, dry_run=dry_run, st=st) ? File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process ??? read_special=read_special, dry_run=dry_run) ? File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process ??? read_special=read_special, dry_run=dry_run) ? File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process ??? read_special=read_special, dry_run=dry_run) ? [Previous line repeated 1 more time] ? File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 625, in _process ??? status = archive.process_file(path, st, cache) ? File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 1071, in process_file ??? self.chunk_file(item, cache, self.stats, backup_io_iter(self.chunker.chunkify(fd, fh))) ? File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 999, in chunk_file ??? item.chunks.append(chunk_processor(data)) ? File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 987, in chunk_processor ??? chunk_entry = cache.add_chunk(self.key.id_hash(data), data, stats, wait=False) ? File "/usr/lib64/python3.7/site-packages/borg/cache.py", line 897, in add_chunk ??? data = self.key.encrypt(chunk) ? File "/usr/lib64/python3.7/site-packages/borg/crypto/key.py", line 370, in encrypt ??? self.nonce_manager.ensure_reservation(num_aes_blocks(len(data))) ? File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 85, in ensure_reservation ??? self.commit_repo_nonce_reservation(reservation_end, repo_free_nonce) ? File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 48, in commit_repo_nonce_reservation ??? self.repository.commit_nonce_reservation(next_unreserved, start_nonce) ? File "/usr/lib64/python3.7/site-packages/borg/repository.py", line 346, in commit_nonce_reservation ??? fd.write(bin_to_hex(next_unreserved.to_bytes(8, byteorder='big'))) ? File "/usr/lib64/python3.7/site-packages/borg/platform/base.py", line 176, in __exit__ ??? 
os.replace(self.tmppath, self.path) FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' Platform: Linux personal 4.19.132-1.pvops.qubes.x86_64 #1 SMP Tue Jul 14 03:42:21 UTC 2020 x86_64 Linux: Fedora 30 Thirty Borg: 1.1.11? Python: CPython 3.7.7 msgpack: 0.5.6 PID: 25754? CWD: /home/user/Apps/ScriptsByMMS sys.argv: ['/usr/bin/borg', 'create', '-v', '--stats', '--compression', 'zlib,5', '/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}', '/home', '--exclude', '/home/*/.cache', '--exclude', '/home/*/.local/share/Trash', '--exclude', '/home/user/Downloads/NOT backed up', '--exclude', '/home/user/Seafile/mra y', '--exclude', '/home/user/Seafile/snowdrift-design'] SSH_ORIGINAL_COMMAND: None === Thank you, Michael Siepmann -- Michael Siepmann, Ph.D. *The Tech Design Psychologist*? /Shaping technology to help people flourish/? 303-835-0501 ? TechDesignPsych.com ? OpenPGP: 6D65A4F7 ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias.boettcher at gmail.com Mon Oct 5 04:51:20 2020 From: matthias.boettcher at gmail.com (=?UTF-8?Q?Matthias_B=C3=B6ttcher?=) Date: Mon, 5 Oct 2020 10:51:20 +0200 Subject: [Borgbackup] FileExistsError re nonce.tmp / nonce In-Reply-To: <1f0c55a3-4ad3-541e-67e3-308550b9202e@TechDesignPsych.com> References: <1f0c55a3-4ad3-541e-67e3-308550b9202e@TechDesignPsych.com> Message-ID: Hello Michael, I guess you have mounted /mnt/synology/ as cifs on a Synology NAS. Please check the SMB-Settings on the NAS, especially the version settings of the protocol. I can't give you detailed information, because I'm not using a Synology NAS. HTH Matthias B?ttcher Am Sa., 3. Okt. 2020 um 21:42 Uhr schrieb Michael Siepmann via Borgbackup : > > I'd appreciate any help anyone can offer on this error I'm getting when trying to do a backup - with a script that was previously working fine. I'll paste the relevant part of the log below, but the main error seems to be this: > > FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' > > I tried deleting nonce and nonce.tmp before running the backup, but I still got this error. 
Here's the full log excerpt: > > === > > Creating archive at "/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}" > Local Exception > Traceback (most recent call last): > File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4529, in main > exit_code = archiver.run(args) > File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4461, in run > return set_ec(func(args)) > File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 166, in wrapper > return method(self, args, repository=repository, **kwargs) > File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 574, in do_create > create_inner(archive, cache) > File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 537, in create_inner > read_special=args.read_special, dry_run=dry_run, st=st) > File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process > read_special=read_special, dry_run=dry_run) > File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process > read_special=read_special, dry_run=dry_run) > File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process > read_special=read_special, dry_run=dry_run) > [Previous line repeated 1 more time] > File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 625, in _process > status = archive.process_file(path, st, cache) > File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 1071, in process_file > self.chunk_file(item, cache, self.stats, backup_io_iter(self.chunker.chunkify(fd, fh))) > File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 999, in chunk_file > item.chunks.append(chunk_processor(data)) > File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 987, in chunk_processor > chunk_entry = cache.add_chunk(self.key.id_hash(data), data, stats, wait=False) > File "/usr/lib64/python3.7/site-packages/borg/cache.py", line 897, in add_chunk > data = self.key.encrypt(chunk) > File "/usr/lib64/python3.7/site-packages/borg/crypto/key.py", line 370, in encrypt > self.nonce_manager.ensure_reservation(num_aes_blocks(len(data))) > File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 85, in ensure_reservation > self.commit_repo_nonce_reservation(reservation_end, repo_free_nonce) > File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 48, in commit_repo_nonce_reservation > self.repository.commit_nonce_reservation(next_unreserved, start_nonce) > File "/usr/lib64/python3.7/site-packages/borg/repository.py", line 346, in commit_nonce_reservation > fd.write(bin_to_hex(next_unreserved.to_bytes(8, byteorder='big'))) > File "/usr/lib64/python3.7/site-packages/borg/platform/base.py", line 176, in __exit__ > os.replace(self.tmppath, self.path) > FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' > > Platform: Linux personal 4.19.132-1.pvops.qubes.x86_64 #1 SMP Tue Jul 14 03:42:21 UTC 2020 x86_64 > Linux: Fedora 30 Thirty > Borg: 1.1.11 Python: CPython 3.7.7 msgpack: 0.5.6 > PID: 25754 CWD: /home/user/Apps/ScriptsByMMS > sys.argv: ['/usr/bin/borg', 'create', '-v', '--stats', '--compression', 'zlib,5', '/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}', '/home', '--exclude', '/home/*/.cache', '--exclude', '/home/*/.local/share/Trash', '--exclude', '/home/user/Downloads/NOT backed up', '--exclude', '/home/user/Seafile/mra > y', '--exclude', '/home/user/Seafile/snowdrift-design'] > SSH_ORIGINAL_COMMAND: None > > === > > Thank you, > > 
Michael Siepmann > > -- > > Michael Siepmann, Ph.D. > The Tech Design Psychologist? > Shaping technology to help people flourish? > 303-835-0501 TechDesignPsych.com OpenPGP: 6D65A4F7 > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From tw at waldmann-edv.de Mon Oct 5 07:54:04 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 5 Oct 2020 13:54:04 +0200 Subject: [Borgbackup] borgbackup 1.2.0 alpha 9 released! Message-ID: borgbackup 1.2.0 alpha 9 released for testing! details please see there: https://github.com/borgbackup/borg/releases/tag/1.2.0a9 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From MatthiasPeterW at aol.com Mon Oct 5 16:26:40 2020 From: MatthiasPeterW at aol.com (Matthias Peter Walther) Date: Mon, 5 Oct 2020 22:26:40 +0200 Subject: [Borgbackup] Borg natively on Windows? References: Message-ID: Hello, personally I'm a Linux guy, but a lot of clients use Windows. I've seen that Borg is available for Windows with the Linux subsystem. But enabling that on a production server is not the best choice. Why does Borg need this Linux subsystem? It's python, it should be able to run natively on Windows as long as the dependencies are available? Best, Matthias From jolson at kth.se Mon Oct 5 16:48:20 2020 From: jolson at kth.se (Jonas Olson) Date: Mon, 5 Oct 2020 22:48:20 +0200 Subject: [Borgbackup] Borg natively on Windows? In-Reply-To: References: Message-ID: On 2020-10-05 22:26, Matthias Peter Walther via Borgbackup wrote: > personally I'm a Linux guy, but a lot of clients use Windows. I've seen > that Borg is available for Windows with the Linux subsystem. But > enabling that on a production server is not the best choice. > > Why does Borg need this Linux subsystem? It's python, it should be able > to run natively on Windows as long as the dependencies are available? There used to be a Borg package [0], for the Chocolatey package manager, that didn't require a Linux subsystem or even Cygwin (or maybe it was installed automatically as a dependency), but it was removed a few months ago. I guess it wasn't kept up to date. Does anyone here know if there is a new package coming up, and what package manager is preferred for Windows these days? I'm sure the package wasn't official or anything. Still, someone here might know something. Jonas Olson [0] From MS at TechDesignPsych.com Mon Oct 5 19:17:07 2020 From: MS at TechDesignPsych.com (Michael Siepmann) Date: Mon, 5 Oct 2020 17:17:07 -0600 Subject: [Borgbackup] FileExistsError re nonce.tmp / nonce In-Reply-To: References: <1f0c55a3-4ad3-541e-67e3-308550b9202e@TechDesignPsych.com> Message-ID: Hello Matthias, I appreciate your help. The Synology NAS has minimum and maximum options: SMB1, SMB2, SMB2 and Large MTU, SMB3, and is currently set to SMB1 for both minimum and maximum. My script mounts it with cifs version 1.0: "mount -t cifs -o vers=1.0" Best regards, Michael On 2020-10-05 02:51, Matthias B?ttcher wrote: > Hello Michael, > > I guess you have mounted /mnt/synology/ as cifs on a Synology NAS. > Please check the SMB-Settings on the NAS, especially the version > settings of the protocol. I can't give you detailed information, > because I'm not using a Synology NAS. > > HTH > Matthias B?ttcher > > Am Sa., 3. Okt. 
2020 um 21:42 Uhr schrieb Michael Siepmann via > Borgbackup : >> I'd appreciate any help anyone can offer on this error I'm getting when trying to do a backup - with a script that was previously working fine. I'll paste the relevant part of the log below, but the main error seems to be this: >> >> FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' >> >> I tried deleting nonce and nonce.tmp before running the backup, but I still got this error. Here's the full log excerpt: >> >> === >> >> Creating archive at "/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}" >> Local Exception >> Traceback (most recent call last): >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4529, in main >> exit_code = archiver.run(args) >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4461, in run >> return set_ec(func(args)) >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 166, in wrapper >> return method(self, args, repository=repository, **kwargs) >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 574, in do_create >> create_inner(archive, cache) >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 537, in create_inner >> read_special=args.read_special, dry_run=dry_run, st=st) >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process >> read_special=read_special, dry_run=dry_run) >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process >> read_special=read_special, dry_run=dry_run) >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process >> read_special=read_special, dry_run=dry_run) >> [Previous line repeated 1 more time] >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 625, in _process >> status = archive.process_file(path, st, cache) >> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 1071, in process_file >> self.chunk_file(item, cache, self.stats, backup_io_iter(self.chunker.chunkify(fd, fh))) >> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 999, in chunk_file >> item.chunks.append(chunk_processor(data)) >> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 987, in chunk_processor >> chunk_entry = cache.add_chunk(self.key.id_hash(data), data, stats, wait=False) >> File "/usr/lib64/python3.7/site-packages/borg/cache.py", line 897, in add_chunk >> data = self.key.encrypt(chunk) >> File "/usr/lib64/python3.7/site-packages/borg/crypto/key.py", line 370, in encrypt >> self.nonce_manager.ensure_reservation(num_aes_blocks(len(data))) >> File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 85, in ensure_reservation >> self.commit_repo_nonce_reservation(reservation_end, repo_free_nonce) >> File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 48, in commit_repo_nonce_reservation >> self.repository.commit_nonce_reservation(next_unreserved, start_nonce) >> File "/usr/lib64/python3.7/site-packages/borg/repository.py", line 346, in commit_nonce_reservation >> fd.write(bin_to_hex(next_unreserved.to_bytes(8, byteorder='big'))) >> File "/usr/lib64/python3.7/site-packages/borg/platform/base.py", line 176, in __exit__ >> os.replace(self.tmppath, self.path) >> FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' >> >> Platform: Linux personal 4.19.132-1.pvops.qubes.x86_64 #1 SMP Tue Jul 14 03:42:21 UTC 2020 
x86_64 >> Linux: Fedora 30 Thirty >> Borg: 1.1.11 Python: CPython 3.7.7 msgpack: 0.5.6 >> PID: 25754 CWD: /home/user/Apps/ScriptsByMMS >> sys.argv: ['/usr/bin/borg', 'create', '-v', '--stats', '--compression', 'zlib,5', '/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}', '/home', '--exclude', '/home/*/.cache', '--exclude', '/home/*/.local/share/Trash', '--exclude', '/home/user/Downloads/NOT backed up', '--exclude', '/home/user/Seafile/mra >> y', '--exclude', '/home/user/Seafile/snowdrift-design'] >> SSH_ORIGINAL_COMMAND: None >> >> === >> >> Thank you, >> >> Michael Siepmann >> >> -- >> >> Michael Siepmann, Ph.D. >> The Tech Design Psychologist? >> Shaping technology to help people flourish? >> 303-835-0501 TechDesignPsych.com OpenPGP: 6D65A4F7 >> >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From matthias.boettcher at gmail.com Tue Oct 6 00:12:46 2020 From: matthias.boettcher at gmail.com (=?UTF-8?Q?Matthias_B=C3=B6ttcher?=) Date: Tue, 6 Oct 2020 06:12:46 +0200 Subject: [Borgbackup] FileExistsError re nonce.tmp / nonce In-Reply-To: References: <1f0c55a3-4ad3-541e-67e3-308550b9202e@TechDesignPsych.com> Message-ID: Hello Michael, please set the SMB protocol min and max to SMB3 and mount it with the same version. Best regards, Matthias Am Di., 6. Okt. 2020 um 01:17 Uhr schrieb Michael Siepmann via Borgbackup : > > Hello Matthias, > > I appreciate your help. The Synology NAS has minimum and maximum > options: SMB1, SMB2, SMB2 and Large MTU, SMB3, and is currently set to > SMB1 for both minimum and maximum. My script mounts it with cifs version > 1.0: "mount -t cifs -o vers=1.0" > > Best regards, > > Michael > > > On 2020-10-05 02:51, Matthias B?ttcher wrote: > > Hello Michael, > > > > I guess you have mounted /mnt/synology/ as cifs on a Synology NAS. > > Please check the SMB-Settings on the NAS, especially the version > > settings of the protocol. I can't give you detailed information, > > because I'm not using a Synology NAS. > > > > HTH > > Matthias B?ttcher > > > > Am Sa., 3. Okt. 2020 um 21:42 Uhr schrieb Michael Siepmann via > > Borgbackup : > >> I'd appreciate any help anyone can offer on this error I'm getting when trying to do a backup - with a script that was previously working fine. I'll paste the relevant part of the log below, but the main error seems to be this: > >> > >> FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' > >> > >> I tried deleting nonce and nonce.tmp before running the backup, but I still got this error. 
Here's the full log excerpt: > >> > >> === > >> > >> Creating archive at "/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}" > >> Local Exception > >> Traceback (most recent call last): > >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4529, in main > >> exit_code = archiver.run(args) > >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4461, in run > >> return set_ec(func(args)) > >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 166, in wrapper > >> return method(self, args, repository=repository, **kwargs) > >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 574, in do_create > >> create_inner(archive, cache) > >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 537, in create_inner > >> read_special=args.read_special, dry_run=dry_run, st=st) > >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process > >> read_special=read_special, dry_run=dry_run) > >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process > >> read_special=read_special, dry_run=dry_run) > >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process > >> read_special=read_special, dry_run=dry_run) > >> [Previous line repeated 1 more time] > >> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 625, in _process > >> status = archive.process_file(path, st, cache) > >> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 1071, in process_file > >> self.chunk_file(item, cache, self.stats, backup_io_iter(self.chunker.chunkify(fd, fh))) > >> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 999, in chunk_file > >> item.chunks.append(chunk_processor(data)) > >> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 987, in chunk_processor > >> chunk_entry = cache.add_chunk(self.key.id_hash(data), data, stats, wait=False) > >> File "/usr/lib64/python3.7/site-packages/borg/cache.py", line 897, in add_chunk > >> data = self.key.encrypt(chunk) > >> File "/usr/lib64/python3.7/site-packages/borg/crypto/key.py", line 370, in encrypt > >> self.nonce_manager.ensure_reservation(num_aes_blocks(len(data))) > >> File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 85, in ensure_reservation > >> self.commit_repo_nonce_reservation(reservation_end, repo_free_nonce) > >> File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 48, in commit_repo_nonce_reservation > >> self.repository.commit_nonce_reservation(next_unreserved, start_nonce) > >> File "/usr/lib64/python3.7/site-packages/borg/repository.py", line 346, in commit_nonce_reservation > >> fd.write(bin_to_hex(next_unreserved.to_bytes(8, byteorder='big'))) > >> File "/usr/lib64/python3.7/site-packages/borg/platform/base.py", line 176, in __exit__ > >> os.replace(self.tmppath, self.path) > >> FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' > >> > >> Platform: Linux personal 4.19.132-1.pvops.qubes.x86_64 #1 SMP Tue Jul 14 03:42:21 UTC 2020 x86_64 > >> Linux: Fedora 30 Thirty > >> Borg: 1.1.11 Python: CPython 3.7.7 msgpack: 0.5.6 > >> PID: 25754 CWD: /home/user/Apps/ScriptsByMMS > >> sys.argv: ['/usr/bin/borg', 'create', '-v', '--stats', '--compression', 'zlib,5', '/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}', '/home', '--exclude', '/home/*/.cache', '--exclude', '/home/*/.local/share/Trash', '--exclude', '/home/user/Downloads/NOT backed up', 
'--exclude', '/home/user/Seafile/mra > >> y', '--exclude', '/home/user/Seafile/snowdrift-design'] > >> SSH_ORIGINAL_COMMAND: None > >> > >> === > >> > >> Thank you, > >> > >> Michael Siepmann > >> > >> -- > >> > >> Michael Siepmann, Ph.D. > >> The Tech Design Psychologist? > >> Shaping technology to help people flourish? > >> 303-835-0501 TechDesignPsych.com OpenPGP: 6D65A4F7 > >> > >> > >> _______________________________________________ > >> Borgbackup mailing list > >> Borgbackup at python.org > >> https://mail.python.org/mailman/listinfo/borgbackup > > _______________________________________________ > > Borgbackup mailing list > > Borgbackup at python.org > > https://mail.python.org/mailman/listinfo/borgbackup > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From MS at TechDesignPsych.com Tue Oct 6 15:10:15 2020 From: MS at TechDesignPsych.com (Michael Siepmann) Date: Tue, 6 Oct 2020 13:10:15 -0600 Subject: [Borgbackup] FileExistsError re nonce.tmp / nonce In-Reply-To: References: <1f0c55a3-4ad3-541e-67e3-308550b9202e@TechDesignPsych.com> Message-ID: Hello Mathhias, The Synology NAS wouldn't let me set SMB3 as minimum, but I set it as maximum and set the script to use "vers=3.0". Unfortunately I still got the same error. Thanks, Michael On 2020-10-05 22:12, Matthias B?ttcher wrote: > Hello Michael, > > please set the SMB protocol min and max to SMB3 and mount it with the > same version. > > Best regards, > Matthias > > Am Di., 6. Okt. 2020 um 01:17 Uhr schrieb Michael Siepmann via > Borgbackup : >> Hello Matthias, >> >> I appreciate your help. The Synology NAS has minimum and maximum >> options: SMB1, SMB2, SMB2 and Large MTU, SMB3, and is currently set to >> SMB1 for both minimum and maximum. My script mounts it with cifs version >> 1.0: "mount -t cifs -o vers=1.0" >> >> Best regards, >> >> Michael >> >> >> On 2020-10-05 02:51, Matthias B?ttcher wrote: >>> Hello Michael, >>> >>> I guess you have mounted /mnt/synology/ as cifs on a Synology NAS. >>> Please check the SMB-Settings on the NAS, especially the version >>> settings of the protocol. I can't give you detailed information, >>> because I'm not using a Synology NAS. >>> >>> HTH >>> Matthias B?ttcher >>> >>> Am Sa., 3. Okt. 2020 um 21:42 Uhr schrieb Michael Siepmann via >>> Borgbackup : >>>> I'd appreciate any help anyone can offer on this error I'm getting when trying to do a backup - with a script that was previously working fine. I'll paste the relevant part of the log below, but the main error seems to be this: >>>> >>>> FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' >>>> >>>> I tried deleting nonce and nonce.tmp before running the backup, but I still got this error. 
Here's the full log excerpt: >>>> >>>> === >>>> >>>> Creating archive at "/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}" >>>> Local Exception >>>> Traceback (most recent call last): >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4529, in main >>>> exit_code = archiver.run(args) >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4461, in run >>>> return set_ec(func(args)) >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 166, in wrapper >>>> return method(self, args, repository=repository, **kwargs) >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 574, in do_create >>>> create_inner(archive, cache) >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 537, in create_inner >>>> read_special=args.read_special, dry_run=dry_run, st=st) >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process >>>> read_special=read_special, dry_run=dry_run) >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process >>>> read_special=read_special, dry_run=dry_run) >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process >>>> read_special=read_special, dry_run=dry_run) >>>> [Previous line repeated 1 more time] >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 625, in _process >>>> status = archive.process_file(path, st, cache) >>>> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 1071, in process_file >>>> self.chunk_file(item, cache, self.stats, backup_io_iter(self.chunker.chunkify(fd, fh))) >>>> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 999, in chunk_file >>>> item.chunks.append(chunk_processor(data)) >>>> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 987, in chunk_processor >>>> chunk_entry = cache.add_chunk(self.key.id_hash(data), data, stats, wait=False) >>>> File "/usr/lib64/python3.7/site-packages/borg/cache.py", line 897, in add_chunk >>>> data = self.key.encrypt(chunk) >>>> File "/usr/lib64/python3.7/site-packages/borg/crypto/key.py", line 370, in encrypt >>>> self.nonce_manager.ensure_reservation(num_aes_blocks(len(data))) >>>> File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 85, in ensure_reservation >>>> self.commit_repo_nonce_reservation(reservation_end, repo_free_nonce) >>>> File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 48, in commit_repo_nonce_reservation >>>> self.repository.commit_nonce_reservation(next_unreserved, start_nonce) >>>> File "/usr/lib64/python3.7/site-packages/borg/repository.py", line 346, in commit_nonce_reservation >>>> fd.write(bin_to_hex(next_unreserved.to_bytes(8, byteorder='big'))) >>>> File "/usr/lib64/python3.7/site-packages/borg/platform/base.py", line 176, in __exit__ >>>> os.replace(self.tmppath, self.path) >>>> FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' >>>> >>>> Platform: Linux personal 4.19.132-1.pvops.qubes.x86_64 #1 SMP Tue Jul 14 03:42:21 UTC 2020 x86_64 >>>> Linux: Fedora 30 Thirty >>>> Borg: 1.1.11 Python: CPython 3.7.7 msgpack: 0.5.6 >>>> PID: 25754 CWD: /home/user/Apps/ScriptsByMMS >>>> sys.argv: ['/usr/bin/borg', 'create', '-v', '--stats', '--compression', 'zlib,5', '/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}', '/home', '--exclude', '/home/*/.cache', '--exclude', '/home/*/.local/share/Trash', '--exclude', '/home/user/Downloads/NOT backed up', 
'--exclude', '/home/user/Seafile/mra >>>> y', '--exclude', '/home/user/Seafile/snowdrift-design'] >>>> SSH_ORIGINAL_COMMAND: None >>>> >>>> === >>>> >>>> Thank you, >>>> >>>> Michael Siepmann >>>> >>>> -- >>>> >>>> Michael Siepmann, Ph.D. >>>> The Tech Design Psychologist? >>>> Shaping technology to help people flourish? >>>> 303-835-0501 TechDesignPsych.com OpenPGP: 6D65A4F7 >>>> >>>> >>>> _______________________________________________ >>>> Borgbackup mailing list >>>> Borgbackup at python.org >>>> https://mail.python.org/mailman/listinfo/borgbackup >>> _______________________________________________ >>> Borgbackup mailing list >>> Borgbackup at python.org >>> https://mail.python.org/mailman/listinfo/borgbackup >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From tw at waldmann-edv.de Tue Oct 6 18:30:19 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 7 Oct 2020 00:30:19 +0200 Subject: [Borgbackup] Release 1.1.14 Message-ID: Just released borgbackup 1.1.14 stable release! Details see there: https://github.com/borgbackup/borg/releases/tag/1.1.14 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From lazyvirus at gmx.com Wed Oct 7 11:07:01 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Wed, 7 Oct 2020 17:07:01 +0200 Subject: [Borgbackup] Retrieving files from another machine repo Message-ID: <20201007170701.460fffa0@msi.defcon1.lan> Hi listers, I've a fried machine and have to retrieve some files from it's BB repo. As I have a backup of ~/.config/borg/ and ~/.cache/borg/ from this fried machine, the easiest way I see to do that is to : * rename these 2 DIRs to d?_NORMAL * copy these 2 DIRs from my USB backup to their regular place * retrieve files from the repo * delete these 2 DIRs * move *_NORMAL to their regular names but may be there's an easier way to achieve that? Jean-Yves From felix.schwarz at oss.schwarz.eu Thu Oct 8 11:09:33 2020 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Thu, 8 Oct 2020 17:09:33 +0200 Subject: [Borgbackup] llfuse versions for 1.1.14 / Linux Message-ID: <2b6d4ccc-eb0b-746f-1bec-cbfb84fc1d55@oss.schwarz.eu> Hi, I just wanted to update borgbackup in Fedora and noticed the new llfuse version restrictions. llfuse >=1.3.4, <1.3.7; python_version <"3.9" # broken on py39 llfuse >=1.3.7, <2.0; python_version >="3.9" # broken on freebsd Fedora 33+ comes with Python 3.9. So this means we really need the latest llfuse? I'm a bit puzzled as I don't see any functional change in the llfuse github commits after 1.3.6. - Are these version restrictions only related to prebuilt llfuse release packages? (Fedora removes all pre-cythonized sources in its build process.) - Is there an easy way to check if my system's llfuse version is good enough? (e.g. "it compiles" -> fine, or some simple "borg mount" test) - Any github issues where I could learn more about what's wrong with llfuse versions? I'm a bit hesitant just to ignore the version restriction as we can not run borg's llfuse tests in Fedora's build infrastructure. 
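For reference, a minimal smoke test along the lines of the "borg mount" idea above - a sketch only: the scratch repo path and mountpoint are placeholders, and whether llfuse exposes a __version__ attribute is an assumption, so adjust as needed:

  # does the compiled llfuse extension import at all?
  python3 -c 'import llfuse; print(getattr(llfuse, "__version__", "unknown"))'
  # end-to-end FUSE round trip with a throw-away repo
  borg init --encryption=none /tmp/llfuse-smoke-repo
  borg create /tmp/llfuse-smoke-repo::smoke /etc/hostname
  mkdir -p /tmp/llfuse-smoke-mnt
  borg mount /tmp/llfuse-smoke-repo::smoke /tmp/llfuse-smoke-mnt
  ls -l /tmp/llfuse-smoke-mnt
  borg umount /tmp/llfuse-smoke-mnt

If the import works and the mount/umount round trip shows the archived file, the installed llfuse build is at least functional for "borg mount".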
Thank you, Felix From tw at waldmann-edv.de Thu Oct 8 12:50:47 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 8 Oct 2020 18:50:47 +0200 Subject: [Borgbackup] llfuse versions for 1.1.14 / Linux In-Reply-To: <2b6d4ccc-eb0b-746f-1bec-cbfb84fc1d55@oss.schwarz.eu> References: <2b6d4ccc-eb0b-746f-1bec-cbfb84fc1d55@oss.schwarz.eu> Message-ID: > Fedora 33+ comes with Python 3.9. So this means we really need the latest > llfuse? Either that (if you use a llfuse package with already pre-built C output from cython, iirc llfuse.c). If that llfuse.c was made by a too old cython, it won't compile for python 3.9. This is the case for the 1.3.6 llfuse package on pypi. The difference in the 1.3.7 package is that a more recent cython was used to generate llfuse.c, so it can be compiled for python 3.9. Alternative, you can do the cythonizing on your own, using a rather recent cython release. > (Fedora removes all pre-cythonized sources in its build process.) Then the older package might be ok also. > - Is there an easy way to check if my system's llfuse version is good enough? > (e.g. "it compiles" -> fine, or some simple "borg mount" test) The issues were at C compile time. > - Any github issues where I could learn more about what's wrong with llfuse > versions? Yes, see the borgbackup and llfuse-python issue tracker. I am currently working on fixing the compile issue on freebsd and release a fixed llfuse as 1.3.8 hopefully soon. That should hopefully run everywhere... -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From felix.schwarz at oss.schwarz.eu Thu Oct 8 13:03:02 2020 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Thu, 8 Oct 2020 19:03:02 +0200 Subject: [Borgbackup] llfuse versions for 1.1.14 / Linux In-Reply-To: References: <2b6d4ccc-eb0b-746f-1bec-cbfb84fc1d55@oss.schwarz.eu> Message-ID: <1e383862-7bfc-4b04-9752-a28a6598f2b0@oss.schwarz.eu> Am 08.10.20 um 18:50 schrieb Thomas Waldmann: >> Fedora 33+ comes with Python 3.9. So this means we really need the latest >> llfuse? > > Either that (if you use a llfuse package with already pre-built C output > from cython, iirc llfuse.c). > > If that llfuse.c was made by a too old cython, it won't compile for > python 3.9. This is the case for the 1.3.6 llfuse package on pypi. > > The difference in the 1.3.7 package is that a more recent cython was > used to generate llfuse.c, so it can be compiled for python 3.9. Thank you very much. That means we can just strip the version requirements and call it a day :-) Felix From lazyvirus at gmx.com Thu Oct 8 13:10:11 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Thu, 8 Oct 2020 19:10:11 +0200 Subject: [Borgbackup] Retrieving files from another machine repo In-Reply-To: <20201007170701.460fffa0@msi.defcon1.lan> References: <20201007170701.460fffa0@msi.defcon1.lan> Message-ID: <20201008191011.213e59e1@msi.defcon1.lan> On Wed, 7 Oct 2020 17:07:01 +0200 Bzzzz wrote: I answer to myself (one is never so well served by himself and it appears that I also quite agree with myself ;-P) This mod'op is the right one as it took less than 10' to perform the other machine restore. JY > I've a fried machine and have to retrieve some files from it's BB repo. 
> > As I have a backup of ~/.config/borg/ and ~/.cache/borg/ from this > fried machine, the easiest way I see to do that is to : > * rename these 2 DIRs to d?_NORMAL > * copy these 2 DIRs from my USB backup to their regular place > * retrieve files from the repo > * delete these 2 DIRs > * move *_NORMAL to their regular names > > but may be there's an easier way to achieve that? From h.audeoud at gmail.com Thu Oct 8 13:26:00 2020 From: h.audeoud at gmail.com (=?UTF-8?Q?Henry-Joseph_Aud=c3=a9oud?=) Date: Thu, 8 Oct 2020 19:26:00 +0200 Subject: [Borgbackup] Backing up an encrypted directory Message-ID: Hi all, I recently set up an encrypted folder on my ext4 drive using fscrypt. As I had not unlocked it before starting the daily backup, the backup obviously failed on those files, with an error like the following for each encrypted file: > /home/[?]/Wj1p20gsTVAihqkKm5nI6sUQ77vXz5H8nTfhN2IAQav2i7qk+hPX9B: open: [Errno 126] Required key not available: '/home/[?]/Wj1p20gsTVAihqkKm5nI6sUQ77vXz5H8nTfhN2IAQav2i7qk+hPX9B' > E /home/[?]/Wj1p20gsTVAihqkKm5nI6sUQ77vXz5H8nTfhN2IAQav2i7qk+hPX9B OK, the simplest fix is: `fscrypt unlock && borg create <...>`. However, it is not that obvious for automatic backups. Did anyone already have (or have an idea on how to handle) that kind of situation? -- Henry-Joseph Audéoud From lazyvirus at gmx.com Thu Oct 8 13:52:02 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Thu, 8 Oct 2020 19:52:02 +0200 Subject: [Borgbackup] Backing up an encrypted directory In-Reply-To: References: Message-ID: <20201008195202.799adbe9@msi.defcon1.lan> On Thu, 8 Oct 2020 19:26:00 +0200 Henry-Joseph Audéoud wrote: > > /home/[?]/Wj1p20gsTVAihqkKm5nI6sUQ77vXz5H8nTfhN2IAQav2i7qk+hPX9B: > > open: [Errno 126] Required key not available: > > '/home/[?]/Wj1p20gsTVAihqkKm5nI6sUQ77vXz5H8nTfhN2IAQav2i7qk+hPX9B' > > E /home/[?]/Wj1p20gsTVAihqkKm5nI6sUQ77vXz5H8nTfhN2IAQav2i7qk+hPX9B > Did anyone already have (or have an idea on how to handle) that kind of > situation? from: https://github.com/google/fscrypt (section: Enabling the PAM module): "Enabling the PAM module is needed for login passphrase-protected directories to be automatically unlocked when you log in, and for login passphrase-protected directories to remain accessible when you change your login passphrase." So, it seems your first solution, following this doc, is to keep users logged in when BB is run. Otherwise, you'll want to have a look at using 'expect' in a shell script, such as: https://stackoverflow.com/questions/4857702/how-to-provide-password-to-a-command-that-prompts-for-one-in-bash &|: https://unix.stackexchange.com/questions/199718/how-to-make-the-script-automated-to-take-password-on-its-own and add the wanted lines to your BB backup launch script, with a check to make sure the partition is unlocked; if it is not, the backup fails and e-mails you about it. Jean-Yves From matthias.boettcher at gmail.com Fri Oct 9 03:41:05 2020 From: matthias.boettcher at gmail.com (=?UTF-8?Q?Matthias_B=C3=B6ttcher?=) Date: Fri, 9 Oct 2020 09:41:05 +0200 Subject: [Borgbackup] FileExistsError re nonce.tmp / nonce In-Reply-To: References: <1f0c55a3-4ad3-541e-67e3-308550b9202e@TechDesignPsych.com> Message-ID: Sorry Michael, I have no other clue. Setting the protocol version in a NAS helped me multiple times to fix strange errors in file operations on cifs mounts. Bye for now Matthias Am Di., 6. Okt.
2020 um 21:10 Uhr schrieb Michael Siepmann via Borgbackup : > > Hello Mathhias, > > The Synology NAS wouldn't let me set SMB3 as minimum, but I set it as > maximum and set the script to use "vers=3.0". Unfortunately I still got > the same error. > > Thanks, > > Michael > > > On 2020-10-05 22:12, Matthias B?ttcher wrote: > > > Hello Michael, > > > > please set the SMB protocol min and max to SMB3 and mount it with the > > same version. > > > > Best regards, > > Matthias > > > > Am Di., 6. Okt. 2020 um 01:17 Uhr schrieb Michael Siepmann via > > Borgbackup : > >> Hello Matthias, > >> > >> I appreciate your help. The Synology NAS has minimum and maximum > >> options: SMB1, SMB2, SMB2 and Large MTU, SMB3, and is currently set to > >> SMB1 for both minimum and maximum. My script mounts it with cifs version > >> 1.0: "mount -t cifs -o vers=1.0" > >> > >> Best regards, > >> > >> Michael > >> > >> > >> On 2020-10-05 02:51, Matthias B?ttcher wrote: > >>> Hello Michael, > >>> > >>> I guess you have mounted /mnt/synology/ as cifs on a Synology NAS. > >>> Please check the SMB-Settings on the NAS, especially the version > >>> settings of the protocol. I can't give you detailed information, > >>> because I'm not using a Synology NAS. > >>> > >>> HTH > >>> Matthias B?ttcher > >>> > >>> Am Sa., 3. Okt. 2020 um 21:42 Uhr schrieb Michael Siepmann via > >>> Borgbackup : > >>>> I'd appreciate any help anyone can offer on this error I'm getting when trying to do a backup - with a script that was previously working fine. I'll paste the relevant part of the log below, but the main error seems to be this: > >>>> > >>>> FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' > >>>> > >>>> I tried deleting nonce and nonce.tmp before running the backup, but I still got this error. 
Here's the full log excerpt: > >>>> > >>>> === > >>>> > >>>> Creating archive at "/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}" > >>>> Local Exception > >>>> Traceback (most recent call last): > >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4529, in main > >>>> exit_code = archiver.run(args) > >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 4461, in run > >>>> return set_ec(func(args)) > >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 166, in wrapper > >>>> return method(self, args, repository=repository, **kwargs) > >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 574, in do_create > >>>> create_inner(archive, cache) > >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 537, in create_inner > >>>> read_special=args.read_special, dry_run=dry_run, st=st) > >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process > >>>> read_special=read_special, dry_run=dry_run) > >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process > >>>> read_special=read_special, dry_run=dry_run) > >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 651, in _process > >>>> read_special=read_special, dry_run=dry_run) > >>>> [Previous line repeated 1 more time] > >>>> File "/usr/lib64/python3.7/site-packages/borg/archiver.py", line 625, in _process > >>>> status = archive.process_file(path, st, cache) > >>>> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 1071, in process_file > >>>> self.chunk_file(item, cache, self.stats, backup_io_iter(self.chunker.chunkify(fd, fh))) > >>>> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 999, in chunk_file > >>>> item.chunks.append(chunk_processor(data)) > >>>> File "/usr/lib64/python3.7/site-packages/borg/archive.py", line 987, in chunk_processor > >>>> chunk_entry = cache.add_chunk(self.key.id_hash(data), data, stats, wait=False) > >>>> File "/usr/lib64/python3.7/site-packages/borg/cache.py", line 897, in add_chunk > >>>> data = self.key.encrypt(chunk) > >>>> File "/usr/lib64/python3.7/site-packages/borg/crypto/key.py", line 370, in encrypt > >>>> self.nonce_manager.ensure_reservation(num_aes_blocks(len(data))) > >>>> File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 85, in ensure_reservation > >>>> self.commit_repo_nonce_reservation(reservation_end, repo_free_nonce) > >>>> File "/usr/lib64/python3.7/site-packages/borg/crypto/nonces.py", line 48, in commit_repo_nonce_reservation > >>>> self.repository.commit_nonce_reservation(next_unreserved, start_nonce) > >>>> File "/usr/lib64/python3.7/site-packages/borg/repository.py", line 346, in commit_nonce_reservation > >>>> fd.write(bin_to_hex(next_unreserved.to_bytes(8, byteorder='big'))) > >>>> File "/usr/lib64/python3.7/site-packages/borg/platform/base.py", line 176, in __exit__ > >>>> os.replace(self.tmppath, self.path) > >>>> FileExistsError: [Errno 17] File exists: '/mnt/synology/Qubes-personal/nonce.tmp' -> '/mnt/synology/Qubes-personal/nonce' > >>>> > >>>> Platform: Linux personal 4.19.132-1.pvops.qubes.x86_64 #1 SMP Tue Jul 14 03:42:21 UTC 2020 x86_64 > >>>> Linux: Fedora 30 Thirty > >>>> Borg: 1.1.11 Python: CPython 3.7.7 msgpack: 0.5.6 > >>>> PID: 25754 CWD: /home/user/Apps/ScriptsByMMS > >>>> sys.argv: ['/usr/bin/borg', 'create', '-v', '--stats', '--compression', 'zlib,5', '/mnt/synology/Qubes-personal::{hostname}-{now:%Y-%m-%d_T%H:%M}', '/home', '--exclude', 
'/home/*/.cache', '--exclude', '/home/*/.local/share/Trash', '--exclude', '/home/user/Downloads/NOT backed up', '--exclude', '/home/user/Seafile/mra > >>>> y', '--exclude', '/home/user/Seafile/snowdrift-design'] > >>>> SSH_ORIGINAL_COMMAND: None > >>>> > >>>> === > >>>> > >>>> Thank you, > >>>> > >>>> Michael Siepmann > >>>> > >>>> -- > >>>> > >>>> Michael Siepmann, Ph.D. > >>>> The Tech Design Psychologist? > >>>> Shaping technology to help people flourish? > >>>> 303-835-0501 TechDesignPsych.com OpenPGP: 6D65A4F7 > >>>> > >>>> > >>>> _______________________________________________ > >>>> Borgbackup mailing list > >>>> Borgbackup at python.org > >>>> https://mail.python.org/mailman/listinfo/borgbackup > >>> _______________________________________________ > >>> Borgbackup mailing list > >>> Borgbackup at python.org > >>> https://mail.python.org/mailman/listinfo/borgbackup > >> _______________________________________________ > >> Borgbackup mailing list > >> Borgbackup at python.org > >> https://mail.python.org/mailman/listinfo/borgbackup > > _______________________________________________ > > Borgbackup mailing list > > Borgbackup at python.org > > https://mail.python.org/mailman/listinfo/borgbackup > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From h.audeoud at gmail.com Fri Oct 9 04:15:07 2020 From: h.audeoud at gmail.com (=?UTF-8?Q?Henry-Joseph_Aud=c3=a9oud?=) Date: Fri, 9 Oct 2020 10:15:07 +0200 Subject: [Borgbackup] Backing up an encrypted directory In-Reply-To: <20201008195202.799adbe9@msi.defcon1.lan> References: <20201008195202.799adbe9@msi.defcon1.lan> Message-ID: On 08/10/2020 19:52, Bzzzz wrote: > On Thu, 8 Oct 2020 19:26:00 +0200 > Henry-Joseph Aud?oud wrote: > >>> /home/[?]/Wj1p20gsTVAihqkKm5nI6sUQ77vXz5H8nTfhN2IAQav2i7qk+hPX9B: >>> open: [Errno 126] Required key not available: >>> '/home/[?]/Wj1p20gsTVAihqkKm5nI6sUQ77vXz5H8nTfhN2IAQav2i7qk+hPX9B' >>> E /home/[?]/Wj1p20gsTVAihqkKm5nI6sUQ77vXz5H8nTfhN2IAQav2i7qk+hPX9B > >> Did anyone already had (or have an idea on how to handle) that kind of >> situation? > > from: https://github.com/google/fscrypt (Sion: Enabling the PAM module): > "Enabling the PAM module is needed for login passphrase-protected > directories to be automatically unlocked when you log in, and for login > passphrase-protected directories to remain accessible when you change > your login passphrase." > > So, it seems your first solution, following this doc, is to keep users > logged in when BB is run. Indeed, I do not currently use PAM to unlock the directory. Maybe I should set it up. It would solve the problem, as I am always logged in when I do my backups. > Otherwise, you'll want to have a look at 'expect' use into a shell > script, such as: > https://stackoverflow.com/questions/4857702/how-to-provide-password-to-a-command-that-prompts-for-one-in-bash > &|: > https://unix.stackexchange.com/questions/199718/how-to-make-the-script-automated-to-take-password-on-its-own > > and add the wanted lines to your BB backup launch script with a control > to make sure the partition's unlocked, which will fail the backup if it > is not the case, e-mailing you in this matter. In a fully automatic scenario, this would raise the problem of storing the passphrase in the configuration, which may not be acceptable; but it should be usable in some scenarios. Thanks for your help! 
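A rough sketch of that "check before backing up" guard for the automatic case - the directory, the repo path and the exact `fscrypt status` output matched by grep are assumptions to adapt to the real setup:

  #!/bin/sh
  # refuse to run borg while the fscrypt-protected directory is still locked
  DIR=/home/user/encrypted
  if ! fscrypt status "$DIR" 2>/dev/null | grep -qi 'unlocked: yes'; then
      echo "fscrypt directory $DIR is locked, skipping backup" >&2
      exit 1
  fi
  borg create /path/to/repo::'{hostname}-{now}' "$DIR"

Failing with a non-zero exit code instead of producing an archive full of "Required key not available" errors lets whatever already reports failed cron jobs pick it up.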
-- Henry-Joseph Audéoud From grumpy at mailfence.com Thu Oct 29 06:26:38 2020 From: grumpy at mailfence.com (grumpy at mailfence.com) Date: Thu, 29 Oct 2020 05:26:38 -0500 (CDT) Subject: [Borgbackup] something got corrupted Message-ID: My machine hard locked last night; after a reboot, borg is acting up. I have been doing backups to a repo on the same machine. Anyone got an idea how to repair this? Everything else "appears" to be OK. Killed stale lock grumpy3.grumpy-net at 207458817794334.21305-0. Removed stale exclusive roster lock for host grumpy3.grumpy-net at 207458817794334 pid 21305 thread 0. Removed stale exclusive roster lock for host grumpy3.grumpy-net at 207458817794334 pid 21305 thread 0. Data integrity error: Segment entry checksum mismatch [segment 533, offset 32441627] Traceback (most recent call last): File "/usr/lib/python3/dist-packages/borg/archiver.py", line 4455, in main exit_code = archiver.run(args) File "/usr/lib/python3/dist-packages/borg/archiver.py", line 4387, in run return set_ec(func(args)) File "/usr/lib/python3/dist-packages/borg/archiver.py", line 141, in wrapper kwargs['manifest'], kwargs['key'] = Manifest.load(repository, compatibility) File "/usr/lib/python3/dist-packages/borg/helpers.py", line 330, in load cdata = repository.get(cls.MANIFEST_ID) File "/usr/lib/python3/dist-packages/borg/repository.py", line 1070, in get self.index = self.open_index(self.get_transaction_id()) File "/usr/lib/python3/dist-packages/borg/repository.py", line 376, in get_transaction_id self.check_transaction() File "/usr/lib/python3/dist-packages/borg/repository.py", line 373, in check_transaction self.replay_segments(replay_from, segments_transaction_id) File "/usr/lib/python3/dist-packages/borg/repository.py", line 812, in replay_segments self._update_index(segment, objects) File "/usr/lib/python3/dist-packages/borg/repository.py", line 822, in _update_index for tag, key, offset, size in objects: File "/usr/lib/python3/dist-packages/borg/repository.py", line 1353, in iter_objects read_data=read_data) File "/usr/lib/python3/dist-packages/borg/repository.py", line 1451, in _read segment, offset)) borg.helpers.IntegrityError: Data integrity error: Segment entry checksum mismatch [segment 533, offset 32441627] Platform: Linux grumpy3 4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24) x86_64 Linux: debian 10.5 Borg: 1.1.9 Python: CPython 3.7.3 PID: 26086 CWD: /root sys.argv: ['/usr/bin/borg', 'create', '--verbose', '--filter', 'AME', '--list', '--stats', '--show-rc', '--compression', 'lz4', '--debug', '--critical', '--error', '--warning', '::grumpy3-{now}', '/'] SSH_ORIGINAL_COMMAND: None From jeffbrown.the at gmail.com Sun Nov 1 15:45:49 2020 From: jeffbrown.the at gmail.com (Jeffrey Brown) Date: Sun, 1 Nov 2020 15:45:49 -0500 Subject: [Borgbackup] Can I use the not-overwritten part of a dd-overwritten borg backup? Message-ID: Hello, There's an external hard drive where I back up lots of stuff using borg. It was around 54 GB last I checked. I accidentally overwrote the first chunk of that drive with NixOS 20.09 using dd. Most of the files I don't mind losing but a few are really important. (I'll use the cloud in the future.) Is it still possible to extract anything? Since I discovered the problem (and navigated to the disk in Dolphin, and stared in disbelief) the disk has been unplugged and untouched. Thank you. -- Jeff Brown | Jeffrey Benjamin Brown LinkedIn | Github | Twitter | Facebook | very old Website -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ldl08 at gmx.net Sun Nov 1 15:52:49 2020 From: ldl08 at gmx.net (ldl08 at gmx.net) Date: Sun, 1 Nov 2020 21:52:49 +0100 Subject: [Borgbackup] Why does 'borg extract' overwrite identical (existing) files? Message-ID: Dear borg experts, I was playing around with my borg backup. I first did a 'borg create' and thus ended up with an up-to-date backup. Next I ran the 'borg extract' command on the latest archive (the one I just created) and expected nothing to happen, assuming that identical files in identical locations would just be skipped. However, borg overwrites files -- which is superfluous to my mind, eating up bandwidth. May I kindly ask for a short hint as to the motivation behind that design choice? Second, what is the suggested way around this behaviour? What I am looking for is something similar to rsync's approach with regard to avoiding the transfer of already available files. Thanks for your kind support/help! David From tw at waldmann-edv.de Sun Nov 1 16:28:29 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 1 Nov 2020 22:28:29 +0100 Subject: [Borgbackup] Why does 'borg extract' overwrite identical (existing) files? In-Reply-To: References: Message-ID: > Next I ran the 'borg extract' command on the latest archive (the one I just created) and expected nothing to happen, Reality sometimes does not conform to personal expectation. :-) borg behaves like other unix archivers (think of tar x or unzip) and extracts to the current directory (not to the original location). Also, it stores relative paths in the archive (see borg list). There is nice documentation, please read it. > assuming that identical files in identical locations would just be skipped. As the expectation from borg is that you extract into an empty directory, it has no complicated / complete mechanisms to update a non-empty directory structure. "extract" is NOT "sync". What it does is create missing intermediate directories and kill all files which are "in the way", but not more than that. There is a ticket about the idea of a more advanced "extract", but it was not addressed yet. > May I kindly ask for a short hint as to the motivation behind that design choice? Simplicity. extract is extract, not "sync to archive state". > What I am looking for is something similar to rsync's approach with regard to avoiding the transfer of already available files. borg is not rsync, nor is it a sync tool in general - it's a backup tool. Locate that ticket for more details, it is not trivial. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Sun Nov 1 16:40:09 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 1 Nov 2020 22:40:09 +0100 Subject: [Borgbackup] Can I use the not-overwritten part of a dd-overwritten borg backup? In-Reply-To: References: Message-ID: <0bdc5b87-e248-2aa1-358c-440a2a97f87c@waldmann-edv.de> > There's an external hard drive where I back up lots of stuff using borg. > It was around 54 GB last I checked. You mean you have (or had) a filesystem on that HDD with a borg repository? > I accidentally overwrote the first chunk of that drive with NixOS 20.09 > using dd. Not sure what you mean by "chunk" in that context. Did you write a NixOS ISO image to the HDD block device, so that you lost the first hundreds or thousands of MBs of the filesystem structure? > Most of the files I don't > mind losing but a few are really important.
If you can recover a part of the borg repo, "borg check --repair" might or might not be able to recover some stuff; that depends on what and how much is intact. Only try "borg check --repair" on a COPY of the repo - the warning it shows you is there for a reason. But before you can do that, you need to recover the borg repo files from the damaged file system, which is out of scope for borg support - it is generic "I damaged my filesystem / I have a half-broken hard disk" file recovery. Also, you still need to have the borg (encryption) key and the corresponding passphrase. See the docs about where the key is stored (also, borg has told you to make a backup of the key, you can also use that). -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From lazyvirus at gmx.com Sun Nov 1 16:53:53 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Sun, 1 Nov 2020 22:53:53 +0100 Subject: [Borgbackup] Can I use the not-overwritten part of a dd-overwritten borg backup? In-Reply-To: References: Message-ID: <20201101225353.0480726b@msi.defcon1.lan> On Sun, 1 Nov 2020 15:45:49 -0500 Jeffrey Brown wrote: > Hello, Hi, > There's an external hard drive where I back up lots of stuff using > borg. It was around 54 GB last I checked. I accidentally overwrote the > first chunk of that drive with NixOS 20.09 using dd. Most of the files > I don't mind losing but a few are really important. (I'll use the > cloud in the future.) > > Is it still possible to extract anything? Dunno, that is ZE question. > Since I discovered the > problem (and navigated to the disk in Dolphin, and stared in > disbelief) the disk has been unplugged and untouched. Good - first things first: reconnect this HD, do not mount it, and make a dd copy of the partition onto another disk, on which you will work/experiment and from which you can reconstruct if something goes really bad. For tools, look at the *forensic* packages and/or https://www.system-rescue.org/ (a live system CD with lots of tools integrated, including BB - if you use this one, be aware that the procedure for making the USB copy has changed, which wasn't reflected in the doc 4 months ago: you do not need the script anymore to do so, just dd the .iso file to the USB key). For next time, consider reading this: https://www.carbonite.com/blog/article/2016/01/what-is-3-2-1-backup/ and if you don't wanna go this far, use at least a 2-backup strategy (or one backup and one full data copy on a reliable FS, such as ZFS) - also remember that backups are not repos (and that you can also goof with the cloud, or the cloud can goof with you). Keeping non-encrypted copies of your disks' partition tables is also a good idea and part of a good disaster strategy that you MUST put on paper and run through, at least in your mind, in a very calm environment, to see if there are any quirks in it - also note that there is not one strategy but several strategies to consider (what if the data center burns, what if the main servers' power lines are struck by lightning, what if a rogue employee steals some disks, what if the network gets stuffed with ransomware or some other threat of the same level, etc.). Of course, you also want to have unencrypted copies of each and every key+password from your BB machines' backups. That said, don't drink too much (strong) coffee, and good luck.
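In command form, the "image first, experiment only on copies" advice above might look like this - a sketch, with /dev/sdX1 and the paths standing in for whatever the fried disk and the spare disk really are:

  # raw image of the damaged partition, carrying on over read errors
  dd if=/dev/sdX1 of=/mnt/spare/fried-partition.img bs=1M conv=noerror,sync status=progress
  # after pulling the repo directory out of that image with your recovery tool of choice,
  # only ever run the repair on a copy of it
  cp -a /mnt/spare/recovered/borg-repo /mnt/spare/recovered/borg-repo.copy
  borg check --repair /mnt/spare/recovered/borg-repo.copy

GNU ddrescue is usually a better fit than plain dd for a half-readable disk, but the principle is the same: take an image first, then work only on copies.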
Jean-Yves From dassies at eml.cc Fri Nov 6 03:50:37 2020 From: dassies at eml.cc (Nils Blomqvist) Date: Fri, 06 Nov 2020 09:50:37 +0100 Subject: [Borgbackup] Issues understanding prune 'keep' rules Message-ID: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> Settings: keep-within 3d keep-daily 7 keep-weekly 4 keep-monthly 6 keep-yearly 7 Excerpt of first prune (24 Okt 2020 10:09:51): Keeping archive: backup-2020-07-29T19:17:17 Keeping archive: backup-2020-07-24T17:01:59 Keeping archive: backup-2020-07-19T08:10:21 Keeping archive: backup-2020-07-06T17:58:07 Pruning archive: backup-2020-07-05T08:12:26 Excerpt of second prune (5 Nov 2020 14:45:44): Keeping archive: backup-2020-07-29T19:17:17 Pruning archive: backup-2020-07-24T17:01:59 Pruning archive: backup-2020-07-19T08:10:21 Pruning archive: backup-2020-07-06T17:58:07 In the first prune, the latest four archives are kept. In the second, only the latest is kept. I don?t understand how the pruning rules are applied. From tw at waldmann-edv.de Fri Nov 6 07:49:12 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 6 Nov 2020 13:49:12 +0100 Subject: [Borgbackup] Issues understanding prune 'keep' rules In-Reply-To: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> References: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> Message-ID: <944bc876-2696-ec19-7708-7e6bafdb759c@waldmann-edv.de> https://borgbackup.readthedocs.io/en/stable/usage/prune.html There is also an example in docs/misc/prune-example.txt. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From jdc at uwo.ca Fri Nov 6 09:24:04 2020 From: jdc at uwo.ca (Dan Christensen) Date: Fri, 06 Nov 2020 09:24:04 -0500 Subject: [Borgbackup] Issues understanding prune 'keep' rules In-Reply-To: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> (Nils Blomqvist's message of "Fri, 06 Nov 2020 09:50:37 +0100") References: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> Message-ID: <875z6ilg1n.fsf@uwo.ca> Presumably you have backups more recent than those in July, and the more recent ones are "using up" the keep rules. If you want help, you'll need to post a list of all recent archives up to the ones that you don't understand, and the exact borg command that you used. Dan On Nov 6, 2020, Nils Blomqvist wrote: > Settings: > > keep-within 3d > keep-daily 7 > keep-weekly 4 > keep-monthly 6 > keep-yearly 7 > > Excerpt of first prune (24 Okt 2020 10:09:51): > > Keeping archive: backup-2020-07-29T19:17:17 > Keeping archive: backup-2020-07-24T17:01:59 > Keeping archive: backup-2020-07-19T08:10:21 > Keeping archive: backup-2020-07-06T17:58:07 > Pruning archive: backup-2020-07-05T08:12:26 > > Excerpt of second prune (5 Nov 2020 14:45:44): > > Keeping archive: backup-2020-07-29T19:17:17 > Pruning archive: backup-2020-07-24T17:01:59 > Pruning archive: backup-2020-07-19T08:10:21 > Pruning archive: backup-2020-07-06T17:58:07 > > In the first prune, the latest four archives are kept. > In the second, only the latest is kept. > > I don?t understand how the pruning rules are applied. From tve at voneicken.com Sun Nov 8 21:23:44 2020 From: tve at voneicken.com (Thorsten von Eicken) Date: Mon, 9 Nov 2020 02:23:44 +0000 Subject: [Borgbackup] logging question Message-ID: <01000175aad0fc00-d6eb932a-c651-4733-a165-d461c4d6b9a4-000000@email.amazonses.com> Does borg server keep a log of commands somewhere, or is that something one can enable? 
I have a backup server to which many clients back up using borg serve running using ssh forced commands and it would be really nice if I could get a list of commands executed on the server, e.g., if borg serve logged each command to a file and perhaps also the result code. This would be in addition to the regular logging which happens on the clients. Maybe this is something I could create by exec'ing a shell script instead of the borg serve command, but I'm a bit weary of unexpected security implications. Thoughts? -TvE -------------- next part -------------- An HTML attachment was scrubbed... URL: From public at enkore.de Mon Nov 9 11:46:20 2020 From: public at enkore.de (Marian Beermann) Date: Mon, 9 Nov 2020 17:46:20 +0100 Subject: [Borgbackup] logging question In-Reply-To: <01000175aad0fc00-d6eb932a-c651-4733-a165-d461c4d6b9a4-000000@email.amazonses.com> References: <01000175aad0fc00-d6eb932a-c651-4733-a165-d461c4d6b9a4-000000@email.amazonses.com> Message-ID: The server doesn't have access to this information (=the command line of the client or which files they process). The information you could tee(1) out of the borg serve process is considerably lower level than that and would need some non-trivial fidgeting to figure out what the client did (see internals docs for details, they're decent). Cheers, Marian > Does borg server keep a log of commands somewhere, or is that something > one can enable? > > I have a backup server to which many clients back up using borg serve > running using ssh forced commands and it would be really nice if I could > get a list of commands executed on the server, e.g., if borg serve > logged each command to a file and perhaps also the result code. This > would be in addition to the regular logging which happens on the > clients. Maybe this is something I could create by exec'ing a shell > script instead of the borg serve command, but I'm a bit weary of > unexpected security implications. > > Thoughts? > > -TvE > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From tve at voneicken.com Mon Nov 9 12:41:58 2020 From: tve at voneicken.com (Thorsten von Eicken) Date: Mon, 9 Nov 2020 17:41:58 +0000 Subject: [Borgbackup] logging question In-Reply-To: References: <01000175aad0fc00-d6eb932a-c651-4733-a165-d461c4d6b9a4-000000@email.amazonses.com> Message-ID: <01000175ae19a816-c9abacb0-b5ab-4aff-9544-41eb0dbc4e1c-000000@email.amazonses.com> I see, thanks for the response. BTW, I looked at the internals docs but didn't see any description of the RPC protocol (other than its security attributes). I'm probably not looking in the right place... Thanks! TvE On 11/9/20 8:46 AM, Marian Beermann wrote: > The server doesn't have access to this information (=the command line of > the client or which files they process). The information you could > tee(1) out of the borg serve process is considerably lower level than > that and would need some non-trivial fidgeting to figure out what the > client did (see internals docs for details, they're decent). > > Cheers, Marian > >> Does borg server keep a log of commands somewhere, or is that something >> one can enable? >> >> I have a backup server to which many clients back up using borg serve >> running using ssh forced commands and it would be really nice if I could >> get a list of commands executed on the server, e.g., if borg serve >> logged each command to a file and perhaps also the result code. 
This >> would be in addition to the regular logging which happens on the >> clients. Maybe this is something I could create by exec'ing a shell >> script instead of the borg serve command, but I'm a bit weary of >> unexpected security implications. >> >> Thoughts? >> >> -TvE >> >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bryan at bryanfields.net Tue Nov 10 04:18:29 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Tue, 10 Nov 2020 04:18:29 -0500 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' Message-ID: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> So this is strange, I've been using borg for years now after moving from attic, largely without issues. Recently purge started to fail on one of my VM's backup jobs, however the backup runs fine. The errors are below, and I'm not familiar with the internals of borg to understand the error "'tuple' object has no attribute 'items'" I did STFL and was only able to find some references to a similar issue, https://mail.python.org/pipermail/borgbackup/2018q1/001004.html but there's not any follow up or solved issue. I'm using an offsite backup here via ssh, and don't believe there's any corruption as it's on ZFS with the backup provider. I can likely blow away the entire archive and start over, but I'd prefer to fix it rather than lose all my old data. > root at web1:~# $BORG prune -v --list --prefix '{hostname}-' --keep-daily=7 --keep-weekly=4 --keep-monthly=6 > Keeping archive: web1-2020-11-09_05:30 Mon, 2020-11-09 05:31:09 [fc69044c28023557b31276250026d144cf4fff24e137f7259b5221dbb559a2d1] > Keeping archive: web1-2020-11-08_05:30 Sun, 2020-11-08 05:31:15 [d3a5b96b0cdd849435e168c9d594b6bd405e8ac61d11bd82573557dae011a210] > Keeping archive: web1-2020-11-07_05:30 Sat, 2020-11-07 05:31:09 [9f01da7b3c65effaa216747f6f19b6e6e93d41dbb5f134b9c6ce068ac84fcfaa] > Keeping archive: web1-2020-11-06_05:30 Fri, 2020-11-06 05:31:06 [0bbd8e3618511e6cfbf947d2743758fc45dbfb2827af437db37d3e74aa7369d5] > Keeping archive: web1-2020-11-05_05:30 Thu, 2020-11-05 05:31:07 [0cdd3fe8e2a39a8b37059393170e25f62f45a38d241ee2343d053b561b24faf6] > Keeping archive: web1-2020-11-04_05:30 Wed, 2020-11-04 05:31:05 [9cf55b80cd4cbc31bf1d169c8ca583c1cda838cce231fbdf54e85b26e5ac83f6] > Keeping archive: web1-2020-11-03_05:30 Tue, 2020-11-03 05:31:06 [81f785f6a955cffcc6abba2f20f194e955212a78edfb93a10cdb915c150e345b] > Pruning archive: web1-2020-11-02_08:59 Mon, 2020-11-02 08:59:50 [dca63c90487dc9a8fa91b791de3fd5c32b86c8ccde3915f5c575845d5604839f] (1/19) > Pruning archive: web1-2020-11-02_05:31 Mon, 2020-11-02 05:31:18 [630d944e9c56289ec0ed72911eb46b45c2fd7c94f7a9e5d247b8bff573e3b840] (2/19) > Keeping archive: web1-2020-11-01_05:30 Sun, 2020-11-01 05:31:06 [e87336360e3b32393f75315036610727423d711051f4f27c88a401ab581ec148] > Keeping archive: web1-2020-10-31_05:30 Sat, 2020-10-31 05:31:07 [5d81b50fe74fb49cf8eb6bbefd6c447090c06d2e413a220573a543a51dfc8923] > Pruning archive: web1-2020-10-30_05:30 Fri, 2020-10-30 05:31:10 [b56bdfc76096a1e8c427d942be4b1d678b8fcc363f99996f38473739aa260610] (3/19) > Pruning archive: web1-2020-10-29_05:30 Thu, 2020-10-29 05:31:06 [611d0a1cb816665f2b69d51e931bb4feddfd3bf97ca005f04c52ff0cd17f9516] (4/19) > Pruning archive: web1-2020-10-28_05:30 Wed, 2020-10-28 05:31:07 
[4a59f64df3fa87a864c7f7b573b5e31fc43a0fbf70a0321f2c61766d872cca45] (5/19) > Pruning archive: web1-2020-10-27_05:30 Tue, 2020-10-27 05:31:09 [86a693fff8bcea38a0d7e258311aeab28efbf14f96186d6f4356c5294fab30e2] (6/19) > Pruning archive: web1-2020-10-26_05:30 Mon, 2020-10-26 05:31:06 [403bc6cde2d2fd8c71fc776e29f01363eefe4fec62c01bb3b40d9bec51e21747] (7/19) > Keeping archive: web1-2020-10-25_05:30 Sun, 2020-10-25 05:31:08 [339bb4a29722607e6685a8dec1a2c525655414dd65f0be15996bd3d006cda0fc] > Pruning archive: web1-2020-10-24_05:30 Sat, 2020-10-24 05:31:04 [461ed01c2e70a07c83644d9534d2bc11963512a33ea54824cf16ccd54d214059] (8/19) > Pruning archive: web1-2020-10-23_05:30 Fri, 2020-10-23 05:31:05 [bc50078bba05f5db5b5502d0a6d408eb2706ec701e222e379d799b380c2424d9] (9/19) > Pruning archive: web1-2020-10-22_05:30 Thu, 2020-10-22 05:31:04 [45efbed6fa8401f42c43cdd27c73746fc7b7315aff1165cd5a709832d7f76699] (10/19) > Pruning archive: web1-2020-10-21_05:30 Wed, 2020-10-21 05:31:08 [0f5d7a7c3e5ce9e5dbeaff236d627e72f6e0babf39412114e29abcda050ff163] (11/19) > Pruning archive: web1-2020-10-20_05:30 Tue, 2020-10-20 05:31:06 [c5df8d5522fa2b152518d775aa38c7fc74625b6e25b1e061a99f27b37e0ee0b8] (12/19) > Pruning archive: web1-2020-10-19_05:30 Mon, 2020-10-19 05:31:07 [762dc3af96863e7b87616bb4d2efacf31dd502a68c34b4d98088724d7f2d6b5d] (13/19) > Keeping archive: web1-2020-10-18_05:30 Sun, 2020-10-18 05:31:06 [f9804dde3e87079850ff133dc06783125492d03d594d0d67c4d9d9b9396a49de] > Keeping archive: web1-2020-10-06_05:30 Tue, 2020-10-06 05:30:54 [3f47a63d36dd6bd034a63bdb40d90e7328d7d750c0d47f65539595c17f1fe767] > Pruning archive: web1-2020-10-05_05:30 Mon, 2020-10-05 05:31:07 [e9adb2c87f51b33e9071d8f4157ed94c090ed54bd8e783c02a36a0a2d5d117f2] (14/19) > Local Exception > Traceback (most recent call last): > File "/usr/local/lib/python3.6/dist-packages/borg/archiver.py", line 4591, in main > exit_code = archiver.run(args) > File "/usr/local/lib/python3.6/dist-packages/borg/archiver.py", line 4523, in run > return set_ec(func(args)) > File "/usr/local/lib/python3.6/dist-packages/borg/archiver.py", line 176, in wrapper > return method(self, args, repository=repository, **kwargs) > File "/usr/local/lib/python3.6/dist-packages/borg/archiver.py", line 1636, in do_prune > Archive(repository, key, manifest, archive.name, cache).delete(stats, forced=args.forced) > File "/usr/local/lib/python3.6/dist-packages/borg/archive.py", line 857, in delete > item = Item(internal_dict=item) > File "src/borg/item.pyx", line 46, in borg.item.PropDict.__init__ > File "src/borg/item.pyx", line 56, in borg.item.PropDict.update_internal > AttributeError: 'tuple' object has no attribute 'items' > > Platform: Linux web1.keekles.org 4.15.0-118-generic #119-Ubuntu SMP Tue Sep 8 12:30:01 UTC 2020 x86_64 > Linux: Ubuntu 18.04 bionic > Borg: 1.1.14 Python: CPython 3.6.9 msgpack: 0.5.6 > PID: 1696 CWD: /root > sys.argv: ['/usr/local/bin/borg', 'prune', '-v', '--list', '--prefix', '{hostname}-', '--keep-daily=7', '--keep-weekly=4', '--keep-monthly=6'] > SSH_ORIGINAL_COMMAND: None Thanks, -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From tw at waldmann-edv.de Tue Nov 10 04:52:17 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 10 Nov 2020 10:52:17 +0100 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> Message-ID: > The errors are below, and I'm not familiar 
with the internals of borg to > understand the error "'tuple' object has no attribute 'items'" It runs into that while trying to delete an archive from your repo. To delete it, it reads all item metadata from the archive because it needs to decrement content data chunk references. A filesystem item's metadata is represented as a python dictionary, but in your case there is a tuple (a different data type, which is totally unexpected). It could be that the msgpacked data is corrupted (specifically: the byte indicating the data type to create when unpacking) and then unpack creates a tuple instead of a dict. Is this an encrypted (or authenticated) repo or did you use "-e none" to switch off encryption/authentication? Did you use borg < 1.1.11 on this repository? If so, did you follow the advisory in the changelog when upgrading to >= 1.1.11? What borg version is running on the repository server side? > and don't believe there's any corruption as it's on ZFS with the backup provider. Corruption can happen at a lot of places, not just on storage. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From Bryan at bryanfields.net Tue Nov 10 06:06:05 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Tue, 10 Nov 2020 06:06:05 -0500 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> Message-ID: <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> On 11/10/20 4:52 AM, Thomas Waldmann wrote: >> The errors are below, and I'm not familiar with the internals of borg to >> understand the error "'tuple' object has no attribute 'items'" > > It runs into that while trying to delete an archive from your repo. > To delete it, it reads all item metadata from the archive because it > needs to decrement content data chunk references. > > A filesystem item's metadata is represented as a python dictionary, but > in your case there is a tuple (a different data type, which is totally > unexpected). > > It could be that the msgpacked data is corrupted (specifically: the byte > indicating the data type to create when unpacking) and then unpack > creates a tuple instead of a dict. Ok, I understand this a bit, I've messed with python to do unpacking of binary data and the unpack functions. > Is this an encrypted (or authenticated) repo or did you use "-e none" to > switch off encryption/authentication? This is an encrypted repo. > Did you use borg < 1.1.11 on this repository? If so, did you follow the > advisory in the changelog when upgrading to >= 1.1.11? I'm not sure what the prior version was. I did a pip upgrade of it to the latest before posting here. Is there a way to see what it was written to with? > What borg version is running on the repository server side? root at web1:~# ssh .rsync.net -t /usr/local/bin/borg1/borg1 -V borg1 1.1.14 >> and don't believe there's any corruption as it's on ZFS with the backup provider. > > Corruption can happen at a lot of places, not just on storage. Understood, it's possible, maybe not likely. 
Thank you, -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From tw at waldmann-edv.de Tue Nov 10 06:27:07 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 10 Nov 2020 12:27:07 +0100 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> Message-ID: > This is an encrypted repo. OK, in that case, because we use authenticated encryption, there should be no undetected corruption until the authentication is finished: read from repo (storage -> borg -> sshd -> network -> ssh -> borg) authenticate (hmac-sha256 or blake2b) decrypt decompress unpack msgpacked data (here it crashes for you) So, because authenticate was successful (otherwise you would have seen an IntegrityError exception), "valid" data was in memory. So, if that blows up right afterwards, that means a RAM or CPU data corruption issue. Alternatively, the corruption maybe could have also happened at borg create time after the pack, but before the authentication step: pack compress encrypt authenticate write to repo (...) So, I'ld bet you had / have a RAM issue on your backup client. memtest86+ results? >> Did you use borg < 1.1.11 on this repository? If so, did you follow the >> advisory in the changelog when upgrading to >= 1.1.11? > > I'm not sure what the prior version was. I did a pip upgrade of it to the > latest before posting here. Is there a way to see what it was written to with? Don't think so. But you could just follow the advisory now, if unsure. If your repo index is corrupt and points to the wrong place in the segment files, borg might read invalid data and unpacking that might also lead to unexpected tuples... >> What borg version is running on the repository server side? > > root at web1:~# ssh .rsync.net -t /usr/local/bin/borg1/borg1 -V > borg1 1.1.14 Good! (i was fearing you use the 0.29 stoneage version) -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From Bryan at bryanfields.net Tue Nov 10 06:38:34 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Tue, 10 Nov 2020 06:38:34 -0500 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> Message-ID: On 11/10/20 6:27 AM, Thomas Waldmann wrote: >> This is an encrypted repo. > > OK, in that case, because we use authenticated encryption, there should > be no undetected corruption until the authentication is finished: > > read from repo (storage -> borg -> sshd -> network -> ssh -> borg) > authenticate (hmac-sha256 or blake2b) > decrypt > decompress > unpack msgpacked data (here it crashes for you) > > So, because authenticate was successful (otherwise you would have seen > an IntegrityError exception), "valid" data was in memory. > > So, if that blows up right afterwards, that means a RAM or CPU data > corruption issue. > > Alternatively, the corruption maybe could have also happened at borg > create time after the pack, but before the authentication step: > > pack > compress > encrypt > authenticate > write to repo (...) Ok, I understand the order here now. > So, I'ld bet you had / have a RAM issue on your backup client. > > memtest86+ results? 
It's a VM on server with ECC ram and no ECC errors logged in syslog/observium. Granted how parity works, it's possible, but unlikely. Actually, I did a migration to the other hypervisor for the VM. Same issue on a different hypervisor. I doubt it's the hardware, at least on my end. > But you could just follow the advisory now, if unsure. I am doing so. This is the only server giving issues, the others have been fine after moving to 1.1.14. > If your repo index is corrupt and points to the wrong place in the > segment files, borg might read invalid data and unpacking that might > also lead to unexpected tuples... Ok I have done this and get the below, but haven't run --repair yet as it gave a rather unnerving message about am I really, really sure, here be dragons, etc. > root at web1:~# borg check -v > Remote: Starting repository check > Remote: Starting repository index check > Remote: Index object count match. > Remote: Completed repository check, no problems found. > Starting archive consistency check... > Analyzing archive web1-2020-04-30_05:30 (1/37) > Analyzing archive web1-2020-05-31_05:30 (2/37) > Analyzing archive web1-2020-06-30_05:30 (3/37) > Analyzing archive web1-2020-07-31_05:30 (4/37) > Analyzing archive web1-2020-08-31_05:30 (5/37) > Analyzing archive web1-2020-09-13_05:30 (6/37) > Analyzing archive web1-2020-09-20_05:30 (7/37) > Analyzing archive web1-2020-09-27_05:30 (8/37) > Analyzing archive web1-2020-09-30_05:30 (9/37) > Analyzing archive web1-2020-10-04_05:30 (10/37) > Analyzing archive web1-2020-10-05_05:30 (11/37) > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 
000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e]
> [... the preceding "Did not get expected metadata dict when unpacking item metadata (not a dictionary)" line repeats many more times for this same chunk ...]
> Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk:
000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Analyzing archive web1-2020-10-06_05:30 (12/37) > Analyzing archive web1-2020-10-18_05:30 (13/37) > Analyzing archive web1-2020-10-19_05:30 (14/37) > Analyzing archive web1-2020-10-20_05:30 (15/37) > Analyzing archive web1-2020-10-21_05:30 (16/37) > Analyzing archive web1-2020-10-22_05:30 (17/37) > Analyzing archive web1-2020-10-23_05:30 (18/37) > Analyzing archive web1-2020-10-24_05:30 (19/37) > Analyzing archive web1-2020-10-25_05:30 (20/37) > Analyzing archive web1-2020-10-26_05:30 (21/37) > Analyzing archive web1-2020-10-27_05:30 (22/37) > Analyzing archive web1-2020-10-28_05:30 (23/37) > Analyzing archive web1-2020-10-29_05:30 (24/37) > Analyzing archive web1-2020-10-30_05:30 (25/37) > Analyzing archive web1-2020-10-31_05:30 (26/37) > Analyzing archive web1-2020-11-01_05:30 (27/37) > Analyzing archive web1-2020-11-02_05:31 (28/37) > Analyzing archive web1-2020-11-02_08:59 (29/37) > Analyzing archive web1-2020-11-03_05:30 (30/37) > Analyzing archive web1-2020-11-04_05:30 
(31/37) > Analyzing archive web1-2020-11-05_05:30 (32/37) > Analyzing archive web1-2020-11-06_05:30 (33/37) > Analyzing archive web1-2020-11-07_05:30 (34/37) > Analyzing archive web1-2020-11-08_05:30 (35/37) > Analyzing archive web1-2020-11-09_05:30 (36/37) > Analyzing archive web1-2020-11-10_05:30 (37/37) > Archive consistency check complete, problems found. >>> What borg version is running on the repository server side? >> >> root at web1:~# ssh .rsync.net -t /usr/local/bin/borg1/borg1 -V >> borg1 1.1.14 > > Good! (i was fearing you use the 0.29 stoneage version) lol, ran into that issue like 3 years ago :) Thanks, -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From lazyvirus at gmx.com Tue Nov 10 06:53:31 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Tue, 10 Nov 2020 12:53:31 +0100 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> Message-ID: <20201110125331.26c9b8ff@msi.defcon1.lan> On Tue, 10 Nov 2020 06:38:34 -0500 Bryan Fields wrote: > > So, if that blows up right afterwards, that means a RAM or CPU data > > corruption issue. ? > It's a VM on server with ECC ram and no ECC errors logged in > syslog/observium. Granted how parity works, it's possible, but > unlikely. > > Actually, I did a migration to the other hypervisor for the VM. Same > issue on a different hypervisor. I doubt it's the hardware, at least > on my end. If it can help OP ; in my experience, electrical shocks can lead to RAM/CPU/MB trashing, especially lightnings when you dwell far from town, even with so called surge protectors - worse, trashed RAM, even ECC, isn't always uncovered by memtest86+ in this case. We even had a customer whose MB+RAM+CPU was trashed all together but without any visible sign until it crashed badly after 10'~15' running - repair came effective from iterations in changing HW. Jean-Yves From tw at waldmann-edv.de Tue Nov 10 06:56:09 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 10 Nov 2020 12:56:09 +0100 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> Message-ID: <77dc267c-328c-4fa0-0857-12275ac3542d@waldmann-edv.de> >> So, I'ld bet you had / have a RAM issue on your backup client. >> >> memtest86+ results? > > It's a VM on server with ECC ram and no ECC errors logged in syslog/observium. > Granted how parity works, it's possible, but unlikely. Agreed, possible but unlikely. >> But you could just follow the advisory now, if unsure. > > I am doing so. This is the only server giving issues, the others have been > fine after moving to 1.1.14. >> If your repo index is corrupt and points to the wrong place in the >> segment files, borg might read invalid data and unpacking that might >> also lead to unexpected tuples... > > > Ok I have done this and get the below, but haven't run --repair yet as it gave > a rather unnerving message about am I really, really sure, here be dragons, etc. We've recently changed that message from "experimental" (which it is not really any more) to "potentially dangerous" (which is true although it usually works). But if you can't afford to lose the repo, make a copy of it. 
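For the "make a copy of it" step, a minimal sketch, assuming the repository is reachable as a plain directory (locally or via a shell on the storage provider); the paths are purely illustrative, only the archive name is taken from the log above:

  # Illustrative paths -- adjust to where the repository really lives.
  rsync -a /backups/web1-repo/ /backups/web1-repo.copy/

  # Repair the copy only; the original stays untouched until you are satisfied.
  borg check -v --repair /backups/web1-repo.copy

  # Spot-check that the archive which used to trip the error is listable again.
  borg list /backups/web1-repo.copy::web1-2020-10-05_05:30 >/dev/null && echo "archive lists cleanly"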
Advisory to follow: https://borgbackup.readthedocs.io/en/stable/changes.html#pre-1-1-11-potential-index-corruption-data-loss-issue >> root at web1:~# borg check -v >> Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] That chunks seems broken. Or the index points to the wrong place for that chunk. Guess we could add the check emitting this message also at the place where it crashed for you. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From Bryan at bryanfields.net Tue Nov 10 09:55:41 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Tue, 10 Nov 2020 09:55:41 -0500 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: <77dc267c-328c-4fa0-0857-12275ac3542d@waldmann-edv.de> References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> <77dc267c-328c-4fa0-0857-12275ac3542d@waldmann-edv.de> Message-ID: <64ded339-d7cc-ca2f-4251-9c546dfb25e2@bryanfields.net> On 11/10/20 6:56 AM, Thomas Waldmann wrote: > But if you can't afford to lose the repo, make a copy of it. > > Advisory to follow: > > https://borgbackup.readthedocs.io/en/stable/changes.html#pre-1-1-11-potential-index-corruption-data-loss-issue > >>> root at web1:~# borg check -v >>> Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > That chunks seems broken. Or the index points to the wrong place for > that chunk. > > Guess we could add the check emitting this message also at the place > where it crashed for you. I did the repair as in the advisory and it cleaned it up with the one archive that was failing. I ran my normal script to be sure and it's working again. Thank you -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From tw at waldmann-edv.de Tue Nov 10 09:57:53 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 10 Nov 2020 15:57:53 +0100 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: <64ded339-d7cc-ca2f-4251-9c546dfb25e2@bryanfields.net> References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> <77dc267c-328c-4fa0-0857-12275ac3542d@waldmann-edv.de> <64ded339-d7cc-ca2f-4251-9c546dfb25e2@bryanfields.net> Message-ID: <6b0f75d4-146c-679a-85a0-429e0d215eda@waldmann-edv.de> > I did the repair as in the advisory and it cleaned it up with the one archive > that was failing. I ran my normal script to be sure and it's working again. Did the error just go away via borg check --repair or did it kill the erroneous chunk / archive? 
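One way to answer that after the fact, sketched with a placeholder repository path (only the archive name comes from the log above):

  # Re-check just the archive metadata of the repaired repository.
  borg check -v --archives-only /path/to/repo

  # See whether the archive that used to trigger the error is still present and listable.
  borg list /path/to/repo::web1-2020-10-05_05:30 | head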
-- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From Bryan at bryanfields.net Tue Nov 10 14:32:56 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Tue, 10 Nov 2020 14:32:56 -0500 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: <6b0f75d4-146c-679a-85a0-429e0d215eda@waldmann-edv.de> References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> <77dc267c-328c-4fa0-0857-12275ac3542d@waldmann-edv.de> <64ded339-d7cc-ca2f-4251-9c546dfb25e2@bryanfields.net> <6b0f75d4-146c-679a-85a0-429e0d215eda@waldmann-edv.de> Message-ID: <5bff37b0-0019-c22e-5009-b8af86f0acc1@bryanfields.net> On 11/10/20 9:57 AM, Thomas Waldmann wrote: > Did the error just go away via borg check --repair or did it kill the > erroneous chunk / archive? Not sure, I don't have that terminal open and it was an old to be pruned archive anyways so it's a moot point. I do understand your wanting to know to be able to fix it, sorry I didn't note it. Thank you, -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From Bryan at bryanfields.net Tue Nov 10 14:46:22 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Tue, 10 Nov 2020 14:46:22 -0500 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: <20201110125331.26c9b8ff@msi.defcon1.lan> References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> <20201110125331.26c9b8ff@msi.defcon1.lan> Message-ID: On 11/10/20 6:53 AM, Bzzzz wrote: > If it can help OP ; in my experience, electrical shocks can lead to > RAM/CPU/MB trashing, especially lightnings when you dwell far from > town, even with so called surge protectors - worse, trashed RAM, even > ECC, isn't always uncovered by memtest86+ in this case. Yea, that would be a surge. Grounding is very important, you want to have everything tied to a single point ground or utilize a halo type ground inside your room. I'm in St Petersburg, and we are well known for the lightning storms here. The other issue is all cables into and out of the room should be surge protected and grounded the same as well. This way during a surge, or strike event your systems all rise and fall in potential together. Things like bend radius of wire, and loops externally of cable come into play. a .01mH inductor cut with several million volts across it couples a nice surge into things. That said, I have a .2 Ohm @200 kHz ground here at home, measured with a ground tester, and my server having issues is in a co-lo facility inside a concrete building with a steel structure and redundant filtered 208v power off a HVDC plant. Some men are Catholics, some are Muslims; my religion is grounding :) > We even had a customer whose MB+RAM+CPU was trashed all together but > without any visible sign until it crashed badly after 10'~15' running - > repair came effective from iterations in changing HW. Interestingly enough, I went though a major issue with a SCSI controller in a server here. Only caught it with ZFS during checks where it was hitting the serial bus heavily. I could rebuild/build the array fine, but two disks kept dying during scrubs. I swapped the cages, the disks, the cables and then finally the controllers. Problem was the controller. It was missing some 0402 sized caps on the serial lines on the underside of the board. 
Unsure if it was defective or broken during install, but the caps filtered the noise off the bus, and during busy operations they allowed enough corruption of the signal that ZFS would fail the disks. -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From lazyvirus at gmx.com Tue Nov 10 15:04:21 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Tue, 10 Nov 2020 21:04:21 +0100 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> <20201110125331.26c9b8ff@msi.defcon1.lan> Message-ID: <20201110210421.50d05224@msi.defcon1.lan> On Tue, 10 Nov 2020 14:46:22 -0500 Bryan Fields wrote: > Things like bend radius of wire, and loops externally of cable come > into play. a .01mH inductor cut with several million volts across it > couples a nice surge into things. > > That said, I have a .2 Ohm @200 kHz ground here at home, measured with > a ground tester, Good thing, too many people test their ground connection with a simple multimeter, ignoring they can still be killed this way. > and my server having issues is in a co-lo facility > inside a concrete building with a steel structure and redundant > filtered 208v power off a HVDC plant. > > Some men are Catholics, some are Muslims; my religion is grounding :) Unfortunate children of yours ;-p) > > We even had a customer whose MB+RAM+CPU was trashed all together but > > without any visible sign until it crashed badly after 10'~15' > > running - repair came effective from iterations in changing HW. > > Interestingly enough, I went though a major issue with a SCSI > controller in a server here. Only caught it with ZFS during checks > where it was hitting the serial bus heavily. I could rebuild/build > the array fine, but two disks kept dying during scrubs. I swapped the > cages, the disks, the cables and then finally the controllers. > Problem was the controller. It was missing some 0402 sized caps on > the serial lines on the underside of the board. Unsure if it was > defective or broken during install, but the caps filtered the noise > off the bus, and during busy operations they allowed enough corruption > of the signal that ZFS would fail the disks. Interesting, I take notice and will snap both sides of the next controllers in high-resolution, just in case - thanks. Jean-Yves From Bryan at bryanfields.net Tue Nov 10 20:31:18 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Tue, 10 Nov 2020 20:31:18 -0500 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: <20201110210421.50d05224@msi.defcon1.lan> References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> <20201110125331.26c9b8ff@msi.defcon1.lan> <20201110210421.50d05224@msi.defcon1.lan> Message-ID: <360e3fe1-310b-3d60-3a95-a113bea2ac4f@bryanfields.net> On 11/10/20 3:04 PM, Bzzzz wrote: >> Some men are Catholics, some are Muslims; my religion is grounding :) > Unfortunate children of yours ;-p) It's served me well https://i.imgur.com/Z12jh94.jpg That center building is one of my sites (borg's running in the radio room) getting a direct hit to our antenna. It received no damage, other than the top point of the aluminum mast cap was slightly melted. I have a strike counter up there, it's hit 5+ times a year. 
-- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From Bryan at bryanfields.net Wed Nov 11 00:19:54 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Wed, 11 Nov 2020 00:19:54 -0500 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: <5bff37b0-0019-c22e-5009-b8af86f0acc1@bryanfields.net> References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> <77dc267c-328c-4fa0-0857-12275ac3542d@waldmann-edv.de> <64ded339-d7cc-ca2f-4251-9c546dfb25e2@bryanfields.net> <6b0f75d4-146c-679a-85a0-429e0d215eda@waldmann-edv.de> <5bff37b0-0019-c22e-5009-b8af86f0acc1@bryanfields.net> Message-ID: On 11/10/20 2:32 PM, Bryan Fields wrote: > On 11/10/20 9:57 AM, Thomas Waldmann wrote: >> Did the error just go away via borg check --repair or did it kill the >> erroneous chunk / archive? > > Not sure, I don't have that terminal open and it was an old to be pruned > archive anyways so it's a moot point. I do understand your wanting to know to > be able to fix it, sorry I didn't note it. I was able to recover the output from screen! > root at web1:~# borg check -v --repair > This is a potentially dangerous function. > check --repair might lead to data loss (for kinds of corruption it is not > capable of dealing with). BE VERY CAREFUL! > > Type 'YES' if you understand this and want to continue: YES > Remote: Starting repository check > > > > Remote: Starting repository index check > Remote: Completed repository check, no problems found. > Starting archive consistency check... > Analyzing archive web1-2020-04-30_05:30 (1/37) > Analyzing archive web1-2020-05-31_05:30 (2/37) > Analyzing archive web1-2020-06-30_05:30 (3/37) > Analyzing archive web1-2020-07-31_05:30 (4/37) > Analyzing archive web1-2020-08-31_05:30 (5/37) > Analyzing archive web1-2020-09-13_05:30 (6/37) > Analyzing archive web1-2020-09-20_05:30 (7/37) > Analyzing archive web1-2020-09-27_05:30 (8/37) > Analyzing archive web1-2020-09-30_05:30 (9/37) > Analyzing archive web1-2020-10-04_05:30 (10/37) > Analyzing archive web1-2020-10-05_05:30 (11/37) > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Did not get expected metadata dict when unpacking item metadata (not a 
dictionary) [chunk: 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > [the same "Did not get expected metadata dict" warning was repeated many more times, always for chunk 000385_2c270a4e6dd4f70477ff2e74be5d6961f763ddb8e17222ab7db2a33d56cea59e] > Analyzing archive web1-2020-10-06_05:30 (12/37) > Analyzing archive web1-2020-10-18_05:30 (13/37) > Analyzing archive web1-2020-10-19_05:30 (14/37) > Analyzing archive web1-2020-10-20_05:30 (15/37) > Analyzing archive web1-2020-10-21_05:30 (16/37) > Analyzing archive web1-2020-10-22_05:30 (17/37) > Analyzing archive web1-2020-10-23_05:30 (18/37) > Analyzing archive web1-2020-10-24_05:30 (19/37) > Analyzing archive web1-2020-10-25_05:30 (20/37) > Analyzing archive web1-2020-10-26_05:30
(21/37) > Analyzing archive web1-2020-10-27_05:30 (22/37) > Analyzing archive web1-2020-10-28_05:30 (23/37) > Analyzing archive web1-2020-10-29_05:30 (24/37) > Analyzing archive web1-2020-10-30_05:30 (25/37) > Analyzing archive web1-2020-10-31_05:30 (26/37) > Analyzing archive web1-2020-11-01_05:30 (27/37) > Analyzing archive web1-2020-11-02_05:31 (28/37) > Analyzing archive web1-2020-11-02_08:59 (29/37) > Analyzing archive web1-2020-11-03_05:30 (30/37) > Analyzing archive web1-2020-11-04_05:30 (31/37) > Analyzing archive web1-2020-11-05_05:30 (32/37) > Analyzing archive web1-2020-11-06_05:30 (33/37) > Analyzing archive web1-2020-11-07_05:30 (34/37) > Analyzing archive web1-2020-11-08_05:30 (35/37) > Analyzing archive web1-2020-11-09_05:30 (36/37) > Analyzing archive web1-2020-11-10_05:30 (37/37) > Deleting 0 orphaned and 39 superseded objects... > Finished deleting orphaned/superseded objects. > Writing Manifest. > Committing repo (may take a while, due to compact_segments)... > Finished committing repo. > Archive consistency check complete, problems found. > root at web1:~# After this the web1-2020-10-05 archive was still there and running borg prune removed it: > Pruning archive: web1-2020-10-05_05:30 Mon, 2020-10-05 05:31:07 [c4cf46ab907e60c24fa33772269795bb47465b985640c00ff5a3bf686f3acb44] (15/20) So I'd say it it looks like it fixed the archive and then prune was able to remove it as it was old. Hope this helps, -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From lazyvirus at gmx.com Wed Nov 11 04:25:55 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Wed, 11 Nov 2020 10:25:55 +0100 Subject: [Borgbackup] borg prune crashing: AttributeError: 'tuple' object has no attribute 'items' In-Reply-To: <360e3fe1-310b-3d60-3a95-a113bea2ac4f@bryanfields.net> References: <8571662f-f12c-8f87-7964-2bf000143499@bryanfields.net> <1fa5fa1d-3d67-f312-218a-07678f119c5a@bryanfields.net> <20201110125331.26c9b8ff@msi.defcon1.lan> <20201110210421.50d05224@msi.defcon1.lan> <360e3fe1-310b-3d60-3a95-a113bea2ac4f@bryanfields.net> Message-ID: <20201111102555.018b9133@msi.defcon1.lan> On Tue, 10 Nov 2020 20:31:18 -0500 Bryan Fields wrote: > On 11/10/20 3:04 PM, Bzzzz wrote: > >> Some men are Catholics, some are Muslims; my religion is > >> grounding :) > > Unfortunate children of yours ;-p) > > It's served me well > > https://i.imgur.com/Z12jh94.jpg Impressive picture by the size of its lightnings! > That center building is one of my sites (borg's running in the radio > room) getting a direct hit to our antenna. It received no damage, > other than the top point of the aluminum mast cap was slightly > melted. I have a strike counter up there, it's hit 5+ times a year. Nice validation of protections :) Jean-Yves From eric at in3x.io Wed Nov 11 22:47:37 2020 From: eric at in3x.io (Eric S. Johansson) Date: Thu, 12 Nov 2020 05:47:37 +0200 (EET) Subject: [Borgbackup] Determining size of archive Message-ID: <1874816414.11241.1605152857189.JavaMail.zimbra@in3x.io> I need to determine how effective the duplication compression will be on a particular data set. According to documentation a dry-run doesn't for the duplication or compression. What's the best way to check its effectiveness? -- Eric S. Johansson eric at in3x.io http://www.in3x.io 978-512-0272 -------------- next part -------------- An HTML attachment was scrubbed... 
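The replies below suggest running a real backup of a representative subset and reading the --stats output rather than relying on --dry-run. A minimal sketch of that idea; the repository location and the sample directory are made up, and the throwaway repository is left unencrypted only to keep the test simple:

# throwaway repository used only for measurement
borg init --encryption=none /tmp/dedup-test

# first pass: --stats reports original vs. compressed vs. deduplicated size
borg create --stats --compression zlib,5 /tmp/dedup-test::sample-1 /data/representative-sample

# second pass over the same data: the "Deduplicated size" of this archive
# shows how much new space a repeated backup actually costs
borg create --stats --compression zlib,5 /tmp/dedup-test::sample-2 /data/representative-sample

# the per-archive numbers can be re-read later with:
borg info /tmp/dedup-test::sample-2

# remove the throwaway repository when done
borg delete /tmp/dedup-test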
URL: From lazyvirus at gmx.com Thu Nov 12 01:07:54 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Thu, 12 Nov 2020 07:07:54 +0100 Subject: [Borgbackup] Determining size of archive In-Reply-To: <1874816414.11241.1605152857189.JavaMail.zimbra@in3x.io> References: <1874816414.11241.1605152857189.JavaMail.zimbra@in3x.io> Message-ID: <20201112070754.1446f29d@msi.defcon1.lan> On Thu, 12 Nov 2020 05:47:37 +0200 (EET) "Eric S. Johansson" wrote: > I need to determine how effective the duplication compression will be > on a particular data set. According to documentation a dry-run doesn't > for the duplication or compression. What's the best way to check its > effectiveness? Hmm, I would make a test on a representative sample coming from your data set and extrapolate for the whole - you might have an error margin, but it is much faster to test several comp/decompression methods/parms. You might need something like that to help you choose your own sample : #!/bin/sh usage () { echo echo "You're doing it wrong!" echo echo "Usage: `basename $0` " echo } if [ ! "$1" ]; then usage exit 1 fi clear echo "Count and sort files by their size from directory: $1" echo "============================================================================================" echo "ie: 128 ? 383 < 256 >>> Means there are 383 files of size [128-256[ BYTES" echo "============================================================================================" echo " [Lower limit] Nb of files ]Higher limit[" # Do not work from the command line => from a script only ! find $1 -type f -print0 | xargs -0 ls -l | awk '{size[int(log($5)/log(2))]++}END{for (i in size) { printf("%'"'"'15.f", 2^i) ; printf(" ? %'"'"'15.f", size[i]) ; printf(" < %'"'"'15.f\n", 2^(i+1)) } }' | sort -n exit 0 Jean-Yves From tw at waldmann-edv.de Sun Nov 15 08:01:18 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 15 Nov 2020 14:01:18 +0100 Subject: [Borgbackup] Determining size of archive In-Reply-To: <1874816414.11241.1605152857189.JavaMail.zimbra@in3x.io> References: <1874816414.11241.1605152857189.JavaMail.zimbra@in3x.io> Message-ID: On 11/12/20 4:47 AM, Eric S. Johansson wrote: > I need to determine how effective the duplication compression will be on > a particular data set. According to documentation a dry-run doesn't for > the duplication or compression. What's the best way to check its > effectiveness? By running a real backup with --stats (and maybe also --list). As already mentioned, you could try to make a representative subset if your dataset is too large for experiments. Note: if --dry-run were extended to do all what is needed to compute dedup statistics, it would be rather expensive already. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From dassies at eml.cc Mon Nov 16 07:18:14 2020 From: dassies at eml.cc (Nils Blomqvist) Date: Mon, 16 Nov 2020 13:18:14 +0100 Subject: [Borgbackup] Issues understanding prune 'keep' rules In-Reply-To: <87tuu24ha0.fsf@uwo.ca> References: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> <875z6ilg1n.fsf@uwo.ca> <305065CE-F819-4691-805D-9B904E60BB83@eml.cc> <87tuu24ha0.fsf@uwo.ca> Message-ID: <2A4AF76D-3DCF-4F2C-9222-9CA26F2671FA@eml.cc> Accidentally sent my reply - see below - to Dan instead of to the list. So here it is. Also including Dan?s reply. On 6 Nov 2020, at 16:49, Dan Christensen wrote: > Nils, > > Please keep the discussion on list. And you should always cut-and-paste > your command line, in case we see something that you do not. 
> > In this case, the archives kept seem to exactly match your prune rules, > so you should also explain why you think there is a problem. > > Dan > > On Nov 6, 2020, Nils Blomqvist wrote: > >> Thanks Dan. The `prune` command has nothing significant other than >> the rules in my previous e-mail (except ?list). >> >> The excerpts include the entire listings now. >> >> Excerpt of first prune (24 Okt 2020 10:09:51): >> >> Keeping archive: backup-2020-10-24T10:09:21 >> Keeping archive: backup-2020-10-21T12:45:19 >> Keeping archive: backup-2020-10-19T18:46:34 >> Keeping archive: backup-2020-10-06T16:32:43 >> Keeping archive: backup-2020-08-22T10:34:31 >> Keeping archive: backup-2020-08-13T18:50:32 >> Keeping archive: backup-2020-08-07T09:45:07 >> Keeping archive: backup-2020-08-04T18:05:27 >> Keeping archive: backup-2020-08-03T17:17:32 >> Keeping archive: backup-2020-08-02T11:29:23 >> Keeping archive: backup-2020-07-29T19:17:17 >> Keeping archive: backup-2020-07-24T17:01:59 >> Keeping archive: backup-2020-07-19T08:10:21 >> Keeping archive: backup-2020-07-06T17:58:07 >> Pruning archive: backup-2020-07-05T08:12:26 >> >> And the second again (5 Nov 2020 14:45:44): >> >> Keeping archive: backup-2020-11-05T14:45:15 >> Keeping archive: backup-2020-10-29T13:40:32 >> Keeping archive: backup-2020-10-28T16:39:12 >> Keeping archive: backup-2020-10-27T15:59:16 >> Keeping archive: backup-2020-10-26T14:14:48 >> Keeping archive: backup-2020-10-24T10:09:21 >> Keeping archive: backup-2020-10-21T12:45:19 >> Keeping archive: backup-2020-10-19T18:46:34 >> Keeping archive: backup-2020-10-06T16:32:43 >> Keeping archive: backup-2020-08-22T10:34:31 >> Keeping archive: backup-2020-08-13T18:50:32 >> Keeping archive: backup-2020-08-07T09:45:07 >> Pruning archive: backup-2020-08-02T11:29:23 >> Keeping archive: backup-2020-07-29T19:17:17 >> Pruning archive: backup-2020-07-24T17:01:59 >> Pruning archive: backup-2020-07-19T08:10:21 >> Pruning archive: backup-2020-07-06T17:58:07 >> >> Nils >> >> On 6 Nov 2020, at 15:24, Dan Christensen wrote: >> >>> Presumably you have backups more recent than those in July, and the more >>> recent ones are "using up" the keep rules. If you want help, you'll >>> need to post a list of all recent archives up to the ones that you don't >>> understand, and the exact borg command that you used. >>> >>> Dan >>> >>> On Nov 6, 2020, Nils Blomqvist wrote: >>> >>>> Settings: >>>> >>>> keep-within 3d >>>> keep-daily 7 >>>> keep-weekly 4 >>>> keep-monthly 6 >>>> keep-yearly 7 >>>> >>>> Excerpt of first prune (24 Okt 2020 10:09:51): >>>> >>>> Keeping archive: backup-2020-07-29T19:17:17 >>>> Keeping archive: backup-2020-07-24T17:01:59 >>>> Keeping archive: backup-2020-07-19T08:10:21 >>>> Keeping archive: backup-2020-07-06T17:58:07 >>>> Pruning archive: backup-2020-07-05T08:12:26 >>>> >>>> Excerpt of second prune (5 Nov 2020 14:45:44): >>>> >>>> Keeping archive: backup-2020-07-29T19:17:17 >>>> Pruning archive: backup-2020-07-24T17:01:59 >>>> Pruning archive: backup-2020-07-19T08:10:21 >>>> Pruning archive: backup-2020-07-06T17:58:07 >>>> >>>> In the first prune, the latest four archives are kept. >>>> In the second, only the latest is kept. >>>> >>>> I don?t understand how the pruning rules are applied. 
>>> _______________________________________________ >>> Borgbackup mailing list >>> Borgbackup at python.org >>> https://mail.python.org/mailman/listinfo/borgbackup From dassies at eml.cc Wed Nov 18 02:12:18 2020 From: dassies at eml.cc (Nils Blomqvist) Date: Wed, 18 Nov 2020 08:12:18 +0100 Subject: [Borgbackup] Issues understanding prune 'keep' rules In-Reply-To: <2A4AF76D-3DCF-4F2C-9222-9CA26F2671FA@eml.cc> References: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> <875z6ilg1n.fsf@uwo.ca> <305065CE-F819-4691-805D-9B904E60BB83@eml.cc> <87tuu24ha0.fsf@uwo.ca> <2A4AF76D-3DCF-4F2C-9222-9CA26F2671FA@eml.cc> Message-ID: <0A45F4F4-B67E-462E-94F1-3626DF091F2E@eml.cc> Still trying to make sense of how the prune rules are applied to my backups. I added labels to the right to indicate which ?keep? rule I think is matched, but I?m just guessing. Looking for some help here. The rules again, for reference: > keep-within 3d > keep-daily 7 > keep-weekly 4 > keep-monthly 6 > keep-yearly 7 Prune run (on 2020-10-24, after the latest backup): Keeping archive: backup-2020-10-24T10:09:21 within 3d Keeping archive: backup-2020-10-21T12:45:19 daily 7 Keeping archive: backup-2020-10-19T18:46:34 daily 7 Keeping archive: backup-2020-10-06T16:32:43 daily 7 Keeping archive: backup-2020-08-22T10:34:31 daily 7 Keeping archive: backup-2020-08-13T18:50:32 daily 7 Keeping archive: backup-2020-08-07T09:45:07 daily 7 Keeping archive: backup-2020-08-04T18:05:27 daily 7 Keeping archive: backup-2020-08-03T17:17:32 weekly 4 Keeping archive: backup-2020-08-02T11:29:23 monthly 6 Keeping archive: backup-2020-07-29T19:17:17 ? Keeping archive: backup-2020-07-24T17:01:59 ? Keeping archive: backup-2020-07-19T08:10:21 ? Keeping archive: backup-2020-07-06T17:58:07 ? Pruning archive: backup-2020-07-05T08:12:26 ? Nils On 16 Nov 2020, at 13:18, Nils Blomqvist wrote: > Accidentally sent my reply - see below - to Dan instead of to the > list. So > here it is. Also including Dan?s reply. > > On 6 Nov 2020, at 16:49, Dan Christensen wrote: > >> Nils, >> >> Please keep the discussion on list. And you should always >> cut-and-paste >> your command line, in case we see something that you do not. >> >> In this case, the archives kept seem to exactly match your prune >> rules, >> so you should also explain why you think there is a problem. >> >> Dan >> >> On Nov 6, 2020, Nils Blomqvist wrote: >> >>> Thanks Dan. The `prune` command has nothing significant other than >>> the rules in my previous e-mail (except ?list). >>> >>> The excerpts include the entire listings now. 
>>> >>> Excerpt of first prune (24 Okt 2020 10:09:51): >>> >>> Keeping archive: backup-2020-10-24T10:09:21 >>> Keeping archive: backup-2020-10-21T12:45:19 >>> Keeping archive: backup-2020-10-19T18:46:34 >>> Keeping archive: backup-2020-10-06T16:32:43 >>> Keeping archive: backup-2020-08-22T10:34:31 >>> Keeping archive: backup-2020-08-13T18:50:32 >>> Keeping archive: backup-2020-08-07T09:45:07 >>> Keeping archive: backup-2020-08-04T18:05:27 >>> Keeping archive: backup-2020-08-03T17:17:32 >>> Keeping archive: backup-2020-08-02T11:29:23 >>> Keeping archive: backup-2020-07-29T19:17:17 >>> Keeping archive: backup-2020-07-24T17:01:59 >>> Keeping archive: backup-2020-07-19T08:10:21 >>> Keeping archive: backup-2020-07-06T17:58:07 >>> Pruning archive: backup-2020-07-05T08:12:26 >>> >>> And the second again (5 Nov 2020 14:45:44): >>> >>> Keeping archive: backup-2020-11-05T14:45:15 >>> Keeping archive: backup-2020-10-29T13:40:32 >>> Keeping archive: backup-2020-10-28T16:39:12 >>> Keeping archive: backup-2020-10-27T15:59:16 >>> Keeping archive: backup-2020-10-26T14:14:48 >>> Keeping archive: backup-2020-10-24T10:09:21 >>> Keeping archive: backup-2020-10-21T12:45:19 >>> Keeping archive: backup-2020-10-19T18:46:34 >>> Keeping archive: backup-2020-10-06T16:32:43 >>> Keeping archive: backup-2020-08-22T10:34:31 >>> Keeping archive: backup-2020-08-13T18:50:32 >>> Keeping archive: backup-2020-08-07T09:45:07 >>> Pruning archive: backup-2020-08-02T11:29:23 >>> Keeping archive: backup-2020-07-29T19:17:17 >>> Pruning archive: backup-2020-07-24T17:01:59 >>> Pruning archive: backup-2020-07-19T08:10:21 >>> Pruning archive: backup-2020-07-06T17:58:07 >>> >>> Nils >>> >>> On 6 Nov 2020, at 15:24, Dan Christensen wrote: >>> >>>> Presumably you have backups more recent than those in July, and the >>>> more >>>> recent ones are "using up" the keep rules. If you want help, >>>> you'll >>>> need to post a list of all recent archives up to the ones that you >>>> don't >>>> understand, and the exact borg command that you used. >>>> >>>> Dan >>>> >>>> On Nov 6, 2020, Nils Blomqvist wrote: >>>> >>>>> Settings: >>>>> >>>>> keep-within 3d >>>>> keep-daily 7 >>>>> keep-weekly 4 >>>>> keep-monthly 6 >>>>> keep-yearly 7 >>>>> >>>>> Excerpt of first prune (24 Okt 2020 10:09:51): >>>>> >>>>> Keeping archive: backup-2020-07-29T19:17:17 >>>>> Keeping archive: backup-2020-07-24T17:01:59 >>>>> Keeping archive: backup-2020-07-19T08:10:21 >>>>> Keeping archive: backup-2020-07-06T17:58:07 >>>>> Pruning archive: backup-2020-07-05T08:12:26 >>>>> >>>>> Excerpt of second prune (5 Nov 2020 14:45:44): >>>>> >>>>> Keeping archive: backup-2020-07-29T19:17:17 >>>>> Pruning archive: backup-2020-07-24T17:01:59 >>>>> Pruning archive: backup-2020-07-19T08:10:21 >>>>> Pruning archive: backup-2020-07-06T17:58:07 >>>>> >>>>> In the first prune, the latest four archives are kept. >>>>> In the second, only the latest is kept. >>>>> >>>>> I don?t understand how the pruning rules are applied. 
>>>> _______________________________________________ >>>> Borgbackup mailing list >>>> Borgbackup at python.org >>>> https://mail.python.org/mailman/listinfo/borgbackup > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From jdc at uwo.ca Wed Nov 18 09:15:59 2020 From: jdc at uwo.ca (Dan Christensen) Date: Wed, 18 Nov 2020 09:15:59 -0500 Subject: [Borgbackup] Issues understanding prune 'keep' rules In-Reply-To: <0A45F4F4-B67E-462E-94F1-3626DF091F2E@eml.cc> (Nils Blomqvist's message of "Wed, 18 Nov 2020 08:12:18 +0100") References: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> <875z6ilg1n.fsf@uwo.ca> <305065CE-F819-4691-805D-9B904E60BB83@eml.cc> <87tuu24ha0.fsf@uwo.ca> <2A4AF76D-3DCF-4F2C-9222-9CA26F2671FA@eml.cc> <0A45F4F4-B67E-462E-94F1-3626DF091F2E@eml.cc> Message-ID: <87y2iy699c.fsf@uwo.ca> Nils, You should show the exact command line you used, as well as the precise time you ran it. You should also include your borg version (borg -V). Your time zone might also be relevant. But I'll work with what you sent and put my notes in the right-hand column. On Nov 18, 2020, Nils Blomqvist wrote: > keep-within 3d > keep-daily 7 > keep-weekly 4 > keep-monthly 6 > keep-yearly 7 > > Prune run (on 2020-10-24, after the latest backup): The behaviour depends on the exact time, but I'll assume it was immediately after the last backup. If you had run it a few hours later, the within 3d rule would not have caught the second archive. Keeping archive: backup-2020-10-24T10:09:21 within 3d Keeping archive: backup-2020-10-21T12:45:19 within 3d = 72 hours Keeping archive: backup-2020-10-19T18:46:34 daily 7 Keeping archive: backup-2020-10-06T16:32:43 daily 7 Keeping archive: backup-2020-08-22T10:34:31 daily 7 Keeping archive: backup-2020-08-13T18:50:32 daily 7 Keeping archive: backup-2020-08-07T09:45:07 daily 7 Keeping archive: backup-2020-08-04T18:05:27 daily 7 Keeping archive: backup-2020-08-03T17:17:32 daily 7 Keeping archive: backup-2020-08-02T11:29:23 weekly 4 [Aug 2 was a Sunday] Keeping archive: backup-2020-07-29T19:17:17 monthly 6 Keeping archive: backup-2020-07-24T17:01:59 weekly 4 Keeping archive: backup-2020-07-19T08:10:21 weekly 4 [July 19 was a Sunday] Keeping archive: backup-2020-07-06T17:58:07 weekly 4 Pruning archive: backup-2020-07-05T08:12:26 pruned I was surprised at first that the July 29 archive was kept, but since it's the last one in July and wasn't kept by the weekly rule, the monthly rule catches it. (Note that July 29 and Aug 2 are in the same week, since weeks go from Monday to Sunday.) The July 5 backup is the last in its week, but the weekly rules are used up. And it's not the last in its month or year. So it is pruned. The output of "borg help prune" contains the details and there is also help here: https://borgbackup.readthedocs.io/en/stable/usage/prune.html It would probably be good to extend that example to include the situation that appeared above... 
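The effect of a rule set like this one can be previewed without deleting anything by combining --dry-run with --list. A minimal sketch using the rules from this thread; the repository path and the archive prefix are placeholders:

borg prune --dry-run --list \
    --prefix 'backup-' \
    --keep-within 3d \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 6 \
    --keep-yearly 7 \
    /path/to/repo

# Nothing is removed in a dry run; the output is the same kind of
# keep/prune listing shown above, so the rules can be tuned before
# they are applied for real.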
Everything also looks exactly right for the second run you showed: > And the second again (5 Nov 2020 14:45:44): Keeping archive: backup-2020-11-05T14:45:15 within 3d Keeping archive: backup-2020-10-29T13:40:32 daily 7 Keeping archive: backup-2020-10-28T16:39:12 daily 7 Keeping archive: backup-2020-10-27T15:59:16 daily 7 Keeping archive: backup-2020-10-26T14:14:48 daily 7 Keeping archive: backup-2020-10-24T10:09:21 daily 7 Keeping archive: backup-2020-10-21T12:45:19 daily 7 Keeping archive: backup-2020-10-19T18:46:34 daily 7 Keeping archive: backup-2020-10-06T16:32:43 weekly 4 Keeping archive: backup-2020-08-22T10:34:31 weekly 4 Keeping archive: backup-2020-08-13T18:50:32 weekly 4 Keeping archive: backup-2020-08-07T09:45:07 weekly 4 Pruning archive: backup-2020-08-02T11:29:23 pruned Keeping archive: backup-2020-07-29T19:17:17 monthly Pruning archive: backup-2020-07-24T17:01:59 pruned Pruning archive: backup-2020-07-19T08:10:21 pruned Pruning archive: backup-2020-07-06T17:58:07 pruned Aug 2 pruned, since weekly rules are used up, and it's not the last in its month or year. Same for the other three. Dan From l0f4r0 at tuta.io Wed Nov 18 17:52:49 2020 From: l0f4r0 at tuta.io (l0f4r0 at tuta.io) Date: Wed, 18 Nov 2020 23:52:49 +0100 (CET) Subject: [Borgbackup] Issues understanding prune 'keep' rules In-Reply-To: <87y2iy699c.fsf@uwo.ca> References: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> <875z6ilg1n.fsf@uwo.ca> <305065CE-F819-4691-805D-9B904E60BB83@eml.cc> <87tuu24ha0.fsf@uwo.ca> <2A4AF76D-3DCF-4F2C-9222-9CA26F2671FA@eml.cc> <0A45F4F4-B67E-462E-94F1-3626DF091F2E@eml.cc> <87y2iy699c.fsf@uwo.ca> Message-ID: Hi, 18 nov. 2020 ? 15:15 de jdc at uwo.ca: > On Nov 18, 2020, Nils Blomqvist wrote: > >> keep-within 3d >> keep-daily 7 >> keep-weekly 4 >> keep-monthly 6 >> keep-yearly 7 >> >> Prune run (on 2020-10-24, after the latest backup): >> > The behaviour depends on the exact time, but I'll assume it was > immediately after the last backup. If you had run it a few hours > later, the within 3d rule would not have caught the second archive. > > Keeping archive: backup-2020-10-24T10:09:21 within 3d > Keeping archive: backup-2020-10-21T12:45:19 within 3d = 72 hours > Keeping archive: backup-2020-10-19T18:46:34 daily 7 > Keeping archive: backup-2020-10-06T16:32:43 daily 7 > Keeping archive: backup-2020-08-22T10:34:31 daily 7 > Keeping archive: backup-2020-08-13T18:50:32 daily 7 > Keeping archive: backup-2020-08-07T09:45:07 daily 7 > Keeping archive: backup-2020-08-04T18:05:27 daily 7 > Keeping archive: backup-2020-08-03T17:17:32 daily 7 > Keeping archive: backup-2020-08-02T11:29:23 weekly 4 [Aug 2 was a Sunday] > Keeping archive: backup-2020-07-29T19:17:17 monthly 6 > Keeping archive: backup-2020-07-24T17:01:59 weekly 4 > Keeping archive: backup-2020-07-19T08:10:21 weekly 4 [July 19 was a Sunday] > Keeping archive: backup-2020-07-06T17:58:07 weekly 4 > Pruning archive: backup-2020-07-05T08:12:26 pruned > > I was surprised at first that the July 29 archive was kept, but since > it's the last one in July and wasn't kept by the weekly rule, the > monthly rule catches it. (Note that July 29 and Aug 2 are in the same > week, since weeks go from Monday to Sunday.) > > The July 5 backup is the last in its week, but the weekly rules > are used up. And it's not the last in its month or year. So it > is pruned. 
> > The output of "borg help prune" contains the details and there is > also help here: > > https://borgbackup.readthedocs.io/en/stable/usage/prune.html > > It would probably be good to extend that example to include the > situation that appeared above... > Wow, really interesting. I confess I didn't understand it that way... (I) For me, keep-daily/weekly/monthly/yearly were not supposed to interfere/interlace with each other. I mean, I thought keep-daily rule is applied, then keep-weekly, keep-monthly... (II) But you seem to explain that if a rule is not satisfied, the next ones are evaluated in order to keep archives if applicable, and then the previous rule continues if the conditions are still satisfied and so on... In other words, rules are applied cyclically with precedence until they are used up. Do other people confirm prune works the (II) way? NB: The illustrated example on?https://borgbackup.readthedocs.io/en/stable/usage/prune.html is great but is actually to simple to be sure how it works exactly ;) Thanks & Best regards, l0f4r0 From jdc at uwo.ca Wed Nov 18 18:47:17 2020 From: jdc at uwo.ca (Dan Christensen) Date: Wed, 18 Nov 2020 18:47:17 -0500 Subject: [Borgbackup] Issues understanding prune 'keep' rules In-Reply-To: (l0f4r's message of "Wed, 18 Nov 2020 23:52:49 +0100 (CET)") References: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> <875z6ilg1n.fsf@uwo.ca> <305065CE-F819-4691-805D-9B904E60BB83@eml.cc> <87tuu24ha0.fsf@uwo.ca> <2A4AF76D-3DCF-4F2C-9222-9CA26F2671FA@eml.cc> <0A45F4F4-B67E-462E-94F1-3626DF091F2E@eml.cc> <87y2iy699c.fsf@uwo.ca> Message-ID: <87d00a448q.fsf@uwo.ca> On Nov 18, 2020, l0f4r0--- via Borgbackup wrote: > (I) For me, keep-daily/weekly/monthly/yearly were not supposed to > interfere/interlace with each other. I mean, I thought keep-daily rule > is applied, then keep-weekly, keep-monthly... > > (II) But you seem to explain that if a rule is not satisfied, the next > ones are evaluated in order to keep archives if applicable, and then > the previous rule continues if the conditions are still satisfied and > so on... > In other words, rules are applied cyclically with precedence until they are used up. > > Do other people confirm prune works the (II) way? The rules are applied strictly in order. E.g. in the example in the previous message, when the weekly rule runs, it examines each backup that is the last in its week, and takes the most recent ones that haven't already been kept because of an earlier rule. Then, the monthly rule runs, and considers each backup that is the last in its month, and keeps any that weren't matched by an earlier rule like the weekly rule. In the above example, that causes an interleaving, because the last backup in a month is not necessarily the last backup in a week. This interleaving only happens with weekly backups. All the other ones nest in the expected way, I believe. (In any case, I don't think it matters too much. Borg is so space efficient that you should just keep lots of history and not worry too much about which ones get pruned. But it's fun to think about.) BTW, I think the development version shows which rules cause each archive to be kept. I agree that the example in the web docs would be better if it had --keep-weekly 4 in it, to illustrate this. 
Dan From lazyvirus at gmx.com Wed Nov 18 19:00:56 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Thu, 19 Nov 2020 01:00:56 +0100 Subject: [Borgbackup] Issues understanding prune 'keep' rules In-Reply-To: <87d00a448q.fsf@uwo.ca> References: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> <875z6ilg1n.fsf@uwo.ca> <305065CE-F819-4691-805D-9B904E60BB83@eml.cc> <87tuu24ha0.fsf@uwo.ca> <2A4AF76D-3DCF-4F2C-9222-9CA26F2671FA@eml.cc> <0A45F4F4-B67E-462E-94F1-3626DF091F2E@eml.cc> <87y2iy699c.fsf@uwo.ca> <87d00a448q.fsf@uwo.ca> Message-ID: <20201119010056.23743825@msi.defcon1.lan> On Wed, 18 Nov 2020 18:47:17 -0500 Dan Christensen wrote: > (In any case, I don't think it matters too much. Borg is so space > efficient that you should just keep lots of history and not worry > too much about which ones get pruned. But it's fun to think about.) Agreed, at home I keep 3 whole months (sometimes with 2 or 3 backups per days, sometimes only once a week - coming to an average of one/day) of whole systems (for a very easy _tested_ reconstruction) - for 5 machines, the overhead is only around 180~220 GB on a total of 1.3 TB. Jean-Yves From dassies at eml.cc Sat Nov 21 07:51:22 2020 From: dassies at eml.cc (Nils Blomqvist) Date: Sat, 21 Nov 2020 13:51:22 +0100 Subject: [Borgbackup] Issues understanding prune 'keep' rules In-Reply-To: <87y2iy699c.fsf@uwo.ca> References: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc> <875z6ilg1n.fsf@uwo.ca> <305065CE-F819-4691-805D-9B904E60BB83@eml.cc> <87tuu24ha0.fsf@uwo.ca> <2A4AF76D-3DCF-4F2C-9222-9CA26F2671FA@eml.cc> <0A45F4F4-B67E-462E-94F1-3626DF091F2E@eml.cc> <87y2iy699c.fsf@uwo.ca> Message-ID: <9974C06B-05E7-409A-80DA-E0FA999CBA68@eml.cc> On 18 Nov 2020, at 15:15, Dan Christensen wrote: > Nils, > > You should show the exact command line you used, as well as the precise > time you ran it. You should also include your borg version (borg -V). > Your time zone might also be relevant. But I'll work with what you sent > and put my notes in the right-hand column. Thank you for the help. > BTW, I think the development version shows which rules cause each > archive to be kept. Interesting! I will have to take a look. It would be handy to have a switch to enable this in the regular release. Nils From jasper at knockaert.nl Tue Dec 1 07:02:58 2020 From: jasper at knockaert.nl (Jasper Knockaert) Date: Tue, 01 Dec 2020 13:02:58 +0100 Subject: [Borgbackup] zstd compression Message-ID: Hi I was looking at the new zstd compression option. I am now using lzma,6. I think zstd,19 should deliver a similar compression ratio. So the only question seems to be if zstd is any faster. That could be the case if it runs in multithread mode. However, the default for zstd seems to be to run as a single thread. There seems to be little evidence that single-threaded zstd compression is any faster than lzma for a similar compression ratio... (I'm avoiding 20+ modes for zstd as they use a lot more memory.) Best Jasper From tw at waldmann-edv.de Tue Dec 1 08:20:50 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 1 Dec 2020 14:20:50 +0100 Subject: [Borgbackup] zstd compression / multithreading In-Reply-To: References: Message-ID: <520d86ea-4674-a943-e83b-a4ffdceb05dd@waldmann-edv.de> Hi Jasper, > I was looking at the new zstd compression option. I am now using lzma,6. > I think zstd,19 should deliver a similar compression ratio. So the only > question seems to be if zstd is any faster. That could be the case if it > runs in multithread mode. borg does not run any compressor code in multithreaded mode. 
The general issue with that is that we do not have much data per compressor call: - for small files that only result in 1 chunk, it is the file size (could be e.g. 1kB or 100kB) - for larger files that result in multiple chunks, it is usually about the target chunk size (e.g. 2MB) Compression algorithms usually take a while until they get really good (until they have built up their internal compression dictionaries), thus they want rather large input sizes for good compression. If we would use compressors in the compressor-internal multithreading mode, the compressor would usually just split the input data into N smaller pieces - making compression worse. Additionally to that, there would be overhead for thread creation and teardown, which would happen per compressor call (== rather often). That's why we don't do that. We have a github ticket about how multithreading should be done. There have been some experiments on MT, but before the final implementation can be started, we first need to improve the crypto. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From jasper at knockaert.nl Tue Dec 1 10:00:00 2020 From: jasper at knockaert.nl (Jasper Knockaert) Date: Tue, 01 Dec 2020 16:00:00 +0100 Subject: [Borgbackup] zstd compression / multithreading In-Reply-To: <520d86ea-4674-a943-e83b-a4ffdceb05dd@waldmann-edv.de> References: <520d86ea-4674-a943-e83b-a4ffdceb05dd@waldmann-edv.de> Message-ID: <2BC7968A-E6BC-448F-976C-5ABAAB1C934A@knockaert.nl> Hi Thomas On 1 Dec 2020, at 14:20, Thomas Waldmann wrote: > Hi Jasper, > >> I was looking at the new zstd compression option. I am now using >> lzma,6. I think zstd,19 should deliver a similar compression ratio. >> So the only question seems to be if zstd is any faster. That could be >> the case if it runs in multithread mode. > > borg does not run any compressor code in multithreaded mode. > > The general issue with that is that we do not have much data per > compressor call: > > - for small files that only result in 1 chunk, it is the file size > (could be e.g. 1kB or 100kB) > > - for larger files that result in multiple chunks, it is usually about > the target chunk size (e.g. 2MB) > > Compression algorithms usually take a while until they get really good > (until they have built up their internal compression dictionaries), > thus they want rather large input sizes for good compression. > > If we would use compressors in the compressor-internal multithreading > mode, the compressor would usually just split the input data into N > smaller pieces - making compression worse. > > Additionally to that, there would be overhead for thread creation and > teardown, which would happen per compressor call (== rather often). > > That's why we don't do that. > > We have a github ticket about how multithreading should be done. There > have been some experiments on MT, but before the final implementation > can be started, we first need to improve the crypto. Thanks for the reaction. To wrap up, the use of zstd seems not to be where high compression ratios are applied, as this is something that lzma already does with similar efficiency. 
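One way to settle the lzma,6 versus zstd,19 question for a specific data set is to time real backups of a representative sample into throwaway repositories and compare the --stats output. A rough sketch of that test; the paths are made up and the sample should be large enough to be representative:

# two scratch repos, so the two runs do not deduplicate against each other
borg init --encryption=none /tmp/bench-lzma
borg init --encryption=none /tmp/bench-zstd

# compare wall-clock time and the "Compressed size" column of --stats
time borg create --stats --compression lzma,6  /tmp/bench-lzma::sample /data/representative-sample
time borg create --stats --compression zstd,19 /tmp/bench-zstd::sample /data/representative-sample

# discard the scratch repos afterwards
borg delete /tmp/bench-lzma
borg delete /tmp/bench-zstd

Since borg compresses chunk by chunk in a single thread, as explained above, the single-threaded numbers from such a test are the ones that matter in practice.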
Best Jasper From tw at waldmann-edv.de Sun Dec 6 18:52:40 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 7 Dec 2020 00:52:40 +0100 Subject: [Borgbackup] borgbackup release 1.2.0b1 Message-ID: Please help testing: After some alpha releases, released borgbackup 1.2.0b1: https://github.com/borgbackup/borg/releases/tag/1.2.0b1 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From nospam.list at unclassified.de Tue Dec 15 16:06:29 2020 From: nospam.list at unclassified.de (Yves Goergen) Date: Tue, 15 Dec 2020 22:06:29 +0100 Subject: [Borgbackup] Excludes are ignored Message-ID: Hello, I've got a problem with excluding files in a borg backup. The version is 1.1.14 on Ubuntu Linux (multiple versions). The command looks like this: borg create ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' /mnt/backup_snapshot? --exclude /mnt/backup_snapshot/lost+found --exclude /mnt/backup_snapshot/tmp --exclude /mnt/backup_snapshot/var/mail Or this: borg create ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' /mnt/backup_snapshot? --exclude lost+found --exclude tmp --exclude var/mail But all these excluded files are completely /included/ in the backup. The /mnt/backup_snapshot directory is a LVM snapshot mount directory, it's a snapshot copy of /. I've seen the documentation pages about create and patterns , but the patterns text is too much internal-dev talk for me, I don't know many of the words you use there. So in plain English, what should I provide in the --exclude option? An absolute path to where borg will read the file from? A relative path, relative to where I tell it to start reading from? Or what? Using a separate exclude file is not an option for me because all parameters to borg are specified in a bash script. -Yves -------------- next part -------------- An HTML attachment was scrubbed... URL: From lazyvirus at gmx.com Tue Dec 15 16:54:10 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Tue, 15 Dec 2020 22:54:10 +0100 Subject: [Borgbackup] Excludes are ignored In-Reply-To: References: Message-ID: <20201215225410.75af2a10@msi.defcon1.lan> On Tue, 15 Dec 2020 22:06:29 +0100 Yves Goergen wrote: > Hello, Biscotte, > I've got a problem with excluding files in a borg backup. The version > is 1.1.14 on Ubuntu Linux (multiple versions). > > The command looks like this: > > borg create > ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' > /mnt/backup_snapshot? --exclude /mnt/backup_snapshot/lost+found > --exclude /mnt/backup_snapshot/tmp > --exclude /mnt/backup_snapshot/var/mail > > Or this: > > borg create > ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' > /mnt/backup_snapshot? --exclude lost+found --exclude tmp --exclude > var/mail > > But all these excluded files are completely /included/ in the backup. > The /mnt/backup_snapshot directory is a LVM snapshot mount directory, > it's a snapshot copy of /. I store all exclusions in, using the switch: --exclude-from that is working well - it needs a little trick to also avoid keeping hidden files (/.?* termination) Here's a sample of it: # FROM : https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-help-patterns # NB...:Default style is: fm:/path/bla - fair enough for my needs. # /lost+found/* /lost+found/.?* /BACKUP/* /BACKUP/.?* /bin/lost+found/* /bin/lost+found/.?* /BORG/* /BORG/.?* /boot/lost+found/* /boot/lost+found/.?* /dev/* /dev/.?* [?] 
/home/*/.cache/chromium/* /home/*/.cache/mozilla/firefox/* /home/*/.claws-mail/tmp/* /home/*/.googleearth/Cache/* [?] /NFS/*/* /NFS/*/.?* /proc/* /run/* /srv/* /sys/* Jean-Yves From l0f4r0 at tuta.io Tue Dec 15 18:31:39 2020 From: l0f4r0 at tuta.io (l0f4r0 at tuta.io) Date: Wed, 16 Dec 2020 00:31:39 +0100 (CET) Subject: [Borgbackup] Excludes are ignored In-Reply-To: References: Message-ID: Hi, I really can't tell but you should at least quote your exclude patterns like this: --exclude '/mnt/backup_snapshot/var/mail' As a last resort, you can try other kinds of patterns like: sh:, re: and pp: (default is fm: for --exclude and --exclude-from) to see if that helps. Best regards, l0f4r0 From fabio.pedretti at unibs.it Wed Dec 16 01:55:57 2020 From: fabio.pedretti at unibs.it (Fabio Pedretti) Date: Wed, 16 Dec 2020 07:55:57 +0100 Subject: [Borgbackup] Excludes are ignored In-Reply-To: References: Message-ID: Try replacing --exclude var/mail with --exclude '*/var/mail' Il mar 15 dic 2020, 22:30 Yves Goergen ha scritto: > Hello, > > I've got a problem with excluding files in a borg backup. The version is > 1.1.14 on Ubuntu Linux (multiple versions). > > The command looks like this: > > borg create ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' > /mnt/backup_snapshot --exclude /mnt/backup_snapshot/lost+found --exclude > /mnt/backup_snapshot/tmp --exclude /mnt/backup_snapshot/var/mail > > Or this: > > borg create ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' > /mnt/backup_snapshot --exclude lost+found --exclude tmp --exclude var/mail > > But all these excluded files are completely *included* in the backup. The > /mnt/backup_snapshot directory is a LVM snapshot mount directory, it's a > snapshot copy of /. > > I've seen the documentation pages about create > and > patterns , > but the patterns text is too much internal-dev talk for me, I don't know > many of the words you use there. > > So in plain English, what should I provide in the --exclude option? An > absolute path to where borg will read the file from? A relative path, > relative to where I tell it to start reading from? Or what? > > Using a separate exclude file is not an option for me because all > parameters to borg are specified in a bash script. > > -Yves > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Informativa sulla Privacy: http://www.unibs.it/node/8155 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nospam.list at unclassified.de Wed Dec 16 12:21:41 2020 From: nospam.list at unclassified.de (Yves Goergen) Date: Wed, 16 Dec 2020 18:21:41 +0100 Subject: [Borgbackup] Excludes are ignored In-Reply-To: References: Message-ID: <42d43df7-218e-a669-cd86-fcf59b6f4af9@unclassified.de> That doesn't help either. And I didn't want to exclude *all* "tmp" directories (or files) everywhere, just in the root directory. Any other possitilities to make the borg --exclude parameter do what it's supposed to do? -Yves -------- Urspr?ngliche Nachricht -------- Von: Fabio Pedretti Gesendet: Mittwoch, 16. Dezember 2020, 07:55 MEZ Betreff: [Borgbackup] Excludes are ignored Try replacing --exclude var/mail with --exclude '*/var/mail' Il mar 15 dic 2020, 22:30 Yves Goergen > ha scritto: Hello, I've got a problem with excluding files in a borg backup. The version is 1.1.14 on Ubuntu Linux (multiple versions). 
The command looks like this: borg create ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' /mnt/backup_snapshot? --exclude /mnt/backup_snapshot/lost+found --exclude /mnt/backup_snapshot/tmp --exclude /mnt/backup_snapshot/var/mail Or this: borg create ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' /mnt/backup_snapshot? --exclude lost+found --exclude tmp --exclude var/mail But all these excluded files are completely /included/ in the backup. The /mnt/backup_snapshot directory is a LVM snapshot mount directory, it's a snapshot copy of /. I've seen the documentation pages about create and patterns , but the patterns text is too much internal-dev talk for me, I don't know many of the words you use there. So in plain English, what should I provide in the --exclude option? An absolute path to where borg will read the file from? A relative path, relative to where I tell it to start reading from? Or what? Using a separate exclude file is not an option for me because all parameters to borg are specified in a bash script. -Yves _______________________________________________ Borgbackup mailing list Borgbackup at python.org https://mail.python.org/mailman/listinfo/borgbackup Informativa sulla Privacy: http://www.unibs.it/node/8155 From nospam.list at unclassified.de Wed Dec 16 12:29:27 2020 From: nospam.list at unclassified.de (Yves Goergen) Date: Wed, 16 Dec 2020 18:29:27 +0100 Subject: [Borgbackup] Excludes are ignored In-Reply-To: References: Message-ID: <9317b241-fd21-992e-f94b-0f9545d2119d@unclassified.de> I've tested some more. And the result is that borg includes files in the backup that are not included in the --list option. When I do this: > borg create --list --dry-run $REPOSITORY /home/me --exclude '/home/me/BufrReader/archive' 2>&1 |less Then no files within /home/me/BufrReader/archive/ appear in the list. But when I do this: > borg create -v --stats --progress $REPOSITORY /home/me --exclude '/home/me/BufrReader/archive' Then I see all those files showing up. And that's many of them (thousands). Is there any difference between lists/dry-run and the real operation? -Yves -------- Urspr?ngliche Nachricht -------- Von: Fabio Pedretti Gesendet: Mittwoch, 16. Dezember 2020, 07:55 MEZ Betreff: [Borgbackup] Excludes are ignored Try replacing --exclude var/mail with --exclude '*/var/mail' Il mar 15 dic 2020, 22:30 Yves Goergen > ha scritto: Hello, I've got a problem with excluding files in a borg backup. The version is 1.1.14 on Ubuntu Linux (multiple versions). The command looks like this: borg create ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' /mnt/backup_snapshot? --exclude /mnt/backup_snapshot/lost+found --exclude /mnt/backup_snapshot/tmp --exclude /mnt/backup_snapshot/var/mail Or this: borg create ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' /mnt/backup_snapshot? --exclude lost+found --exclude tmp --exclude var/mail But all these excluded files are completely /included/ in the backup. The /mnt/backup_snapshot directory is a LVM snapshot mount directory, it's a snapshot copy of /. I've seen the documentation pages about create and patterns , but the patterns text is too much internal-dev talk for me, I don't know many of the words you use there. So in plain English, what should I provide in the --exclude option? An absolute path to where borg will read the file from? A relative path, relative to where I tell it to start reading from? Or what? 
Using a separate exclude file is not an option for me because all parameters to borg are specified in a bash script. -Yves _______________________________________________ Borgbackup mailing list Borgbackup at python.org https://mail.python.org/mailman/listinfo/borgbackup Informativa sulla Privacy: http://www.unibs.it/node/8155 From nospam.list at unclassified.de Thu Dec 17 11:06:23 2020 From: nospam.list at unclassified.de (Yves Goergen) Date: Thu, 17 Dec 2020 17:06:23 +0100 Subject: [Borgbackup] Excludes are ignored In-Reply-To: References: Message-ID: That didn't work. Whatever I tried, nothing was excluded in the actual backup. Until borg has that feature, I found a workaround that doesn't need borg backup to support excludes. Since I always create an LVM snapshot, I delete all files to be excluded from that snapshot before running borg. So borg will never see what it should not include. This also has the benefit that the display of the data size to read is more accurate (df -h /mnt/snapshot) because it also only sees the files to include and nothing else. BTW, can somebody please configure the mailing list so that I also receive a copy of what I send to it? Mailing lists used to do that in the past. Without these messages, my thread tree is very incomplete and it's hard to follow conversations. -Yves -------- Urspr?ngliche Nachricht -------- Von: Fabio Pedretti Gesendet: Mittwoch, 16. Dezember 2020, 07:55 MEZ Betreff: [Borgbackup] Excludes are ignored Try replacing --exclude var/mail with --exclude '*/var/mail' Il mar 15 dic 2020, 22:30 Yves Goergen > ha scritto: Hello, I've got a problem with excluding files in a borg backup. The version is 1.1.14 on Ubuntu Linux (multiple versions). The command looks like this: borg create ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' /mnt/backup_snapshot? --exclude /mnt/backup_snapshot/lost+found --exclude /mnt/backup_snapshot/tmp --exclude /mnt/backup_snapshot/var/mail Or this: borg create ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' /mnt/backup_snapshot? --exclude lost+found --exclude tmp --exclude var/mail But all these excluded files are completely /included/ in the backup. The /mnt/backup_snapshot directory is a LVM snapshot mount directory, it's a snapshot copy of /. I've seen the documentation pages about create and patterns , but the patterns text is too much internal-dev talk for me, I don't know many of the words you use there. So in plain English, what should I provide in the --exclude option? An absolute path to where borg will read the file from? A relative path, relative to where I tell it to start reading from? Or what? Using a separate exclude file is not an option for me because all parameters to borg are specified in a bash script. -Yves _______________________________________________ Borgbackup mailing list Borgbackup at python.org https://mail.python.org/mailman/listinfo/borgbackup Informativa sulla Privacy: http://www.unibs.it/node/8155 From lazyvirus at gmx.com Thu Dec 17 12:10:47 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Thu, 17 Dec 2020 18:10:47 +0100 Subject: [Borgbackup] Excludes are ignored In-Reply-To: References: Message-ID: <20201217181047.35a04cf8@msi.defcon1.lan> On Thu, 17 Dec 2020 17:06:23 +0100 Yves Goergen wrote: > That didn't work. Whatever I tried, nothing was excluded in the actual > backup. Did you try with : --exclude-from and the associated file ? 
containing rules within such as : /home/*/myproblematicdir/* /home/*/myproblematicdir/.?* > BTW, can somebody please configure the mailing list so that I also > receive a copy of what I send to it? Mailing lists used to do that in > the past. Without these messages, my thread tree is very incomplete > and it's hard to follow conversations. I use a local copy to my own SMTP into claws-mail : Bcc: local_user_name at localhost (or local_user_name at smtp.my.private.domain, YMMV) - this is the easiest way I found to do that. Jean-Yves From public at enkore.de Thu Dec 17 13:12:51 2020 From: public at enkore.de (Marian Beermann) Date: Thu, 17 Dec 2020 19:12:51 +0100 Subject: [Borgbackup] Excludes are ignored In-Reply-To: <9317b241-fd21-992e-f94b-0f9545d2119d@unclassified.de> References: <9317b241-fd21-992e-f94b-0f9545d2119d@unclassified.de> Message-ID: Hi Yves, this very much sounds like a bug, since --exclude generally works as you used it here and there should be no difference in the files borg processes between --dry-run and not--dry-run. If you have a GitHub account, please report this as a bug; If you don't have a GitHub account, I'll create one. Cheers, Marian Am 16.12.20 um 18:29 schrieb Yves Goergen: > I've tested some more. And the result is that borg includes files in the > backup that are not included in the --list option. > > When I do this: > >> borg create --list --dry-run $REPOSITORY /home/me --exclude >> '/home/me/BufrReader/archive' 2>&1 |less > > Then no files within /home/me/BufrReader/archive/ appear in the list. > But when I do this: > >> borg create -v --stats --progress $REPOSITORY /home/me --exclude >> '/home/me/BufrReader/archive' > > Then I see all those files showing up. And that's many of them > (thousands). Is there any difference between lists/dry-run and the real > operation? > > -Yves > > > -------- Urspr?ngliche Nachricht -------- > Von: Fabio Pedretti > Gesendet: Mittwoch, 16. Dezember 2020, 07:55 MEZ > Betreff: [Borgbackup] Excludes are ignored > > Try replacing > --exclude var/mail > with > --exclude '*/var/mail' > > Il mar 15 dic 2020, 22:30 Yves Goergen > ha scritto: > > ??? Hello, > > ??? I've got a problem with excluding files in a borg backup. The > ??? version is 1.1.14 on Ubuntu Linux (multiple versions). > > ??? The command looks like this: > > ??? borg create > ??? ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' > ??? /mnt/backup_snapshot? --exclude /mnt/backup_snapshot/lost+found > ??? --exclude /mnt/backup_snapshot/tmp --exclude > ??? /mnt/backup_snapshot/var/mail > > ??? Or this: > > ??? borg create > ??? ssh://user at your-storagebox.de:23/./backup-name::'{now:%Y-%m-%dT%H-%M}' > ??? /mnt/backup_snapshot? --exclude lost+found --exclude tmp --exclude > ??? var/mail > > ??? But all these excluded files are completely /included/ in the > ??? backup. The /mnt/backup_snapshot directory is a LVM snapshot mount > ??? directory, it's a snapshot copy of /. > > ??? I've seen the documentation pages about create > ??? and > ??? patterns > ??? , but > ??? the patterns text is too much internal-dev talk for me, I don't know > ??? many of the words you use there. > > ??? So in plain English, what should I provide in the --exclude option? > ??? An absolute path to where borg will read the file from? A relative > ??? path, relative to where I tell it to start reading from? Or what? > > ??? Using a separate exclude file is not an option for me because all > ??? parameters to borg are specified in a bash script. > > ??? -Yves > ??? 
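Independent of the possible bug discussed above, exclude patterns can be sanity-checked before the real run by previewing the file list, and an exclude file can still be driven entirely from a bash script via a here-document. A minimal sketch; the repository, the archive name and the excluded paths are only examples, written in the same absolute form as the backup root given on the command line (see "borg help patterns" for the available pattern styles):

# preview which files would be backed up, without writing anything
borg create --list --dry-run \
    --exclude '/mnt/backup_snapshot/lost+found' \
    --exclude '/mnt/backup_snapshot/tmp' \
    --exclude '/mnt/backup_snapshot/var/mail' \
    /path/to/repo::test /mnt/backup_snapshot 2>&1 | less

# the same patterns kept inside the backup script itself:
cat > /tmp/borg-excludes <<'EOF'
/mnt/backup_snapshot/lost+found
/mnt/backup_snapshot/tmp
/mnt/backup_snapshot/var/mail
EOF
borg create --list --dry-run --exclude-from /tmp/borg-excludes \
    /path/to/repo::test /mnt/backup_snapshot 2>&1 | less

If the dry-run listing and a real run still disagree on the same patterns, that supports the bug report suggested above.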
From tw at waldmann-edv.de  Fri Dec 25 04:44:49 2020
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 25 Dec 2020 10:44:49 +0100
Subject: [Borgbackup] borgbackup 1.1.15 released!
Message-ID: <37bd92f4-515f-64ca-2295-b9f05ceb5707@waldmann-edv.de>

borgbackup 1.1.15 was just released!

https://github.com/borgbackup/borg/releases/tag/1.1.15

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From l0f4r0 at tuta.io  Tue Dec 29 05:40:25 2020
From: l0f4r0 at tuta.io (l0f4r0 at tuta.io)
Date: Tue, 29 Dec 2020 11:40:25 +0100 (CET)
Subject: [Borgbackup] Issues understanding prune 'keep' rules
In-Reply-To: <87d00a448q.fsf@uwo.ca>
References: <84646DC4-8681-4E24-BF30-A59459EB2E59@eml.cc>
    <875z6ilg1n.fsf@uwo.ca> <305065CE-F819-4691-805D-9B904E60BB83@eml.cc>
    <87tuu24ha0.fsf@uwo.ca> <2A4AF76D-3DCF-4F2C-9222-9CA26F2671FA@eml.cc>
    <0A45F4F4-B67E-462E-94F1-3626DF091F2E@eml.cc> <87y2iy699c.fsf@uwo.ca>
    <87d00a448q.fsf@uwo.ca>
Message-ID:

Hi,

Sorry for the late reply.

19 Nov. 2020 at 00:47 from jdc at uwo.ca:

> On Nov 18, 2020, l0f4r0--- via Borgbackup wrote:
>
>> (I) For me, keep-daily/weekly/monthly/yearly were not supposed to
>> interfere/interlace with each other. I mean, I thought keep-daily rule
>> is applied, then keep-weekly, keep-monthly...
>>
>> (II) But you seem to explain that if a rule is not satisfied, the next
>> ones are evaluated in order to keep archives if applicable, and then
>> the previous rule continues if the conditions are still satisfied and
>> so on...
>> In other words, rules are applied cyclically with precedence until
>> they are used up.
>
> The rules are applied strictly in order. E.g. in the example in the
> previous message, when the weekly rule runs, it examines each backup
> that is the last in its week, and takes the most recent ones that
> haven't already been kept because of an earlier rule.
>
> Then, the monthly rule runs, and considers each backup that is the
> last in its month, and keeps any that weren't matched by an earlier
> rule like the weekly rule.
>
> In the above example, that causes an interleaving, because the last
> backup in a month is not necessarily the last backup in a week.
>
> This interleaving only happens with weekly backups. All the other
> ones nest in the expected way, I believe.
>
Thanks for your explanations. I understand everything you said except
the weekly/monthly part (you must be right, though, since you managed
to explain all of the OP's results...).
To me there is still a contradiction between "The rules are applied
strictly in order" and "I was surprised at first that the July 29
archive was kept, but since it's the last one in July and wasn't kept
by the weekly rule, the monthly rule catches it."
If the rules were really applied strictly in order, the monthly rule
should not be evaluated before all the weekly rules have been applied...

> (In any case, I don't think it matters too much. Borg is so space
> efficient that you should just keep lots of history and not worry
> too much about which ones get pruned. But it's fun to think about.)
>
+1

> BTW, I think the development version shows which rules cause each
> archive to be kept.
>
> I agree that the example in the web docs would be better if it
> had --keep-weekly 4 in it, to illustrate this.
>
Yes, definitely, but it seems to be on purpose.

21 Nov. 2020 at 13:51 from dassies at eml.cc:

> On 18 Nov 2020, Dan Christensen wrote:
>
>> BTW, I think the development version shows which rules cause each
>> archive to be kept.
>>
> Interesting! I will have to take a look. It would be handy to have a
> switch to enable this in the regular release.
>
I wasn't aware of that, but I agree it would be nice and would prevent
some questions ;p

Best regards,
l0f4r0
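
For readers who want to see the rule order from this thread in action
on their own repository, a minimal sketch; the retention counts are
arbitrary and $REPOSITORY is just a placeholder for a repository URL:

# --dry-run deletes nothing; --list prints which archives would be kept
# and which would be pruned
borg prune --dry-run --list \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    "$REPOSITORY"

As explained above, the rules run daily, then weekly, then monthly, and
an archive already kept by an earlier rule does not count towards a
later rule's quota, which is what produces the interleaving discussed
in this thread.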